{ "paper_id": "S13-1034", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:42:55.114090Z" }, "title": "CNGL-CORE: Referential Translation Machines for Measuring Semantic Similarity", "authors": [ { "first": "Ergun", "middle": [], "last": "Bi\u00e7ici", "suffix": "", "affiliation": { "laboratory": "Centre for Next Generation Localisation", "institution": "Dublin City University", "location": { "settlement": "Dublin", "country": "Ireland" } }, "email": "ebicici@computing.dcu.ie" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "", "affiliation": { "laboratory": "", "institution": "Dublin City University", "location": { "settlement": "Dublin", "country": "Ireland" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We invent referential translation machines (RTMs), a computational model for identifying the translation acts between any two data sets with respect to a reference corpus selected in the same domain, which can be used for judging the semantic similarity between text. RTMs make quality and semantic similarity judgments possible by using retrieved relevant training data as interpretants for reaching shared semantics. An MTPP (machine translation performance predictor) model derives features measuring the closeness of the test sentences to the training data, the difficulty of translating them, and the presence of acts of translation involved. We view semantic similarity as paraphrasing between any two given texts. Each view is modeled by an RTM model, giving us a new perspective on the binary relationship between the two. Our prediction model is the 15th on some tasks and 30th overall out of 89 submissions in total according to the official results of the Semantic Textual Similarity (STS 2013) challenge.", "pdf_parse": { "paper_id": "S13-1034", "_pdf_hash": "", "abstract": [ { "text": "We invent referential translation machines (RTMs), a computational model for identifying the translation acts between any two data sets with respect to a reference corpus selected in the same domain, which can be used for judging the semantic similarity between text. RTMs make quality and semantic similarity judgments possible by using retrieved relevant training data as interpretants for reaching shared semantics. An MTPP (machine translation performance predictor) model derives features measuring the closeness of the test sentences to the training data, the difficulty of translating them, and the presence of acts of translation involved. We view semantic similarity as paraphrasing between any two given texts. Each view is modeled by an RTM model, giving us a new perspective on the binary relationship between the two. Our prediction model is the 15th on some tasks and 30th overall out of 89 submissions in total according to the official results of the Semantic Textual Similarity (STS 2013) challenge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We introduce a fully automated judge for semantic similarity that performs well in the semantic textual similarity (STS) task (Agirre et al., 2013) . STS is a degree of semantic equivalence between two texts based on the observations that \"vehicle\" and \"car\" are more similar than \"wave\" and \"car\". Accurate prediction of STS has a wide application area including: identifying whether two tweets are talking about the same thing, whether an answer is correct by comparing it with a reference answer, and whether a given shorter text is a valid summary of another text. The translation quality estimation task (Callison-Burch et al., 2012) aims to develop quality indicators for translations at the sentence-level and predictors without access to a reference translation. Bicici et al. (2013) develop a top performing machine translation performance predictor (MTPP), which uses machine learning models over features measuring how well the test set matches the training set relying on extrinsic and language independent features.", "cite_spans": [ { "start": 126, "end": 147, "text": "(Agirre et al., 2013)", "ref_id": "BIBREF1" }, { "start": 609, "end": 638, "text": "(Callison-Burch et al., 2012)", "ref_id": "BIBREF13" }, { "start": 771, "end": 791, "text": "Bicici et al. (2013)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Textual Similarity Judgments", "sec_num": "1" }, { "text": "The semantic textual similarity (STS) task (Agirre et al., 2013) addresses the following problem. Given two sentences S 1 and S 2 in the same language, quantify the degree of similarity with a similarity score, which is a number in the range [0, 5] . The semantic textual similarity prediction problem involves finding a function f approximating the semantic textual similarity score given two sentences, S 1 and S 2 :", "cite_spans": [ { "start": 43, "end": 64, "text": "(Agirre et al., 2013)", "ref_id": "BIBREF1" }, { "start": 242, "end": 245, "text": "[0,", "ref_id": null }, { "start": 246, "end": 248, "text": "5]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Textual Similarity Judgments", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f (S 1 , S 2 ) \u2248 q(S 1 , S 2 ).", "eq_num": "(1)" } ], "section": "Semantic Textual Similarity Judgments", "sec_num": "1" }, { "text": "We approach f as a supervised learning problem with (S 1 , S 2 , q(S 1 , S 2 )) tuples being the training data and q(S 1 , S 2 ) being the target similarity score. We model the problem as a translation task where one possible interpretation is obtained by translating S 1 (the source to translate, S) to S 2 (the target translation, T). Since linguistic processing can reveal deeper similarity relationships, we also look at the translation task at different granularities of information: plain text (R for regular) , after lemmatization (L), after part-of-speech (POS) tagging (P), and after removing 128 English stop-words (S) 1 . Thus, we obtain 4 different perspectives on the binary relationship between S 1 and S 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Textual Similarity Judgments", "sec_num": "1" }, { "text": "Referential translation machines (RTMs) we develop provide a computational model for quality and semantic similarity judgments using retrieval of relevant training data (Bi\u00e7ici and Yuret, 2011a; Bi\u00e7ici, 2011) as interpretants for reaching shared semantics (Bi\u00e7ici, 2008) . We show that RTM achieves very good performance in judging the semantic similarity of sentences and we can also use RTM to automatically assess the correctness of student answers to obtain better results than the state-of-the-art (Dzikovska et al., 2012) .", "cite_spans": [ { "start": 169, "end": 194, "text": "(Bi\u00e7ici and Yuret, 2011a;", "ref_id": "BIBREF6" }, { "start": 195, "end": 208, "text": "Bi\u00e7ici, 2011)", "ref_id": "BIBREF9" }, { "start": 256, "end": 270, "text": "(Bi\u00e7ici, 2008)", "ref_id": "BIBREF10" }, { "start": 503, "end": 527, "text": "(Dzikovska et al., 2012)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Referential Translation Machine (RTM)", "sec_num": "2" }, { "text": "RTM is a computational model for identifying the acts of translation for translating between any given two data sets with respect to a reference corpus selected in the same domain. RTM can be used for automatically judging the semantic similarity between texts. An RTM model is based on the selection of common training data relevant and close to both the training set and the test set where the selected relevant set of instances are called the interpretants. Interpretants allow shared semantics to be possible by behaving as a reference point for similarity judgments and providing the context. In semiotics, an interpretant I interprets the signs used to refer to the real objects (Bi\u00e7ici, 2008) . RTMs provide a model for computational semantics using interpretants as a reference according to which semantic judgments with translation acts are made. Each RTM model is a data translation model between the instances in the training set and the test set. We use the FDA (Feature Decay Algorithms) instance selection model for selecting the interpretants (Bi\u00e7ici and Yuret, 2011a ) from a given corpus, which can be monolingual when modeling paraphrasing acts, in which case the MTPP model (Section 2.1) is built using the interpretants themselves as both the source and the target side of the parallel corpus. RTMs map the training and test data to a space where translation acts can be identified. We view that acts of translation are ubiquitously used during communication:", "cite_spans": [ { "start": 685, "end": 699, "text": "(Bi\u00e7ici, 2008)", "ref_id": "BIBREF10" }, { "start": 1058, "end": 1082, "text": "(Bi\u00e7ici and Yuret, 2011a", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Referential Translation Machine (RTM)", "sec_num": "2" }, { "text": "Every act of communication is an act of translation (Bliss, 2012).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Referential Translation Machine (RTM)", "sec_num": "2" }, { "text": "src/backend/snowball/stopwords/ Translation need not be between different languages and paraphrasing or communication also contain acts of translation. When creating sentences, we use our background knowledge and translate information content according to the current context. Given a training set train, a test set test, and some monolingual corpus C, preferably in the same domain as the training and test sets, the RTM steps are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Referential Translation Machine (RTM)", "sec_num": "2" }, { "text": "1. T = train \u222a test. 2. select(T, C) \u2192 I 3. MTPP(I, train) \u2192 F train 4. MTPP(I, test) \u2192 F test 5. learn(M, F train ) \u2192 M 6. predict(M, F test ) \u2192q", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Referential Translation Machine (RTM)", "sec_num": "2" }, { "text": "Step 2 selects the interpretants, I, relevant to the instances in the combined training and test data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Referential Translation Machine (RTM)", "sec_num": "2" }, { "text": "Steps 3, 4 use I to map train and test to a new space where similarities between translation acts can be derived more easily.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Referential Translation Machine (RTM)", "sec_num": "2" }, { "text": "Step 5 trains a learning model M over the training features, F train , and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Referential Translation Machine (RTM)", "sec_num": "2" }, { "text": "Step 6 obtains the predictions. RTM relies on the representativeness of I as a medium for building translation models for translating between train and test.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Referential Translation Machine (RTM)", "sec_num": "2" }, { "text": "Our encouraging results in the STS task provides a greater understanding of the acts of translation we ubiquitously use when communicating and how they can be used to predict the performance of translation, judging the semantic similarity between text, and evaluating the quality of student answers. RTM and MTPP models are not data or language specific and their modeling power and good performance are applicable across different domains and tasks. RTM expands the applicability of MTPP by making it feasible when making monolingual quality and similarity judgments and it enhances the computational scalability by building models over smaller but more relevant training data as interpretants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Referential Translation Machine (RTM)", "sec_num": "2" }, { "text": "In machine translation (MT), pairs of source and target sentences are used for training statistical MT (SMT) models. SMT system performance is affected by the amount of training data used as well as the closeness of the test set to the training set. MTPP ) is a top performing machine translation performance predictor, which uses machine learning models over features measuring how well the test set matches the training set to predict the quality of a translation without using a ref- (Papineni et al., 2002) , NIST (Doddington, 2002) , or F 1 (Bi\u00e7ici and Yuret, 2011b) for q. \u2022 Character n-grams {4}: Calculates the cosine between the character n-grams (for n=2,3,4,5) obtained for S and T (B\u00e4r et al., 2012) . \u2022 LIX {2}: Calculates the LIX readability score (Wikipedia, 2013; Bj\u00f6rnsson, 1968 ) for S and T. 2", "cite_spans": [ { "start": 487, "end": 510, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF18" }, { "start": 518, "end": 536, "text": "(Doddington, 2002)", "ref_id": "BIBREF14" }, { "start": 693, "end": 711, "text": "(B\u00e4r et al., 2012)", "ref_id": "BIBREF3" }, { "start": 762, "end": 779, "text": "(Wikipedia, 2013;", "ref_id": null }, { "start": 780, "end": 795, "text": "Bj\u00f6rnsson, 1968", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "The Machine Translation Performance Predictor (MTPP)", "sec_num": "2.1" }, { "text": "STS contains sentence pairs from news headlines (headlines), sense definitions from semantic lexical resources (OnWN is from OntoNotes (Pradhan et al., 2007) and WordNet (Miller, 1995) and FNWN is from FrameNet (Baker et al., 1998) ", "cite_spans": [ { "start": 125, "end": 157, "text": "OntoNotes (Pradhan et al., 2007)", "ref_id": null }, { "start": 170, "end": 184, "text": "(Miller, 1995)", "ref_id": "BIBREF17" }, { "start": 211, "end": 231, "text": "(Baker et al., 1998)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "We obtain CNGL results for the STS task as follows. For each perspective described in Section 1, we build an RTM model. Each RTM model views the STS task from a different perspective using the 289 features extracted dependent on the interpretants using MTPP. We extract the features both on 2 LIX= A B + C 100 A , where A is the number of words, C is words longer than 6 characters, B is words that start or end with any of \".\", \":\", \"!\", \"?\" similar to (Hagstr\u00f6m, 2012 .7904 .7502 .8200 .7788 .8074 .8232 .8101 .8247 .8218 .8509 .8266 .8172 .8304 .8530 .8323 .8499 SVR .8311 .8060 .8443 .8330 .8404 .8517 .8498 .8501 .8593 .8556 .8496 .8422 .8586 .8579 .8527 .8564 S 2 \u2192 S 1 RR .7922 .7651 .8169 .7891 .8064 .8196 .8136 .8219 .8257 .8257 .8226 .8164 .8284 .8284 .8313 .8324 SVR .8308 .8165 .8407 .8302 .8361 .8506 .8467 .8510 .8567 .8567 .8525 .8460 .8588 .8588 .8575 .8574 .8079 .787 .8279 .8101 .8216 .8333 .8275 .8346 .8375 .8409 .8361 .8312 .8412 .8434 .8432 .844 SVR .8397 .8237 .8554 .841 .8432 .857 .851 .8557 .8605 .8626 .8505 .8505 .8591 .8622 .8602 .8588 roparl (Callison-Burch et al., 2012) 3 . In-domain corpora are likely to improve the performance. We use the Stanford POS tagger (Toutanova et al., 2003) to obtain the perspectives P and L. We use the training corpus to build a 5-gram target LM.", "cite_spans": [ { "start": 454, "end": 469, "text": "(Hagstr\u00f6m, 2012", "ref_id": "BIBREF16" }, { "start": 470, "end": 665, "text": ".7904 .7502 .8200 .7788 .8074 .8232 .8101 .8247 .8218 .8509 .8266 .8172 .8304 .8530 .8323 .8499 SVR .8311 .8060 .8443 .8330 .8404 .8517 .8498 .8501 .8593 .8556 .8496 .8422 .8586 .8579 .8527 .8564", "ref_id": null }, { "start": 676, "end": 868, "text": "RR .7922 .7651 .8169 .7891 .8064 .8196 .8136 .8219 .8257 .8257 .8226 .8164 .8284 .8284 .8313 .8324 SVR .8308 .8165 .8407 .8302 .8361 .8506 .8467 .8510 .8567 .8567 .8525 .8460 .8588 .8588 .8575", "ref_id": null }, { "start": 875, "end": 1065, "text": ".8079 .787 .8279 .8101 .8216 .8333 .8275 .8346 .8375 .8409 .8361 .8312 .8412 .8434 .8432 .844 SVR .8397 .8237 .8554 .841 .8432 .857 .851 .8557 .8605 .8626 .8505 .8505 .8591 .8622 .8602 .8588", "ref_id": null }, { "start": 1066, "end": 1102, "text": "roparl (Callison-Burch et al., 2012)", "ref_id": null }, { "start": 1195, "end": 1219, "text": "(Toutanova et al., 2003)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "RTM Models", "sec_num": "3.1" }, { "text": "S 1 S 2 RR", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RTM Models", "sec_num": "3.1" }, { "text": "We use ridge regression (RR) and support vector regression (SVR) with RBF kernel (Smola and Sch\u00f6lkopf, 2004) . Both of these models learn a regression function using the features to estimate a numerical target value. The parameters that govern the behavior of RR and SVR are the regularization \u03bb for RR and the C, , and \u03b3 parameters for SVR. At testing time, the predictions are bounded to obtain scores in the range [0, 5] . We perform tuning on a subset of the training set separately for each RTM model and optimize against the performance evaluated with R 2 , the coefficient of determination.", "cite_spans": [ { "start": 81, "end": 108, "text": "(Smola and Sch\u00f6lkopf, 2004)", "ref_id": "BIBREF21" }, { "start": 417, "end": 420, "text": "[0,", "ref_id": null }, { "start": 421, "end": 423, "text": "5]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "RTM Models", "sec_num": "3.1" }, { "text": "We do not build a separate model for different types of sentences and instead use all of the training set for building a large prediction model. We also use transductive learning since using only the relevant training data for training can improve the performance (Bi\u00e7ici, 2011) . Transductive learning is performed at the sentence level where for each test instance, we select 1250 relevant training instances using the cosine similarity metric over the feature vectors and build an individual model for the test instance and predict the similarity score. 3 We use WMT'13 corpora from www.statmt.org/wmt13/. Table 1 lists the 10-fold cross-validation (CV) results on the training set for RR and SVR for different RTM systems using optimized parameters. As we combine different perspectives, the performance improves and we use the L+S with SVR for run 1 (LSSVR), L+P+S with SVR for run 2 (LPSSVR), and L+P+S with SVR using transductive learning for run 3 (LPSSVRTL) all in the translation direction S 1 \u2192 S 2 . Lemmatized RTM, L, performs the best among the individual perspectives. We also build RTM models in the direction S 2 \u2192 S 1 , which gives similar results. The last main row combines them to obtain the bi-directional results, S 1 S 2 , which improves the performance. Each additional perspective adds another 289 features to the representation and the bi-directional results double the number of features. Thus, S 1 S 2 L+P+S is using 1734 features. Table 2 presents the STS challenge r and ranking results containing our CNGL submissions, the best system result, and the mean results over all submissions. There were 89 submissions from 35 competing systems (Agirre et al., 2013) . The results are ranked according to the mean r obtained. We also include the mean result over all of the submissions and its corresponding rank.", "cite_spans": [ { "start": 264, "end": 278, "text": "(Bi\u00e7ici, 2011)", "ref_id": "BIBREF9" }, { "start": 557, "end": 558, "text": "3", "ref_id": null }, { "start": 1670, "end": 1691, "text": "(Agirre et al., 2013)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 609, "end": 616, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 1461, "end": 1468, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "RTM Models", "sec_num": "3.1" }, { "text": "According to the official results, CNGL-LSSVR is the 30th system from the top based on the mean r obtained and CNGL-LPSSVR is 15th according to the results on OnWN out of 89 submissions in total. CNGL submissions perform unexpectedly low in the FNWN task and only slightly better than the average in the SMT task. The lower performance is likely to be due to using an out-of-domain corpus for building the RTM models and it may also be due to using and optimizing a single model for all types of tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STS Challenge Results", "sec_num": "3.3" }, { "text": "The STS task similarity score is directional invariant: q(S 1 , S 2 ) = q(S 2 , S 1 ). We develop RTM models in the reverse direction and obtain bi-directional RTM models by combining both. Table 3 lists the bi-directional results on the STS challenge test set after tuning, which shows that slight improvement in the scores are possible when compared with Table 2 . Transductive learning improves the performance in general. We also compare with the performance obtained when combining uni-directional models with mean, min, or max functions. Taking the minimum performs better than other combination approaches and can achieve r = 0.5129 with TL. One can also take the individual confidence scores obtained for each score when combining scores.", "cite_spans": [], "ref_spans": [ { "start": 190, "end": 197, "text": "Table 3", "ref_id": null }, { "start": 357, "end": 364, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Bi-directional RTM Models", "sec_num": "3.4" }, { "text": "Referential translation machines provide a clean and intuitive computational model for automatically measuring semantic similarity by measuring the acts of translation involved and achieve to be the 15th on some tasks and 30th overall in the STS challenge out of 89 submissions in total. RTMs make quality and semantic similarity judgments possible based on the retrieval of relevant training data as interpretants for reaching shared semantics. Table 3 : Bi-directional STS challenge r and ranking results ranked according to the mean r obtained. We combine the two directions by taking the mean, min, or the max or use the bi-directional RTM model S 1 S 2 .", "cite_spans": [], "ref_spans": [ { "start": 446, "end": 453, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported in part by SFI (07/CE/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) at Dublin City University and in part by the European Commission through the QTLaunchPad FP7 project (No: 296347). We also thank the SFI/HEA Irish Centre for High-End Computing (ICHEC) for the provision of computational facilities and support.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Semeval-2012 task 6: A pilot on semantic textual similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" } ], "year": 2012, "venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics", "volume": "1", "issue": "", "pages": "7--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Computa- tional Semantics -Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Pro- ceedings of the Sixth International Workshop on Se- mantic Evaluation (SemEval 2012), pages 385-393, Montr\u00e9al, Canada, 7-8 June. Association for Compu- tational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "*SEM 2013 shared task: Semantic textual similarity, including a pilot on typed-similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2013, "venue": "*SEM 2013: The Second Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez- Agirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic textual similarity, including a pilot on typed-similarity. In *SEM 2013: The Second Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The berkeley framenet project", "authors": [ { "first": "Collin", "middle": [ "F" ], "last": "Baker", "suffix": "" }, { "first": "Charles", "middle": [ "J" ], "last": "Fillmore", "suffix": "" }, { "first": "John", "middle": [ "B" ], "last": "Lowe", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "86--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The berkeley framenet project. In Proceed- ings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics -Volume 1, ACL '98, pages 86-90, Stroudsburg, PA, USA. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Ukp: Computing semantic textual similarity by combining multiple content similarity measures", "authors": [ { "first": "Daniel", "middle": [], "last": "B\u00e4r", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Zesch", "suffix": "" } ], "year": 2012, "venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics", "volume": "1", "issue": "", "pages": "7--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel B\u00e4r, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. Ukp: Computing semantic textual simi- larity by combining multiple content similarity mea- sures. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics -Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth Inter- national Workshop on Semantic Evaluation (SemEval 2012), pages 435-440, Montr\u00e9al, Canada, 7-8 June. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "CNGL: Grading student answers by acts of translation", "authors": [ { "first": "Ergun", "middle": [], "last": "Bi\u00e7ici", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2013, "venue": "*SEM 2013: The First Joint Conference on Lexical and Computational Semantics and Proceedings of the Seventh International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "14--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ergun Bi\u00e7ici and Josef van Genabith. 2013. CNGL: Grading student answers by acts of translation. In *SEM 2013: The First Joint Conference on Lexical and Computational Semantics and Proceedings of the Seventh International Workshop on Semantic Evalua- tion (SemEval 2013), Atlanta, Georgia, USA, 14-15", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Instance selection for machine translation using feature decay algorithms", "authors": [ { "first": "Ergun", "middle": [], "last": "Bi\u00e7ici", "suffix": "" }, { "first": "Deniz", "middle": [], "last": "Yuret", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "272--283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ergun Bi\u00e7ici and Deniz Yuret. 2011a. Instance selec- tion for machine translation using feature decay al- gorithms. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 272-283, Edin- burgh, Scotland, July. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "RegMT system for machine translation, system combination, and evaluation", "authors": [ { "first": "Ergun", "middle": [], "last": "Bi\u00e7ici", "suffix": "" }, { "first": "Deniz", "middle": [], "last": "Yuret", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "323--329", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ergun Bi\u00e7ici and Deniz Yuret. 2011b. RegMT system for machine translation, system combination, and evalua- tion. In Proceedings of the Sixth Workshop on Sta- tistical Machine Translation, pages 323-329, Edin- burgh, Scotland, July. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Predicting sentence translation quality using extrinsic and language independent features", "authors": [ { "first": "Ergun", "middle": [], "last": "Bi\u00e7ici", "suffix": "" }, { "first": "Declan", "middle": [], "last": "Groves", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ergun Bi\u00e7ici, Declan Groves, and Josef van Genabith. 2013. Predicting sentence translation quality using ex- trinsic and language independent features. Machine Translation.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The Regression Model of Machine Translation", "authors": [ { "first": "Ergun", "middle": [], "last": "Bi\u00e7ici", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ergun Bi\u00e7ici. 2011. The Regression Model of Machine Translation. Ph.D. thesis, Ko\u00e7 University. Supervisor: Deniz Yuret.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Consensus ontologies in socially interacting multiagent systems", "authors": [ { "first": "Ergun", "middle": [], "last": "Bi\u00e7ici", "suffix": "" } ], "year": 2008, "venue": "Journal of Multiagent and Grid Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ergun Bi\u00e7ici. 2008. Consensus ontologies in socially interacting multiagent systems. Journal of Multiagent and Grid Systems.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "L\u00e4sbarhet. Liber. Chris Bliss. 2012. Comedy is translation, February", "authors": [ { "first": "Carl", "middle": [], "last": "Hugo Bj\u00f6rnsson", "suffix": "" } ], "year": 1968, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carl Hugo Bj\u00f6rnsson. 1968. L\u00e4sbarhet. Liber. Chris Bliss. 2012. Comedy is transla- tion, February. http://www.ted.com/talks/ chris bliss comedy is translation.html.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The mathematics of statistical machine translation: Parameter estimation", "authors": [ { "first": "F", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Stephen", "middle": [ "A Della" ], "last": "Brown", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathemat- ics of statistical machine translation: Parameter esti- mation. Computational Linguistics, 19(2):263-311, June.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Findings of the 2012 workshop on statistical machine translation", "authors": [ { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "10--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 workshop on statistical machine translation. In Proceedings of the Seventh Work- shop on Statistical Machine Translation, pages 10- 51, Montr\u00e9al, Canada, June. Association for Compu- tational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics", "authors": [ { "first": "George", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the second international conference on Human Language Technology Research", "volume": "", "issue": "", "pages": "138--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Doddington. 2002. Automatic evaluation of ma- chine translation quality using n-gram co-occurrence statistics. In Proceedings of the second interna- tional conference on Human Language Technology Research, pages 138-145, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Towards effective tutorial feedback for explanation questions: A dataset and baselines", "authors": [ { "first": "O", "middle": [], "last": "Myroslava", "suffix": "" }, { "first": "Rodney", "middle": [ "D" ], "last": "Dzikovska", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Nielsen", "suffix": "" }, { "first": "", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "200--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myroslava O. Dzikovska, Rodney D. Nielsen, and Chris Brew. 2012. Towards effective tutorial feedback for explanation questions: A dataset and baselines. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 200-210, Montr\u00e9al, Canada, June. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Swedish readability calculator", "authors": [ { "first": "Kenth", "middle": [], "last": "Hagstr\u00f6m", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenth Hagstr\u00f6m. 2012. Swedish readability calcula- tor. https://github.com/keha76/Swedish-Readability- Calculator.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Wordnet: a lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41, November.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylva- nia, USA, July. Association for Computational Lin- guistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Ontonotes: a unified relational semantic representation", "authors": [ { "first": "S", "middle": [], "last": "Sameer", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Pradhan", "suffix": "" }, { "first": "Mitchell", "middle": [ "P" ], "last": "Hovy", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Lance", "middle": [ "A" ], "last": "Palmer", "suffix": "" }, { "first": "Ralph", "middle": [ "M" ], "last": "Ramshaw", "suffix": "" }, { "first": "", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2007, "venue": "Int. J. Semantic Computing", "volume": "1", "issue": "4", "pages": "405--419", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer S. Pradhan, Eduard H. Hovy, Mitchell P. Mar- cus, Martha Palmer, Lance A. Ramshaw, and Ralph M. Weischedel. 2007. Ontonotes: a unified relational semantic representation. Int. J. Semantic Computing, 1(4):405-419.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Learning Syntactic Structure", "authors": [ { "first": "Yoav", "middle": [], "last": "Seginer", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Seginer. 2007. Learning Syntactic Structure. Ph.D. thesis, Universiteit van Amsterdam.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A tutorial on support vector regression", "authors": [ { "first": "Alex", "middle": [ "J" ], "last": "Smola", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Sch\u00f6lkopf", "suffix": "" } ], "year": 2004, "venue": "Statistics and Computing", "volume": "14", "issue": "3", "pages": "199--222", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex J. Smola and Bernhard Sch\u00f6lkopf. 2004. A tutorial on support vector regression. Statistics and Comput- ing, 14(3):199-222, August.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Feature-rich part-of-speech tagging with a cyclic dependency network", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", "volume": "1", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Pro- ceedings of the 2003 Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics on Human Language Technology -Volume 1, NAACL '03, pages 173-180, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "and WordNet), and statistical machine translation (SMT)(Agirre et al., 2013). STS challenge results are evaluated with the Pearson's correlation score (r).The test set contains 2250 (S 1 , S 2 ) sentence pairs with 750, 561, 189, and 750 sentences from each type respectively. The training set contains 5342 sentence pairs with 1500 each from MSRpar and MSRvid (Microsoft Research paraphrase and video description corpus(Agirre et al., 2012)), 1592 from SMT, and 750 from OnWN.", "uris": null, "type_str": "figure" }, "TABREF0": { "html": null, "type_str": "table", "num": null, "content": "
\u2022 Minimum Bayes Retrieval Risk {4}: Calculates
the translation probability for the translation
having the minimum Bayes risk among the re-
trieved training instances.
\u2022 Sentence Translation Performance {3}: Calcu-
lates translation scores obtained according to
q(T, R) using BLEU
2.2 MTPP Features for Translation Acts
MTPP uses n-gram features defined over text or
common cover link (CCL) (Seginer, 2007) struc-
tures as the basic units of information over which
similarity calculations are made. Unsupervised
parsing with CCL extracts links from base words
to head words, which allow us to obtain structures
representing the grammatical information instanti-
", "text": "erence translation. MTPP measures the coverage of individual test sentence features and syntactic structures found in the training set and derives feature functions measuring the closeness of test sentences to the available training data, the difficulty of translating the sentence, and the presence of acts of translation for data transformation. Calculates translation scores achievable according to the n-gram coverage. \u2022 Length {4}: Calculates the number of words and characters for S and T and their ratios. \u2022 Feature Vector Similarity {16}: Calculates the similarities between vector representations. \u2022 Perplexity {90}: Measures the fluency of the sentences according to language models (LM). We use both forward ({30}) and backward ({15}) LM based features for S and T. \u2022 Entropy {4}: Calculates the distributional similarity of test sentences to the training set. Measures the diversity of cooccurring features in the training set. \u2022 IBM1 Translation Probability {16}: Calculates the translation probability of test sentences using the training set(Brown et al., 1993)." }, "TABREF2": { "html": null, "type_str": "table", "num": null, "content": "
the training set and the test set. The training cor-
pus used is the English side of an out-of-domain
corpus on European parliamentary discussions, Eu
", "text": "CV performance on the training set with tuning. Underlined are the settings we use in our submissions. RTM models in directions S 1 \u2192 S 2 , S 2 \u2192 S 1 , and the bi-directional models S 1 S 2 are displayed." }, "TABREF3": { "html": null, "type_str": "table", "num": null, "content": "", "text": "System head OnWN FNWN SMT mean rank CNGL-LSSVR .6552 .6943 .2016 .3005 .5086 30 CNGL-LPSSVRTL .6385 .6756 .1823 .3098 .4998 33 CNGL-LPSSVR .6510 .6971 .1180 .2861 .4961 36 UMBC-EB.-PW .7642 .7529 .5818 .3804 .6181 1 mean .6071 .5089 .2906 .3004 .4538 57" }, "TABREF4": { "html": null, "type_str": "table", "num": null, "content": "
", "text": "STS challenge r and ranking results ranked according to the mean r obtained. head is headlines and mean is the mean of all submissions." }, "TABREF5": { "html": null, "type_str": "table", "num": null, "content": "
", "text": "System head OnWN FNWN SMT mean LS mean .6552 .6943 .2016 .3005 .5086 mean TL .6397 .6808 .1776 .3147 .5028 min .6512 .6947 .2003 .2984 .5066 min TL .6416 .6853 .1903 .3143 .5055 max .6669 .6680 .1867 .2737 .4958 max TL .6493 .6805 .1846 .3127 .5059 S1 S2 .6388 .6695 .1667 .2999 .4938 S1 S2 TL .6285 .6686 .0918 .2931 .4816 LPS mean .6510 .6971 .1179 .2861 .4961 mean TL .6524 .6918 .1940 .3176 .5121 min .6608 .6953 .1704 .2922 .5053 min TL .6509 .6864 .1792 .3156 .5084 max .6588 .6800 .1355 .2868 .4961 max TL .6493 .6805 .1846 .3127 .5059 S1 S2 .6251 .6843 .0677 .2994 .4845 S1 S2 TL .6370 .6978 .0951 .2980 .4936 RLPS mean .6517 .7136 .1002 .2880 .4996 mean TL .6383 .6841 .2434 .3063 .5059 min .6615 .7099 .1644 .2877 .5072 min TL .6606 .6987 .1972 .3059 .5129 max .6589 .7019 .0995 .2935 .5008 max TL .6362 .6896 .2044 .3153 .5063 S1 S2 .6300 .7011 .0817 .2798 .4850 S1 S2 TL .6321 .6956 .1995 .3128 .5052" } } } }