{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:38:30.887450Z" }, "title": "Reference-Free Word- and Sentence-Level Translation Evaluation with Token-Matching Metrics", "authors": [ { "first": "Christoph", "middle": [ "Wolfgang" ], "last": "Leiter", "suffix": "", "affiliation": {}, "email": "christoph.leiter@stud.tu-darmstadt.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Many modern machine translation evaluation metrics like BERTScore, BLEURT, COMET, MonoTransquest or XMoverScore are based on black-box language models. Hence, it is difficult to explain why these metrics return certain scores. This year's Eval4NLP shared task tackles this challenge by searching for methods that can extract feature importance scores that correlate well with human word-level error annotations. In this paper we show that unsupervised metrics that are based on token-matching can intrinsically provide such scores. The submitted system interprets the similarities of the contextualized word embeddings that are used to compute (X)BERTScore as word-level importance scores. We make our code available 1 .", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Many modern machine translation evaluation metrics like BERTScore, BLEURT, COMET, MonoTransquest or XMoverScore are based on black-box language models. Hence, it is difficult to explain why these metrics return certain scores. This year's Eval4NLP shared task tackles this challenge by searching for methods that can extract feature importance scores that correlate well with human word-level error annotations. In this paper we show that unsupervised metrics that are based on token-matching can intrinsically provide such scores. The submitted system interprets the similarities of the contextualized word embeddings that are used to compute (X)BERTScore as word-level importance scores. We make our code available 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, machine translation evaluation metrics have constantly improved in their correlation with human judgements (e.g. Mathur et al., 2020; Specia et al., 2020) . However, this improvement comes at the cost of understandability. Early metrics such as BLEU (Papineni et al., 2002) and METEOR (Lavie et al., 2004; Banerjee and Lavie, 2005) follow a clearly defined algorithm without learnable weights. Therefore, these metrics are interpretable by design and could even be computed by hand. Newer metrics such as BERTScore (Zhang et al., 2020) , BLEURT (Sellam et al., 2020) , COMET (Rei et al., 2020a) , MonoTransquest (Ranasinghe et al., 2020a,b) , MoverScore (Zhao et al., 2019) or XMoverScore (Zhao et al., 2020) instead leverage transformer (Vaswani et al., 2017) based language models. As these base their predictions on millions of learned parameters, they are too complex to understand without employing further techniques. Techniques that aim to support the understanding of black-box models fall within the scope of XAI (eXplainable Artificial Intelligence) (e.g. 
Carvalho et al., 2019; Bodria et al., 2021) .", "cite_spans": [ { "start": 125, "end": 145, "text": "Mathur et al., 2020;", "ref_id": "BIBREF21" }, { "start": 146, "end": 166, "text": "Specia et al., 2020)", "ref_id": "BIBREF32" }, { "start": 260, "end": 283, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF23" }, { "start": 295, "end": 315, "text": "(Lavie et al., 2004;", "ref_id": "BIBREF15" }, { "start": 316, "end": 341, "text": "Banerjee and Lavie, 2005)", "ref_id": "BIBREF0" }, { "start": 523, "end": 543, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF37" }, { "start": 553, "end": 574, "text": "(Sellam et al., 2020)", "ref_id": "BIBREF30" }, { "start": 583, "end": 602, "text": "(Rei et al., 2020a)", "ref_id": "BIBREF27" }, { "start": 620, "end": 648, "text": "(Ranasinghe et al., 2020a,b)", "ref_id": null }, { "start": 662, "end": 681, "text": "(Zhao et al., 2019)", "ref_id": "BIBREF39" }, { "start": 697, "end": 716, "text": "(Zhao et al., 2020)", "ref_id": "BIBREF38" }, { "start": 746, "end": 768, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF35" }, { "start": 1072, "end": 1094, "text": "Carvalho et al., 2019;", "ref_id": "BIBREF2" }, { "start": 1095, "end": 1115, "text": "Bodria et al., 2021)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This year's Eval4NLP shared task (Fomicheva et al., 2021a) considers to what extent XAI techniques can extract feature importance scores from metrics such that the scores correlate with word-level error annotations. Some embedding-based metrics, such as MoverScore, XMoverScore and BERTScore, can be categorized as unsupervised matching metrics (Yuan et al., 2021) . These metrics are unsupervised, as they are not fine-tuned on human-annotated translation scores. They perform matching, as the sentence-level score is calculated based on how well each token in one sentence matches tokens in the other sentence.", "cite_spans": [ { "start": 33, "end": 58, "text": "(Fomicheva et al., 2021a)", "ref_id": "BIBREF7" }, { "start": 316, "end": 335, "text": "(Yuan et al., 2021)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This work evaluates the usage of the token-level matches of BERTScore and XMoverScore as feature-importance explanations of the sentence-level score. It was conducted as part of a master's thesis by Leiter (2021).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This system paper is related to work in the fields of machine translation evaluation metrics and explainable artificial intelligence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A large number of metrics have been proposed to grade the quality of machine translations (e.g. Mathur et al., 2020; Specia et al., 2020) . Reference-based metrics grade machine translations based on one or more reference translations. Reference-free metrics grade machine translations based on the source sentence. Due to the structure of the shared task, this paper considers reference-free metrics, specifically BERTScore (Zhang et al., 2020) with multilingual language embeddings (reference-free usage is proposed by Zhou et al., 2020; Song et al., 2021) and XMoverScore (Zhao et al., 2020) . To differentiate, we will refer to the reference-free BERTScore as XBERTScore. 
Other reference-free metrics are, for example, MonoTransquest (Ranasinghe et al., 2020a,b) and COMET for quality estimation (Rei et al., 2020b) . Many reference-free metrics have been enabled by the pre-training of multilingual language models on large-scale datasets. Examples are multilingual BERT (Devlin et al., 2018) and XLM-Roberta (Conneau et al., 2020) . The discussed metrics produce a single score per translation. In contrast, word-level metrics such as those by Lee (2020) and Ranasinghe et al. (2021) predict word-level errors. Word-level metrics are closely related to the goal of the Eval4NLP shared task, as the extracted feature importance scores are evaluated with word-level error annotations (Fomicheva et al., 2021a) .", "cite_spans": [ { "start": 95, "end": 115, "text": "Mathur et al., 2020;", "ref_id": "BIBREF21" }, { "start": 116, "end": 136, "text": "Specia et al., 2020)", "ref_id": "BIBREF32" }, { "start": 421, "end": 441, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF37" }, { "start": 517, "end": 535, "text": "Zhou et al., 2020;", "ref_id": "BIBREF40" }, { "start": 536, "end": 554, "text": "Song et al., 2021)", "ref_id": "BIBREF31" }, { "start": 571, "end": 590, "text": "(Zhao et al., 2020)", "ref_id": "BIBREF38" }, { "start": 732, "end": 760, "text": "(Ranasinghe et al., 2020a,b)", "ref_id": null }, { "start": 794, "end": 813, "text": "(Rei et al., 2020b)", "ref_id": "BIBREF28" }, { "start": 970, "end": 991, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" }, { "start": 1008, "end": 1030, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF3" }, { "start": 1150, "end": 1160, "text": "Lee (2020)", "ref_id": "BIBREF16" }, { "start": 1165, "end": 1189, "text": "Ranasinghe et al. (2021)", "ref_id": "BIBREF26" }, { "start": 1388, "end": 1413, "text": "(Fomicheva et al., 2021a)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "2.1" }, { "text": "As summarized in related surveys (e.g. Carvalho et al., 2019; Lertvittayakumjorn and Toni, 2021; Linardatos et al., 2021) , explainability techniques can be categorized along several dimensions. Intrinsic (self-explaining) models explain their output during the original computation, while post-hoc methods are applied afterwards. Model-agnostic techniques can be applied to any model, while model-specific techniques are specific to certain architectures. Also, global methods try to explain a model as a whole, while local methods give insights into individual input/output pairs.", "cite_spans": [ { "start": 39, "end": 61, "text": "Carvalho et al., 2019;", "ref_id": "BIBREF2" }, { "start": 62, "end": 96, "text": "Lertvittayakumjorn and Toni, 2021;", "ref_id": "BIBREF18" }, { "start": 97, "end": 121, "text": "Linardatos et al., 2021)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Explainable Artificial Intelligence", "sec_num": "2.2" }, { "text": "The goal of the Eval4NLP shared task is the extraction of feature importance scores as word-level error indications (Fomicheva et al., 2021a) , i.e. each input feature (here, a token) should be assigned a score of how important it is for a predicted output. As these scores are assigned per input, they can be counted towards the local techniques. Further, the methods proposed in this paper are intrinsic and model-specific. Note that even though the model itself produces the explanation, i.e. 
a token-level output, the approaches we present do not explain the internal workings of the underlying language model.", "cite_spans": [ { "start": 115, "end": 140, "text": "(Fomicheva et al., 2021a)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Explainable Artificial Intelligence", "sec_num": "2.2" }, { "text": "Other model-specific post-hoc feature importance methods are, for example, Integrated Gradients (Sundararajan et al., 2017) , DiffMask (De Cao et al., 2020) and the method of Guan et al. (2019) . Model-agnostic post-hoc feature importance methods are, for example, LIME (Ribeiro et al., 2016) , SHAP (Lundberg and Lee, 2017) and Input Marginalization (Kim et al., 2020) . Fomicheva et al. (2021b) present the first evaluation of explainability techniques in the same context as the shared task.", "cite_spans": [ { "start": 96, "end": 123, "text": "(Sundararajan et al., 2017)", "ref_id": "BIBREF33" }, { "start": 135, "end": 156, "text": "(De Cao et al., 2020)", "ref_id": "BIBREF5" }, { "start": 161, "end": 180, "text": "(Guan et al., 2019)", "ref_id": "BIBREF11" }, { "start": 249, "end": 276, "text": "LIME (Ribeiro et al., 2016)", "ref_id": null }, { "start": 298, "end": 308, "text": "Lee, 2017)", "ref_id": "BIBREF20" }, { "start": 335, "end": 353, "text": "(Kim et al., 2020)", "ref_id": "BIBREF12" }, { "start": 356, "end": 380, "text": "Fomicheva et al. (2021b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Explainable Artificial Intelligence", "sec_num": "2.2" }, { "text": "Token-Matching", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Importance from", "sec_num": "3" }, { "text": "In this section we describe the extraction of word-level importance scores from XBERTScore and XMoverScore. Specifically, we assume that words that are well aligned between source and translation are important for the sentence-level score and are likely to be correct translations. If a word does not align well, it is likely to be an error. Hence, the maximal similarity (or minimal dissimilarity) of each word between source and translation can be interpreted as a word-level (importance) score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Importance from", "sec_num": "3" }, { "text": "We choose x = (x_1, ..., x_n) to represent a source sentence and y = (y_1, ..., y_m) to represent a translation, where x_i and y_j refer to arbitrary token embeddings in x and y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Importance from", "sec_num": "3" }, { "text": "XBERTScore computes a reference-free sentence score as follows (Zhang et al., 2020; Zhou et al., 2020; Song et al., 2021 ):", "cite_spans": [ { "start": 63, "end": 83, "text": "(Zhang et al., 2020;", "ref_id": "BIBREF37" }, { "start": 84, "end": 102, "text": "Zhou et al., 2020;", "ref_id": "BIBREF40" }, { "start": 103, "end": 120, "text": "Song et al., 2021", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "1. A multilingual pre-trained transformer model is chosen and contextualized embeddings are extracted for each word in translation and source. These are obtained by performing a forward pass and extracting the hidden states at a layer of choice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "2. A matrix S \u2208 IR^{n\u00d7m} of cosine similarities between each embedding of source and translation is constructed. 
In other words, entries in S are computed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "S_{ij} = (x_i^T y_j) / (||x_i|| ||y_j||).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "3. Two vectors x^{max} and y^{max} are determined.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "x^{max} contains the maximum similarity of each token in x to tokens in y:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "x^{max} = (max S_{1,*}, ..., max S_{n,*})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "Analogously, y^{max} contains the maximum similarity of each token in y to tokens in x:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "y^{max} = (max S_{*,1}, ..., max S_{*,m})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "4. Zhang et al. (2020) propose three different scores: R_{BERT}, P_{BERT} and F_{BERT}. R_{BERT} computes the recall R_{BERT} = mean(x^{max}). P_{BERT} computes the precision P_{BERT} = mean(y^{max}). The F_{BERT} score is computed as F_{BERT} = (2 * P_{BERT} * R_{BERT}) / (P_{BERT} + R_{BERT}). 5. They describe further steps such as idf-weighting and rescaling of scores, which we do not apply in this paper. Idf-weighting over many sentences potentially improves the sentence-level scores.", "cite_spans": [ { "start": 3, "end": 22, "text": "Zhang et al. (2020)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "Zhang et al. (2020) compute R_{BERT} and P_{BERT} from the embeddings in a single formula. In the description above, we present the construction of the matrix S and the vectors x^{max} and y^{max} as separate steps, as we interpret these vectors as token-level importance scores. That is, we treat x^{max}_i as the importance score for embedding x_i in x (and for the token at the i-th position of x); the same applies to y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "Many language models use sub-word tokenization (e.g. SentencePiece (Kudo and Richardson, 2018) ), so that the importance scores are at a sub-word level. To obtain word-level scores, we align the scored tokens with the words of the input sentences. Multiple scores that belong to a single word are averaged. If a token did not receive a score, e.g. because punctuation was dropped (see XMoverScore(mBERT) in Section 4), we assign the score of the previous token.", "cite_spans": [ { "start": 67, "end": 94, "text": "(Kudo and Richardson, 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" },
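To make the procedure concrete, the following is a minimal sketch of the token- and sentence-level score extraction described above, assuming the contextualized embeddings have already been extracted from the chosen layer. All function and variable names are ours and do not stem from the official bert-score implementation; the inversion of Section 3.3 is already applied to the returned token scores, and we apply the previous-score rule at the word level for brevity.

```python
import numpy as np

def xbertscore_token_scores(x_emb: np.ndarray, y_emb: np.ndarray):
    """Greedy matching of source (n x d) and translation (m x d) embeddings.

    Returns the sentence-level F_BERT score and the token-level importance
    vectors -x_max and -y_max (inverted as in Section 3.3).
    """
    # Cosine similarity matrix S (n x m).
    x_norm = x_emb / np.linalg.norm(x_emb, axis=1, keepdims=True)
    y_norm = y_emb / np.linalg.norm(y_emb, axis=1, keepdims=True)
    S = x_norm @ y_norm.T

    # Maximum similarity of each source token (recall side) and of each
    # translation token (precision side).
    x_max = S.max(axis=1)
    y_max = S.max(axis=0)

    r_bert = x_max.mean()
    p_bert = y_max.mean()
    f_bert = 2 * p_bert * r_bert / (p_bert + r_bert)
    return f_bert, -x_max, -y_max

def aggregate_to_words(token_scores, word_ids, n_words):
    """Average sub-word scores per word; word_ids maps each scored token to
    the index of the word it belongs to. A word without any scored token
    inherits the score of the previous word."""
    sums, counts = np.zeros(n_words), np.zeros(n_words)
    for score, w in zip(token_scores, word_ids):
        sums[w] += score
        counts[w] += 1
    out = sums / np.maximum(counts, 1)
    for i in range(1, n_words):
        if counts[i] == 0:
            out[i] = out[i - 1]
    return out
```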
{ "text": "To further improve the correlation to word-level error annotations, we ensemble word-level and sentence-level (F_{BERT}) scores by summing them across different models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "F^{ensemble} = sum_{i=1}^{z} F_{BERT}^{(i)} and x^{max,ensemble} = sum_{i=1}^{z} x^{max,(i)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "Here, F_{BERT}^{(i)} denotes the XBERTScore returned by using the i-th of z models to extract contextualized embeddings, and x^{max,ensemble} describes the element-wise sum of the respective x^{max} vectors. Again, x^{max,ensemble}_i is treated as the importance score for embedding x_i in x. y^{max,ensemble} is calculated analogously.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "In Section 4, the F-score is evaluated in terms of its Pearson correlation to sentence-level scores. x^{max,ensemble} is evaluated in terms of its correlation to word-level error annotations of the source, and y^{max,ensemble} is evaluated in terms of its correlation to word-level error annotations of the hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XBERTScore", "sec_num": "3.1" }, { "text": "Zhao et al. (2020) propose XMoverScore (XMS), a metric that matches n-grams of tokens based on the word mover's distance (WMD) (Kusner et al., 2015) . In the case of unigrams, they first compute a matrix C \u2208 IR^{n\u00d7m} with C_{ij} = ||x_i \u2212 y_j||_2 . Then, based on C, they minimize the WMD to determine the optimal alignment between the two sentences.", "cite_spans": [ { "start": 213, "end": 231, "text": "Zhao et al. (2020)", "ref_id": "BIBREF38" }, { "start": 340, "end": 361, "text": "(Kusner et al., 2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "XMoverScore", "sec_num": "3.2" }, { "text": "Using the same notation as for XBERTScore, we obtain token-level scores as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XMoverScore", "sec_num": "3.2" }, { "text": "x^{min} = (min C_{1,*}, ..., min C_{n,*}) and y^{min} = (min C_{*,1}, ..., min C_{*,m})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XMoverScore", "sec_num": "3.2" }, { "text": "As for XBERTScore, we obtain word-level scores by aligning the token-level scores with the input sentences. Again, word- and sentence-level scores can be ensembled via summation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XMoverScore", "sec_num": "3.2" },
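A matching sketch for the XMoverScore-style token scores under the same assumptions (the full metric additionally solves the WMD transport problem, which only affects the sentence-level score and is omitted here; the function name is ours):

```python
def xmover_token_scores(x_emb: np.ndarray, y_emb: np.ndarray):
    """Minimal Euclidean dissimilarity of each source/translation token."""
    # Pairwise distance matrix C (n x m) with C_ij = ||x_i - y_j||_2.
    C = np.linalg.norm(x_emb[:, None, :] - y_emb[None, :, :], axis=-1)

    # Minimal dissimilarity per token; the paper applies the same sign
    # inversion as for XBERTScore (Section 3.3) before evaluation.
    x_min = C.min(axis=1)
    y_min = C.min(axis=0)
    return -x_min, -y_min
```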
{ "text": "Zhao et al. (2020) further improve the sentence-level score by remapping the token embeddings and employing a target-side language model. The remapping assumes that tokens in the cross-lingual embedding space are not fully aligned between languages. They propose two techniques for mitigation. Linear cross-lingual projection (CLP) learns a projection matrix that projects tokens of the source language such that the distance to tokens of the target language is minimized. Universal language mismatch-direction (UMD) determines a global direction along which the embeddings of two languages are misaligned; the projection along this direction is then subtracted from each embedding. Both techniques use embeddings that were aligned using small parallel corpora.", "cite_spans": [ { "start": 182, "end": 200, "text": "Zhao et al. (2020)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "XMoverScore", "sec_num": "3.2" },
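As a rough illustration of the two remapping techniques, here is a sketch under simplifying assumptions: the CLP projection is fitted with ordinary least squares on a small set of aligned embedding pairs (the exact estimation procedure of Zhao et al. (2020) may differ), and the UMD step subtracts each embedding's projection onto a given misalignment direction; all names are ours.

```python
def clp_remap(src_pairs: np.ndarray, tgt_pairs: np.ndarray, src_emb: np.ndarray):
    """Project source embeddings towards the target space with a linear
    map W fitted on aligned pairs (src_pairs[i] <-> tgt_pairs[i])."""
    W, *_ = np.linalg.lstsq(src_pairs, tgt_pairs, rcond=None)
    return src_emb @ W

def umd_remap(emb: np.ndarray, direction: np.ndarray):
    """Subtract the projection onto the global misalignment direction."""
    u = direction / np.linalg.norm(direction)
    return emb - np.outer(emb @ u, u)
```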
{ "text": "Zhao et al. (2020) employ the target-side language model as an additional measure of the fluency of translations. In our experiments we do not use this model, as it might lower the degree to which the word-level scores explain the sentence-level scores.", "cite_spans": [ { "start": 0, "end": 18, "text": "Zhao et al. (2020)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "XMoverScore", "sec_num": "3.2" }, { "text": "In the Eval4NLP shared task, errors are considered important for the sentence-level score (Fomicheva et al., 2021a) , i.e. they should receive a higher feature importance than correct words.", "cite_spans": [ { "start": 92, "end": 117, "text": "(Fomicheva et al., 2021a)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Inversion", "sec_num": "3.3" }, { "text": "Hence, we invert the word-level scores and use \u2212x^{max} and \u2212y^{max} for XBERTScore (likewise \u2212x^{min} and \u2212y^{min} for XMoverScore).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inversion", "sec_num": "3.3" }, { "text": "We calculate word- and sentence-level scores for the dev sets 2 of the Eval4NLP shared task (Fomicheva et al., 2021a) , which are a subset of the MLQE-PE corpus by Fomicheva et al. (2020b,a) . The organizers provide 1000 samples each for the ro-en (Romanian-English) and et-en (Estonian-English) language pairs. For every sample they provide a source sentence, a translation, a sentence-level ground truth score and word-level ground truth labels for source and translation. At the word level, a word is labeled with 1 if it is erroneous and with 0 if it is correct. Zhao et al. (2019) show that the usage of language models fine-tuned for Natural Language Inference (NLI) improves the results of MoverScore. Therefore, we evaluate models fine-tuned for NLI for XBERTScore and XMoverScore. The results of the following configurations are reported (a usage sketch with the bert-score library follows the list):", "cite_spans": [ { "start": 91, "end": 116, "text": "(Fomicheva et al., 2021a)", "ref_id": "BIBREF7" }, { "start": 163, "end": 189, "text": "Fomicheva et al. (2020b,a)", "ref_id": null }, { "start": 560, "end": 578, "text": "Zhao et al. (2019)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4" }, { "text": "\u2022 XBERTScore(XLMR): XBERTScore using the pre-trained XLMR-large model (Conneau et al., 2020) .", "cite_spans": [ { "start": 70, "end": 92, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4" }, { "text": "\u2022 XBERTScore(XLMR_NLI1): XBERTScore using an XLMR-large model fine-tuned on XNLI (Conneau et al., 2018) from the Huggingface model hub 3 .", "cite_spans": [ { "start": 83, "end": 105, "text": "(Conneau et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4" }, { "text": "\u2022 XBERTScore(XLMR_NLI2): XBERTScore using another XLMR-large model fine-tuned on XNLI (Conneau et al., 2018) and ANLI (Nie et al., 2020 ) from the Huggingface model hub 4 .", "cite_spans": [ { "start": 88, "end": 110, "text": "(Conneau et al., 2018)", "ref_id": "BIBREF4" }, { "start": 120, "end": 137, "text": "(Nie et al., 2020", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4" }, { "text": "\u2022 XBERTScore(XLMR_Ensemble): An ensemble version of the three models above that uses the ensembling step described in Section 3.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4" }, { "text": "\u2022 XBERTScore(mBERT): XBERTScore using multilingual BERT (Devlin et al., 2018) to extract contextualized embeddings.", "cite_spans": [ { "start": 56, "end": 77, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4" }, { "text": "\u2022 XBERTScore(mBART): XBERTScore using mBART-large-50 many-to-many (Tang et al., 2020) .", "cite_spans": [ { "start": 66, "end": 85, "text": "(Tang et al., 2020)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4" }, { "text": "\u2022 XMoverScore(mBERT): We report the scores for XMS 5 with unigrams and CLP remapping mode. XMS is based on the 12th layer of multilingual BERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4" }, { "text": "\u2022 XMoverScore(mBERT)-KEEP: The original implementation of XMS by Zhao et al. (2020) drops embeddings of sub-words that are not the start of a word, as well as punctuation. This configuration keeps them during the computation.", "cite_spans": [ { "start": 65, "end": 83, "text": "Zhao et al. (2020)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4" }, { "text": "\u2022 XMoverScore(XLMR_Ensemble)-KEEP: XMS using the ensemble configuration described for XBERTScore above. Additionally, CLP and UMD mappings were trained on 30k sentences for each ensembled model and respective layer. The scores were summed across CLP and UMD mappings. Embeddings of punctuation and sub-words were kept.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4" }, { "text": "\u2022 XMoverScore+SHAP (Baseline): A baseline copied from the shared task (Fomicheva et al., 2021a) . The output score of XMS is explained with SHAP (Lundberg and Lee, 2017) .", "cite_spans": [ { "start": 70, "end": 95, "text": "(Fomicheva et al., 2021a)", "ref_id": "BIBREF7" }, { "start": 159, "end": 169, "text": "Lee, 2017)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4" },
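For concreteness, a single XBERTScore configuration of the kind listed above can be computed with the bert-score library roughly as follows. Reference-free usage simply passes the source sentences in place of references; the sentence pair below is a hypothetical ro-en example, and the layer choice matches the XLMR_NLI1 configuration described in the next paragraph.

```python
from bert_score import score

translations = ["The cat sat on the mat."]
sources = ["Pisica a stat pe covor."]  # hypothetical ro-en sample

# Reference-free (X)BERTScore: sources take the place of references.
P, R, F1 = score(
    translations,
    sources,
    model_type="joeddav/xlm-roberta-large-xnli",
    num_layers=16,
)
print(F1.mean().item())
```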
{ "text": "The result of (X)BERTScore by Zhang et al. (2020) depends on the choice of the layer from which embeddings are extracted. For the models already included in their library 6 , we use the layers they found to perform best in a reference-based setting. For XLMR-NLI1 we choose layer 16 and for XLMR-NLI2 we choose layer 17, which we determined to perform best on a small subset of et-en data from the MLQE-PE corpus. Appendix A lists hashes produced by the BERTScore library that summarize the configurations. For XMoverScore(XLMR_Ensemble)-KEEP we choose the same layers. The word-level scores are evaluated with Area Under the Curve (AUC), Recall at top K (RtopK) and Average Precision (AP) using the implementation by the organizers of the Eval4NLP shared task 7 .", "cite_spans": [ { "start": 30, "end": 49, "text": "Zhang et al. (2020)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4" }, { "text": "5 https://github.com/AIPHES/ACL20-Reference-Free-MT-Evaluation/blob/master/score_utils.py 6 https://github.com/Tiiiger/bert_score 7 https://github.com/eval4nlp/SharedTask2021/blob/main/scripts/evaluate.py", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setup", "sec_num": "4" },
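As an illustration, the three word-level measures as we understand them can be sketched as follows; the official evaluation script linked in footnote 7 is the authoritative implementation, and we assume here that K defaults to the number of annotated errors.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def evaluate_word_scores(scores, labels, k=None):
    """AUC, Average Precision and Recall at top K for one sentence.

    labels: 1 for erroneous words, 0 for correct words; the sentence is
    assumed to contain at least one error and one correct word.
    """
    scores, labels = np.asarray(scores), np.asarray(labels)
    auc = roc_auc_score(labels, scores)
    ap = average_precision_score(labels, scores)
    k = int(labels.sum()) if k is None else k
    top_k = np.argsort(-scores)[:k]
    rtopk = labels[top_k].sum() / labels.sum()
    return auc, ap, rtopk
```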
{ "text": "Tables 1 and 2 show the results for the different configurations and language pairs. Metrics based on XLMR-large achieve the highest correlations. This is expected, as XLMR-large uses 24 layers in contrast to mBERT and the mBART encoder with 12 layers. Also, the models fine-tuned for NLI perform better than the pre-trained XLMR model. Amongst all configurations, the XLMR ensembles perform best. Only for the AP and RtopK of the source in ro-en did a single NLI model perform better. XMoverScore(mBERT)-KEEP achieves higher word-level scores than XBERTScore(mBERT), which indicates the success of the applied remapping of embeddings. XMoverScore(mBERT) is worse at the word level, as the scores of the dropped punctuation are inferred from the previous token. Further, XMoverScore(mBERT) being worse than XBERTScore(mBERT) at the sentence level might be caused by XMS using the 12th layer instead of the 9th. XMoverScore(XLMR_Ensemble)-KEEP, which also uses remappings, achieves slightly higher word-level correlations than XBERTScore(mBERT) for et-en but not for ro-en. This indicates that the applied remapping techniques are less effective for XLMR-large. Another interesting observation is that the sentence-level scores of XBERTScore with mBERT and mBART are much lower than the others for et-en, suggesting a weakness of these embeddings when combined with greedy matching rather than XMS's word mover's distance.", "cite_spans": [], "ref_spans": [ { "start": 209, "end": 216, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In the test phase of the shared task we submitted XBERTScore(XLMR_Ensemble), which achieved its highest rank for the zero-shot language pair ru-de (Russian-German) and its lowest rank for de-zh (German-Chinese). For the latter, the sentence scores even had a negative correlation. The cause of this remains to be investigated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In this paper we have evaluated XBERTScore and XMoverScore against word-level error annotations in a reference-free setup. The best reported configurations are based on multiple XLMR models. For future work, it might be interesting to apply XLMR models that are remapped with novel cross-lingual alignment techniques. Also, the token probabilities of the target-side language model of XMS could be incorporated into the word-level scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "https://github.com/Gringham/WordAndSentScoresFromTokenMatching", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/eval4nlp/SharedTask2021/tree/main/data/dev 3 https://huggingface.co/joeddav/xlm-roberta-large-xnli 4 https://huggingface.co/vicgalle/xlm-roberta-large-xnli-anli", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/Tiiiger/bert_score", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The BERTScore library by Zhang et al. (2020) provides a function to generate hashes of the metric's configuration to allow better reproducibility 8 . Here we list the hashes of the configurations we used: \u2022 XBERTScore(XLMR): xlm-roberta-large_L17_no-idf_version=0.3.10(hug_trans=4.4.0) \u2022 XBERTScore(XLMR_NLI1): joeddav/xlm-roberta-large-xnli_L16_no-idf_version=0.3.10(hug_trans=4.4.0) \u2022 XBERTScore(XLMR_NLI2): vicgalle/xlm-roberta-large-xnli-anli_L17_no-idf_version=0.3.10(hug_trans=4.4.0) \u2022 XBERTScore(XLMR_Ensemble): xlm-roberta-large_L17_no-idf_version=0.3.10(hug_trans=4.4.0) joeddav/xlm-roberta-large-xnli_L16_no-idf_version=0.3.10(hug_trans=4.4.0) vicgalle/xlm-roberta-large-xnli-anli_L17_no-idf_version=0.3.10(hug_trans=4.4.0) \u2022 XBERTScore(mBERT): bert-base-multilingual-cased_L9_no-idf_version=0.3.10(hug_trans=4.4.0) \u2022 XBERTScore(mBART): facebook/mbart-large-50-many-to-many-mmt_L12_no-idf_version=0.3.10(hug_trans=4.4.0)", "cite_spans": [ { "start": 25, "end": 44, "text": "Zhang et al. (2020)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "A BERTScore Hashes", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Benchmarking and survey of explanation methods for black box models", "authors": [ { "first": "Francesco", "middle": [], "last": "Bodria", "suffix": "" }, { "first": "Fosca", "middle": [], "last": "Giannotti", "suffix": "" }, { "first": "Riccardo", "middle": [], "last": "Guidotti", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Naretto", "suffix": "" }, { "first": "Dino", "middle": [], "last": "Pedreschi", "suffix": "" }, { "first": "Salvatore", "middle": [], "last": "Rinzivillo", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, and Salvatore Rinzivillo. 2021. Benchmarking and survey of explanation methods for black box models.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Machine learning interpretability: A survey on methods and metrics", "authors": [ { "first": "Diogo", "middle": [ "V" ], "last": "Carvalho", "suffix": "" }, { "first": "Eduardo", "middle": [ "M" ], "last": "Pereira", "suffix": "" }, { "first": "Jaime", "middle": [ "S" ], "last": "Cardoso", "suffix": "" } ], "year": 2019, "venue": "Electronics", "volume": "8", "issue": "8", "pages": "", "other_ids": { "DOI": [ "10.3390/electronics8080832" ] }, "num": null, "urls": [], "raw_text": "Diogo V. Carvalho, Eduardo M. Pereira, and Jaime S. Cardoso. 2019. Machine learning interpretability: A survey on methods and metrics. Electronics, 8(8).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "XNLI: Evaluating cross-lingual sentence representations", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2475--2485", "other_ids": { "DOI": [ "10.18653/v1/D18-1269" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "How do decisions emerge across layers in neural models? Interpretation with differentiable masking", "authors": [ { "first": "Nicola", "middle": [], "last": "De Cao", "suffix": "" }, { "first": "Michael", "middle": [ "Sejr" ], "last": "Schlichtkrull", "suffix": "" }, { "first": "Wilker", "middle": [], "last": "Aziz", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "3243--3255", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.262" ] }, "num": null, "urls": [], "raw_text": "Nicola De Cao, Michael Sejr Schlichtkrull, Wilker Aziz, and Ivan Titov. 2020. How do decisions emerge across layers in neural models? Interpretation with differentiable masking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3243-3255, Online. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. 
CoRR, abs/1810.04805.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Eval4NLP shared task on explainable quality estimation: Overview and results", "authors": [ { "first": "Marina", "middle": [], "last": "Fomicheva", "suffix": "" }, { "first": "Piyawat", "middle": [], "last": "Lertvittayakumjorn", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Fomicheva, Piyawat Lertvittayakumjorn, Wei Zhao, Steffen Eger, and Yang Gao. 2021a. The Eval4NLP shared task on explainable quality estimation: Overview and results. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Translation error detection as rationale extraction", "authors": [ { "first": "Marina", "middle": [], "last": "Fomicheva", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Nikolaos", "middle": [], "last": "Aletras", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Fomicheva, Lucia Specia, and Nikolaos Aletras. 2021b. Translation error detection as rationale extraction.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "MLQE-PE: A multilingual quality estimation and post-editing dataset", "authors": [ { "first": "Marina", "middle": [], "last": "Fomicheva", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Erick", "middle": [], "last": "Fonseca", "suffix": "" }, { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Blain", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Nina", "middle": [], "last": "Lopatina", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Martins", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.04480" ] }, "num": null, "urls": [], "raw_text": "Marina Fomicheva, Shuo Sun, Erick Fonseca, Fr\u00e9d\u00e9ric Blain, Vishrav Chaudhary, Francisco Guzm\u00e1n, Nina Lopatina, Lucia Specia, and Andr\u00e9 F. T. Martins. 2020a. MLQE-PE: A multilingual quality estimation and post-editing dataset. arXiv preprint arXiv:2010.04480.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "
Unsupervised quality estimation for neural machine translation", "authors": [ { "first": "Marina", "middle": [], "last": "Fomicheva", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Yankovskaya", "suffix": "" }, { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Blain", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "Nikolaos", "middle": [], "last": "Aletras", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "539--555", "other_ids": { "DOI": [ "10.1162/tacl_a_00330" ] }, "num": null, "urls": [], "raw_text": "Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Fr\u00e9d\u00e9ric Blain, Francisco Guzm\u00e1n, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020b. Unsupervised quality estimation for neural machine translation. Transactions of the Association for Computational Linguistics, 8:539-555.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Towards a deep and unified understanding of deep neural models in NLP", "authors": [ { "first": "Chaoyu", "middle": [], "last": "Guan", "suffix": "" }, { "first": "Xiting", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Quanshi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Runjin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Di", "middle": [], "last": "He", "suffix": "" }, { "first": "Xing", "middle": [], "last": "Xie", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 36th International Conference on Machine Learning", "volume": "97", "issue": "", "pages": "2454--2463", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chaoyu Guan, Xiting Wang, Quanshi Zhang, Runjin Chen, Di He, and Xing Xie. 2019. Towards a deep and unified understanding of deep neural models in NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2454-2463. PMLR.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Interpretation of NLP models through input marginalization", "authors": [ { "first": "Siwon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Jihun", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Eunji", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Sungroh", "middle": [], "last": "Yoon", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "3154--3167", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.255" ] }, "num": null, "urls": [], "raw_text": "Siwon Kim, Jihun Yi, Eunji Kim, and Sungroh Yoon. 2020. Interpretation of NLP models through input marginalization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3154-3167, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "John", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "66--71", "other_ids": { "DOI": [ "10.18653/v1/D18-2012" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "From word embeddings to document distances", "authors": [ { "first": "Matt", "middle": [], "last": "Kusner", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Kolkin", "suffix": "" }, { "first": "Kilian", "middle": [], "last": "Weinberger", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32nd International Conference on Machine Learning", "volume": "37", "issue": "", "pages": "957--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 957-966, Lille, France. PMLR.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The significance of recall in automatic metrics for MT evaluation", "authors": [ { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Sagae", "suffix": "" }, { "first": "Shyamsundar", "middle": [], "last": "Jayaraman", "suffix": "" } ], "year": 2004, "venue": "Machine Translation: From Real Users to Research", "volume": "", "issue": "", "pages": "134--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alon Lavie, Kenji Sagae, and Shyamsundar Jayaraman. 2004. The significance of recall in automatic metrics for MT evaluation. In Machine Translation: From Real Users to Research, pages 134-143, Berlin, Heidelberg. Springer Berlin Heidelberg.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Two-phase cross-lingual language model fine-tuning for machine translation quality estimation", "authors": [ { "first": "Dongjun", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "1024--1028", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dongjun Lee. 2020. Two-phase cross-lingual language model fine-tuning for machine translation quality estimation. In Proceedings of the Fifth Conference on Machine Translation, pages 1024-1028, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Explaining machine translation metrics - application and assessment of explainability techniques in the domain of machine translation evaluation", "authors": [ { "first": "Christoph", "middle": [ "Wolfgang" ], "last": "Leiter", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christoph Wolfgang Leiter. 2021. Explaining machine translation metrics - application and assessment of explainability techniques in the domain of machine translation evaluation. Unpublished thesis. TU Darmstadt.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Explanation-based human debugging of NLP models: A survey", "authors": [ { "first": "Piyawat", "middle": [], "last": "Lertvittayakumjorn", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Toni", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.15135" ] }, "num": null, "urls": [], "raw_text": "Piyawat Lertvittayakumjorn and Francesca Toni. 2021. Explanation-based human debugging of NLP models: A survey. arXiv preprint arXiv:2104.15135.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Explainable AI: A review of machine learning interpretability methods", "authors": [ { "first": "Pantelis", "middle": [], "last": "Linardatos", "suffix": "" }, { "first": "Vasilis", "middle": [], "last": "Papastefanopoulos", "suffix": "" }, { "first": "Sotiris", "middle": [], "last": "Kotsiantis", "suffix": "" } ], "year": 2021, "venue": "Entropy", "volume": "23", "issue": "1", "pages": "", "other_ids": { "DOI": [ "10.3390/e23010018" ] }, "num": null, "urls": [], "raw_text": "Pantelis Linardatos, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. 2021. Explainable AI: A review of machine learning interpretability methods. Entropy, 23(1).", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A unified approach to interpreting model predictions", "authors": [ { "first": "Scott", "middle": [ "M" ], "last": "Lundberg", "suffix": "" }, { "first": "Su-In", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Results of the WMT20 metrics shared task", "authors": [ { "first": "Nitika", "middle": [], "last": "Mathur", "suffix": "" }, { "first": "Johnny", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Freitag", "suffix": "" }, { "first": "Qingsong", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "688--725", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitika Mathur, Johnny Wei, Markus Freitag, Qingsong Ma, and Ond\u0159ej Bojar. 2020. Results of the WMT20 metrics shared task. In Proceedings of the Fifth Conference on Machine Translation, pages 688-725, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Adversarial NLI: A new benchmark for natural language understanding", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4885--4901", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.441" ] }, "num": null, "urls": [], "raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "TransQuest at WMT2020: Sentence-level direct assessment", "authors": [ { "first": "Tharindu", "middle": [], "last": "Ranasinghe", "suffix": "" }, { "first": "Constantin", "middle": [], "last": "Orasan", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Mitkov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "1049--1055", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020a. TransQuest at WMT2020: Sentence-level direct assessment. In Proceedings of the Fifth Conference on Machine Translation, pages 1049-1055, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "TransQuest: Translation quality estimation with cross-lingual transformers", "authors": [ { "first": "Tharindu", "middle": [], "last": "Ranasinghe", "suffix": "" }, { "first": "Constantin", "middle": [], "last": "Orasan", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Mitkov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "5070--5081", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.445" ] }, "num": null, "urls": [], "raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020b. TransQuest: Translation quality estimation with cross-lingual transformers. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5070-5081, Barcelona, Spain (Online). International Committee on Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "An exploratory analysis of multilingual word-level quality estimation with cross-lingual transformers", "authors": [ { "first": "Tharindu", "middle": [], "last": "Ranasinghe", "suffix": "" }, { "first": "Constantin", "middle": [], "last": "Orasan", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Mitkov", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "434--440", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-short.55" ] }, "num": null, "urls": [], "raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2021. An exploratory analysis of multilingual word-level quality estimation with cross-lingual transformers. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 434-440, Online. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "COMET: A neural framework for MT evaluation", "authors": [ { "first": "Ricardo", "middle": [], "last": "Rei", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "Ana", "middle": [ "C" ], "last": "Farinha", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2685--2702", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.213" ] }, "num": null, "urls": [], "raw_text": "Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020a. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685-2702, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Unbabel's participation in the WMT20 metrics shared task", "authors": [ { "first": "Ricardo", "middle": [], "last": "Rei", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "Ana", "middle": [ "C" ], "last": "Farinha", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "911--920", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020b. Unbabel's participation in the WMT20 metrics shared task. In Proceedings of the Fifth Con- ference on Machine Translation, pages 911-920, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "why should i trust you?\": Explaining the predictions of any classifier", "authors": [ { "first": "Sameer", "middle": [], "last": "Marco Tulio Ribeiro", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Singh", "suffix": "" }, { "first": "", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Kdd '16", "volume": "", "issue": "", "pages": "1135--1144", "other_ids": { "DOI": [ "10.1145/2939672.2939778" ] }, "num": null, "urls": [], "raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"why should i trust you?\": Explain- ing the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Kdd '16, page 1135\u00d4\u00c7\u00f41144, New York, NY, USA. Associa- tion for Computing Machinery.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "BLEURT: Learning robust metrics for text generation", "authors": [ { "first": "Thibault", "middle": [], "last": "Sellam", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7881--7892", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.704" ] }, "num": null, "urls": [], "raw_text": "Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text genera- tion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881-7892, Online. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "SentSim: Crosslingual semantic evaluation of machine translation", "authors": [ { "first": "Yurun", "middle": [], "last": "Song", "suffix": "" }, { "first": "Junchen", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "3143--3156", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.252" ] }, "num": null, "urls": [], "raw_text": "Yurun Song, Junchen Zhao, and Lucia Specia. 2021. SentSim: Crosslingual semantic evaluation of ma- chine translation. 
In Proceedings of the 2021 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 3143-3156, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Findings of the WMT 2020 shared task on quality estimation", "authors": [ { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Blain", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Fomicheva", "suffix": "" }, { "first": "Erick", "middle": [], "last": "Fonseca", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Martins", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "743--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Marina Fomicheva, Er- ick Fonseca, Vishrav Chaudhary, Francisco Guzm\u00e1n, and Andr\u00e9 F. T. Martins. 2020. Findings of the WMT 2020 shared task on quality estimation. In Proceed- ings of the Fifth Conference on Machine Translation, pages 743-764, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Axiomatic attribution for deep networks", "authors": [ { "first": "Mukund", "middle": [], "last": "Sundararajan", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Taly", "suffix": "" }, { "first": "Qiqi", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "3319--3328", "other_ids": { "DOI": [ "https://dl.acm.org/doi/10.5555/3305890.3306024" ] }, "num": null, "urls": [], "raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceed- ings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, pages 3319-3328. JMLR.org.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Multilingual translation with extensible multilingual pretraining and finetuning", "authors": [ { "first": "Yuqing", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Chau", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Xian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peng-Jen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Na- man Goyal, Vishrav Chaudhary, Jiatao Gu, and An- gela Fan. 2020. Multilingual translation with exten- sible multilingual pretraining and finetuning.
CoRR, abs/2008.00401.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Bartscore: Evaluating generated text as text generation", "authors": [ { "first": "Weizhe", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. Bartscore: Evaluating generated text as text genera- tion.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Bertscore: Evaluating text generation with bert", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- uating text generation with bert. In International Conference on Learning Representations.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "On the limitations of cross-lingual encoders as exposed by reference-free machine translation evaluation", "authors": [ { "first": "Wei", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Maxime", "middle": [], "last": "Peyrard", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Robert", "middle": [], "last": "West", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1656--1671", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Zhao, Goran Glava\u0161, Maxime Peyrard, Yang Gao, Robert West, and Steffen Eger. 2020. On the lim- itations of cross-lingual encoders as exposed by reference-free machine translation evaluation.
In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1656- 1671, Online. Association for Computational Linguis- tics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance", "authors": [ { "first": "Wei", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Maxime", "middle": [], "last": "Peyrard", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Christian", "middle": [ "M" ], "last": "Meyer", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "563--578", "other_ids": { "DOI": [ "10.18653/v1/D19-1053" ] }, "num": null, "urls": [], "raw_text": "Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Chris- tian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized em- beddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 563-578, Hong Kong, China. Association for Computational Lin- guistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Zero-shot translation quality estimation with explicit cross-lingual patterns", "authors": [ { "first": "Lei", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Koichi", "middle": [], "last": "Takeda", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "1068--1074", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lei Zhou, Liang Ding, and Koichi Takeda. 2020. Zero- shot translation quality estimation with explicit cross- lingual patterns. In Proceedings of the Fifth Con- ference on Machine Translation, pages 1068-1074, Online. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF1": { "text": "Results on the et-en dev set of the shared task. The word-level measures are Area Under the Curve (AUC), Average Precision (AP) and Recall at top-K (R@topK). The sentence-level correlation with human judgements is reported as Pearson's r.", "type_str": "table", "num": null, "html": null, "content": "
[Table 1 body lost in extraction; only the column headers "Hypothesis" and "Source" are recoverable.]
" }, "TABREF2": { "text": "", "type_str": "table", "num": null, "html": null, "content": "" } } } }