|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:35:29.418148Z" |
|
}, |
|
"title": "Machine Translation Believability", |
|
"authors": [ |
|
{ |
|
"first": "Marianna", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Martindale", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Duh", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": { |
|
"settlement": "Baltimore", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Marine", |
|
"middle": [], |
|
"last": "Carpuat", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Successful Machine Translation (MT) deployment requires understanding not only the intrinsic qualities of MT output, such as fluency and adequacy, but also user perceptions. Users who do not understand the source language respond to MT output based on their perception of the likelihood that the meaning of the MT output matches the meaning of the source text. We refer to this as believability. Output that is not believable may be off-putting to users, but believable MT output with incorrect meaning may mislead them. In this work, we study the relationship of believability to fluency and adequacy by applying traditional MT direct assessment protocols to annotate all three features on the output of neural MT systems. Quantitative analysis of these annotations shows that believability is closely related to but distinct from fluency, and initial qualitative analysis suggests that semantic features may account for the difference.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Successful Machine Translation (MT) deployment requires understanding not only the intrinsic qualities of MT output, such as fluency and adequacy, but also user perceptions. Users who do not understand the source language respond to MT output based on their perception of the likelihood that the meaning of the MT output matches the meaning of the source text. We refer to this as believability. Output that is not believable may be off-putting to users, but believable MT output with incorrect meaning may mislead them. In this work, we study the relationship of believability to fluency and adequacy by applying traditional MT direct assessment protocols to annotate all three features on the output of neural MT systems. Quantitative analysis of these annotations shows that believability is closely related to but distinct from fluency, and initial qualitative analysis suggests that semantic features may account for the difference.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Past work on evaluating Machine Translation (MT) has focused on the intrinsic quality of the translation product without taking into account how translations are perceived by their users. Yet, some translation errors are more obvious than others, and have different consequences depending on what the translations are used for.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we take a user-centered view of MT evaluation, exploring one aspect of users' perception of MT: believability of the output, defined as a monolingual user's perception of the likelihood that the meaning of the MT output matches the meaning of the input, without understanding the source. Assessing the degree to which MT is believable acknowledges that users play an active role in interpreting its output, informed by their linguistic competence, their common sense reasoning abilities, and their knowledge of the world. What we learn from assessing believability can complement traditional evaluation methods to inform the deployment and even development of MT systems, particularly for gisting and communication use cases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We first define believability of MT and contextualize it within prior work on credibility and MT evaluation. We apply MT direct assessment (DA) protocols to obtain human judgments of believability, annotating the output of neural machine translation (NMT) systems for three challenging language pairs (Arabic-, Farsi-, and Korean-to-English) with varying translation quality. These annotations show that believability is closely related to, but distinct from fluency. Preliminary qualitative analysis suggests that in addition to fluency features, believability is also influenced by semantic features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We define believability as a user's perception of the likelihood that the meaning of a given MT output matches the meaning of the input, without understanding the input. Whether the user accepts the output unquestioningly or finds it unbelievable, their judgment will affect how they act on it, regardless of the true accuracy of the translation. For example, a Facebook user might be dubious of a translation from Chinese with the phrase \"blowing a little more cow\" and ask the author for clarification and learn that it is a literal translation of an idiom, \"\u5439\u725b\" (meaning to brag). Users may take more consequential action, such as the Israeli police officers who chose not to consult an Arabic speaker before arresting a man based on a believable mistranslation of his \"good morning\" Facebook post as \"attack them\" (Berger, 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 818, |
|
"end": 832, |
|
"text": "(Berger, 2017)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To illustrate how believability can be independent of adequacy, Table 1 shows examples of different levels of believability for translations at different levels of adequacy, based on our annotations. The translations on the right (More Adequate) convey key information: a named entity (sputnik), its Less Adequate More Adequate More Believable \"putnik\" was an interactive film.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 71, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\"Sputnik\" was in the city center, the negative. It was not affected. Less Believable spaghetti, the nigerian was in the middle of the city, he didn't touch it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "sputnik was downtown, didn't look, never touched. Table 1 : Machine Translations of human translations of a line from a TED talk discussing the loss of film negatives in a fire (Hoffman, 2008) . One film \"Sputnik\" was spared from the fire because it was not in the building. Original text: \"Sputnik\" was downtown, the negative. It wasn't touched.", |
|
"cite_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 192, |
|
"text": "(Hoffman, 2008)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 57, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "location (downtown), and the fact that it was not affected, but a user might not accept the information in the bottom-right translation because it is not believable. The less adequate translations (left) are missing important information and also include incorrect information. The bottom-left translation is not believable so a monolingual user would not be misled by it. However, the more believable top-left translation might mislead a monolingual user. Because these judgments are based on perception, they may be more subjective than traditional MT DA features. We control for some factors that may affect believability (Section 3), resulting in annotations that are similarly reliable to the DA features (Section 4).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Although believability is an unexplored aspect of MT, there is prior work outside of MT on the general concept of credibility. Common elements of credibility include source, media, and message credibility (Rieh and Danielson, 2007) . For MT, we can think of the MT provider as the source, the interface through which the MT is viewed as the medium, and the output itself as the message or content. All of these aspects likely affect the credibility of deployed MT systems, but our investigation of MT believability is focused on the content (MT output). Some intrinsic content features addressed in the credibility literature that may affect MT believability include reasonableness (Liu, 2004; Kim and Oh, 2009; Kim, 2010; John et al., 2011) and grammatical errors (Fogg et al., 2001; Everard and Galletta, 2005; Metzger et al., 2010; Chesney and Su, 2010; John et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 231, |
|
"text": "(Rieh and Danielson, 2007)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 682, |
|
"end": 693, |
|
"text": "(Liu, 2004;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 694, |
|
"end": 711, |
|
"text": "Kim and Oh, 2009;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 712, |
|
"end": 722, |
|
"text": "Kim, 2010;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 723, |
|
"end": 741, |
|
"text": "John et al., 2011)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 765, |
|
"end": 784, |
|
"text": "(Fogg et al., 2001;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 785, |
|
"end": 812, |
|
"text": "Everard and Galletta, 2005;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 813, |
|
"end": 834, |
|
"text": "Metzger et al., 2010;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 835, |
|
"end": 856, |
|
"text": "Chesney and Su, 2010;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 857, |
|
"end": 875, |
|
"text": "John et al., 2011)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Reasonableness is a semantic feature encompassing elements of plausibility, logic, and internal consistency. These elements are related to previously studied concepts in the Computational Linguistics literature: semantic plausibility, commonsense reasoning, and discourse coherence. Semantic plau-sibility can be thought of as, \"whether in an ordinary real-life situation (not \"fairy-tale\" circumstances) the sentence could be reasonably uttered\" (Kruszewski et al., 2016) . If the source text is expected to reflect \"ordinary real-life,\" the output should be plausible to be believable. MT output may also be unbelievable if it violates commonsense reasoning, a challenging element of Natural Language Understanding (Mostafazadeh et al., 2016) . Lack of discourse coherence might likewise signal unbelievable translations. For document generation, improving the consistency of generated documents makes it harder for human subjects to distinguish automatically generated text from real text (Karuna et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 447, |
|
"end": 472, |
|
"text": "(Kruszewski et al., 2016)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 717, |
|
"end": 744, |
|
"text": "(Mostafazadeh et al., 2016)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 992, |
|
"end": 1013, |
|
"text": "(Karuna et al., 2018)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Grammatical errors are related to fluency, a traditional MT quality evaluation feature. Fluency has been defined as a judgment of \"whether the translation reads like good English...without knowing the accuracy of the content,\" and is typically combined with adequacy, an assessment of \"the degree to which the information in a professional translation can be found in an MT (or control) output of the same text\" (White et al., 1994) . A user who cannot understand the source cannot judge adequacy, but may use expectations based on features like fluency and reasonableness to guess. Believability could thus be seen as predicted adequacy via a human cognitive process with inputs from surface features of the output such as fluency and semantic cues from context. Two other Computational Linguistics concepts that relate to both reasonableness and grammatical errors may affect believability: acceptability and comprehensibility. In empirical linguistics, acceptability judgments measure users' linguistic competence (Sch\u00fctze, 2016) . While acceptability is primarily used to observe grammatical knowledge, judgments are not limited to grammaticality in practice: \"semantic plausibility, various types of processing difficulties, and so on, can individually or jointly cause grammaticality and acceptability to come apart\" (Lau et al., 2017) . These breakdowns can lead to issues with comprehensibility. Popovi\u0107 (2020) cites comprehensibility as a key factor in misleadingness of MT output: if a user cannot understand the text, they cannot be misled by it. Similarly, if they cannot understand it, the user is unlikely to believe that the translation is correct.", |
|
"cite_spans": [ |
|
{ |
|
"start": 412, |
|
"end": 432, |
|
"text": "(White et al., 1994)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 1017, |
|
"end": 1032, |
|
"text": "(Sch\u00fctze, 2016)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 1323, |
|
"end": 1341, |
|
"text": "(Lau et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To understand the relationships between believability and traditional MT quality criteria (fluency and adequacy), we hired professionals to annotate MT output for these characteristics in tasks based on the fluency and adequacy DA methods of Graham et al. (2013) and Bojar et al. (2016) . The annotated data sets are available at: https://github.com/ mjmartindale/mt_believability .", |
|
"cite_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 262, |
|
"text": "Graham et al. (2013)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 286, |
|
"text": "Bojar et al. (2016)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotating Believability", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Annotators Our annotators were salaried translators with proficiency levels of at least Advanced on the ACTFL scale (ACTFL, 2012) rather than MT researchers or crowd workers as in WMT (Barrault et al., 2019) . Because they were not paid per item, they were willing to spend significant time on each item, averaging 15 items in 30 minutes. We believe this reflects more attention to detail as indicated by the correlation between annotators (see Section 4). We note that factors such as foreign language proficiency may affect believability judgments. Further work with a wider variety of annotators is needed to identify and quantify those effects.", |
|
"cite_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 207, |
|
"text": "(Barrault et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotating Believability", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We followed segment-level DA scoring best practices established by WMT (Barrault et al., 2019) . The fluency and adequacy questions were taken directly from WMT16 (Bojar et al., 2016) . The believability question uses our definition of believability with an introductory phrase to assure the annotator that we understand that it is not possible to truly evaluate the meaning without the source: \"Even without having seen the source text, I believe the meaning of this translation is likely to match the meaning of the original.\" Annotations were performed using the Turkle 1 annotation platform. Screenshots of the annotation interface are provided in Appendix B.", |
|
"cite_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 94, |
|
"text": "(Barrault et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 163, |
|
"end": 183, |
|
"text": "(Bojar et al., 2016)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Long documents were broken up into salient chunks and segments were annotated in their original order to provide discourse context, as in 1 https://github.com/hltcoe/turkle WMT19 \"Segment Rating + Document Context\" (Barrault et al., 2019) . For each chunk, annotators first scored fluency and believability based only on the MT output. They then scored the same segments for adequacy given both source and output.", |
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 238, |
|
"text": "(Barrault et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Annotations For each segment, we calculate a z-score and a label for each feature. We calculate scores following Bojar et al. (2018) . Each annotator's raw scores are converted to z-scores based on their own mean and standard deviation, and the z-scores for each segment are averaged across annotators. Segments with positive z-scores are labeled TRUE and negative z-scores are labeled FALSE.", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 132, |
|
"text": "Bojar et al. (2018)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": null |
|
}, |
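{

"text": "The scoring procedure above can be made concrete with a minimal Python sketch (our illustration, not the authors' released code); the column names 'annotator', 'segment', and 'raw_score' are assumptions for this example:\n\nimport pandas as pd\n\ndef zscore_labels(df: pd.DataFrame) -> pd.DataFrame:\n    # Standardize each annotator's raw DA scores by that annotator's\n    # own mean and standard deviation.\n    df['z'] = df.groupby('annotator')['raw_score'].transform(\n        lambda s: (s - s.mean()) / s.std())\n    # Average the z-scores across annotators for each segment.\n    seg = df.groupby('segment', as_index=False)['z'].mean()\n    # Positive average z-score means the TRUE label, negative means FALSE.\n    seg['label'] = seg['z'] > 0\n    return seg",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Tasks",

"sec_num": null

},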
|
{ |
|
"text": "Test Data We chose a test set that is comparable across three typologically different languages with different amounts of training data. Our test data comes from The Multi-Target TED Talks Task (MTTT)-a collection of bitexts across 20 languages (Duh, 2018) . The test set is fully sentence parallel with original talk transcripts as the English and human translations for the other languages. We use the non-English translations in MTTT as \"source\" and machine translate into English. In the test set, there are 29 talks totalling 1,982 segments, however, we exclude one talk (\"Nellie McKay sings 'Clonie\"') that is too poetic for MT. The final set is 1,976 segments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 245, |
|
"end": 256, |
|
"text": "(Duh, 2018)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "MT Systems Because our goal is to examine segments annotated for believability, fluency, and adequacy judgments rather than to compare systems, we need MT that will produce outputs across a range of quality. Output that is inadequate but believable is of particular interest, so we rely on estimates of the distribution of \"fluently inadequate\" translations on MTTT from Martindale et al. 2019to inform our choice of models. They estimated that fluently inadequate translations were most frequent in the \"general\" NMT models, trained on out-of-domain data. We use their Arabic, Farsi, and Korean \"general\" models to capture the range of training data sizes and output quality we believe will provide interesting examples for our analysis. The training data is 49M, 6.2M, and 1.4M segments in Arabic, Farsi, and Korean, respectively. The systems are built in Sockeye (Hieber et al., 2017) using the 'SockeyeNMT rm1' settings from the MTTT leaderboard 2 . The resulting systems achieved BLEU (Papineni et al., 2002) 4 Quantitative Analysis Annotation Statistics 63 translators participated in the annotation process. Korean-to-English and Arabic-to-English had the most annotators (26 and 27) and highest number of annotators per segment (10 for Korean and at least 7 for Arabic). There were three annotators whose annotations were deemed unreliable due to low correlation with the mean (< 0.5). Only 10 annotators were available for Farsi, resulting in fewer annotators per segment (median: 4) but more segments per annotator (median: 802), making the z-score process more reliable. After excluding the questionable annotators, we see strong correlation of individual annotator scores with the mean as shown in Table 2 . We see similar average correlation with the mean between fluency and believability and small variance across features. This suggests that among our annotators believability is no more subjective than fluency.", |
|
"cite_spans": [ |
|
{ |
|
"start": 866, |
|
"end": 887, |
|
"text": "(Hieber et al., 2017)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 990, |
|
"end": 1013, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1710, |
|
"end": 1717, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Label Distribution Although the distribution of labels is specific to this set of output, it provides context for the other results. The first three rows of Table 4 show the percent of segments with each label. We see that the percent positive examples for each label roughly relates to the system BLEU score, with Arabic having the highest and Korean the lowest. Table 3 shows the Pearson correlations between the scores for fluency (FL), believability (BL), and adequacy (AD). The BL-AD relationship is important because inadequate believable translations may mislead monolingual users. The BL-AD correlation is higher than the FL-AD correlation across all languages. This may reflect the influence of context: adequate segments may fit the context well enough to be believable even if not fluent. The same trend is reflected in the fourth row of Table 4 , BL+/AD-. Most inadequate translations are not believable, but 19-25% are potentially misleading. Arabic has the highest BL+/AD-. The larger training data may improve both translation and generation, improving overall quality but enabling more believable errors. However, the lower quality Korean also has a higher percent potentially misleading than Farsi. This could mean that all results are idiosyncratic to this data or perhaps the relationship is bimodal. As expected, FL-BL has the strongest correlation. This indicates that the two features are closely linked, but they are not identical. we need a human body. we should eat it. there was a fire on 9 days ago. three days later, this disappeared, and a week later, there was no complaints. hoping to attract all peoples' minds and be the first to overcome space.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 164, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 364, |
|
"end": 371, |
|
"text": "Table 3", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 849, |
|
"end": 856, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": null |
|
}, |
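{

"text": "As an illustration of how the correlations in Table 3 can be computed (a sketch under our assumptions, not the authors' released code), Pearson's r between the per-segment averaged z-scores for each pair of features can be obtained with scipy:\n\nfrom scipy.stats import pearsonr\n\ndef feature_correlations(fl, bl, ad):\n    # fl, bl, ad: parallel lists of per-segment averaged z-scores.\n    # pearsonr returns (r, p-value); we keep only r, as in Table 3.\n    return {\n        'FL-BL': pearsonr(fl, bl)[0],\n        'BL-AD': pearsonr(bl, ad)[0],\n        'FL-AD': pearsonr(fl, ad)[0],\n    }",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Feature Relationships",

"sec_num": null

},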
|
{ |
|
"text": "you can see how hard it is to carry kumin ververbert with a bible in 1455. Table 6 : Example output for different segments for each combination of fluency, believability, and adequacy.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 82, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Relationships", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "tions that are (un)believable. If one were to attempt to identify potentially misleading translations using fluency as a proxy for believability, as in Martindale et al. (2019), 6-8% of translations that are fluent but not believable would be incorrectly labeled as misleading, while nearly 20% of segments from the Arabic system that are believable but not fluent would be missed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Relationships", |
|
"sec_num": null |
|
}, |
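{

"text": "The proxy-error estimate above can be made concrete with a small sketch (a hypothetical helper, assuming the boolean per-segment labels from Section 3): using fluency in place of believability falsely flags FL+/BL- segments as misleading candidates and misses BL+/FL- ones.\n\ndef proxy_error_rates(labels):\n    # labels: list of (fluent, believable) boolean pairs, one per segment.\n    fluent = [b for f, b in labels if f]\n    disfluent = [b for f, b in labels if not f]\n    # Share of fluent segments that are not believable (falsely flagged).\n    falsely_flagged = sum(not b for b in fluent) / len(fluent)\n    # Share of disfluent segments that are believable (missed).\n    missed = sum(b for b in disfluent) / len(disfluent)\n    return falsely_flagged, missed",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Feature Relationships",

"sec_num": null

},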
|
{ |
|
"text": "Based on informal examination of a random sample of segments, we find that those labeled as unbelievable often fail semantically, with strange phrases (e.g., \"kidney steel\", \"iron code\"), illogical clauses (e.g., \"the only natural thing is that my son is the vending machine\"), or unlikely argument structure (e.g., \"to prove that it seems impossible\", \"...a state of treason for a raven, which explains that he's cute\"). Unbelievable translations may also be grammatical but unintelligible (e.g., \"if you want a long time, I'm actually doing something about it.\"). By contrast, segments labeled as disfluent may include grammatical errors and/or awkward, non-idiomatic phrases such as \"he set me a date\" or \"in a direct time\". These observations support our intuition that believability is more influenced by semantic features than fluency is, but further analysis is needed. Additional examples for each combination of fluency, believability, and adequacy are shown in Table 6 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 971, |
|
"end": 978, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Qualitative Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "This work used traditional NLP annotation methods to measure users' perceptions of believability of MT output. These methods allow us to identify broad relationships between believability and traditional MT quality metrics, fluency and adequacy, showing that believability is strongly corre-lated with fluency and somewhat correlated with adequacy. Preliminary qualitative analysis of examples where believability and fluency judgments disagreed suggests that semantic features can overwhelm grammatical features in believability judgments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "A full qualitative analysis of the believabilityannotated examples would suggest features that may have influenced annotator's judgments and could indicate approaches that may be effective in automatically predicting believability. Believability used alone could enable an adversarial MT system to deliberately mask errors and produce misleading output, but believability predictions combined with MT quality estimation (Specia et al., 2009) could be used to flag potentially misleading output.", |
|
"cite_spans": [ |
|
{ |
|
"start": 420, |
|
"end": 441, |
|
"text": "(Specia et al., 2009)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Because believability is a user-centric metric, gaining a complete understanding would require more user-centric methods. The annotator agreement in our results may indicate that believability is less subjective than one might expect, or it may simply indicate that our annotators were fairly homogeneous. A user study could not only tell us exactly what features were most salient in which contexts, but could indicate whether demographic features such as age or education affect perceptions of believability. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "www.cs.jhu.edu/ kevinduh/a/multitarget-tedtalks/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Statistics on the number of segments per annotator and annotations per segment are provided in tables 7 and 8. Figures 1 and 2 are screenshots of the annotation interface for the monolingual fluency and believability task and the bilingual adequacy task.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 126, |
|
"text": "Figures 1 and 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Additional Annotation Statistics", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "ACTFL Proficiency Guidelines", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Actfl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "ACTFL. 2012. ACTFL Proficiency Guidelines 2012.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Findings of the 2019 conference on machine translation (wmt19)", |
|
"authors": [ |
|
{ |
|
"first": "Lo\u00efc", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ondrej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marta", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Costa-Juss\u00e0", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Fishel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvette", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Huck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shervin", |
|
"middle": [], |
|
"last": "Malmasi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Conference on Machine Translation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1--61", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lo\u00efc Barrault, Ondrej Bojar, Marta R Costa-juss\u00e0, Christian Federmann, Mark Fishel, Yvette Gra- ham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, et al. 2019. Findings of the 2019 conference on machine translation (wmt19). In Proceedings of the Fourth Conference on Machine Translation, volume 2, pages 1-61.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Israel Arrests Palestinian Because Facebook Translated 'Good Morning' to 'Attack Them'. Haaretz", |
|
"authors": [ |
|
{ |
|
"first": "Yotam", |
|
"middle": [], |
|
"last": "Berger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yotam Berger. 2017. Israel Arrests Palestinian Be- cause Facebook Translated 'Good Morning' to 'At- tack Them'. Haaretz. [Online; accessed 6-Dec-", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Findings of the 2016 Conference on Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajen", |
|
"middle": [], |
|
"last": "Chatterjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvette", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Huck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [ |
|
"Jimeno" |
|
], |
|
"last": "Yepes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varvara", |
|
"middle": [], |
|
"last": "Logacheva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matteo", |
|
"middle": [], |
|
"last": "Negri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aurelie", |
|
"middle": [], |
|
"last": "Neveol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariana", |
|
"middle": [], |
|
"last": "Neves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Popel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Rubino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolina", |
|
"middle": [], |
|
"last": "Scarton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Turchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karin", |
|
"middle": [], |
|
"last": "Verspoor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcos", |
|
"middle": [], |
|
"last": "Zampieri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the First Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "131--198", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aure- lie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Spe- cia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 Confer- ence on Machine Translation. In Proceedings of the First Conference on Machine Translation, pages 131-198, Berlin, Germany. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Findings of the 2018 conference on machine translation (wmt18)", |
|
"authors": [ |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Fishel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvette", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Huck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "272--307", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ond\u0159ej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, and Christof Monz. 2018. Find- ings of the 2018 conference on machine transla- tion (wmt18). In Proceedings of the Third Confer- ence on Machine Translation, pages 272-307, Bel- gium, Brussels. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "The impact of anonymity on weblog credibility", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Chesney", |
|
"suffix": "" |
|
}, |
|
{

"first": "Daniel",

"middle": [

"K",

"S"

],

"last": "Su",

"suffix": ""

}
|
], |
|
"year": 2010, |
|
"venue": "International Journal of Human-Computer Studies", |
|
"volume": "68", |
|
"issue": "10", |
|
"pages": "710--718", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.ijhcs.2010.06.001" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Chesney and Daniel K. S. Su. 2010. The impact of anonymity on weblog credibility. In- ternational Journal of Human-Computer Studies, 68(10):710-718.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "The multitarget ted talks task", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Duh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Duh. 2018. The multitarget ted talks task. http://www.cs.jhu.edu/~kevinduh/a/ multitarget-tedtalks/.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "How Presentation Flaws Affect Perceived Site Quality, Trust, and Intention to Purchase from an Online Store", |
|
"authors": [ |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Everard", |
|
"suffix": "" |
|
}, |
|
{

"first": "Dennis",

"middle": [

"F"

],

"last": "Galletta",

"suffix": ""

}
|
], |
|
"year": 2005, |
|
"venue": "Journal of Management Information Systems", |
|
"volume": "22", |
|
"issue": "3", |
|
"pages": "56--95", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.2753/MIS0742-1222220303" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrea Everard and Dennis F. Galletta. 2005. How Presentation Flaws Affect Perceived Site Quality, Trust, and Intention to Purchase from an Online Store. Journal of Management Information Systems, 22(3):56-95.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "What makes Web sites credible? a report on a large quantitative study", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Fogg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Marshall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Othman", |
|
"middle": [], |
|
"last": "Laraki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Osipovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Varma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jyoti", |
|
"middle": [], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akshay", |
|
"middle": [], |
|
"last": "Rangnekar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Shon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Preeti", |
|
"middle": [], |
|
"last": "Swani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marissa", |
|
"middle": [], |
|
"last": "Treinen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '01", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--68", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/365024.365037" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. J. Fogg, Jonathan Marshall, Othman Laraki, Alex Osipovich, Chris Varma, Nicholas Fang, Jyoti Paul, Akshay Rangnekar, John Shon, Preeti Swani, and Marissa Treinen. 2001. What makes Web sites cred- ible? a report on a large quantitative study. In Pro- ceedings of the SIGCHI Conference on Human Fac- tors in Computing Systems, CHI '01, pages 61-68, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Continuous measurement scales in human evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yvette", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alistair", |
|
"middle": [], |
|
"last": "Moffat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Zobel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 7th Linguistic Annotation Workshop & Interoperability with Discourse", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measurement scales in human evaluation of machine translation. In Pro- ceedings of the 7th Linguistic Annotation Workshop & Interoperability with Discourse, pages 33-41.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Sockeye: A toolkit for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hieber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tobias", |
|
"middle": [], |
|
"last": "Domhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Denkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Vilar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Artem", |
|
"middle": [], |
|
"last": "Sokolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "Clifton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1712.05690" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A toolkit for neural machine translation. arXiv preprint arXiv:1712.05690.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "David Hoffman: What happens when you lose everything?", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Hoffman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Hoffman. 2008. David Hoffman: What happens when you lose everything?", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "What Makes a High-Quality User-Generated Answer? IEEE Internet Computing", |
|
"authors": [ |
|
{

"first": "Blooma",

"middle": [

"Mohan"

],

"last": "John",

"suffix": ""

},

{

"first": "Alton",

"middle": [

"Yeow-Kuan"

],

"last": "Chua",

"suffix": ""

},

{

"first": "Dion",

"middle": [

"Hoe-Lian"

],

"last": "Goh",

"suffix": ""

}
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "15", |
|
"issue": "", |
|
"pages": "66--71", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/MIC.2011.23" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blooma Mohan John, Alton Yeow-Kuan Chua, and Dion Hoe-Lian Goh. 2011. What Makes a High- Quality User-Generated Answer? IEEE Internet Computing, 15(1):66-71.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Enhancing cohesion and coherence of fake text to improve believability for deceiving cyber attackers", |
|
"authors": [ |
|
{ |
|
"first": "Prakruthi", |
|
"middle": [], |
|
"last": "Karuna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hemant", |
|
"middle": [], |
|
"last": "Purohit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ozlem", |
|
"middle": [], |
|
"last": "Uzuner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sushil", |
|
"middle": [], |
|
"last": "Jajodia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajesh", |
|
"middle": [], |
|
"last": "Ganesan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the First International Workshop on Language Cognition and Computational Models", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "31--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Prakruthi Karuna, Hemant Purohit, Ozlem Uzuner, Sushil Jajodia, and Rajesh Ganesan. 2018. Enhanc- ing cohesion and coherence of fake text to improve believability for deceiving cyber attackers. In Pro- ceedings of the First International Workshop on Lan- guage Cognition and Computational Models, pages 31-40.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Questioners' credibility judgments of answers in a social question and answer site", |
|
"authors": [ |
|
{ |
|
"first": "Soojung", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Information Research", |
|
"volume": "15", |
|
"issue": "2", |
|
"pages": "15--17", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soojung Kim. 2010. Questioners' credibility judg- ments of answers in a social question and answer site. Information Research, 15(2):15-2.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Users' relevance criteria for evaluating answers in a social Q&A site", |
|
"authors": [ |
|
{ |
|
"first": "Soojung", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanghee", |
|
"middle": [], |
|
"last": "Oh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Journal of the American Society for Information Science and Technology", |
|
"volume": "60", |
|
"issue": "4", |
|
"pages": "716--727", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1002/asi.21026" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soojung Kim and Sanghee Oh. 2009. Users' relevance criteria for evaluating answers in a social Q&A site. Journal of the American Society for Information Sci- ence and Technology, 60(4):716-727.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "There Is No Logical Negation Here, But There Are Alternatives: Modeling Conversational Negation with Distributional Semantics", |
|
"authors": [ |
|
{ |
|
"first": "Germ\u00e1n", |
|
"middle": [], |
|
"last": "Kruszewski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Denis", |
|
"middle": [], |
|
"last": "Paperno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raffaella", |
|
"middle": [], |
|
"last": "Bernardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Computational Linguistics", |
|
"volume": "42", |
|
"issue": "4", |
|
"pages": "637--660", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/COLI_a_00262" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Germ\u00e1n Kruszewski, Denis Paperno, Raffaella Bernardi, and Marco Baroni. 2016. There Is No Logical Negation Here, But There Are Alternatives: Modeling Conversational Negation with Distri- butional Semantics. Computational Linguistics, 42(4):637-660.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A Probabilistic View of Linguistic Knowledge", |
|
"authors": [ |
|
{

"first": "Jey",

"middle": [

"Han"

],

"last": "Lau",

"suffix": ""

},

{

"first": "Alexander",

"middle": [],

"last": "Clark",

"suffix": ""

},

{

"first": "Shalom",

"middle": [],

"last": "Lappin",

"suffix": ""

}
|
], |
|
"year": null, |
|
"venue": "Cognitive Science", |
|
"volume": "41", |
|
"issue": "5", |
|
"pages": "1202--1241", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1111/cogs.12414" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grammaticality, Acceptability, and Probabil- ity: A Probabilistic View of Linguistic Knowledge. Cognitive Science, 41(5):1202-1241.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Perceptions of credibility of scholarly information on the web. Information Processing & Management", |
|
"authors": [ |
|
{ |
|
"first": "Ziming", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "40", |
|
"issue": "", |
|
"pages": "1027--1038", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/S0306-4573(03)00064-5" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ziming Liu. 2004. Perceptions of credibility of schol- arly information on the web. Information Process- ing & Management, 40(6):1027-1038.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Identifying fluently inadequate output in neural and statistical machine translation", |
|
"authors": [ |
|
{

"first": "Marianna",

"middle": [

"J"

],

"last": "Martindale",

"suffix": ""

},

{

"first": "Marine",

"middle": [],

"last": "Carpuat",

"suffix": ""

},

{

"first": "Kevin",

"middle": [],

"last": "Duh",

"suffix": ""

},

{

"first": "Paul",

"middle": [],

"last": "McNamee",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of Machine Translation Summit XVII", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "233--243", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marianna J Martindale, Marine Carpuat, Kevin Duh, and Paul McNamee. 2019. Identifying fluently inad- equate output in neural and statistical machine trans- lation. In Proceedings of Machine Translation Sum- mit XVII Volume 1: Research Track, pages 233-243.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Social and Heuristic Approaches to Credibility Evaluation Online", |
|
"authors": [ |
|
{ |
|
"first": "Miriam", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Metzger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Flanagin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Medders", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of Communication", |
|
"volume": "60", |
|
"issue": "3", |
|
"pages": "413--439", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1111/j.1460-2466.2010.01488.x" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miriam J. Metzger, Andrew J. Flanagin, and Ryan B. Medders. 2010. Social and Heuristic Approaches to Credibility Evaluation Online. Journal of Communi- cation, 60(3):413-439.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories", |
|
"authors": [ |
|
{ |
|
"first": "Nasrin", |
|
"middle": [], |
|
"last": "Mostafazadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devi", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dhruv", |
|
"middle": [], |
|
"last": "Batra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucy", |
|
"middle": [], |
|
"last": "Vanderwende", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushmeet", |
|
"middle": [], |
|
"last": "Kohli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Allen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "839--849", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1098" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839-849, San Diego, California. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "BLEU: A method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, PA.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Relations between comprehensibility and adequacy errors in machine translation output", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of the 24th Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "256--264", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maja Popovi\u0107. 2020. Relations between comprehen- sibility and adequacy errors in machine translation output. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 256-264.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Credibility: A multidisciplinary framework. Annual review of information science and technology", |
|
"authors": [ |
|
{

"first": "Soo",

"middle": [

"Young"

],

"last": "Rieh",

"suffix": ""

},

{

"first": "David",

"middle": [

"R"

],

"last": "Danielson",

"suffix": ""

}
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "307--364", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soo Young Rieh and David R. Danielson. 2007. Credibility: A multidisciplinary framework. An- nual review of information science and technology, 41(1):307-364.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "The empirical base of linguistics: Grammaticality judgments and linguistic methodology", |
|
"authors": [ |
|
{ |
|
"first": "Carson", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Classics in Linguistics. Language Science Press", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carson Sch\u00fctze. 2016. The empirical base of lin- guistics: Grammaticality judgments and linguistic methodology. Classics in Linguistics. Language Sci- ence Press, Berlin.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Estimating the sentence-level quality of Machine Translation systems", |
|
"authors": [ |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Cancedda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Dymetman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Turchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nello", |
|
"middle": [], |
|
"last": "Cristianini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "In In EAMT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "28--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucia Specia, Nicola Cancedda, Marc Dymetman, Marco Turchi, and Nello Cristianini. 2009. Estimat- ing the sentence-level quality of Machine Transla- tion systems. In In EAMT, pages 28-35.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "The ARPA MT Evaluation Methodologies: Evolution, Lessons, and Future Approaches", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "White", |
|
"suffix": "" |
|
}, |
|
{

"first": "Theresa",

"middle": [],

"last": "O'Connell",

"suffix": ""

},

{

"first": "Francis",

"middle": [],

"last": "O'Mara",

"suffix": ""

}
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the First Conference of the Association for Machine Translation in the Americas", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "193--205", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John S. White, Theresa O'Connell, and Francis O'Mara. 1994. The ARPA MT Evaluation Method- ologies: Evolution, Lessons, and Future Approaches. In Proceedings of the First Conference of the As- sociation for Machine Translation in the Americas, pages 193-205, Columbia, Maryland, USA.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Example question from monolingual annotation phase, fluency and believabilityFigure 2: Example question from bilingual annotation phase with adequacy question" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"text": "scores of 26.6 for Arabic, 22.2 for Farsi and 11.6 for Korean.", |
|
"content": "<table><tr><td/><td/><td>Arabic</td><td/><td/><td>Farsi</td><td/><td/><td>Korean</td></tr><tr><td/><td>FL</td><td>BL</td><td>AD</td><td>FL</td><td>BL</td><td>AD</td><td>FL</td><td>BL</td><td>AD</td></tr><tr><td colspan=\"10\">Mean Corr. 0.698 0.689 0.728 0.809 0.793 0.830 0.773 0.754 0.793</td></tr><tr><td>Std dev</td><td colspan=\"9\">0.137 0.122 0.129 0.087 0.099 0.083 0.112 0.120 0.076</td></tr><tr><td colspan=\"10\">Table 2: Average correlation with the mean for fluency (FL), believability (BL), and adequacy (AD)</td></tr><tr><td colspan=\"3\">Arabic Farsi Korean</td><td>All</td><td/><td/><td/><td/><td/></tr><tr><td>FL-BL</td><td>0.89 0.96</td><td colspan=\"2\">0.97 0.94</td><td/><td/><td/><td/><td/></tr><tr><td>BL-AD</td><td>0.71 0.74</td><td colspan=\"2\">0.75 0.73</td><td/><td/><td/><td/><td/></tr><tr><td>FL-AD</td><td>0.62 0.73</td><td colspan=\"2\">0.72 0.69</td><td/><td/><td/><td/><td/></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td colspan=\"5\">: Pearson correlation between fluency (FL),</td></tr><tr><td colspan=\"4\">believability (BL), and adequacy (AD).</td></tr><tr><td colspan=\"5\">%Arabic %Farsi %Korean %All</td></tr><tr><td>Fluent</td><td>57.9</td><td>50.4</td><td colspan=\"2\">49.4 52.5</td></tr><tr><td>Believable</td><td>61.0</td><td>51.2</td><td colspan=\"2\">49.0 53.7</td></tr><tr><td>Adequate</td><td>59.4</td><td>52.4</td><td colspan=\"2\">44.1 52.0</td></tr><tr><td>BL+/AD-</td><td>25.9</td><td>19.4</td><td colspan=\"2\">21.8 22.2</td></tr><tr><td colspan=\"5\">Table 4: Percent of segments with each label</td></tr><tr><td colspan=\"5\">(rows 1-3) and percent believable but inadequate</td></tr><tr><td colspan=\"3\">(BL+/AD-) segments (row 4).</td><td/></tr><tr><td/><td colspan=\"3\">Arabic Farsi Korean</td><td>All</td></tr><tr><td>BL+/FL+</td><td colspan=\"2\">92.1 93.8</td><td colspan=\"2\">93.8 93.1</td></tr><tr><td>BL-/FL+</td><td colspan=\"2\">8.0 6.2</td><td>6.3</td><td>6.9</td></tr><tr><td>BL+/FL-</td><td colspan=\"2\">18.3 8.0</td><td colspan=\"2\">5.4 10.1</td></tr><tr><td>BL-/FL-</td><td colspan=\"2\">81.7 92.1</td><td colspan=\"2\">94.6 89.9</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td>: Percent of fluent (FL+) and disfluent (FL-)</td></tr><tr><td>segments that are believable (BL+) or unbelievable</td></tr><tr><td>(BL-)</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}
|
} |
|
} |
|
} |