gsarti committed · Commit e8a0bff · 1 Parent(s): be5e0d0

Update README.md

Files changed (1):
  1. README.md (+46 -24)
README.md CHANGED
@@ -27,14 +27,18 @@ task_categories:
 # Dataset Card for DivEMT

 ## Dataset Description
 - **Source:** [Github](https://github.com/gsarti/divemt)
 - **Paper:** [Arxiv](https://arxiv.org/abs/2205.12215)
 - **Point of Contact:** [Gabriele Sarti](mailto:[email protected])

- ![DivEMT Visualization](https://huggingface.co/datasets/GroNLP/divemt/resolve/main/divemt.png)

- *For an overview of DivEMT, see our [Paper](https://arxiv.org/abs/2205.12215) and our [Github repository](https://github.com/gsarti/divemt)*

 ### Dataset Summary

@@ -58,11 +62,13 @@ The following fields are contained in the training set:
 |Field|Description|
 |-----|-----------|
- |`unit_id` | The full entry identifier. Format: `flores101-{config}-{lang}-{doc_id}-{modality}-{sent_num}` |
 |`flores_id` | Index of the sentence in the original [Flores-101](https://huggingface.co/datasets/gsarti/flores_101) dataset |
- |`item_id` | The sentence identifier. The first digits of the number represent the document containing the sentence, while the last digit of the number represents the sentence position inside the document. Documents can contain from 3 to 5 semantically-related sentences each. |
 |`subject_id` | The identifier for the translator performing the translation from scratch or post-editing task. Values: `t1`, `t2` or `t3`. |
- |`task_type` | The modality of the translation task. Values: `ht` (translation from scratch), `pe1` (post-editing Google Translate translations), `pe2` (post-editing [mBART](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt) translations). |
 |`translation_type` | Either `ht` for from scratch or `pe` for post-editing |
 |`src_len_chr` | Length of the English source text in number of characters |
 |`mt_len_chr` | Length of the machine translation in number of characters (NaN for ht) |
@@ -86,29 +92,37 @@ The following fields are contained in the training set:
 |`len_pause_geq_300` | Total duration of pauses of 300ms or more, in milliseconds. |
 |`n_pause_geq_1000` | Number of pauses of 1s or more during the translation. |
 |`len_pause_geq_1000` | Total duration of pauses of 1000ms or more, in milliseconds. |
- |`event_time` | Total time summed across all translation events; should be comparable to `edit_time`. |
- |`num_annotations` | Number of times the translator focused the textbox to translate the sentence during the translation session. E.g., a value of 1 means the translation was performed once and never revised. |
 |`n_insert` | Number of post-editing insertions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
 |`n_delete` | Number of post-editing deletions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
 |`n_substitute` | Number of post-editing substitutions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
 |`n_shift` | Number of post-editing shifts (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
 |`tot_shifted_words` | Total number of shifted words across all shifts present in the sentence. |
 |`tot_edits` | Total of all edit types for the sentence. |
- |`hter` | Human-mediated Translation Edit Rate score computed between the MT and post-edited outputs using the [tercom](https://github.com/jhclark/tercom) library. |
- |`cer` | Character-level HTER score computed between the MT and post-edited outputs using the [CharacTER](https://github.com/rwth-i6/CharacTER) library. |
- |`bleu` | Sentence-level BLEU score between MT and post-edited fields (empty for modality `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. |
- |`chrf` | Sentence-level chrF score between MT and post-edited fields (empty for modality `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. |
- |`lang_id` | Language identifier for the sentence |
- |`doc_id` | Document identifier for the sentence |
- |`time_s` | Edit time expressed in seconds. `time_m` and `time_h` are also available for minutes and hours respectively. |
- |`time_per_char` | Edit time per source character, expressed in seconds. Also available as `time_per_word`. |
 |`key_per_char` | Proportion of keystrokes per character needed to perform the translation. |
- |`words_per_hour` | Number of source words translated or post-edited per hour. Also available as `words_per_minute`. |
 |`per_subject_visit_order` | Id denoting the order in which the translator accessed documents. 1 corresponds to the first accessed document. |
 |`src_text` | The original source sentence extracted from Wikinews, Wikibooks or Wikivoyage. |
 |`mt_text` | Missing if `task_type` is `ht`. Otherwise, contains the automatically-translated sentence before post-editing. |
 |`tgt_text` | Final sentence produced by the translator (either by translating `src_text` from scratch or by post-editing `mt_text`) |
 |`aligned_edit` | Aligned visual representation of REF (`mt_text`), HYP (`tgt_text`) and edit operations (I = Insertion, D = Deletion, S = Substitution) performed on the field. Replace `\\n` with `\n` to show the three aligned rows. |

 ### Data Splits

@@ -191,7 +205,9 @@ The text is provided as-is, without further preprocessing or tokenization.
 ### Dataset Creation

- The dataset was parsed from PET XML files into CSV format using a script adapted from the one by [Antonio Toral](https://research.rug.nl/en/persons/antonio-toral-ruiz) found at the following link: [https://github.com/antot/postediting_novel_frontiers](https://github.com/antot/postediting_novel_frontiers).

 ## Additional Information

@@ -201,12 +217,18 @@ For problems related to this 🤗 Datasets version, please contact me at [g.sart
 ### Citation Information

 ```bibtex
- @article{sarti-etal-2022-divemt,
-   title={{DivEMT}: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages},
-   author={Sarti, Gabriele and Bisazza, Arianna and Guerberof Arenas, Ana and Toral, Antonio},
-   journal={ArXiv preprint 2205.12215},
-   url={https://arxiv.org/abs/2205.12215},
-   year={2022},
-   month={may}
 }
 ```

 # Dataset Card for DivEMT

+ *For more details on DivEMT, see our [EMNLP 2022 Paper](https://arxiv.org/abs/2205.12215) and our [Github repository](https://github.com/gsarti/divemt)*
+
 ## Dataset Description
 - **Source:** [Github](https://github.com/gsarti/divemt)
 - **Paper:** [Arxiv](https://arxiv.org/abs/2205.12215)
 - **Point of Contact:** [Gabriele Sarti](mailto:[email protected])

+ [Gabriele Sarti](https://gsarti.com) • [Arianna Bisazza](https://www.cs.rug.nl/~bisazza/) • [Ana Guerberof Arenas](https://scholar.google.com/citations?user=i6bqaTsAAAAJ) • [Antonio Toral](https://antoniotor.al/)
+
+ <img src="https://huggingface.co/datasets/GroNLP/divemt/resolve/main/divemt.png" alt="DivEMT annotation pipeline" width="600"/>
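The dataset can be loaded directly with the 🤗 `datasets` library. The snippet below is a minimal sketch: the `main` configuration name and the filtering fields are assumptions based on this card, so check the dataset page for the exact list of available configurations.

```python
from datasets import load_dataset

# Minimal sketch: the "main" configuration name is an assumption,
# see the dataset page for the available configurations.
divemt = load_dataset("GroNLP/divemt", "main")

# Each row is a sentence-level unit; its fields are documented in the
# Data Fields table below.
example = divemt["train"][0]
print(example["unit_id"], example["task_type"])

# Example: keep only post-edited Dutch units (field names from this card).
pe_nld = divemt["train"].filter(
    lambda x: x["translation_type"] == "pe" and x["lang_id"] == "nld"
)
print(len(pe_nld))
```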

 ### Dataset Summary

 |Field|Description|
 |-----|-----------|
+ |`unit_id` | The full entry identifier. Format: `flores101-{config}-{lang}-{doc_id}-{modality}-{sent_in_doc_num}` |
 |`flores_id` | Index of the sentence in the original [Flores-101](https://huggingface.co/datasets/gsarti/flores_101) dataset |
+ |`item_id` | The sentence identifier. The first digits of the number represent the document containing the sentence, while the last digit of the number represents the sentence position inside the document. Documents can contain from 3 to 5 contiguous sentences each. |
 |`subject_id` | The identifier for the translator performing the translation from scratch or post-editing task. Values: `t1`, `t2` or `t3`. |
+ |`lang_id` | Language identifier for the sentence, using the Flores-101 three-letter format (e.g. `ara`, `nld`) |
+ |`doc_id` | Document identifier for the sentence |
+ |`task_type` | The modality of the translation task. Values: `ht` (translation from scratch), `pe1` (post-editing Google Translate translations), `pe2` (post-editing [mBART 1-to-50](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt) translations). |
 |`translation_type` | Either `ht` for from scratch or `pe` for post-editing |
 |`src_len_chr` | Length of the English source text in number of characters |
 |`mt_len_chr` | Length of the machine translation in number of characters (NaN for ht) |
 |`len_pause_geq_300` | Total duration of pauses of 300ms or more, in milliseconds. |
 |`n_pause_geq_1000` | Number of pauses of 1s or more during the translation. |
 |`len_pause_geq_1000` | Total duration of pauses of 1000ms or more, in milliseconds. |
+ |`event_time` | Total time summed across all translation events; should be comparable to `edit_time` in most cases. |
+ |`num_annotations` | Number of times the translator focused the textbox to translate the sentence during the translation session. E.g., a value of 1 means the translation was performed once and never revised. |
 |`n_insert` | Number of post-editing insertions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
 |`n_delete` | Number of post-editing deletions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
 |`n_substitute` | Number of post-editing substitutions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
 |`n_shift` | Number of post-editing shifts (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
 |`tot_shifted_words` | Total number of shifted words across all shifts present in the sentence. |
 |`tot_edits` | Total of all edit types for the sentence. |
+ |`hter` | Human-mediated Translation Edit Rate score computed between MT and post-edited TGT (empty for modality `ht`) using the [tercom](https://github.com/jhclark/tercom) library. |
+ |`cer` | Character-level HTER score computed between MT and post-edited TGT (empty for modality `ht`) using [CharacTER](https://github.com/rwth-i6/CharacTER). |
+ |`bleu` | Sentence-level BLEU score between MT and post-edited TGT (empty for modality `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. |
+ |`chrf` | Sentence-level chrF score between MT and post-edited TGT (empty for modality `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. |
+ |`time_s` | Edit time expressed in seconds. |
+ |`time_m` | Edit time expressed in minutes. |
+ |`time_h` | Edit time expressed in hours. |
+ |`time_per_char` | Edit time per source character, expressed in seconds. |
+ |`time_per_word` | Edit time per source word, expressed in seconds. |
 |`key_per_char` | Proportion of keystrokes per character needed to perform the translation. |
+ |`words_per_hour` | Number of source words translated or post-edited per hour. |
+ |`words_per_minute` | Number of source words translated or post-edited per minute. |
 |`per_subject_visit_order` | Id denoting the order in which the translator accessed documents. 1 corresponds to the first accessed document. |
 |`src_text` | The original source sentence extracted from Wikinews, Wikibooks or Wikivoyage. |
 |`mt_text` | Missing if `task_type` is `ht`. Otherwise, contains the automatically-translated sentence before post-editing. |
 |`tgt_text` | Final sentence produced by the translator (either by translating `src_text` from scratch or by post-editing `mt_text`) |
 |`aligned_edit` | Aligned visual representation of REF (`mt_text`), HYP (`tgt_text`) and edit operations (I = Insertion, D = Deletion, S = Substitution) performed on the field. Replace `\\n` with `\n` to show the three aligned rows, as in the sketch below the table. |
+ |`src_tokens` | List of tokens obtained by tokenizing `src_text` with Stanza using default params. |
+ |`src_annotations` | List of lists (one per `src_tokens` token) containing dictionaries (one per word, >1 for mwt) with pos, ner and other info parsed by Stanza |
+ |`mt_tokens` | List of tokens obtained by tokenizing `mt_text` with Stanza using default params. |
+ |`mt_annotations` | List of lists (one per `mt_tokens` token) containing dictionaries (one per word, >1 for mwt) with pos, ner and other info parsed by Stanza |
+ |`tgt_tokens` | List of tokens obtained by tokenizing `tgt_text` with Stanza using default params. |
+ |`tgt_annotations` | List of lists (one per `tgt_tokens` token) containing dictionaries (one per word, >1 for mwt) with pos, ner and other info parsed by Stanza |
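As a usage note, `aligned_edit` stores the three aligned rows in a single string with escaped newlines, and the sentence-level metrics can be recomputed with SacreBLEU. The record below is a hypothetical Dutch post-edited unit, and treating the post-edited text as the reference is an assumption about the metric direction:

```python
import sacrebleu

# Hypothetical post-edited record, for illustration only.
row = {
    "mt_text": "Dit is een voorbeeld zin.",
    "tgt_text": "Dit is een voorbeeldzin.",
    # Layout of the three rows is illustrative, not the exact tercom output.
    "aligned_edit": "REF: Dit is een voorbeeld zin.\\nHYP: Dit is een voorbeeldzin.\\nEVAL: ...",
}

# The REF (mt_text), HYP (tgt_text) and EVAL rows are separated by a literal
# "\\n": unescape it to display the alignment on three lines.
print(row["aligned_edit"].replace("\\n", "\n"))

# Sentence-level chrF and BLEU between the MT output and the post-edited
# target, with SacreBLEU default parameters as described in the table above.
chrf = sacrebleu.sentence_chrf(row["mt_text"], [row["tgt_text"]])
bleu = sacrebleu.sentence_bleu(row["mt_text"], [row["tgt_text"]])
print(round(chrf.score, 2), round(bleu.score, 2))
```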

 ### Data Splits

 ### Dataset Creation

+ The dataset was parsed from PET XML files into CSV format using the scripts available in the [DivEMT Github repository](https://github.com/gsarti/divemt).
+
+ These scripts are adapted from the ones by [Antonio Toral](https://research.rug.nl/en/persons/antonio-toral-ruiz) available at [https://github.com/antot/postediting_novel_frontiers](https://github.com/antot/postediting_novel_frontiers). An illustrative sketch of the conversion step is given below.
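Purely as an illustration of what such a conversion involves, the snippet below flattens an XML file into a CSV with a few of the fields listed above. The element and attribute names (`unit`, `id`, `type`, `source`, `target`) are hypothetical placeholders, not the actual PET schema; the real logic lives in the repository scripts linked above.

```python
import csv
import xml.etree.ElementTree as ET

def xml_to_csv(xml_path: str, csv_path: str) -> None:
    """Illustrative XML-to-CSV flattening; tag/attribute names are hypothetical."""
    root = ET.parse(xml_path).getroot()
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["item_id", "task_type", "src_text", "tgt_text"])
        for unit in root.iter("unit"):       # hypothetical element name
            writer.writerow([
                unit.get("id"),              # hypothetical attribute
                unit.get("type"),            # hypothetical attribute
                unit.findtext("source", ""), # hypothetical child element
                unit.findtext("target", ""), # hypothetical child element
            ])
```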

 ## Additional Information

 ### Citation Information

 ```bibtex
+ @inproceedings{sarti-etal-2022-divemt,
+     title = "{D}iv{EMT}: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages",
+     author = "Sarti, Gabriele and
+       Bisazza, Arianna and
+       Guerberof-Arenas, Ana and
+       Toral, Antonio",
+     booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
+     month = dec,
+     year = "2022",
+     address = "Abu Dhabi, United Arab Emirates",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2022.emnlp-main.532",
+     pages = "7795--7816",
 }
  ```