<!-- Provide a quick summary of the dataset. -->

This is a dataset for document-level machine translation, introduced in the ACL 2024 paper [**Document-Level Machine Translation with Large-Scale Public Parallel Corpora**](https://aclanthology.org/2024.acl-long.712/). It consists of parallel sentence pairs from the [ParaCrawl](https://paracrawl.eu/) corpora, along with the preceding context extracted from the webpages the sentences were crawled from.

## Dataset Details

This dataset adds document-level context to parallel corpora released by [ParaCrawl](https://paracrawl.eu/).

- **Language pairs:** eng-deu, eng-fra, eng-ces, eng-pol, eng-rus
- **License:** Creative Commons Zero v1.0 Universal (CC0)
- **Repository:** https://github.com/Proyag/ParaCrawl-Context
- **Paper:** https://aclanthology.org/2024.acl-long.712/
## Uses

This dataset is intended for document-level (context-aware) machine translation.

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->
The intended usage of this dataset is to use the sentence fields as the source and target translations, and to provide the contexts as additional information to the model. This could be done, for example, with a dual-encoder model, where one encoder encodes the source sentence while a second encoder encodes the source/target context. For an example, see our associated [paper](https://aclanthology.org/2024.acl-long.712/); a minimal loading sketch follows below.
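As a concrete starting point, the snippet below shows one way to load a language pair and unpack the packed context field before feeding a dual-encoder model. This is a minimal sketch, not code from the paper or repository: the dataset path, config name, and column names (`src`, `trg`, `src_context`) are assumptions, so adjust them to the actual schema. The `|||` delimiter and `<docline>` marker are as described under Data Collection and Processing below.

```python
from datasets import load_dataset

# Dataset path, config, and column names are assumptions; check the real schema first.
ds = load_dataset("Proyag/paracrawl_context", "eng-deu", split="train", streaming=True)

def unpack(example):
    # Up to 1000 unique contexts are packed into one field, delimited by "|||".
    contexts = example["src_context"].split("|||")
    # "<docline>" stands in for line breaks within each extracted webpage context.
    contexts = [c.replace("<docline>", "\n").strip() for c in contexts]
    return {"source": example["src"], "target": example["trg"], "contexts": contexts}

for example in ds.take(3):
    record = unpack(example)
    print(record["source"], "->", record["target"], f"[{len(record['contexts'])} contexts]")
```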
### Out-of-Scope Use

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
To extract the contexts for ParaCrawl sentence pairs, we used the following method (copied from the [paper](https://aclanthology.org/2024.acl-long.712/)); a simplified sketch of step 3 appears after the list:
1. Extract the source URLs and corresponding sentences from the TMX files from [ParaCrawl release 9](https://paracrawl.eu/releases) (or the bonus release in the case of eng-rus). Each sentence is usually associated with many different source URLs, and we keep all of them.
2. Match the extracted URLs with the URLs from all the raw text data and get the corresponding base64-encoded webpage/document, if available.
3. Decode the base64 documents and try to match the original sentence. If the sentence is not found in the document, discard the document. Otherwise, keep the 512 tokens preceding the sentence (where a token is anything separated by a space), replace line breaks with a special `<docline>` token, and store it as the document context. Since some very common sentences correspond to huge numbers of source URLs, we keep a maximum of 1000 unique contexts per sentence separated by a delimiter `|||` in the final dataset.
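To make step 3 concrete, here is a simplified per-document sketch. It paraphrases the description above rather than reproducing the released pipeline (see the [repository](https://github.com/Proyag/ParaCrawl-Context) for that); exact matching details, such as whether `<docline>` tokens count toward the 512-token window, are assumptions here.

```python
import base64

CONTEXT_TOKENS = 512  # space-separated tokens kept before the matched sentence
MAX_CONTEXTS = 1000   # cap on unique contexts stored per sentence

def extract_context(b64_doc: str, sentence: str):
    """Return up to 512 tokens preceding `sentence` in the decoded document, else None."""
    text = base64.b64decode(b64_doc).decode("utf-8", errors="replace")
    pos = text.find(sentence)
    if pos == -1:
        return None  # sentence not found in the document: discard it
    # Replace line breaks with a special <docline> token, then keep the last
    # 512 space-separated tokens before the sentence as the document context.
    preceding = text[:pos].replace("\n", " <docline> ")
    tokens = preceding.split()
    return " ".join(tokens[-CONTEXT_TOKENS:])

def pack_contexts(contexts):
    # Deduplicate (order-preserving) and join with the "|||" delimiter,
    # keeping at most 1000 unique contexts per sentence.
    unique = list(dict.fromkeys(c for c in contexts if c))
    return "|||".join(unique[:MAX_CONTEXTS])
```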
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
\[This section has been copied from the [paper](https://aclanthology.org/2024.acl-long.712/), which you can refer to for details.\]

**Relevance of context**: Our work assumes that any extracted text preceding a given sentence on a webpage is relevant “document context” for that sentence. However, it is likely in many cases that the extracted context is unrelated to the sentence, since most webpages are not formatted as a coherent “document”. As a result, the dataset often includes irrelevant context like lists of products, UI elements, or video titles extracted from webpages, which will not be directly helpful to document-level translation models.
## Citation
```
@inproceedings{pal-etal-2024-document,
    title = "Document-Level Machine Translation with Large-Scale Public Parallel Corpora",
    author = "Pal, Proyag and
      Birch, Alexandra and
      Heafield, Kenneth",
    editor = "Ku, Lun-Wei and
      Martins, Andre and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.712",
    pages = "13185--13197",
}
```
## Dataset Card Authors
This dataset card was written by [Proyag Pal](https://proyag.github.io/). The associated [paper](https://aclanthology.org/2024.acl-long.712/) was written by Proyag Pal, Alexandra Birch, and Kenneth Heafield at the University of Edinburgh.
## Dataset Card Contact