---
language:
- ca
- it
- multilingual
multilinguality:
- translation
pretty_name: CA-IT Parallel Corpus
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
---

# Dataset Card for CA-IT Parallel Corpus

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Data preparation](#data-preparation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)

## Dataset Description

### Dataset Summary

The CA-IT Parallel Corpus is a Catalan-Italian dataset of **XXX** parallel sentences. The dataset was created to support Catalan NLP tasks, e.g., Machine Translation.

### Supported Tasks and Leaderboards

The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
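
For orientation, a corpus-level BLEU score can be computed with [sacreBLEU](https://github.com/mjpost/sacrebleu) roughly as in the sketch below; the sentences are made-up placeholders, not taken from this corpus.

```python
# Illustrative only: scoring hypothetical system outputs against references with sacreBLEU.
import sacrebleu

hypotheses = ["El gat dorm al sofà."]               # system translations (placeholders)
references = [["El gat està dormint al sofà."]]     # one inner list per reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```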

### Languages

The texts in the dataset are in Catalan and Italian.

## Dataset Structure

Two separate text files are provided with the sentences aligned line by line in the same order (see the loading sketch after the list):

- xxx.ca: contains XXX Catalan sentences.

- xxx.it: contains 9.892.953 Italian sentences.
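
A minimal way to pair the two files in Python (the file names below are placeholders for the actual file names in the repository):

```python
# Read the aligned Catalan and Italian files into (ca, it) sentence pairs.
# "corpus.ca" / "corpus.it" are hypothetical names; substitute the real file names.
with open("corpus.ca", encoding="utf-8") as f_ca, open("corpus.it", encoding="utf-8") as f_it:
    pairs = [(ca.rstrip("\n"), it.rstrip("\n")) for ca, it in zip(f_ca, f_it)]

print(len(pairs), pairs[0])
```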

### Data Splits

The dataset contains a single split: `train`.

## Dataset Creation

### Source Data

The dataset is a combination of the following authentic datasets:

| Dataset | Sentences |
|---------------|-----------|

All corpora except Europarl were collected from [Opus](https://opus.nlpl.eu/).
The Europarl corpus is a synthetic parallel corpus created from the original Spanish-Catalan corpus by [SoftCatalà](https://github.com/Softcatala/Europarl-catalan).

The remaining **XXX** sentences are synthetic parallel data created from a random sample of the Spanish-Italian corpora available on [Opus](https://opus.nlpl.eu/), translated into Catalan using the [PlanTL es-ca](https://huggingface.co/PlanTL-GOB-ES/mt-plantl-es-ca) model.
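
As an illustration of that step, the sketch below samples Spanish source sentences and machine-translates them into Catalan. It assumes the PlanTL checkpoint can be driven through the Hugging Face `transformers` translation pipeline; the model card is the authoritative reference for how to load it, and the input file name is a placeholder.

```python
# Hedged sketch: sample Spanish sentences and translate them to Catalan.
# Loading "PlanTL-GOB-ES/mt-plantl-es-ca" via pipeline() is an assumption here,
# not documented usage; consult the model card for the supported interface.
import random
from transformers import pipeline

with open("es-it.es", encoding="utf-8") as f:   # hypothetical Spanish side of an es-it corpus
    spanish = [line.rstrip("\n") for line in f]

random.seed(0)
sample = random.sample(spanish, k=min(1000, len(spanish)))

translator = pipeline("translation", model="PlanTL-GOB-ES/mt-plantl-es-ca")
catalan = [out["translation_text"] for out in translator(sample, batch_size=16)]
```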

### Data preparation

All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated with [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
The filtered datasets are then concatenated to form a final corpus of **XXX** parallel sentences. Before training, the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).
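
The exact filtering script is not included here, but the core of the similarity filter can be sketched with the `sentence-transformers` library as follows (file names are placeholders):

```python
# Hedged sketch of the LaBSE-based filter: drop duplicate pairs, then keep only
# pairs whose embedding cosine similarity is at least 0.75.
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("sentence-transformers/LaBSE")

with open("corpus.ca", encoding="utf-8") as f_ca, open("corpus.it", encoding="utf-8") as f_it:
    pairs = [(ca.strip(), it.strip()) for ca, it in zip(f_ca, f_it)]
pairs = list(dict.fromkeys(pairs))  # deduplicate while preserving order

ca_emb = model.encode([ca for ca, _ in pairs], convert_to_tensor=True)
it_emb = model.encode([it for _, it in pairs], convert_to_tensor=True)

kept = [pair for pair, a, b in zip(pairs, ca_emb, it_emb) if cos_sim(a, b).item() >= 0.75]
```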

### Personal and Sensitive Information

No anonymisation process was performed.

## Considerations for Using the Data

### Social Impact of Dataset

The purpose of this dataset is to help develop Machine Translation applications for low-resource languages such as Catalan.

### Discussion of Biases

We are aware that, since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.

### Other Known Limitations

The dataset contains data of a general domain. Application of this dataset to more specific domains, such as the biomedical or legal domain, would be of limited use.

## Additional Information

### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.

### Contact information
For further information, please send an email to [email protected].

### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).

### Licensing information
This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).

### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).