mapama247 committed
Commit 04f8453
1 Parent(s): db3f659

update model card

Files changed (1):
  1. README.md (+553 -3)
---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
language:
- bg
- ca
- code
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fi
- fr
- ga
- gl
- hr
- hu
- it
- lt
- lv
- mt
- nl
- nn
- "no"
- oc
- pl
- pt
- ro
- ru
- sh
- sk
- sl
- sr
- sv
- uk
base_model:
- BSC-LT/salamandra-2b
---

![](./images/salamandra_header.png)

# Salamandra Model Card

Salamandra is a highly multilingual model pre-trained from scratch that comes in three different
sizes — 2B, 7B and 40B parameters — with their respective base and instruction-tuned variants.
This model card corresponds to the 2B instruction-tuned version built specifically for [AinaHack](https://projecteaina.cat/ainahack/),
an event launched by the Generalitat de Catalunya to create AI tools for the Catalan administration.

To visit the model cards of other Salamandra versions, please refer to the [Model Index](#model-index).

The entire Salamandra family is released under a permissive [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).
Along with the open weights, all training scripts and configuration files are made publicly available in [this GitHub repository](https://github.com/langtech-bsc/salamandra).

> [!WARNING]
> **DISCLAIMER:** This model is a first proof-of-concept designed to demonstrate the instruction-following capabilities of recently released base models.
> It has been optimized to engage in conversation but has *NOT* been aligned through RLHF to filter or avoid sensitive topics.
> As a result, it may generate harmful or inappropriate content.
> The team is actively working to enhance its performance through further instruction tuning and alignment with RL techniques.

---

## Model Details

### Description

Transformer-based decoder-only language model that has been pre-trained from scratch on 7.8 trillion tokens of highly curated data.
The pre-training corpus contains text in 35 European languages and code.

### Hyperparameters

The full list of hyperparameters for each model can be found [here](https://github.com/langtech-bsc/salamandra/tree/main/configs).

### Architecture

| | |
|-------------------------|:--------------|
| Total Parameters | 2,253,490,176 |
| Embedding Parameters | 524,288,000 |
| Layers | 24 |
| Hidden size | 2,048 |
| Attention heads | 16 |
| Context length | 8,192 |
| Vocabulary size | 256,000 |
| Precision | bfloat16 |
| Embedding type | RoPE |
| Activation Function | SwiGLU |
| Layer normalization | RMS Norm |
| Flash attention | ✅ |
| Grouped Query Attention | ❌ |
| Num. query groups | N/A |

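These values can be cross-checked against the released checkpoint by loading only its configuration with `transformers`. The snippet below is a minimal sketch; the field names assume a standard Llama-style configuration and may differ if the checkpoint uses another architecture class.

```python
from transformers import AutoConfig

# Download and parse only the configuration file (no model weights).
config = AutoConfig.from_pretrained("BSC-LT/salamandra-2b-instruct-aina-hack")

# Field names below assume a Llama-style config.
print("Layers:          ", config.num_hidden_layers)
print("Hidden size:     ", config.hidden_size)
print("Attention heads: ", config.num_attention_heads)
print("Context length:  ", config.max_position_embeddings)
print("Vocabulary size: ", config.vocab_size)
```
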
---

## Intended Use

### Direct Use

The models are intended for both research and commercial use in any of the languages included in the training data.
The base models are intended either for language generation or to be further fine-tuned for specific use-cases.
The instruction-tuned variants can be used as general-purpose assistants, as long as the user is fully aware of the model’s limitations.

### Out-of-scope Use

The model is not intended for malicious activities, such as harming others or violating human rights.
Any downstream application must comply with current laws and regulations.
Irresponsible usage in production environments without proper risk assessment and mitigation is also discouraged.

---

## Hardware and Software

### Training Framework

Pre-training was conducted using NVIDIA’s [NeMo Framework](https://docs.nvidia.com/nemo-framework/index.html),
which leverages PyTorch Lightning for efficient model training in highly distributed settings.

The instruction-tuned versions were produced with [FastChat](https://github.com/lm-sys/FastChat).

### Compute Infrastructure

All models were trained on [MareNostrum 5](https://www.bsc.es/ca/marenostrum/marenostrum-5), a pre-exascale EuroHPC supercomputer hosted and
operated by Barcelona Supercomputing Center.

The accelerated partition is composed of 1,120 nodes with the following specifications:
- 4x NVIDIA Hopper GPUs with 64GB of HBM2 memory
- 2x Intel Sapphire Rapids 8460Y+ at 2.3GHz, 32 cores each (64 cores per node)
- 4x NDR200 links (800 Gb/s of bandwidth per node)
- 512 GB of main memory (DDR5)
- 460 GB of NVMe storage

|Model|Nodes|GPUs|
|:---:|:---:|:---:|
|2B|64|256|
|7B|128|512|
|40B|256 / 512|1,024 / 2,048|

---

## How to use

The instruction-following models use the commonly adopted ChatML template:

```jinja
{%- if not date_string is defined %}{%- set date_string = "2024-09-30" %}{%- endif %}{%- set system_message = messages[0].content if messages[0].role == "system" else "system message. Today Date: "+ date_string -%}{%- if messages[0].role == "system" -%}{%- set messages = messages[1:] -%}{%- endif -%}{{ "<|im_start|>system\n" + system_message + "<|im_end|>\n" }}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}
```
Here, `system_message` is used to guide the model during generation and `date_string` can be set to allow the model to respond with the current date.

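For illustration, a single user turn rendered with this template (no explicit system message, default date) would look roughly like this:

```
<|im_start|>system
system message. Today Date: 2024-09-30<|im_end|>
<|im_start|>user
At what temperature does water boil?<|im_end|>
<|im_start|>assistant
```
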
The exact same chat template should be used for an enhanced conversational experience.
The easiest way to apply it is by using the tokenizer's built-in functions, as shown in the following snippet.

```python
from datetime import datetime

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "BSC-LT/salamandra-2b-instruct-aina-hack"

text = "At what temperature does water boil?"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16
)

message = [ { "role": "user", "content": text } ]
date_string = datetime.today().strftime('%Y-%m-%d')

# Build the ChatML prompt with the tokenizer's built-in chat template.
prompt = tokenizer.apply_chat_template(
    message,
    tokenize=False,
    add_generation_prompt=True,
    date_string=date_string
)

# Tokenize the rendered prompt and generate a response.
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Using this template, each turn is preceded by a `<|im_start|>` delimiter and the role of the entity
(either `user`, for content supplied by the user, or `assistant` for LLM responses), and finished with the `<|im_end|>` token.

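For quick experiments, the same template can also be applied implicitly through the high-level `pipeline` API, which accepts chat-style messages directly in recent versions of `transformers`. This is a minimal sketch; generation arguments are illustrative:

```python
import torch
from transformers import pipeline

# The text-generation pipeline applies the tokenizer's chat template
# automatically when it receives a list of role/content messages.
generator = pipeline(
    "text-generation",
    model="BSC-LT/salamandra-2b-instruct-aina-hack",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "At what temperature does water boil?"}]
output = generator(messages, max_new_tokens=200)

# With chat input, the last message of the returned conversation holds the assistant reply.
print(output[0]["generated_text"][-1]["content"])
```
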
---

## Data

### Pretraining Data

The training corpus consists of 2.4 trillion tokens, including 35 European languages and 92 programming languages. It amounts to a total of 33 TB of pre-processed text.
Languages were sampled manually: Spain's co-official languages (Spanish, Catalan, Galician and Basque) were given a 2x oversampling, code was undersampled by half,
and the rest of the languages were kept as is, resulting in the following distribution:

![lang distrib](./images/corpus_languages.png)

This highly multilingual corpus is predominantly composed of data from Colossal OSCAR,
which contributes a significant 66.06% of the total tokens.
Following this, Starcoder provides 11.91%, and Spanish Crawling adds 3.34%.
The next largest sources are French at 3.12% and Proof Pile at 1.98%.
Other notable contributions include Macocu, Pile of Law, and Eurlex, each contributing between 1.3% and 1.5%.
These major sources collectively form the bulk of the corpus, ensuring a rich and diverse dataset for training the language model.
The remaining 10% comes from smaller sources in various languages.

The model was trained for 3 epochs, with two final rounds of 0.3B higher-quality tokens each,
meaning that the total number of tokens seen during pre-training amounts to roughly 7.8 trillion tokens.

We provide an extensive Datasheet section following the best practices defined by [(Gebru et al., 2021)](https://arxiv.org/pdf/1803.09010).

<details>
<summary>Datasheet</summary>

#### Motivation

**For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description.**

The purpose of creating this dataset is to pre-train the Salamandra family of multilingual models with high performance in a large number of
European languages (35) and code (including 92 different programming languages). In addition, we aim to represent especially the co-official
languages of Spain: Spanish, Catalan, Galician, and Basque. This is the reason why we carry out an oversampling of these languages.

We detected that there is a great lack of massive multilingual data, especially in minority languages (Ostendorff & Rehm, 2023), so part of
our efforts in the creation of this pre-training dataset have resulted in the contribution to large projects such as the Community OSCAR
(Brack et al., 2024), which includes 151 languages and 40T words, or CATalog (Palomar-Giner et al., 2024), the largest open dataset in
Catalan in the world.

**Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?**

The dataset has been created by the Language Technologies unit (LangTech) of the Barcelona Supercomputing Center - Centro Nacional de
Supercomputación (BSC-CNS), which aims to advance the field of natural language processing through cutting-edge research and development
and the use of HPC. In particular, it was created by the unit's data team, the main contributors being Javier Saiz, Ferran Espuña, and
Jorge Palomar.

However, the creation of the dataset would not have been possible without the collaboration of a large number of collaborators, partners,
and public institutions, which can be found in detail in the acknowledgements.

**Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number.**

This work has been promoted and financed by the Government of Catalonia through the [Aina project](https://projecteaina.cat/).

#### Composition

**What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.**

The dataset consists entirely of text documents in various languages. Specifically, data was mainly sourced from the following databases and
repositories:
- **Common Crawl:** Repository that holds website data and is run by the Common Crawl non-profit organization. It is updated monthly and is
distributed under the CC0 1.0 public domain license.
- **GitHub:** Community platform that allows developers to create, store, manage, and share their code. Repositories are crawled and then
distributed with their original licenses, which may vary from permissive to non-commercial licenses.
- **Wikimedia:** Database that holds the collection databases managed by the Wikimedia Foundation, including Wikipedia, Wikibooks, Wikinews,
Wikiquote, Wikisource, and Wikivoyage. It is updated monthly and is distributed under Creative Commons Attribution-ShareAlike License 4.0.
- **EurLex:** Repository that holds the collection of legal documents from the European Union, available in all of the EU’s 24 official
languages and run by the Publications Office of the European Union. It is updated daily and is distributed under the Creative Commons
Attribution 4.0 International license.
- **Other repositories:** Specific repositories were crawled under permission for domain-specific corpora, which include academic, legal,
and newspaper repositories.

We provide a complete list of dataset sources at the end of this section.

**How many instances are there in total (of each type, if appropriate)?**

The dataset contains a diverse range of instances across multiple languages, with notable adjustments for certain languages. English
represents the largest portion, accounting for 39.08% of the total data. Spanish was upsampled by a factor of 2, bringing its share to 16.59%,
while Catalan (1.84%), Basque (0.26%), and Galician (0.36%) were also upsampled by 2. On the other hand, code-related data was downsampled
by half, making up 6.42% of the total. Other prominent languages include French (6.59%), Russian (5.39%), German (4.25%), and Hungarian
(3.93%), with several additional languages contributing between 1% and 2%, and smaller portions represented by a variety of others.

**Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable).**

The dataset is a sample from multiple sources, with different weights based on the primary language of the content: Spanish, Catalan,
Basque, and Galician content was upsampled by a factor of two, while programming languages were downsampled by a factor of half. Other
sources were sampled in proportion to their occurrence.

**What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description.**

Each instance consists of a text document processed for deduplication, language identification, and source-specific filtering. Some
documents required optical character recognition (OCR) to extract text from non-text formats such as PDFs.

**Is there a label or target associated with each instance? If so, please provide a description.**

Each instance is labeled with a unique identifier, the primary language of the content, and the URL for web-sourced instances. Additional
labels were automatically assigned to detect specific types of content —harmful or toxic content— and to assign preliminary indicators of
undesired qualities —very short documents, high density of symbols, etc.— which were used for filtering instances.

**Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text.**

No significant information is missing from the instances.

**Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit.**

Instances are related through shared metadata, such as source and language identifiers.

**Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them.**

The dataset is split randomly into training, validation, and test sets.

**Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description.**

Despite removing duplicated instances within each source, redundancy remains at the paragraph and sentence levels, particularly in
web-sourced instances where SEO techniques and templates contribute to repeated textual patterns. Some instances may also be duplicated
across sources due to format variations.

**Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a dataset consumer? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate.**

The dataset is self-contained and does not rely on external resources.

**Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor–patient confidentiality, data that includes the content of individuals’ non-public communications)? If so, please provide a description.**

The dataset does not contain confidential data.

**Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. If the dataset does not relate to people, you may skip the remaining questions in this section.**

The dataset includes web-crawled content, which may overrepresent pornographic material across languages (Kreutzer et al., 2022). Although
pre-processing techniques were applied to mitigate offensive content, the heterogeneity and scale of web-sourced data make exhaustive
filtering challenging. It is therefore next to impossible to identify all adult content without resorting to excessive filtering, which may
negatively influence certain demographic groups (Dodge et al., 2021).

**Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset.**

The dataset does not explicitly identify any subpopulations.

**Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how.**

Web-sourced instances in the dataset may contain personally identifiable information (PII) that is publicly available on the Web, such as
names, IP addresses, email addresses, and phone numbers. While it would be possible to indirectly identify individuals through the
combination of multiple data points, the nature and scale of web data makes it difficult to parse such information. In any case, efforts are
made to filter or anonymize sensitive data during pre-processing, but some identifiable information may remain in the dataset.

**Does the dataset contain data that might be considered sensitive in any way? If so, please provide a description.**

Given that the dataset includes web-sourced content and other publicly available documents, instances may inadvertently reveal financial
information, health-related details, or forms of government identification, such as social security numbers (Subramani et al., 2023),
especially if the content originates from less-regulated sources or user-generated platforms.

#### Collection Process

**How was the data collected?**

This dataset is constituted by combining several sources, whose acquisition methods can be classified into three groups:
- Web-sourced datasets with some preprocessing available under permissive license (e.g., Common Crawl).
- Domain-specific or language-specific raw crawls (e.g., Spanish Crawling).
- Manually curated data obtained through collaborators, data providers (by means of legal assignment agreements) or open source projects
(e.g., CATalog).

**What mechanisms or procedures were used to collect the data? How were these mechanisms or procedures validated?**

According to the three groups previously defined, these are the mechanisms used in each of them:
- Open direct download. Validation: data integrity tests.
- Ad-hoc scrapers or crawlers. Validation: software unit and data integrity tests.
- Direct download via FTP, SFTP, API or S3. Validation: data integrity tests.

**If the dataset is a sample from a larger set, what was the sampling strategy?**

The sampling strategy was to use the whole dataset resulting from the filtering explained in the ‘preprocessing/cleaning/labelling’ section,
with the particularity that an upsampling of 2 (i.e. twice the probability of sampling a document) was performed for the co-official
languages of Spain (Spanish, Catalan, Galician, Basque), and a downsampling of 1/2 was applied for code (half the probability of sampling a
code document, evenly distributed among all programming languages).

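As an illustration only (not the production pipeline), this weighting scheme can be expressed as follows; the base corpus shares used here are made-up placeholders:

```python
# Illustrative sketch of the document-level sampling weights described above.
# Base shares are hypothetical; co-official languages of Spain get a 2x factor,
# code gets a 0.5x factor, and every other source keeps a factor of 1.
base_share = {"en": 0.40, "es": 0.08, "ca": 0.01, "gl": 0.002, "eu": 0.001, "code": 0.13, "fr": 0.07}
factor = {"es": 2.0, "ca": 2.0, "gl": 2.0, "eu": 2.0, "code": 0.5}

weighted = {lang: share * factor.get(lang, 1.0) for lang, share in base_share.items()}
total = sum(weighted.values())
sampling_prob = {lang: w / total for lang, w in weighted.items()}

for lang, p in sorted(sampling_prob.items(), key=lambda kv: -kv[1]):
    print(f"{lang}: {p:.2%}")
```
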
**Who was involved in the data collection process and how were they compensated?**

This data is generally extracted, filtered and sampled by automated processes. The code required to run these processes has been developed
entirely by members of the LangTech data team, or otherwise obtained from open-source software. Furthermore, there has been no monetary
consideration for acquiring data from suppliers.

**Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances? If not, please describe the timeframe in which the data associated with the instances was created.**

Data were acquired and processed from April 2023 to April 2024. However, as mentioned, much data has been obtained from open projects such
as Common Crawl, which contains data from 2014, so it is the end date (04/2024) rather than the start date that is important.

**Were any ethical review processes conducted? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation.**

No particular ethical review process has been carried out as the data is mostly open and not particularly sensitive. However, we have an
internal evaluation team and a bias team to monitor ethical issues. In addition, we work closely with ‘Observatori d'Ètica en Intel·ligència
Artificial’ (OEIAC) and ‘Agencia Española de Supervisión de la Inteligencia Artificial’ (AESIA) to audit the processes we carry out from an
ethical and legal point of view, respectively.

#### Preprocessing

**Was any preprocessing/cleaning/labeling of the data done? If so, please provide a description. If not, you may skip the remaining questions in this section.**

Instances of text documents were not altered, but web-sourced documents were filtered based on specific criteria along two dimensions:
- Quality: documents with a quality score lower than 0.8 were filtered out. The score, obtained through CURATE (Palomar-Giner et al., 2024),
is based on undesired qualities such as a low number of lines, very short sentences, the presence of long footers and headers, and a high
percentage of punctuation.
- Harmful or adult content: documents originating from Colossal OSCAR were filtered using LLM-Datasets (Ostendorff et al., 2024) based on
the perplexity from a language model (‘harmful_pp’ field) provided by the Ungoliant pipeline (Abadji et al., 2021).

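A minimal sketch of how such threshold-based filtering could look is shown below; the field names, the harmfulness threshold, and its direction are assumptions made for the example, not the actual CURATE or Ungoliant output format:

```python
# Illustrative document filter following the two criteria above (not the real pipeline).
QUALITY_THRESHOLD = 0.8        # documents below this quality score are dropped
HARMFUL_PP_THRESHOLD = 1000.0  # hypothetical cut-off; assumes lower perplexity under the
                               # harmful-content language model indicates likely adult content

def keep_document(doc: dict) -> bool:
    """Return True if a document passes both the quality and the harmfulness filters."""
    if doc.get("quality_score", 0.0) < QUALITY_THRESHOLD:
        return False
    if doc.get("harmful_pp") is not None and doc["harmful_pp"] < HARMFUL_PP_THRESHOLD:
        return False
    return True

corpus = [
    {"text": "A well-formed document ...", "quality_score": 0.93, "harmful_pp": 4500.0},
    {"text": "spam spam spam", "quality_score": 0.41, "harmful_pp": 3200.0},
]
filtered = [doc for doc in corpus if keep_document(doc)]
print(len(filtered), "of", len(corpus), "documents kept")
```
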
**Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data? If so, please provide a link or other access point to the “raw” data.**

The original raw data was not kept.

**Is the software that was used to preprocess/clean/label the data available? If so, please provide a link or other access point.**

Yes, the preprocessing and filtering software is open-sourced. The [CURATE](https://github.com/langtech-bsc/CURATE) pipeline was used for Spanish Crawling and CATalog,
and the [Ungoliant](https://github.com/oscar-project/ungoliant) pipeline was used for the OSCAR project.

#### Uses

**Has the dataset been used for any tasks already? If so, please provide a description.**

The dataset was used to pre-train the Salamandra model family.

**What (other) tasks could the dataset be used for?**

The data can be used primarily to pre-train other language models, which can then be used for a wide range of use cases. The dataset could
also be used for other tasks such as fine-tuning language models, cross-lingual NLP tasks, machine translation, domain-specific text
generation, and language-specific data analysis.

**Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? Is there anything a dataset consumer could do to mitigate these risks or harms?**

Web-crawled content is over-represented with standard language varieties, impacting language model performance for minority languages.
Language diversity in data is crucial to avoid bias, especially in encoding non-standard dialects, preventing the exclusion of demographic
groups. Moreover, despite legal uncertainties in web-scraped data, we prioritize permissive licenses and privacy protection measures,
acknowledging the challenges posed by personally identifiable information (PII) within large-scale datasets. Our ongoing efforts aim to
address privacy concerns and contribute to a more inclusive linguistic dataset.

**Are there tasks for which the dataset should not be used?**

-

#### Distribution

**Will the dataset be distributed to third parties outside of the entity on behalf of which the dataset was created? If so, please provide a description.**

The dataset will not be released or distributed to third parties. Questions related to distribution are therefore omitted in this section.

#### Maintenance

**Who will be supporting/hosting/maintaining the dataset?**

The dataset will be hosted by the Language Technologies unit (LangTech) of the Barcelona Supercomputing Center (BSC). The team will ensure
regular updates and monitor the dataset for any issues related to content integrity, legal compliance, and bias for the sources they are
responsible for.

**How can the owner/curator/manager of the dataset be contacted?**

The data owner may be contacted with the email address [email protected].

**Will the dataset be updated?**

The dataset will not be updated.

**If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances? If so, please describe these limits and explain how they will be enforced.**

The dataset does not keep sensitive data that could allow direct identification of individuals, apart from the data that is publicly
available in web-sourced content. Due to the sheer volume and diversity of web data, it is not feasible to notify individuals or manage data
retention on an individual basis. However, efforts are made to mitigate the risks associated with sensitive information through
pre-processing and filtering to remove identifiable or harmful content. Despite these measures, vigilance is maintained to address potential
privacy and ethical issues.

**Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to dataset consumers.**

Since the dataset will not be updated, only the final version will be kept.

**If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?**

The dataset does not allow for external contributions.

</details>

### Finetuning Data

This instruction-tuned variant has been trained with a mixture of 276k English, Spanish, and Catalan multi-turn instructions gathered from open datasets:
| Dataset | ca | en | es |
|-----------------------|:------:|:------:|:------:|
| alpaca-cleaned | - | 50,000 | - |
| aya-dataset | - | 3,944 | 3,854 |
| CoQCat | 4,797 | - | - |
| databricks-dolly-15k | - | 15,011 | - |
| dolly-3k-ca | 3,232 | - | - |
| flores-instr | 1,994 | 1,994 | 3,988 |
| MentorCA | 7,122 | - | - |
| MentorES | - | - | 7,122 |
| no-robots | - | 9,499 | - |
| oasst-ca | 2,518 | - | - |
| oasst2 | 750 | 31,086 | 15,438 |
| open-orca | - | 50,000 | - |
| RagMultilingual | 16,043 | 14,997 | 11,263 |
| tower-blocks | - | 19,895 | 2,000 |
| **Total** | **36,456** | **196,426** | **43,665** |


---

## Ethical Considerations and Limitations

We examine the presence of undesired societal and cognitive biases in this model using different benchmarks. For societal biases, we test performance using the BBQ dataset (Parrish et al., 2022) in the original English and the Regard dataset (Sheng et al., 2019). While we report moderate accuracies (between 0.5 and 0.6 depending on the social groups) in disambiguated settings, the model performs very poorly in ambiguous settings. Taken together, these results suggest the pervasiveness of social biases that may have an effect on task performance.

Our cognitive bias analysis focuses on positional effects in 0-shot settings and majority class bias in few-shot settings. For positional effects, we leverage the ARC Multiple Choice Question dataset (Clark et al., 2018). We observe significant but weak primacy effects, whereby the model shows a preference for answers towards the beginning of the list of provided answers. We measure majority class effects in few-shot settings using SST-2 (Socher et al., 2013). We again detect significant effects, with a small effect size. This suggests that the model is relatively robust against the examined cognitive biases.

We highlight that our analyses of these biases are by no means exhaustive and are limited by the relative scarcity of adequate resources in all languages present in the training data. We aim to gradually extend and expand our analyses in future work.

These results can be expected from a model that has undergone only preliminary instruction tuning. These tests are performed in order to show the biases the model may contain. We urge developers to take them into account and perform safety testing and tuning tailored to their specific applications of the model.

---

## Additional information

### Author
The Language Technologies Unit from Barcelona Supercomputing Center.

### Contact
For further information, please send an email to <[email protected]>.

### Copyright
Copyright (c) 2024 by Language Technologies Unit, Barcelona Supercomputing Center.

### Funding
This work has been promoted and financed by the Government of Catalonia through the [Aina Project](https://projecteaina.cat/).

This work is funded by the _Ministerio para la Transformación Digital y de la Función Pública_ - Funded by EU – NextGenerationEU
within the framework of the [ILENIA Project](https://proyectoilenia.es/) with reference 2022/TL22/00215337.

### Acknowledgements

This project has benefited from the contributions of numerous teams and institutions, mainly through data contributions, knowledge transfer or technical support.

In Catalonia, many institutions have been involved in the project. Our thanks to Òmnium Cultural, Parlament de Catalunya, Institut d'Estudis Aranesos, Racó Català, Vilaweb, ACN, Nació Digital, El món and Aquí Berguedà.

At the national level, we are especially grateful to our ILENIA project partners: CENID, HiTZ and CiTIUS for their participation. We also extend our genuine gratitude to the Spanish Senate and Congress, Fundación Dialnet, Fundación Elcano and the ‘Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)’ of the University of Las Palmas de Gran Canaria.

At the international level, we thank the Welsh government, DFKI, the Occiglot project, especially Malte Ostendorff, and The Common Crawl Foundation, especially Pedro Ortiz, for their collaboration. We would also like to give special thanks to the NVIDIA team, with whom we have met regularly, especially Ignacio Sarasua, Adam Henryk Grzywaczewski, Oleg Sudakov, Sergio Perez, Miguel Martinez, Felipes Soares and Meriem Bendris. Their constant support has been especially appreciated throughout the entire process.

Their valuable efforts have been instrumental in the development of this work.

### Disclaimer
Be aware that the model may contain biases or other unintended distortions.
When third parties deploy systems or provide services based on this model, or use the model themselves,
they bear the responsibility for mitigating any associated risks and ensuring compliance with applicable regulations,
including those governing the use of Artificial Intelligence.

The Barcelona Supercomputing Center, as the owner and creator of the model, shall not be held liable for any outcomes resulting from third-party use.

### Citation

Technical report and paper coming soon.

### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Model Index
|Model|Base|Instruct|
|:---:|:---:|:---:|
|2B| [Link](https://huggingface.co/BSC-LT/salamandra-2b) | [Link](https://huggingface.co/BSC-LT/salamandra-2b-instruct) |
|7B| [Link](https://huggingface.co/BSC-LT/salamandra-7b) | [Link](https://huggingface.co/BSC-LT/salamandra-7b-instruct) |
|40B| WiP | WiP |