---
language:
- en
- es
- ca
tags:
- spanish
- catalan
- falcon-7b
datasets:
- BSC-LT/open_data_26B_tokens_balanced_es_ca
metrics:
- ppl
model-index:
- name: falcon_7b_balanced_tokenizer_fp16_CPT_open_data_26B_tokens_balanced_es_ca
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: BSC-LT/open_data_26B_tokens_balanced_es_ca
      type: BSC-LT/open_data_26B_tokens_balanced_es_ca
      config: default
      split: validation
      args: default
    metrics:
    - name: Perplexity
      type: ppl
      value: 8.59
widget:
- text: |-
    Respòn a la pregunta següent.
    Pregunta: "Qui viu a França?"
    Resposta: "A França viuen els francesos."
    ----
    Respòn a la pregunta següent.
    Pregunta: "Quina és la capital de Suècia?"
    Resposta: "La capital de Suècia és Estocolm."
    ----
    Respòn a la pregunta següent.
    Pregunta: "Quina beguda es consumeix als matins per despertar-se?"
    Resposta: "La majoria de gent consumeix cafè per despertar-se."
    ----
    Respòn a la pregunta següent.
    Pregunta: "Qui és Leo Messi?"
    Resposta:
  example_title: Pregunta-Resposta
- text: |-
    Extrae las entidades nombradas del siguiente texto:
    Texto: "Me llamo Wolfgang y vivo en Berlin"
    Entidades: Wolfgang:PER, Berlin:LOC
    ----
    Extrae las entidades nombradas del siguiente texto:
    Texto: "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center"
    Entidades: parc güell:LOC, barcelona supercomputing center:LOC
    ----
    Extrae las entidades nombradas del siguiente texto:
    Texto: "Maria y Miguel no tienen ningún problema contigo"
    Entidades: Maria:PER, Miguel:PER
    ----
    Extrae las entidades nombradas del siguiente texto:
    Texto: "Damián se cortó el pelo"
    Entidades: Damián:PER
    ----
    Extrae las entidades nombradas del siguiente texto:
    Texto: "Lo mejor de Barcelona és el bar de mi amigo Pablo"
    Entidades: Pablo:PER, Barcelona:LOC
    ----
    Extrae las entidades nombradas del siguiente texto:
    Texto: "Carlos comparte piso con Marc"
    Entidades:
  example_title: Entidades-Nombradas
license: apache-2.0
pipeline_tag: text-generation
---

# falcon_7b_balanced_tokenizer_fp16_CPT_open_data_26B_tokens_balanced_es_ca

## Overview

This model is a new result toward the long-standing question: "What is the best strategy for training a model in my language (other than English)?"

This model adapts [falcon-7b](https://huggingface.co/tiiuae/falcon-7b) to two new target languages, Spanish and Catalan, by swapping the tokenizer and adjusting the embedding layer before continuing pre-training on 26B tokens in the target languages.

## Language Adaptation

When adapting a model from English to other languages, the tokenizer plays a crucial role.

If the tokenizer's training data does not include the target language, the resulting model needs many more tokens to encode the same text, wasting context length and compute.
We address this by training a new tokenizer on the target languages (Spanish and Catalan) and adapting the embedding layer to it.

### New Tokenizer
We trained a new BPE tokenizer for Catalan and Spanish (with equal representation) and mixed a small amount of English into the data (since English is present in the original model's training data).
The resulting tokenizer training data has the following language distribution:

|Language|%|
|---|---|
|En|16.84%|
|Es|41.38%|
|Ca|41.79%|

*P.S.: This is intended to match the distribution of the model's training data (presented in the Continual Pre-Training section).*

This drastically reduces the number of tokens required to tokenize text in the target languages (to roughly 70% of the original count), while English tokenization shows a small increase (to roughly 115%).
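
As a rough sanity check of these numbers, one can tokenize the same text with both tokenizers and compare the token counts. The snippet below is a minimal sketch; the adapted tokenizer's repository id is assumed from this card's title and may need to be adjusted.

```python
from transformers import AutoTokenizer

# Original (English-centric) Falcon tokenizer vs. the adapted es/ca tokenizer.
# NOTE: the second repo id is an assumption based on this card's title.
old_tok = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
new_tok = AutoTokenizer.from_pretrained(
    "BSC-LT/falcon_7b_balanced_tokenizer_fp16_CPT_open_data_26B_tokens_balanced_es_ca"
)

samples = {
    "es": "El modelo fue entrenado con textos en español y catalán.",
    "ca": "El model es va entrenar amb textos en espanyol i català.",
    "en": "The model was trained on Spanish and Catalan text.",
}

for lang, text in samples.items():
    n_old = len(old_tok.tokenize(text))
    n_new = len(new_tok.tokenize(text))
    # A ratio below 1.0 means the new tokenizer needs fewer tokens for the same text.
    print(f"{lang}: old={n_old} new={n_new} ratio={n_new / n_old:.2f}")
```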

### Embedding Layer Initialization
To take full advantage of the English pre-training of the original Falcon model, we re-use the original model's embedding weights for the tokens shared between the two tokenizers (the new and the old one). The remaining embedding weights are initialized to the mean of the original embedding matrix.
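
A minimal sketch of this initialization is given below, assuming the new tokenizer is available locally (the path is a placeholder) and using the standard 🤗 Transformers embedding accessors; the exact procedure used for this model may differ in detail.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

old_tok = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
new_tok = AutoTokenizer.from_pretrained("path/to/new_es_ca_tokenizer")  # placeholder path
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", torch_dtype=torch.float32)

old_emb = model.get_input_embeddings().weight.data   # (old_vocab_size, hidden_size)
mean_emb = old_emb.mean(dim=0)                        # mean of the original embedding matrix

# Start every new-vocabulary row from the mean embedding ...
new_emb = mean_emb.unsqueeze(0).repeat(len(new_tok), 1)

# ... then copy the pretrained embedding for tokens shared by both vocabularies.
old_vocab = old_tok.get_vocab()                       # token string -> old id
for token, new_id in new_tok.get_vocab().items():
    old_id = old_vocab.get(token)
    if old_id is not None:
        new_emb[new_id] = old_emb[old_id]

model.resize_token_embeddings(len(new_tok))
model.get_input_embeddings().weight.data.copy_(new_emb)
# If input and output embeddings are tied, this also covers the LM head;
# otherwise the output embedding matrix would need the same treatment.
model.tie_weights()
```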

### Continual Pre-Training
Once the model has been initialized this way, we continue its pre-training on the two target languages, Catalan and Spanish, mixing in a small amount of English to avoid catastrophic forgetting. The datasets used to train this model are listed below:

| Dataset             | Language | Tokens (per epoch) | Epochs       |
|---------------------|----------|--------------------|--------------|
| Wikipedia           | en       |           2169.97M |  1.428144485 |
| Lyrics              | en       |            100.60M | 0.7140722425 |
| C4_es               | es       |          53709.80M | 0.1049686196 |
| Biomedical          | es       |            455.03M | 0.7140722425 |
| Legal               | es       |            995.70M | 0.7140722425 |
| Wikipedia           | es       |            693.60M |  1.428144485 |
| Lyrics              | es       |            125.93M | 0.7140722425 |
| Gutenberg           | es       |             53.18M | 0.7140722425 |
| C4_ca               | ca       |           2826.00M |  2.142216727 |
| Biomedical          | ca       |             11.80M |  1.428144485 |
| RacoCatalá Noticias | ca       |             17.16M |  2.142216727 |
| RacoCatalá Forums   | ca       |            333.73M |  2.142216727 |
| CaWaC               | ca       |             57.79M |  2.142216727 |
| Wikipedia           | ca       |            228.01M |  3.570361212 |
| Vilaweb             | ca       |             50.34M |  2.142216727 |
| Lyrics              | ca       |              0.50M |  2.142216727 |

The resulting dataset has the following language distribution:

|Language|%|
|---|---|
|En|16.84%|
|Es|41.38%|
|Ca|41.79%|
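
The shares above follow directly from the dataset table: each dataset contributes roughly tokens-per-epoch × epochs, and the per-language sums are divided by the total. A short sketch of that arithmetic, with values copied from the table:

```python
# (language, tokens per epoch in millions, epochs), copied from the dataset table above
datasets = [
    ("en", 2169.97, 1.428144485), ("en", 100.60, 0.7140722425),
    ("es", 53709.80, 0.1049686196), ("es", 455.03, 0.7140722425),
    ("es", 995.70, 0.7140722425), ("es", 693.60, 1.428144485),
    ("es", 125.93, 0.7140722425), ("es", 53.18, 0.7140722425),
    ("ca", 2826.00, 2.142216727), ("ca", 11.80, 1.428144485),
    ("ca", 17.16, 2.142216727), ("ca", 333.73, 2.142216727),
    ("ca", 57.79, 2.142216727), ("ca", 228.01, 3.570361212),
    ("ca", 50.34, 2.142216727), ("ca", 0.50, 2.142216727),
]

totals = {}
for lang, tokens_m, epochs in datasets:
    totals[lang] = totals.get(lang, 0.0) + tokens_m * epochs

grand_total = sum(totals.values())
for lang, t in totals.items():
    # Reproduces the reported shares: en 16.84%, es 41.38%, ca 41.79%
    print(f"{lang}: {100 * t / grand_total:.2f}%")
```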

## Model description

More information needed

## Intended uses & limitations

The model is ready to use only for causal language modeling, i.e., text-generation tasks.
However, it is primarily intended to be fine-tuned on generative downstream tasks.
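
A minimal text-generation example with 🤗 Transformers is sketched below; the repository id is assumed from this card's title and the generation settings are illustrative.

```python
from transformers import pipeline

# Repo id assumed from this model card's title (adjust if the actual id differs).
model_id = "BSC-LT/falcon_7b_balanced_tokenizer_fp16_CPT_open_data_26B_tokens_balanced_es_ca"

generator = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,  # older Falcon checkpoints may rely on custom modeling code
)

prompt = "La capital de Suècia és"
outputs = generator(prompt, max_new_tokens=30, do_sample=True, top_p=0.9)
print(outputs[0]["generated_text"])
```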


## Limitations and biases
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. 
However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. 
We intend to conduct research in these areas in the future; if completed, this model card will be updated.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
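
A rough equivalent of this configuration with the 🤗 `Trainer` is sketched below (model and dataset loading omitted; the output path is illustrative). Multi-GPU execution over 8 devices is handled by the launcher (e.g. `torchrun` or `accelerate`), which yields the total batch size of 8.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="falcon_7b_cpt_es_ca",   # illustrative output path
    learning_rate=5e-5,
    per_device_train_batch_size=1,      # 1 per device x 8 GPUs = total batch size 8
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    fp16=True,                          # the card's title suggests fp16 training
)
```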

### Training results

![Training Loss](https://huggingface.co/BSC-LT/falcon_7b_CPT_open_data_26B_tokens_balanced_es_ca/blob/main/images/training_loss_condor.png?raw=true)


## Eval results

It achieves the following results on the evaluation set:
- Loss: 2.1504
- Accuracy: 0.5258
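
The perplexity of 8.59 reported in the metadata follows directly from this loss, since perplexity is the exponential of the cross-entropy loss:

```python
import math

eval_loss = 2.1504
print(math.exp(eval_loss))  # ≈ 8.59, the perplexity reported in the model-index metadata
```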

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3