---
language:
- en
- es
- ca
tags:
- spanish
- catalan
- falcon-7b
datasets:
- BSC-LT/open_data_26B_tokens_balanced_es_ca
metrics:
- ppl
model-index:
- name: falcon_7b_balanced_tokenizer_fp16_CPT_open_data_26B_tokens_balanced_es_ca
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: BSC-LT/open_data_26B_tokens_balanced_es_ca
      type: BSC-LT/open_data_26B_tokens_balanced_es_ca
      config: default
      split: validation
      args: default
    metrics:
    - name: Perplexity
      type: ppl
      value: 8.59
widget:
- text: |-
    Respòn a la pregunta següent.
    Pregunta: "Qui viu a França?"
    Resposta: "A França viuen els francesos."
    ----
    Respòn a la pregunta següent.
    Pregunta: "Quina és la capital de Suècia?"
    Resposta: "La capital de Suècia és Estocolm."
    ----
    Respòn a la pregunta següent.
    Pregunta: "Quina beguda es consumeix als matins per despertar-se?"
    Resposta: "La majoria de gent consumeix cafè per despertar-se."
    ----
    Respòn a la pregunta següent.
    Pregunta: "Qui és Leo Messi?"
    Resposta:
  example_title: Pregunta-Resposta
- text: |-
    Extrae las entidades nombradas del siguiente texto:
    Texto: "Me llamo Wolfgang y vivo en Berlin"
    Entidades: Wolfgang:PER, Berlin:LOC
    ----
    Extrae las entidades nombradas del siguiente texto:
    Texto: "Hoy voy a visitar el parc güell tras salir del barcelona supercomputing center"
    Entidades: parc güell:LOC, barcelona supercomputing center:LOC
    ----
    Extrae las entidades nombradas del siguiente texto:
    Texto: "Maria y Miguel no tienen ningún problema contigo"
    Entidades: Maria:PER, Miguel:PER
    ----
    Extrae las entidades nombradas del siguiente texto:
    Texto: "Damián se cortó el pelo"
    Entidades: Damián:PER
    ----
    Extrae las entidades nombradas del siguiente texto:
    Texto: "Lo mejor de Barcelona és el bar de mi amigo Pablo"
    Entidades: Pablo:PER, Barcelona:LOC
    ----
    Extrae las entidades nombradas del siguiente texto:
    Texto: "Carlos comparte piso con Marc"
    Entidades:
  example_title: Entidades-Nombradas
license: apache-2.0
pipeline_tag: text-generation
---
# falcon_7b_balanced_tokenizer_fp16_CPT_open_data_26B_tokens_balanced_es_ca
## Overview
This model is a step toward answering the long-standing question: "What is the best strategy for training a model in my language (other than English)?"
It adapts [falcon-7b](https://huggingface.co/tiiuae/falcon-7b) to two new target languages, Spanish and Catalan, by swapping the tokenizer and adjusting the embedding layer before continuing pre-training on 26B tokens in the target languages.
## Language Adaptation
When adapting a model from English to other languages, the tokenizer plays a crucial role.
If the tokenizer was not trained on the target languages, the resulting model needs many more tokens to encode the same text.
We address this by training a new tokenizer for the target languages (Spanish and Catalan) and adapting the embedding layer to it.
### New Tokenizer
We trained a new BPE tokenizer for Catalan and Spanish (with equal representation), mixing in a small amount of English, since English is present in the original model's training data.
The resulting data has the following language distribution:
|Language|%|
|---|---|
|En|16.84%|
|Es|41.38%|
|Ca|41.79%|
*Note: this distribution was chosen to match the continual pre-training data (see the Continual Pre-Training section below).*
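Such a tokenizer can be trained with the `tokenizers` library, roughly as in the sketch below. This is an illustration only: the vocabulary size (borrowed from falcon-7b), the special token, and the corpus file paths are assumptions, not the exact training setup.

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Placeholder paths: text files sampled with the language distribution shown above.
corpus_files = ["data/es.txt", "data/ca.txt", "data/en.txt"]

# A byte-level BPE tokenizer trained from scratch on the mixed es/ca/en corpus.
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)

trainer = trainers.BpeTrainer(
    vocab_size=65024,                  # falcon-7b's vocabulary size, kept as an assumption
    special_tokens=["<|endoftext|>"],  # illustrative special token
    initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),
)
tokenizer.train(files=corpus_files, trainer=trainer)
tokenizer.save("es_ca_en_bpe_tokenizer.json")
```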
With the new tokenizer, text in the target languages requires drastically fewer tokens (roughly 70% of the count produced by the original tokenizer), while English text shows a small increase (roughly 115%).
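A comparison of this kind can be reproduced along the lines below (a minimal sketch; the repository ID is a placeholder and the sample sentences are illustrative, not the evaluation data behind the figures above):

```python
from transformers import AutoTokenizer

# The original English-centric tokenizer and the adapted Spanish/Catalan one.
old_tok = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
new_tok = AutoTokenizer.from_pretrained("<this-repository-id>")  # placeholder

samples = {
    "es": "El modelo se ha adaptado al español y al catalán.",
    "ca": "El model s'ha adaptat al castellà i al català.",
    "en": "The model has been adapted to Spanish and Catalan.",
}

for lang, text in samples.items():
    n_old = len(old_tok(text)["input_ids"])
    n_new = len(new_tok(text)["input_ids"])
    # A ratio below 100% means the new tokenizer needs fewer tokens for the same text.
    print(f"{lang}: {n_new}/{n_old} tokens ({n_new / n_old:.0%} of the original)")
```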
### Embedding Layer Initialization
In order to take full advantage of the original Falcon model's English pre-training, we re-use its embedding weights for the tokens shared between the two tokenizers (the new and the old one). The embeddings of the remaining (new) tokens are initialized to the mean of the original embedding matrix.
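The idea can be sketched roughly as follows. This is a minimal illustration, not the exact initialization code used for this model; the tokenizer path is a placeholder, and an untied output head would need the same treatment.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", trust_remote_code=True)
old_tok = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
new_tok = AutoTokenizer.from_pretrained("<path-to-new-tokenizer>")  # placeholder

old_emb = model.get_input_embeddings().weight.data
# Tokens absent from the old vocabulary start from the mean of the original embeddings.
mean_emb = old_emb.mean(dim=0)
new_emb = mean_emb.repeat(len(new_tok), 1)

# Tokens shared between the two vocabularies keep their original embedding.
old_vocab = old_tok.get_vocab()
for token, new_id in new_tok.get_vocab().items():
    old_id = old_vocab.get(token)
    if old_id is not None:
        new_emb[new_id] = old_emb[old_id]

model.resize_token_embeddings(len(new_tok))
model.get_input_embeddings().weight.data.copy_(new_emb)
```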
### Continual Pre-Training
Once the model has been initialized in this way, we continue its pre-training in the two target languages, Catalan and Spanish. We also mix in a small amount of English to avoid catastrophic forgetting. The datasets used to train this model are listed below:
| Dataset | Language | Tokens (per epoch) | Epochs |
|---------------------|----------|--------------------|--------------|
| Wikipedia | en | 2169.97M | 1.428144485 |
| Lyrics | en | 100.60M | 0.7140722425 |
| C4_es | es | 53709.80M | 0.1049686196 |
| Biomedical | es | 455.03M | 0.7140722425 |
| Legal | es | 995.70M | 0.7140722425 |
| Wikipedia | es | 693.60M | 1.428144485 |
| Lyrics | es | 125.93M | 0.7140722425 |
| Gutenberg | es | 53.18M | 0.7140722425 |
| C4_ca | ca | 2826.00M | 2.142216727 |
| Biomedical | ca | 11.80M | 1.428144485 |
| RacoCatalá Noticias | ca | 17.16M | 2.142216727 |
| RacoCatalá Forums | ca | 333.73M | 2.142216727 |
| CaWaC | ca | 57.79M | 2.142216727 |
| Wikipedia | ca | 228.01M | 3.570361212 |
| Vilaweb | ca | 50.34M | 2.142216727 |
| Lyrics | ca | 0.50M | 2.142216727 |
The resulting dataset has the following language distribution:
|Language|%|
|---|---|
|En|16.84%|
|Es|41.38%|
|Ca|41.79%|
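These percentages follow from weighting each dataset by its number of epochs (effective tokens = tokens per epoch × epochs). A small sketch of that calculation, with the values copied from the dataset table above:

```python
# (language, tokens per epoch in millions, epochs), copied from the table above
datasets = [
    ("en", 2169.97, 1.428144485), ("en", 100.60, 0.7140722425),
    ("es", 53709.80, 0.1049686196), ("es", 455.03, 0.7140722425),
    ("es", 995.70, 0.7140722425), ("es", 693.60, 1.428144485),
    ("es", 125.93, 0.7140722425), ("es", 53.18, 0.7140722425),
    ("ca", 2826.00, 2.142216727), ("ca", 11.80, 1.428144485),
    ("ca", 17.16, 2.142216727), ("ca", 333.73, 2.142216727),
    ("ca", 57.79, 2.142216727), ("ca", 228.01, 3.570361212),
    ("ca", 50.34, 2.142216727), ("ca", 0.50, 2.142216727),
]

effective = {}
for lang, tokens, epochs in datasets:
    effective[lang] = effective.get(lang, 0.0) + tokens * epochs

total = sum(effective.values())
for lang, toks in effective.items():
    print(f"{lang}: {toks / total:.2%}")  # -> en: 16.84%, es: 41.38%, ca: 41.79%
```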
## Model description
More information needed
## Intended uses & limitations
The model is ready to use only for causal language modeling, i.e. text-generation tasks.
However, it is intended to be further fine-tuned on generative downstream tasks.
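As an illustration, out-of-the-box text generation with the `transformers` library might look like the sketch below. The model ID is a placeholder for this repository, and the prompt and generation settings are arbitrary.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-repository-id>"  # placeholder: replace with this repository's model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

prompt = "La capital de Suècia és"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```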
## Limitations and biases
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model.
However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources.
We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
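These settings roughly correspond to a Hugging Face `TrainingArguments` configuration like the one below. This is a sketch for orientation only, not the actual training script; the output directory name and the fp16 flag (suggested by the model name) are assumptions.

```python
from transformers import TrainingArguments

# Launched on 8 GPUs (e.g. with torchrun), giving a total batch size of 8.
training_args = TrainingArguments(
    output_dir="falcon_7b_CPT_es_ca",  # illustrative name
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # assumed from the "fp16" tag in the model name
)
```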
### Training results
![Training Loss](https://huggingface.co/BSC-LT/falcon_7b_CPT_open_data_26B_tokens_balanced_es_ca/blob/main/images/training_loss_condor.png?raw=true)
## Eval results
It achieves the following results on the evaluation set:
- Loss: 2.1504
- Accuracy: 0.5258
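The perplexity of 8.59 reported in the model index above is consistent with being the exponential of this evaluation loss:

```python
import math

eval_loss = 2.1504
print(round(math.exp(eval_loss), 2))  # 8.59
```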
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3