Model Card for Latxa 7b
IMPORTANT: This model is outdated and made available publicly for reproducibility purposes only. Please use the most recent version from our Hugging Face collection.
Latxa is a collection of foundation models specifically tuned for Basque. Based on Meta's LLaMA 2 model family, these models were further trained on EusCrawl, a highly curated Basque corpus (Artetxe et al., 2022). Ranging from 7 to 70 billion parameters, these models are currently the biggest and best-performing LLMs built for Basque. This is the 7B repository; links to the other models can be found in the Latxa Collection.
Read more about Latxa on our website or on LinkedIn!
Model Details
Model Description
Latxa is a family of Large Language Models (LLM) based on Meta's LLaMA models. Current LLMs exhibit incredible performance for high-resource languages such as English, but, in the case of Basque and other low-resource languages, their performance is close to that of a random guesser. These limitations widen the gap between high- and low-resource languages when it comes to digital development. We present Latxa to overcome these limitations and promote the development of LLM-based technology and research for the Basque language. Latxa models follow the same architecture as their original counterparts and were further trained on EusCrawl v1 (Artetxe et al., 2022), a high-quality Basque corpus.
The models are released in three sizes: 7B, 13B and 70B.
- Developed by: HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
- Model type: Language model
- Language(s) (NLP): en, eu
- License: llama2
- Parent Model: meta-llama/Llama-2-7b
- Contact: [email protected]
Getting started
Use the code below to get started with the model.
from transformers import pipeline

# Load the 7B model and generate a short continuation with beam search.
pipe = pipeline("text-generation", model="HiTZ/latxa-7b-v1")
text = "Euskara adimen artifizialera iritsi da!"
pipe(text, max_new_tokens=50, num_beams=5)
>> [
{
'generated_text': 'Euskara adimen artifizialera iritsi da!\nEuskararen eta adimen artifizialaren arteko harremana aspaldikoa da,'
' baina azken urteotan aurrerapauso handiak eman dira arlo horretan'
}
]
Uses
Latxa models are intended to be used with Basque data; performance for any other language is not guaranteed. As with the original models, Latxa inherits the LLaMA-2 License, which allows for commercial and research use.
Direct Use
Latxa family models are pre-trained LLMs without any task-specific or instruction fine-tuning. That is, the model can either be prompted to perform a specific task or further fine-tuned for specific use cases.
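For example, the base model can be prompted in a few-shot fashion for a downstream task. Below is a minimal sketch using the transformers pipeline; the Basque example sentences and label words are purely illustrative and not taken from any benchmark:

from transformers import pipeline

pipe = pipeline("text-generation", model="HiTZ/latxa-7b-v1")

# Illustrative few-shot sentiment prompt; examples and labels are made up for demonstration.
prompt = (
    "Esaldia: Film hau zoragarria da.\nSentimendua: positiboa\n\n"
    "Esaldia: Zerbitzua oso txarra izan zen.\nSentimendua: negatiboa\n\n"
    "Esaldia: Liburu hau asko gustatu zait.\nSentimendua:"
)
out = pipe(prompt, max_new_tokens=5, do_sample=False)
print(out[0]["generated_text"])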
Out-of-Scope Use
The model was not fine-tuned to follow instructions or to work as a chat assistant; therefore, this kind of usage is neither tested nor recommended.
Bias, Risks, and Limitations
To mitigate potentially disturbing or harmful content, Latxa has been trained on carefully selected and processed data, which comes mainly from local media, national and regional newspapers, encyclopedias and blogs (see EusCrawl below). Still, the model is based on the LLaMA models and can potentially carry the same biases, risks and limitations.
Please see LLaMA's Ethical Considerations and Limitations section for further information.
Training Details
Training Data
The models were trained on EusCrawl v1, a high-quality corpus for Basque comprising 1.72M documents, 288M words, totalling 2.1GiB of uncompressed text. EusCrawl was built using ad-hoc scrapers to extract text from 33 Basque websites with high-quality content, resulting in cleaner text compared to general-purpose approaches.
See more details in the EusCrawl dataset card.
Additionally, 100K English documents randomly selected from the Pile dataset were included to avoid catastrophic forgetting.
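A minimal sketch of how such a mixture could be assembled with the datasets library; the dataset identifiers and the sampling probability below are assumptions inferred from the document counts above, not the exact recipe used for training:

from datasets import load_dataset, interleave_datasets

# Dataset IDs are assumptions; see the EusCrawl and Pile dataset cards.
euscrawl = load_dataset("HiTZ/euscrawl", split="train", streaming=True)
pile_en = load_dataset("monology/pile-uncopyrighted", split="train", streaming=True)

# ~100K English documents alongside ~1.72M Basque documents is roughly a 5% share.
# In practice both schemas would first be aligned to a single "text" column.
mixed = interleave_datasets([euscrawl, pile_en], probabilities=[0.95, 0.05], seed=42)

for example in mixed.take(3):
    print(list(example.keys()))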
Training Procedure
The models were trained using the GPT-NeoX library on the CINECA HPC computing cluster. All models were trained with an effective batch size of approximately 2M tokens for 1000 to 2000 steps, as summarized in the table below.
Model | Steps | Sequence length | Effective batch size | Total tokens | GPU hours |
Latxa 7B | 2000 | 4096 | 2M tokens/step | 4B | 359.2h |
Latxa 13B | 1000 | 4096 | 2M tokens/step | 2B | 468.8h |
Latxa 70B | 1680 | 4096 | 2M tokens/step | 3.4B | 6475.52h* |
* indicates the time for the entire training process (2000 steps); however, the weights of step 1680 are shared, as it is the best checkpoint according to validation loss.
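As a sanity check, the total-token column follows directly from steps × effective batch size; a quick sketch of that arithmetic:

# Effective batch size of roughly 2M tokens per step, sequence length 4096.
tokens_per_step = 2_000_000

for name, steps in [("Latxa 7B", 2000), ("Latxa 13B", 1000), ("Latxa 70B", 1680)]:
    total_tokens = steps * tokens_per_step
    print(f"{name}: {total_tokens / 1e9:.2f}B tokens")
# -> Latxa 7B: 4.00B, Latxa 13B: 2.00B, Latxa 70B: 3.36B (~3.4B, as in the table)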
Evaluation
We evaluated the models in zero-shot and few-shot settings on generative, multiple-choice and classification tasks. We used the Basque partitions of each dataset.
Testing Data, Factors & Metrics
Testing Data
- Belebele (Bandarkar et al.): Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. We evaluated the model in a 5-shot fashion.
- X-StoryCloze (Lin et al.): XStoryCloze consists of the professionally translated version of the English StoryCloze dataset to 10 non-English languages. Story Cloze is a commonsense reasoning dataset which consists of choosing the correct ending to a four-sentence story. We evaluated the model in a 0-shot fashion.
- BasqueGLUE (Urbizu et al.): BasqueGLUE is an NLU benchmark for Basque. We evaluated the model in a 5-shot fashion on the following tasks:
- Data card: https://huggingface.co/datasets/orai-nlp/basqueGLUE.
- Tasks:
- BEC2016eu: Sentiment analysis on tweets about the 2016 Basque elections campaign.
- VaxxStance: Stance detection on tweets around the anti-vaccine movement.
- BHTCv2: Topic classification of news extracts with 12 categories.
- EpecKorrefBin: Coreference detection task similar to WSC.
- QNLIeu: Q&A NLI built from the Basque Wikipedia.
- WiCeu: Basque Word-in-Context task.
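All evaluation datasets are available on the Hugging Face Hub. A minimal loading sketch follows; the dataset identifiers and configuration names are assumptions and should be checked against each dataset card:

from datasets import load_dataset

# IDs and config names below are assumptions; verify them on the dataset cards.
belebele = load_dataset("facebook/belebele", "eus_Latn")
xstorycloze_eu = load_dataset("juletxara/xstory_cloze", "eu")
basqueglue_bec = load_dataset("orai-nlp/basqueGLUE", "bec")

print(belebele)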
Metrics
- Accuracy: Belebele, X-StoryCloze, EpecKorrefBin, QNLIeu, and WiCeu
- Micro F1: BEC2016eu and BHTCv2
- Macro F1: VaxxStance (favor & against)
Results
The model was evaluated using the LM Evaluation Harness library from EleutherAI. In order to reproduce our results, please follow the instructions in Latxa's GitHub repository.
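With a recent version of the harness, a single task can also be run through its Python API. Below is a minimal sketch; the task name is illustrative, and the exact Basque task definitions used in the paper ship with Latxa's GitHub repository:

import lm_eval

# Task names are illustrative; the Basque task configs used in the paper
# are provided in Latxa's GitHub repository.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=HiTZ/latxa-7b-v1",
    tasks=["xstorycloze_eu"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])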
Model | Belebele | X-StoryCloze | BEC | Vaxx | BHTC | coref | QNLI | WiC | Average |
Random | 25.00 | 50.00 | 33.33 | 33.33 | 8.33 | 50.00 | 50.00 | 50.00 | 37.50 |
LLaMA 2 7B | 26.22 | 50.43 | 41.63 | 18.60 | 20.06 | 50.94 | 48.32 | 49.64 | 38.23 |
LLaMA 2 13B | 32.00 | 50.63 | 41.09 | 18.25 | 27.35 | 49.23 | 48.74 | 49.21 | 39.56 |
LLaMA 2 70B | 33.56 | 51.62 | 47.47 | 21.01 | 31.01 | 52.98 | 51.26 | 51.57 | 42.56 |
BLOOM 7B | 27.00 | 57.18 | 37.94 | 20.72 | 39.10 | 48.21 | 47.48 | 47.57 | 40.65 |
XGLM 7B | 23.88 | 57.71 | 39.94 | 21.58 | 36.73 | 50.94 | 50.42 | 49.21 | 41.30 |
Latxa 7B | 35.67 | 63.13 | 55.61 | 45.93 | 44.44 | 50.43 | 55.04 | 50.14 | 50.05 |
Latxa 13B | 53.56 | 65.85 | 53.23 | 48.66 | 53.61 | 62.52 | 57.14 | 54.21 | 56.10 |
Latxa 70B | 71.78 | 67.57 | 63.52 | 48.95 | 49.51 | 79.90 | 58.82 | 55.50 | 61.94 |
Environmental Impact
Carbon emissions are estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: HPC Cluster, 4x A100 64GB nodes
- Hours used: 359.2h + 468.8h + 6475.52h = 7303.52h
- Compute cluster: CINECA HPC
- Compute Region: Italy
- Carbon Emitted: 673.75 kg CO2 eq
Acknowledgements
This work has been partially supported by the Basque Government (IKER-GAITU project). The models were trained on the Leonardo supercomputer at CINECA under the EuroHPC Joint Undertaking, project EHPC-EXT-2023E01-013.