---
library_name: transformers
pipeline_tag: text-generation
language:
- multilingual
tags:
- generation
- question answering
- instruction tuning
datasets:
- MBZUAI/Bactrian-X
license: cc-by-nc-4.0
---
### Model Description
This HF repository hosts an instruction fine-tuned multilingual BLOOM model trained on Bactrian-X, a parallel instruction dataset covering 52 languages.
We progressively add one language at a time during instruction fine-tuning, training 52 models in total, and then evaluate those models on three multilingual benchmarks.
Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details.
* Base model: [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1)
* Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani, Bengali, Czech, German, Spanish, Estonian, Farsi, Finnish, French, Galician, Gujarati, Hebrew, Hindi, Croatian, Indonesian, Italian, Japanese, Georgian, Kazakh, Khmer, Korean, Lithuanian
* Instruction language codes: en, zh, af, ar, az, bn, cs, de, es, et, fa, fi, fr, gl, gu, he, hi, hr, id, it, ja, ka, kk, km, ko, lt
* Training method: full-parameter fine-tuning.
### Usage
The model checkpoint should be loaded using the `transformers` library.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-26")
model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-26")
```
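Once loaded, the model can be used for generation like any causal language model. The snippet below is a minimal sketch; the prompt and decoding settings are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal generation sketch; the prompt and decoding settings are
# illustrative assumptions, not the settings used in the paper.
prompt = "What is the capital of Finland?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```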
### Citation
```
@misc{lucky52,
  title = "Lucky 52: How Many Languages Are Needed to Instruction Fine-Tune Large Language Models?",
  author = "Shaoxiong Ji and Pinzhen Chen",
  year = "2024",
  eprint = "2404.04850",
  archiveprefix = "arXiv",
  primaryclass = "cs.CL"
}
```