---
language:
- it
- en
library_name: transformers
license: apache-2.0
---
# Qwen2 1.5B: Almost the Same Performance as ITALIA (iGenius) but 6 Times Smaller 🚀
### Model Overview
- **Model Name:** Qwen2 1.5B Fine-tuned for the Italian Language
- **Version:** 1.5b
- **Model Type:** Language Model
- **Parameter Count:** 1.5 billion
- **Language:** Italian
- **Comparable Model:** [ITALIA by iGenius](https://huggingface.co/iGeniusAI) (9 billion parameters)
### Model Description
Qwen2 1.5B is a compact language model specifically fine-tuned for the Italian language. Despite its relatively small size of 1.5 billion parameters, Qwen2 1.5B demonstrates strong performance, nearly matching the capabilities of larger models, such as the **9 billion parameter ITALIA model by iGenius**. The fine-tuning process focused on optimizing the model for various language tasks in Italian, making it highly efficient and effective for Italian language applications.
### Performance Evaluation
The performance of Qwen2 1.5B was evaluated on several benchmarks and compared against the ITALIA model. The results are as follows:
| Model | Parameters | Average | MMLU | ARC | HELLASWAG |
|:----------:|:----------:|:-------:|:-----:|:-----:|:---------:|
| ITALIA | 9B | 43.5 | 35.22 | **38.49** | **56.79** |
| Qwen2-1.5B-Ita | 1.5B | **46.12** | **52.16** | 36.06 | 50.15 |
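The Average column is consistent with a plain arithmetic mean of the three benchmark scores. The short snippet below is illustrative only; it simply recomputes that mean from the numbers reported in the table above.

```python
# Recompute the "Average" column as the arithmetic mean of the three
# benchmark scores reported in the table above (values copied verbatim).
scores = {
    "ITALIA (9B)":    {"MMLU": 35.22, "ARC": 38.49, "HELLASWAG": 56.79},
    "Qwen2-1.5B-Ita": {"MMLU": 52.16, "ARC": 36.06, "HELLASWAG": 50.15},
}

for name, s in scores.items():
    avg = sum(s.values()) / len(s)
    print(f"{name}: {avg:.2f}")  # -> 43.50 and 46.12, matching the table
```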
### How to Use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "DeepMount00/Qwen2-1.5B-Ita"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example Italian math word problem:
# "Marco bought 5 boxes of chocolates, each containing 12 chocolates.
#  He decided to give 3 chocolates to each of his 7 friends.
#  How many chocolates will he have left after handing them out?"
prompt = [{'role': 'user', 'content': """Marco ha comprato 5 scatole di cioccolatini. Ogni scatola contiene 12 cioccolatini. Ha deciso di dare 3 cioccolatini a ciascuno dei suoi 7 amici. Quanti cioccolatini gli rimarranno dopo averli distribuiti ai suoi amici?"""}]

# Build the chat-formatted input and append the generation prompt.
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

# Near-greedy decoding: sampling is enabled, but the very low temperature
# makes the output effectively deterministic.
tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.001,
    do_sample=True
)

print(tokenizer.decode(tokens[0], skip_special_tokens=False))
```
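For quick experiments, the high-level `pipeline` API offers a more compact alternative. The sketch below is a minimal example, assuming a recent `transformers` release whose text-generation pipeline accepts chat-style message lists; the prompt and generation settings are illustrative and not part of the model's documentation.

```python
import torch
from transformers import pipeline

# Minimal sketch using the text-generation pipeline (assumes a transformers
# version that accepts chat-style message lists as pipeline input).
pipe = pipeline(
    "text-generation",
    model="DeepMount00/Qwen2-1.5B-Ita",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative Italian prompt: "Explain in two sentences what photosynthesis is."
messages = [{"role": "user", "content": "Spiega in due frasi cos'è la fotosintesi."}]

out = pipe(messages, max_new_tokens=128)
# The pipeline returns the full chat, with the model's reply as the last turn.
print(out[0]["generated_text"][-1]["content"])
```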
### Conclusion
Qwen2 1.5B demonstrates that a smaller, more efficient model can achieve performance levels comparable to much larger models. It excels in the MMLU benchmark, showing its strength in multitask language understanding. While it scores slightly lower in the ARC and HELLASWAG benchmarks, its overall performance makes it a viable option for Italian language tasks, offering a balance between efficiency and capability. |