---
license: cc-by-nc-nd-4.0
language:
- it
- en
library_name: transformers
---

# Stambecco 🦌: Italian Instruction-following LLaMA Model

Stambecco is an Italian instruction-following model based on the [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) model.
It comes in two sizes: 7B and 13B parameters.

It is trained on an Italian version of the [GPT-4-LLM](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM) dataset, a dataset of instruction-following data generated by `GPT-4`.

This repo contains a low-rank adapter for LLaMA-13b.

For more information, please visit [the project's website](https://github.com/mchl-labs/stambecco).
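
The snippet below is a minimal sketch of how the adapter can be applied: it loads the base LLaMA-13B weights with `transformers` and attaches this low-rank adapter with `peft`. The base-model path and the adapter identifier are placeholders, and the example prompt is illustrative only.

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

BASE_MODEL = "path/to/llama-13b-hf"  # placeholder: base weights must be obtained separately
ADAPTER = "path/to/this-adapter"     # placeholder: this repo's Hub id or a local checkout

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the low-rank adapter on top of the frozen base model.
model = PeftModel.from_pretrained(model, ADAPTER)
model.eval()

prompt = "Scrivi una breve poesia sulle Alpi."  # example Italian instruction
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```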

### 💪 Training hyperparameters

The following hyperparameters were used during training (see the `LoraConfig` sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
- LoRA R: 8
- LoRA target modules: q_proj, v_proj
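
As a sketch, the LoRA settings above correspond to a `peft` `LoraConfig` along these lines; `lora_alpha` and `lora_dropout` are assumptions, since they are not listed among the hyperparameters:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                  # LoRA R: 8
    target_modules=["q_proj", "v_proj"],  # LoRA target modules
    lora_alpha=16,                        # assumption: not stated above
    lora_dropout=0.05,                    # assumption: not stated above
    bias="none",
    task_type="CAUSAL_LM",
)
```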


## Intended uses & limitations

**Usage and License Notices**: As with [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), Stambecco is intended and licensed for research use only. The models should not be used for any purpose other than research.

Please note that the model output may well contain biased, conspiracist, offensive, or otherwise inappropriate and potentially harmful content.
The model is intended for **research purposes only** and should be used with caution, at your own risk. **Production usage is not allowed.**