Update README.md

README.md
The goal of the project was to adapt large language models for the Arabic language.

Our final model was trained on a high-quality instruction fine-tuning (IFT) dataset, generated synthetically and then evaluated using the Hugging Face Arabic leaderboard.

# Training :

This model is the 9B version. It was trained for a week on 4 A100 GPUs using LoRA with a rank of 128, a learning rate of 1e-4, and a cosine learning rate schedule.
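For illustration, the sketch below shows what a LoRA run with these hyperparameters could look like using the `peft` and `trl` libraries. This is not the project's training script: only the rank, learning rate, and schedule come from the description above, while the base model, LoRA alpha, target modules, batch size, epoch count, and dataset path are placeholder assumptions (see the GitHub repository linked at the end for the real code).

```python
# Illustrative LoRA fine-tuning sketch, NOT the project's actual script.
# Only r=128, learning_rate=1e-4, and the cosine schedule come from this README;
# every other value below is a placeholder assumption.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

peft_config = LoraConfig(
    r=128,                          # LoRA rank stated above
    lora_alpha=256,                 # assumption: alpha is not stated in the README
    target_modules="all-linear",    # assumption: target modules are not stated
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="barka-9b-it-lora",
    learning_rate=1e-4,             # stated above
    lr_scheduler_type="cosine",     # stated above
    per_device_train_batch_size=2,  # assumption
    num_train_epochs=1,             # assumption: the README only says "a week on 4 A100s"
    bf16=True,
)

trainer = SFTTrainer(
    model="google/gemma-2-9b-it",   # assumption: the 9B base model is not named here
    train_dataset=load_dataset("json", data_files="ift_dataset.jsonl", split="train"),  # hypothetical IFT file
    args=args,
    peft_config=peft_config,
)
trainer.train()
```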
# Evaluation :
| Metric | Slim205/Barka-9b-it |
|----------------------|---------------------|
| Average | 61.71 |
| ACVA | 73.68 |
| AlGhafa | 54.42 |
| MMLU | 52.52 |
| EXAMS | 52.51 |
| ARC Challenge | 59.14 |
| ARC Easy | 59.69 |
| BOOLQ | 86.41 |
| COPA | 58.89 |
| HELLASWAG | 38.04 |
| OPENBOOK QA | 56.16 |
| PIQA | 72.01 |
| RACE | 48.71 |
| SCIQ | 66.43 |
| TOXIGEN | 85.35 |
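As a quick usage note: assuming the checkpoint is published on the Hugging Face Hub under `Slim205/Barka-9b-it` and follows the standard `transformers` chat-template API, generation should look roughly like the sketch below. This is a minimal example, not code from the repository, and the Arabic prompt (meaning "What is the capital of Tunisia?") is arbitrary.

```python
# Minimal inference sketch, assuming Slim205/Barka-9b-it is a standard
# transformers causal LM with a chat template on the Hugging Face Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Slim205/Barka-9b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Arabic example prompt: "What is the capital of Tunisia?"
messages = [{"role": "user", "content": "ما هي عاصمة تونس؟"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```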
Please refer to https://github.com/Slim205/Arabicllm/ for more details.