update table
README.md
## Results
Below, we present the performance of **LlamaLens (L-Lens)**, where *"Eng"* refers to the English-instructed model and *"Native"* refers to the model trained with native-language instructions. Results are compared against the SOTA (where available) and the **Llama-Instruct 3.1** baseline (Base). The **Δ** (Delta) column gives the difference between LlamaLens and the SOTA score, computed as (L-Lens-Eng − SOTA); a short sketch after the table reproduces this computation.
---
| **Task** | **Dataset** | **Metric** | **SOTA** | **Base** | **L-Lens-Eng** | **L-Lens-Native** | **Δ (L-Lens-Eng − SOTA)** |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Attentionworthiness Detection | CT22Attentionworthy | W-F1 | 0.412 | 0.158 | 0.425 | 0.454 | 0.013 |
| Checkworthiness Detection | CT24_checkworthy | F1_Pos | 0.569 | 0.610 | 0.502 | 0.509 | -0.067 |
| Claim Detection | CT22Claim | Acc | 0.703 | 0.581 | 0.734 | 0.756 | 0.031 |
| Cyberbullying Detection | ArCyc_CB | Acc | 0.863 | 0.766 | 0.870 | 0.833 | 0.007 |
| Emotion Detection | Emotional-Tone | W-F1 | 0.658 | 0.358 | 0.705 | 0.736 | 0.047 |
| Emotion Detection | NewsHeadline | Acc | 1.000 | 0.406 | 0.480 | 0.458 | -0.520 |
| Factuality | Arafacts | Mi-F1 | 0.850 | 0.210 | 0.771 | 0.738 | -0.079 |
| Factuality | COVID19Factuality | W-F1 | 0.831 | 0.492 | 0.800 | 0.840 | -0.031 |
| Harmfulness Detection | CT22Harmful | F1_Pos | 0.557 | 0.507 | 0.523 | 0.535 | -0.034 |
| Hate Speech Detection | annotated-hatetweets-4-classes | W-F1 | 0.630 | 0.257 | 0.526 | 0.517 | -0.104 |
| Hate Speech Detection | OSACT4SubtaskB | Mi-F1 | 0.950 | 0.819 | 0.955 | 0.955 | 0.005 |
| News Categorization | ASND | Ma-F1 | 0.770 | 0.587 | 0.919 | 0.929 | 0.149 |
| News Categorization | SANADAkhbarona-news-categorization | Acc | 0.940 | 0.784 | 0.954 | 0.953 | 0.014 |
| News Categorization | SANADAlArabiya-news-categorization | Acc | 0.974 | 0.893 | 0.987 | 0.985 | 0.013 |
| News Categorization | SANADAlkhaleej-news-categorization | Acc | 0.986 | 0.865 | 0.984 | 0.982 | -0.002 |
| News Categorization | UltimateDataset | Ma-F1 | 0.970 | 0.376 | 0.865 | 0.880 | -0.105 |
| News Credibility | NewsCredibilityDataset | Acc | 0.899 | 0.455 | 0.935 | 0.933 | 0.036 |
| News Summarization | xlsum | R-2 | 0.137 | 0.034 | 0.129 | 0.130 | -0.009 |
| Offensive Language Detection | ArCyc_OFF | Ma-F1 | 0.878 | 0.489 | 0.877 | 0.879 | -0.001 |
| Offensive Language Detection | OSACT4SubtaskA | Ma-F1 | 0.905 | 0.782 | 0.896 | 0.882 | -0.009 |
| Propaganda Detection | ArPro | Mi-F1 | 0.767 | 0.597 | 0.747 | 0.731 | -0.020 |
| Sarcasm Detection | ArSarcasm-v2 | F1_Pos | 0.584 | 0.477 | 0.520 | 0.542 | -0.064 |
| Sentiment Classification | ar_reviews_100k | F1_Pos | -- | 0.681 | 0.785 | 0.779 | -- |
| Sentiment Classification | ArSAS | Acc | 0.920 | 0.603 | 0.800 | 0.804 | -0.120 |
| Stance Detection | stance | Ma-F1 | 0.767 | 0.608 | 0.926 | 0.881 | 0.159 |
| Stance Detection | Mawqif-Arabic-Stance-main | Ma-F1 | 0.789 | 0.764 | 0.853 | 0.826 | 0.065 |
| Subjectivity Detection | ThatiAR | F1_Pos | 0.800 | 0.562 | 0.441 | 0.383 | -0.359 |
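
As a quick illustration of how the **Δ** column is derived (a minimal sketch for readers, not a script shipped in this repo; the `delta` helper and the sample rows are ours), the following Python reproduces the computation for two rows of the table:

```python
# Illustrative only: reproduces the Δ column of the results table above.
# Δ is defined as (L-Lens-Eng score − SOTA score); a positive value means
# LlamaLens-Eng outperforms the reported SOTA on that dataset.

def delta(l_lens_eng: float, sota: float) -> float:
    """Difference between the L-Lens-Eng score and the SOTA score."""
    return round(l_lens_eng - sota, 3)

# Two rows copied from the table above: (task, SOTA, L-Lens-Eng).
rows = [
    ("Claim Detection / CT22Claim", 0.703, 0.734),
    ("Sentiment Classification / ArSAS", 0.920, 0.800),
]

for task, sota, eng in rows:
    print(f"{task}: Δ = {delta(eng, sota):+.3f}")
# Claim Detection / CT22Claim: Δ = +0.031
# Sentiment Classification / ArSAS: Δ = -0.120
```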
---
## File Format