Update README.md
README.md
CHANGED
@@ -77,14 +77,13 @@ print(generated_text)
 
 ## Results
 
-Below, we present the performance of **LlamaLens** , where *"
-calculated as **(LLamalens – SOTA)**.
+Below, we present the performance of **L-Lens: LlamaLens**, where *"Eng"* refers to the English-instructed model and *"Native"* refers to the model trained with native language instructions. The results are compared against the SOTA (where available) and the Base: **Llama-Instruct 3.1 baseline**. The **Δ** (Delta) column indicates the difference between LlamaLens and the SOTA performance, calculated as (LlamaLens – SOTA).
 
 ---
 
 ## Arabic
 
-| **Task** | **Dataset** | **Metric** | **SOTA** | **
+| **Task** | **Dataset** | **Metric** | **SOTA** | **Base** | **L-Lens-Eng** | **L-Lens-Native** | **Δ (L-Lens (Eng) - SOTA)** |
 |:----------------------------------:|:--------------------------------------------:|:----------:|:--------:|:---------------------:|:---------------------:|:--------------------:|:------------------------:|
 | Attentionworthiness Detection | CT22Attentionworthy | W-F1 | 0.412 | 0.158 | 0.425 | 0.454 | 0.013 |
 | Checkworthiness Detection | CT24_checkworthy | F1_Pos | 0.569 | 0.610 | 0.502 | 0.509 | -0.067 |
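As a quick sanity check of the updated **Δ** column, here is a minimal sketch (not part of the README itself) that recomputes Δ as the L-Lens-Eng score minus the SOTA score, using the two Arabic rows shown in the diff above; the variable and column names are illustrative only:

```python
# Minimal sketch: recompute the Δ column as (L-Lens-Eng − SOTA)
# for the two Arabic rows shown in the diff above.
rows = [
    # (task, SOTA, Base, L-Lens-Eng, L-Lens-Native)
    ("Attentionworthiness Detection", 0.412, 0.158, 0.425, 0.454),
    ("Checkworthiness Detection", 0.569, 0.610, 0.502, 0.509),
]

for task, sota, base, eng, native in rows:
    delta = round(eng - sota, 3)  # Δ (L-Lens (Eng) - SOTA)
    print(f"{task}: Δ = {delta:+.3f}")

# Expected output, matching the table's Δ column:
#   Attentionworthiness Detection: Δ = +0.013
#   Checkworthiness Detection: Δ = -0.067
```

A positive Δ means the English-instructed LlamaLens outperforms the reported SOTA on that task, while a negative Δ means it falls short.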