Modalities: Text · Formats: json · Languages: English · Libraries: Datasets, pandas
AliShahroor committed · Commit 0f6f067 · verified · 1 Parent(s): f40b862

Add Results

Files changed (1): README.md (+26, -0)

README.md CHANGED
@@ -330,6 +330,32 @@ This repo includes scripts needed to run our full pipeline, including data prepr
| Subjectivity | clef2024-checkthat-lab | 2 | 825 | 484 | 219 |


+ ## Results
+
+ Below, we present the performance of **LlamaLens** in **English** compared to the existing SOTA (where available) and the Llama-Instruct baseline. The “Delta” column is calculated as **(LlamaLens - SOTA)**.
+
+ | **Task** | **Dataset** | **Metric** | **SOTA** | **Llama-Instruct** | **LlamaLens** | **Delta** (LlamaLens - SOTA) |
+ |----------------------|---------------------------|-----------:|--------:|--------------------:|--------------:|------------------------------:|
+ | News Summarization | xlsum | R-2 | 0.152 | 0.074 | 0.141 | -0.011 |
+ | News Genre | CNN_News_Articles | Acc | 0.940 | 0.644 | 0.915 | -0.025 |
+ | News Genre | News_Category | Ma-F1 | 0.769 | 0.970 | 0.505 | -0.264 |
+ | News Genre | SemEval23T3-ST1 | Mi-F1 | 0.815 | 0.687 | 0.241 | -0.574 |
+ | Subjectivity | CT24_T2 | Ma-F1 | 0.744 | 0.535 | 0.508 | -0.236 |
+ | Emotion | emotion | Ma-F1 | 0.790 | 0.353 | 0.878 | 0.088 |
+ | Sarcasm | News-Headlines | Acc | 0.897 | 0.668 | 0.956 | 0.059 |
+ | Sentiment | NewsMTSC | Ma-F1 | 0.817 | 0.628 | 0.627 | -0.190 |
+ | Checkworthiness | CT24_T1 | F1_Pos | 0.753 | 0.404 | 0.877 | 0.124 |
+ | Claim | claim-detection | Mi-F1 | – | 0.545 | 0.915 | – |
+ | Factuality | News_dataset | Acc | 0.920 | 0.654 | 0.946 | 0.026 |
+ | Factuality | Politifact | W-F1 | 0.490 | 0.121 | 0.290 | -0.200 |
+ | Propaganda | QProp | Ma-F1 | 0.667 | 0.759 | 0.851 | 0.184 |
+ | Cyberbullying | Cyberbullying | Acc | 0.907 | 0.175 | 0.847 | -0.060 |
+ | Offensive | Offensive_Hateful | Mi-F1 | – | 0.692 | 0.805 | – |
+ | Offensive | offensive_language | Mi-F1 | 0.994 | 0.646 | 0.884 | -0.110 |
+ | Offensive & Hate | hate-offensive-speech | Acc | 0.945 | 0.602 | 0.924 | -0.021 |
+
+
## File Format

Each JSONL file in the dataset follows a structured format with the following fields:
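
For reference, a minimal sketch of the “Delta” computation described in the added section, using pandas with illustrative column names (the DataFrame below is not part of this repo; the scores are copied from the table above):

```python
# Illustrative only: recompute Delta = LlamaLens - SOTA for a few reported rows.
import pandas as pd

rows = [
    # (task, dataset, SOTA, LlamaLens) -- scores copied from the results table
    ("News Summarization", "xlsum", 0.152, 0.141),
    ("Emotion", "emotion", 0.790, 0.878),
    ("Checkworthiness", "CT24_T1", 0.753, 0.877),
]
df = pd.DataFrame(rows, columns=["task", "dataset", "sota", "llamalens"])

# Delta is simply the fine-tuned score minus the reported SOTA score.
df["delta"] = (df["llamalens"] - df["sota"]).round(3)
print(df[["task", "delta"]])  # expected: -0.011, 0.088, 0.124
```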