---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- hi
tags:
- Social Media
- News Media
- Sentiment
- Stance
- Emotion
pretty_name: >-
  LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media
  Content -- Hindi
size_categories:
- 10K<n<100K
dataset_info:
- config_name: Sentiment_Analysis
  splits:
  - name: train
    num_examples: 10039
  - name: dev
    num_examples: 1258
  - name: test
    num_examples: 1259
- config_name: MC_Hinglish1
  splits:
  - name: train
    num_examples: 5177
  - name: dev
    num_examples: 2219
  - name: test
    num_examples: 1000
- config_name: Offensive_Speech_Detection
  splits:
  - name: train
    num_examples: 2172
  - name: dev
    num_examples: 318
  - name: test
    num_examples: 636
- config_name: xlsum
  splits:
  - name: train
    num_examples: 70754
  - name: dev
    num_examples: 8847
  - name: test
    num_examples: 8847
- config_name: Hindi-Hostility-Detection-CONSTRAINT-2021
  splits:
  - name: train
    num_examples: 5718
  - name: dev
    num_examples: 811
  - name: test
    num_examples: 1651
- config_name: hate-speech-detection
  splits:
  - name: train
    num_examples: 3327
  - name: dev
    num_examples: 476
  - name: test
    num_examples: 951
- config_name: fake-news
  splits:
  - name: train
    num_examples: 8393
  - name: dev
    num_examples: 1417
  - name: test
    num_examples: 2743
- config_name: Natural_Language_Inference
  splits:
  - name: train
    num_examples: 1251
  - name: dev
    num_examples: 537
  - name: test
    num_examples: 447
configs:
- config_name: Sentiment_Analysis
  data_files:
  - split: test
    path: Sentiment_Analysis/test.json
  - split: dev
    path: Sentiment_Analysis/dev.json
  - split: train
    path: Sentiment_Analysis/train.json
- config_name: MC_Hinglish1
  data_files:
  - split: test
    path: MC_Hinglish1/test.json
  - split: dev
    path: MC_Hinglish1/dev.json
  - split: train
    path: MC_Hinglish1/train.json
- config_name: Offensive_Speech_Detection
  data_files:
  - split: test
    path: Offensive_Speech_Detection/test.json
  - split: dev
    path: Offensive_Speech_Detection/dev.json
  - split: train
    path: Offensive_Speech_Detection/train.json
- config_name: xlsum
  data_files:
  - split: test
    path: xlsum/test.json
  - split: dev
    path: xlsum/dev.json
  - split: train
    path: xlsum/train.json
- config_name: Hindi-Hostility-Detection-CONSTRAINT-2021
  data_files:
  - split: test
    path: Hindi-Hostility-Detection-CONSTRAINT-2021/test.json
  - split: dev
    path: Hindi-Hostility-Detection-CONSTRAINT-2021/dev.json
  - split: train
    path: Hindi-Hostility-Detection-CONSTRAINT-2021/train.json
- config_name: hate-speech-detection
  data_files:
  - split: test
    path: hate-speech-detection/test.json
  - split: dev
    path: hate-speech-detection/dev.json
  - split: train
    path: hate-speech-detection/train.json
- config_name: fake-news
  data_files:
  - split: test
    path: fake-news/test.json
  - split: dev
    path: fake-news/dev.json
  - split: train
    path: fake-news/train.json
- config_name: Natural_Language_Inference
  data_files:
  - split: test
    path: Natural_Language_Inference/test.json
  - split: dev
    path: Natural_Language_Inference/dev.json
  - split: train
    path: Natural_Language_Inference/train.json
---
# LlamaLens: Specialized Multilingual LLM Dataset

## Overview
LlamaLens is a specialized multilingual LLM designed for analyzing news and social media content. It focuses on 19 NLP tasks, leveraging 52 datasets across Arabic, English, and Hindi.
This repo includes scripts needed to run our full pipeline, including data preprocessing and sampling, instruction dataset creation, model fine-tuning, inference and evaluation.
## Features
- Multilingual support (Arabic, English, Hindi)
- 19 NLP tasks with 52 datasets
- Optimized for news and social media content analysis
## 📂 Dataset Overview

### Hindi Datasets
| Task | Dataset | # Labels | # Train | # Test | # Dev |
|---|---|---|---|---|---|
| Cyberbullying | MC-Hinglish1.0 | 7 | 7,400 | 1,000 | 2,119 |
| Factuality | fake-news | 2 | 8,393 | 2,743 | 1,417 |
| Hate Speech | hate-speech-detection | 2 | 3,327 | 951 | 476 |
| Hate Speech | Hindi-Hostility-Detection-CONSTRAINT-2021 | 15 | 5,718 | 1,651 | 811 |
| Natural Language Inference | Natural_Language_Inference | 2 | 1,251 | 447 | 537 |
| Summarization | xlsum | -- | 70,754 | 8,847 | 8,847 |
| Offensive Speech | Offensive_Speech_Detection | 3 | 2,172 | 636 | 318 |
| Sentiment | Sentiment_Analysis | 3 | 10,039 | 1,259 | 1,258 |
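Each configuration listed above maps to its own `train`, `dev`, and `test` JSON files (see the `configs` block in the metadata). The sketch below shows one way to load a configuration with the Hugging Face `datasets` library; the repository ID `QCRI/LlamaLens-Hindi` is a placeholder assumption, so substitute the actual Hub path of this dataset.

```python
from datasets import load_dataset

# Placeholder repository ID -- replace with the actual Hub path of this dataset.
REPO_ID = "QCRI/LlamaLens-Hindi"

# Configuration names come from the metadata above, e.g. "Sentiment_Analysis",
# "Offensive_Speech_Detection", "xlsum", "fake-news", ...
dataset = load_dataset(REPO_ID, "Sentiment_Analysis")

print(dataset)                        # DatasetDict with "train", "dev", and "test" splits
print(dataset["train"][0]["input"])   # raw text of the first training example
print(dataset["train"][0]["output"])  # its sentiment label
```

The same call works for any of the other configurations in the table.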
## Results

Below, we present the performance of LlamaLens on Hindi tasks compared to the existing SOTA (where available) and the Llama-Instruct baseline. The “Delta” column is calculated as (LlamaLens - SOTA).
| Task | Dataset | Metric | SOTA | Llama-Instruct | LlamaLens | Delta (LlamaLens - SOTA) |
|---|---|---|---|---|---|---|
| NLI | NLI_dataset | W-F1 | 0.646 | 0.633 | 0.655 | 0.009 |
| News Summarization | xlsum | R-2 | 0.136 | 0.078 | 0.117 | -0.019 |
| Sentiment | Sentiment Analysis | Acc | 0.697 | 0.552 | 0.669 | -0.028 |
| Factuality | fake-news | Mi-F1 | – | 0.759 | 0.713 | – |
| Hate Speech | hate-speech-detection | Mi-F1 | 0.639 | 0.750 | 0.994 | 0.355 |
| Hate Speech | Hindi-Hostility | W-F1 | 0.841 | 0.469 | 0.720 | -0.121 |
| Offensive | Offensive Speech | Mi-F1 | 0.723 | 0.621 | 0.847 | 0.124 |
| Cyberbullying | MC_Hinglish1 | Acc | 0.609 | 0.233 | 0.587 | -0.022 |
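The metric abbreviations are weighted F1 (W-F1), micro F1 (Mi-F1), ROUGE-2 (R-2), and accuracy (Acc). As a minimal sketch (assuming gold labels and model predictions have already been collected as label strings; the exact evaluation code is part of the replication scripts), the classification metrics can be reproduced with scikit-learn:

```python
from sklearn.metrics import accuracy_score, f1_score

# gold and pred are lists of label strings collected from the test split and model outputs.
gold = ["positive", "negative", "neutral", "negative"]
pred = ["positive", "neutral", "neutral", "negative"]

acc = accuracy_score(gold, pred)                         # "Acc" column
micro_f1 = f1_score(gold, pred, average="micro")         # "Mi-F1" column
weighted_f1 = f1_score(gold, pred, average="weighted")   # "W-F1" column

print(f"Acc={acc:.3f}  Mi-F1={micro_f1:.3f}  W-F1={weighted_f1:.3f}")
```

ROUGE-2 for the xlsum summarization task would be computed with a standard ROUGE implementation instead.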
## File Format

Each JSONL file in the dataset follows a structured format with the following fields:
- `id`: Unique identifier for each data entry.
- `original_id`: Identifier from the original dataset, if available.
- `input`: The original text that needs to be analyzed.
- `output`: The label assigned to the text after analysis.
- `dataset`: Name of the dataset the entry belongs to.
- `task`: The specific task type.
- `lang`: The language of the input text.
- `instructions`: A brief set of instructions describing how the text should be labeled.
- `text`: A formatted structure including the instructions and response for the task in a conversation format between the system, user, and assistant, showing the decision process.
Example entry in JSONL file:

```json
{
    "id": "2b1878df-5a4f-4f74-bcd8-e38e1c3c7cf6",
    "original_id": null,
    "input": "sub गंदा है पर धंधा है ये . .",
    "output": "neutral",
    "dataset": "Sentiment_Analysis",
    "task": "Sentiment",
    "lang": "hi",
    "instruction": "Identify the sentiment in the text and label it as positive, negative, or neutral. Return only the label without any explanation, justification or additional text."
}
```
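The split files can also be read directly, without the `datasets` library. A minimal sketch, assuming each file stores one JSON object per line as described above (if a split is instead stored as a single JSON array, `json.load` on the whole file yields the same list of records):

```python
import json

records = []
with open("Sentiment_Analysis/train.json", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            records.append(json.loads(line))

example = records[0]
print(example["id"], example["task"], example["lang"])
print("Input:", example["input"])
print("Label:", example["output"])
```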
## Model

## Replication Scripts

## 📢 Citation
If you use this dataset, please cite our paper:
```bibtex
@article{kmainasi2024llamalensspecializedmultilingualllm,
  title={LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media Content},
  author={Mohamed Bayan Kmainasi and Ali Ezzat Shahroor and Maram Hasanain and Sahinur Rahman Laskar and Naeemul Hassan and Firoj Alam},
  year={2024},
  journal={arXiv preprint arXiv:2410.15308},
  url={https://arxiv.org/abs/2410.15308},
  eprint={2410.15308},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```