---
license: apache-2.0
datasets:
- ruanchaves/hatebr
language:
- pt
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- hate-speech
widget:
- text: "Não concordo com a sua opinião."
example_title: Exemplo
- text: "Pega a sua opinião e vai a merda com ela!"
example_title: Exemplo
---
# TeenyTinyLlama-460m-HateBR
TeenyTinyLlama is a series of small foundation models trained natively in Brazilian Portuguese.
This repository contains a version of [TeenyTinyLlama-460m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m) (`TeenyTinyLlama-460m-HateBR`) fine-tuned on the [HateBR dataset](https://huggingface.co/datasets/ruanchaves/hatebr).
## Details
- **Number of Epochs:** 3
- **Batch size:** 16
- **Optimizer:** `torch.optim.AdamW` (learning_rate = 4e-5, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
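For reference, a minimal sketch of how the optimizer settings listed above map onto `torch.optim.AdamW` (here `model` is assumed to be the already-loaded classifier, as in the reproduction script below):

```python
import torch

# Hypothetical sketch: the optimizer configuration from the Details section,
# assuming `model` is the loaded sequence-classification model.
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=4e-5,   # learning rate listed above
    eps=1e-8,  # epsilon listed above
)
```

In the reproduction script below, the `Trainer` builds this optimizer internally from `TrainingArguments`.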
## Usage
Using `transformers.pipeline`:
```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hugging Face Hub
classifier = pipeline("text-classification", model="nicholasKluge/TeenyTinyLlama-460m-HateBR")

text = "Pega a sua opinião e vai a merda com ela!"
classifier(text)
# >>> [{'label': 'TOXIC', 'score': 0.9998729228973389}]
```
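Alternatively, a minimal sketch of loading the tokenizer and model directly with `AutoModelForSequenceClassification` (same example text; the labels follow the `id2label` mapping defined in the fine-tuning script below):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "nicholasKluge/TeenyTinyLlama-460m-HateBR"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Pega a sua opinião e vai a merda com ela!"
inputs = tokenizer(text, return_tensors="pt")

# Run a forward pass without tracking gradients
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and map the argmax to its label name
probs = torch.softmax(logits, dim=-1)[0]
label_id = int(probs.argmax())
print(model.config.id2label[label_id], float(probs[label_id]))
```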
## Reproducing
To reproduce the fine-tuning process, use the following code snippet:
```python
# HateBR
! pip install transformers datasets evaluate accelerate -q
import evaluate
import numpy as np
from huggingface_hub import login
from datasets import load_dataset, Dataset, DatasetDict
from transformers import AutoTokenizer, DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer
# Load the task
dataset = load_dataset("ruanchaves/hatebr")
# Format the dataset
train = dataset['train'].to_pandas()
train = train[['instagram_comments', 'offensive_language']]
train.columns = ['text', 'labels']
train.labels = train.labels.astype(int)
train = Dataset.from_pandas(train)
test = dataset['test'].to_pandas()
test = test[['instagram_comments', 'offensive_language']]
test.columns = ['text', 'labels']
test.labels = test.labels.astype(int)
test = Dataset.from_pandas(test)
dataset = DatasetDict({
    "train": train,
    "test": test
})
# Create a `ModelForSequenceClassification`
model = AutoModelForSequenceClassification.from_pretrained(
    "nicholasKluge/TeenyTinyLlama-460m",
    num_labels=2,
    id2label={0: "NONTOXIC", 1: "TOXIC"},
    label2id={"NONTOXIC": 0, "TOXIC": 1}
)
tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-460m")
# Preprocess the dataset
def preprocess_function(examples):
    return tokenizer(examples["text"], truncation=True)
dataset_tokenized = dataset.map(preprocess_function, batched=True)
# Create a simple data collator
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
# Use accuracy as evaluation metric
accuracy = evaluate.load("accuracy")
# Function to compute accuracy
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)
# Define training arguments
training_args = TrainingArguments(
    output_dir="checkpoints",
    learning_rate=4e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    push_to_hub=True,
    hub_token="your_token_here",
    hub_model_id="username/model-ID",
)
# Define the Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset_tokenized["train"],
    eval_dataset=dataset_tokenized["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)
# Train!
trainer.train()
```
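As a sanity check after training, you can evaluate the best checkpoint on the test split (a minimal sketch reusing `trainer` and `dataset_tokenized` from the script above):

```python
# Evaluate the best checkpoint (restored via `load_best_model_at_end=True`)
# on the held-out test split; `eval_accuracy` comes from `compute_metrics`.
results = trainer.evaluate(dataset_tokenized["test"])
print(f"Test accuracy: {results['eval_accuracy']:.4f}")
```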
## Fine-Tuning Comparisons
| Models | [HateBR](https://huggingface.co/datasets/ruanchaves/hatebr) (accuracy %) |
|---------------------------------------------------------------------------------------------|---------------------------------------------------------------------------|
| [Teeny Tiny Llama 460m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m)            | 91.64                                                                     |
| [Bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) | 91.57                                                                     |
| [Bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased)   | 91.28                                                                     |
| [Teeny Tiny Llama 160m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-160m)            | 90.71                                                                     |
| [Gpt2-small-portuguese](https://huggingface.co/pierreguillou/gpt2-small-portuguese)          | 87.42                                                                     |
## Cite as 🤗
```latex
@misc{nicholas22llama,
doi = {10.5281/zenodo.6989727},
url = {https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m},
author = {Nicholas Kluge Corrêa},
title = {TeenyTinyLlama},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
}
```
## Funding
This repository was built as part of the RAIES ([Rede de Inteligência Artificial Ética e Segura](https://www.raies.org/)) initiative, a project supported by FAPERGS ([Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul](https://fapergs.rs.gov.br/inicial)), Brazil.
## License
TeenyTinyLlama-460m-HateBR is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.