---
license: bigscience-bloom-rail-1.0
language:
- fr
- en
pipeline_tag: text-classification
---
# Bloomz-560m-guardrail
We introduce Bloomz-560m-guardrail, a fine-tuned version of the [Bloomz-560m-sft-chat](https://huggingface.co/cmarkea/bloomz-560m-sft-chat) model. This model is designed to detect the toxicity of a text along five modes:
- Obscene: Content that is offensive, indecent, or morally inappropriate, especially in relation to social norms or standards of decency.
- Sexual explicit: Content that depicts sexual acts or other sexual matters in explicit detail.
- Identity attack: Content that aims to attack, denigrate, or harass someone based on their identity, especially related to characteristics such as race, gender, sexual orientation, religion, ethnic origin, or other personal aspects.
- Insult: Offensive, disrespectful, or hurtful content used to attack or denigrate a person.
- Threat: Content that presents a direct threat to an individual.
## Training
The training dataset consists of 500k English comments and 500k French comments (translated with Google Translate), each annotated with a toxicity severity gradient. The dataset was released by Jigsaw as part of the Kaggle competition [Jigsaw Unintended Bias in Toxicity Classification](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification). Since the scores represent severity gradients, a regression approach was preferred, using the following loss function:

$$\mathcal{L} = \sum_{o \in O} \left\lVert \sigma\left(f_\theta(x_o)\right) - y_o \right\rVert_2^2$$

where $\sigma$ is the sigmoid function, $f_\theta(x_o)$ denotes the model's logits for observation $x_o$, $y_o \in [0,1]^5$ the annotated severity scores, and $O$ the set of learning observations.
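A minimal PyTorch sketch of this objective, assuming a model head that outputs five logits per comment (the function and variable names below are illustrative, not taken from the actual training code):

```python
import torch

def guardrail_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Squared-error regression loss on sigmoid-squashed logits.

    logits:  (batch, 5) raw model outputs, one per toxicity mode.
    targets: (batch, 5) severity scores in [0, 1] from the annotations.
    """
    predictions = torch.sigmoid(logits)          # map each logit into [0, 1]
    return ((predictions - targets) ** 2).sum()  # sum over batch and modes
```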
## Benchmark
Because the scores range from 0 to 1, a performance measure such as MAE or RMSE can be difficult to interpret. The Pearson correlation coefficient was therefore chosen as the measure. It ranges from -1 to 1, where 0 indicates no correlation, -1 a perfect negative correlation, and 1 a perfect positive correlation. The goal is to quantify the correlation between the model's scores and the scores assigned by human judges on 750 comments not seen during training.
| Model | Language | Obscene (x100) | Sexual explicit (x100) | Identity attack (x100) | Insult (x100) | Threat (x100) | Mean |
|---|---|---|---|---|---|---|---|
| Bloomz-560m-guardrail | French | 62 | 73 | 73 | 68 | 61 | 67 |
| Bloomz-560m-guardrail | English | 63 | 61 | 63 | 67 | 55 | 62 |
| Bloomz-3b-guardrail | French | 72 | 82 | 80 | 78 | 77 | 78 |
| Bloomz-3b-guardrail | English | 76 | 78 | 77 | 75 | 79 | 77 |
With a mean correlation of roughly 65 (x100) for the 560m model and roughly 77 for the 3b model, the models' outputs are strongly correlated with the judges' scores.
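As an illustration, this kind of correlation can be computed with SciPy's `pearsonr`; the score arrays below are toy values, not the actual benchmark data:

```python
from scipy.stats import pearsonr

# Toy values for illustration: model scores vs. judges' scores
# for one toxicity mode over the same set of comments.
model_scores = [0.12, 0.85, 0.40, 0.66, 0.05]
judge_scores = [0.10, 0.90, 0.35, 0.70, 0.00]

correlation, p_value = pearsonr(model_scores, judge_scores)
print(f"Pearson correlation (x100): {correlation * 100:.0f}")
```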
## How to Use Bloomz-560m-guardrail
The following example uses the pipeline API of the Transformers library.
```python
from transformers import pipeline

guardrail = pipeline("text-classification", "cmarkea/bloomz-560m-guardrail")

list_text = [...]  # the texts to score
result = guardrail(
    list_text,
    return_all_scores=True,  # crucial for assessing all modalities of toxicity!
    function_to_apply='sigmoid'  # to ensure obtaining a score between 0 and 1!
)
```
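The pipeline returns, for each input text, a list of `{"label": ..., "score": ...}` dictionaries, one per toxicity mode. Continuing from the snippet above, a minimal sketch of collecting them into a per-mode dictionary (the example texts are illustrative):

```python
texts = ["You are a wonderful person.", "I will find you and hurt you."]

results = guardrail(texts, return_all_scores=True, function_to_apply='sigmoid')

# Print each text alongside its score for every toxicity mode.
for text, scores in zip(texts, results):
    per_mode = {item["label"]: round(item["score"], 3) for item in scores}
    print(text, "->", per_mode)
```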
## Citation
```bibtex
@online{DeBloomzGuard,
  AUTHOR = {Cyrile Delestre},
  ORGANIZATION = {Cr{\'e}dit Mutuel Ark{\'e}a},
  URL = {https://huggingface.co/cmarkea/bloomz-560m-guardrail},
  YEAR = {2023},
  KEYWORDS = {NLP ; Transformers ; LLM ; Bloomz},
}
```