---
license: apache-2.0
inference: false
language:
- en
widget:
- text: "I am good. Just got back from vacation"
example_title: "Example 1"
- text: "I am doing good, I appreciate you asking. I just got back from vacation."
example_title: "Example 2"
- text: "I am doing good, thank you for asking. I just got back from vacation, and loved it."
example_title: "Example 3"
- text: "I am doing good, but why do fucking you care? I just got back from vacation."
example_title: "Example 4"
---
# Polite Bert
<img src="https://i.imgur.com/y8vqhCs.png" alt="drawing" style="width:300px;"/>
### Model Details
Polite Bert is, as the name implies, a BERT model trained to classify a given sentence into one of four politeness levels:
- Not Polite (aka Rude or Impolite)
- Neutral
- Somewhat Polite
- Polite
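A minimal inference sketch with 🤗 Transformers is below. The repository ID is a placeholder (use this model's actual Hub ID), and the label names are read from the model's own `id2label` config rather than hard-coded:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repository ID; replace with this model's actual Hub ID.
model_id = "your-username/polite-bert"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "I am doing good, thank you for asking. I just got back from vacation."
# Truncate to 128 tokens, matching the training setup described below.
inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# id2label comes from the model config and should map to the four levels above.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```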
#### Training
Polite Bert was trained by fine-tuning a [BERT](https://huggingface.co/google-bert/bert-base-uncased) model on data annotated for politeness level.
The model was trained with supervised fine-tuning (SFT) for 4 epochs, a batch size of 16, and a maximum sequence length of 128 tokens.
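The hyperparameters above translate into a short fine-tuning sketch like the one below. This is not the actual training script; the dataset file and column names are placeholder assumptions (a CSV with `text` and integer `label` columns):

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=4)  # four politeness levels

# Placeholder: the annotated politeness data is not packaged with this card;
# assume a CSV with "text" and "label" columns.
dataset = load_dataset("csv", data_files="politeness_train.csv")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="polite-bert",
    num_train_epochs=4,              # as stated above
    per_device_train_batch_size=16,  # as stated above
)

Trainer(model=model, args=args, train_dataset=dataset,
        tokenizer=tokenizer).train()
```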
### Data Details
The training data consisted of 2000 annotated sentences, composed as follows:
Manually annotated data:
- 250 sentences sampled from the [EUROPARL](https://www.statmt.org/europarl/) dataset. Specifically, from the English side of the PT-EN parallel data.
- 250 sentences sampled from the [SIMMC2.0](https://aclanthology.org/2021.emnlp-main.401.pdf) dataset, drawn from both domains (Fashion and Furniture) and both speaker roles (System and User).
- 250 sentences sampled from the Philosophy and Politics data of the [StackExchange](https://stackoverflow.blog/2014/01/23/stack-exchange-cc-data-now-hosted-by-the-internet-archive/) dataset.
- 250 sentences sampled from a collection of hotel review replies from Trip Advisor.
Automatically annotated data:
- 1000 sentences from the [4Chan Pol](https://arxiv.org/abs/2001.07487) dataset. Specifically, we only considered sentences whose toxicity annotations satisfied TOXICITY > 0.85, SEVERE_TOXICITY > 0.85, and INSULT > 0.5.
While we manually labelled the first 1000 sentences, the 1000 sentences from 4Chan Pol were automatically set to **Not Polite**.
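As a sketch, that threshold filter could look like the following. The file name and key layout (`perspectives`, `body`) are assumptions about the dump's structure, not confirmed specifics of the release:

```python
import json

# Thresholds from the list above; a post must pass all three to be kept.
def is_not_polite(post):
    scores = post.get("perspectives", {})  # assumed key for toxicity scores
    return (scores.get("TOXICITY", 0) > 0.85
            and scores.get("SEVERE_TOXICITY", 0) > 0.85
            and scores.get("INSULT", 0) > 0.5)

with open("pol_dataset.ndjson") as f:  # placeholder filename
    posts = (json.loads(line) for line in f)
    not_polite = [p["body"] for p in posts if is_not_polite(p)]
```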
These source datasets were chosen because they are likely to contain distinct, pronounced politeness levels (hate speech from 4chan, formal and polite speech from hotel staff and parliament members, etc.).
#### License
Apache 2.0 license.
Made by Diogo Glória-Silva, PhD Student at NOVA FCT and Affiliated PhD Student at CMU.