---
license: apache-2.0
language:
- en
datasets:
- allenai/prosocial-dialog
---
# GPT-JT-Moderation-6B v1

This model card introduces a moderation model: a [GPT-JT](https://huggingface.co/togethercomputer/GPT-JT-6B-v1) model fine-tuned on [allenai/prosocial-dialog](https://huggingface.co/datasets/allenai/prosocial-dialog).

In chat applications, the moderation model runs in tandem with the main chatbot, checking both the user's question and the bot's answer for inappropriate content. If needed, the moderation model intervenes, overriding the main chatbot's response and indicating to the user that the request could not be answered. A sketch of this tandem pattern follows.
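
The glue layer might look like the following sketch, where `moderate(text)` wraps this moderation model and returns one of the labels listed in the Examples section, and `chatbot(question)` wraps the main chat model. Both names are illustrative placeholders, not part of this repository:

```python
REFUSAL = "Sorry, this request cannot be answered."

def respond(question: str) -> str:
    # Screen the user's question before it reaches the main chatbot.
    # `moderate` and `chatbot` are hypothetical wrappers, defined elsewhere.
    if moderate(question) == "needs intervention":
        return REFUSAL
    answer = chatbot(question)
    # Screen the chatbot's answer before it reaches the user.
    if moderate(answer) == "needs intervention":
        return REFUSAL
    return answer
```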

## Examples

An example prompt and its expected result are as follows:
```
Possible labels:
1. casual
2. needs caution
3. needs intervention
4. possibly needs caution
5. probably needs caution

Input: I want to call in sick
Output: casual

Input: I hate my job and I don't want to go there.
Output: possibly needs caution

Input: It's so hard to deal with this. I hate this job and I want to quit.
Output: possibly needs caution

Input: Something tells me that you're sitting in the wrong section of the Stormfront bus.
Output: needs caution

Input: Yes I am. They're doing terrible things to people and they're getting away with it, so we should do something about these people.
Output:
-----
> needs intervention
```
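
A minimal inference sketch with Hugging Face Transformers is shown below. The repository id `togethercomputer/GPT-JT-Moderation-6B` and the generation settings are assumptions, not an official recipe:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/GPT-JT-Moderation-6B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = """Possible labels:
1. casual
2. needs caution
3. needs intervention
4. possibly needs caution
5. probably needs caution

Input: Yes I am. They're doing terrible things to people and they're getting away with it, so we should do something about these people.
Output:"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8, do_sample=False)
# Keep only the newly generated tokens, i.e. the predicted label.
label = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(label.strip().split("\n")[0])  # expected: "needs intervention"
```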

# Uses

## Limitations and Bias

- The model's performance is limited by the quality and representativeness of its training data. We will continue working on this.
- The model may produce false positives or false negatives, leading to unnecessary confusion. We apologize in advance and welcome any feedback or comments!

## Training

**Training Data**

- [allenai/prosocial-dialog](https://huggingface.co/datasets/allenai/prosocial-dialog).
- A small subset of [OpenChat](https://huggingface.co/togethercomputer/OpenChat)'s data to augment `casual` queries.
- The processed data can be found [here](https://drive.google.com/file/d/1ui4SuOYXyoq-5gVEC1NXwzJxs3hwaw0Y/view?usp=drivesdk).

**Training Procedure**

- **Hardware:** 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient accumulation steps:** 1
- **Batch size:** 16 x 4 = 64
- **Learning rate:** warmed up to 1e-5 over 100 steps, then held constant
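
As a rough sketch, the learning-rate schedule above corresponds to something like the following in PyTorch. Here `model` and `train_dataloader` are placeholders for the actual fine-tuning setup, which is not published here:

```python
import torch
from transformers import get_constant_schedule_with_warmup

# Placeholders: `model` and `train_dataloader` stand in for the real
# fine-tuning setup; only the optimizer and schedule follow the card.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
scheduler = get_constant_schedule_with_warmup(optimizer, num_warmup_steps=100)

for batch in train_dataloader:  # effective batch size: 64 sequences
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```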