---
license: mit
base_model: vicgalle/gpt2-open-instruct-v1
tags:
- generated_from_trainer
- Transformers
- GPT2
model-index:
- name: hh-rlhf
  results: []
datasets:
- Anthropic/hh-rlhf
- hakurei/open-instruct-v1
tokenizers:
- GPT2Tokenizer
language:
- en
library_name: transformers
metrics:
- bleu
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# hh-rlhf

This model is a fine-tuned version of [vicgalle/gpt2-open-instruct-v1](https://huggingface.co/vicgalle/gpt2-open-instruct-v1) on a 15,000-example subset of the Anthropic/hh-rlhf dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1534

This model responds well to 'Human:' and 'Assistant:' prompts in conversational settings.

Shorter responses suit it best, so keep the generation length reasonable. Left to its own devices it can produce fairly esoteric output, including uncensored remarks and, at times, violent outbursts, especially when asked questions.

It needs further vetting for other textual uses.

```
Human: Insane clown posse says...

Human: Should we look for a woman?

Assistant: It’s okay if you’re having a tough time finding what you are looking for. It’s a common question people might come up with for an argument or misunderstanding. What are you looking for, and what kind of woman would you have?

Human: Are you trying to find someone to argue
```
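
A minimal generation sketch with the `transformers` library is shown below; the repository id, prompt, and sampling parameters are illustrative assumptions rather than settings taken from this card.

```python
# Sketch only: conversational generation in the Human:/Assistant: format.
# The model id below is a placeholder; substitute the actual hub repository id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hh-rlhf"  # placeholder repository id (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Human: How do I start a small vegetable garden?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

# Keep completions short; longer generations tend to drift (see note above).
outputs = model.generate(
    **inputs,
    max_new_tokens=64,        # illustrative value
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```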
## Model description

GPT2-open-instruct was trained fully on the open-instruct dataset. This fine-tune reimagines the LM head as a partial RLHF adapter, with subtle reinforcements from the hh-rlhf data.

## Intended uses & limitations

Intended to study the intersection of instruct models and prompting focused on subtle conversational exchanges. The model probably needs substantial refinement at this point.

## Training and evaluation data
```text
Train dataset size: 15000
Test dataset size: 500
Dataset({
    features: ['chosen', 'rejected'],
    num_rows: 15000
})
Dataset({
    features: ['chosen', 'rejected'],
    num_rows: 500
})
```
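
The split sizes above correspond to a 15,000/500 subset of Anthropic/hh-rlhf. A hedged sketch of how such a subset could be built with the `datasets` library follows; the exact selection logic used for this model is not documented, so taking the first rows of each split is an assumption.

```python
# Sketch only: building a 15k/500 subset of Anthropic/hh-rlhf.
from datasets import load_dataset

raw = load_dataset("Anthropic/hh-rlhf")

# Assumed selection: the first 15,000 training rows and 500 test rows.
train_ds = raw["train"].select(range(15_000))
test_ds = raw["test"].select(range(500))

print(train_ds)  # features: ['chosen', 'rejected'], num_rows: 15000
print(test_ds)   # features: ['chosen', 'rejected'], num_rows: 500
```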
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
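
As a reference, the hyperparameters above could be expressed as `transformers` `TrainingArguments` roughly as follows; the original training script is not part of this card, so any argument not listed above (output directory, evaluation strategy) is an assumption.

```python
# Sketch only: a possible TrainingArguments mapping for the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hh-rlhf",            # assumed output directory
    learning_rate=5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=4,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
    evaluation_strategy="epoch",     # assumed: validation loss is reported per epoch
)
```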

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3108        | 1.0   | 7500  | 2.1799          |
| 2.265         | 2.0   | 15000 | 2.1632          |
| 2.2507        | 3.0   | 22500 | 2.1567          |
| 2.2519        | 4.0   | 30000 | 2.1534          |


### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3