---
license: apache-2.0
---
# Better Implementation for [*PairRM*](https://huggingface.co/llm-blender/PairRM)

# **Introduction**

This version of PairRM includes some fixes to the training process, which improve the model's performance significantly.

## **Minor Fixes**

### Longer Context Length (2048 -> 3380)

Thanks to DeBERTa's tokenizer, the original PairRM model already had enough context length.

But the longer, the better :>

---

## **Major Fixes**

### Change Prompt Format

Why use something like the following?
```
<Response i + 1> {response}
```

So, I changed it to a prompt format based on Vicuna 1.1.
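
For reference, a rough sketch of what a Vicuna-1.1-style pairwise input could look like is shown below. The exact system message and candidate markers used in training are not documented here, so treat them as assumptions:

```python
# Sketch of a Vicuna-1.1-style pairwise template (the wording and candidate
# markers are assumptions, not the exact template used for this model).
VICUNA_SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_pair_input(prompt: str, response_a: str, response_b: str) -> str:
    """Format one prompt and two candidate responses for pairwise ranking."""
    return (
        f"{VICUNA_SYSTEM}\n"
        f"USER: {prompt}\n"
        f"ASSISTANT A: {response_a}\n"
        f"ASSISTANT B: {response_b}"
    )

print(build_pair_input("What is 2 + 2?", "4", "5"))
```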

---

### Change Truncation Side

The original process used right-side truncation even on the input. This can cause serious problems when the input exceeds the model's sequence length, because the most recent part of the conversation gets cut off. So, the input is now truncated from the left side instead.
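
With Hugging Face tokenizers, switching the truncation side is a one-liner. A minimal sketch, assuming a DeBERTa-v3 tokenizer like the one PairRM builds on:

```python
from transformers import AutoTokenizer

# Assumption: using the base DeBERTa-v3 tokenizer for illustration.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
tokenizer.truncation_side = "left"  # drop the oldest tokens, keep the recent turns

long_prompt = " ".join(["conversation turn"] * 3000)  # stand-in for an over-length input
candidate = "A short candidate response."

encoded = tokenizer(
    long_prompt,
    candidate,
    truncation="only_first",  # truncate only the input side, never the response
    max_length=3380,
)
print(len(encoded["input_ids"]))  # capped at 3380
```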

---

### Dataset Filter

There was a decent amount of empty assistant responses in the original dataset, so I dropped those examples.
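
A minimal sketch of that kind of filter with the `datasets` library, using toy data since the real column names aren't documented here:

```python
from datasets import Dataset

# Toy stand-in for the training data; the column names are assumptions.
ds = Dataset.from_dict({
    "prompt": ["What is 2 + 2?", "Name a color."],
    "response": ["4", ""],  # second row has an empty assistant response
})

# Keep only rows whose assistant response is non-empty after stripping whitespace.
ds = ds.filter(lambda ex: ex["response"] is not None and ex["response"].strip() != "")
print(ds.num_rows)  # 1
```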