Update README.md
README.md (changed)
@@ -7,32 +7,29 @@ tags:
 - trl
 - sft
 licence: license
+language:
+- am
+- ar
+- de
+- en
+- es
+- hi
+- ru
+- uk
+- zh
+license: apache-2.0
 ---
 
-# Model Card for judgelm_llama_31_8b_toxic_ckpt_ep2
 
 This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
-It has been trained using [TRL](https://github.com/huggingface/trl).
+It has been trained using [TRL](https://github.com/huggingface/trl) with the [textdetox/detoxification_pairwise_style_evaluation](https://huggingface.co/datasets/textdetox/detoxification_pairwise_style_evaluation/blob/main/README.md) dataset.
 
 ## Quick start
 
-```python
-from transformers import pipeline
 
-question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="textdetox/judgelm_llama_31_8b_toxic_ckpt_ep2", device="cuda")
-output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
-print(output["generated_text"])
-```
 
-## Training procedure
 
-
-This model was trained with SFT.
-
-### Framework versions
+### Training framework versions
 
 - TRL: 0.16.0
 - Transformers: 4.50.1
@@ -40,19 +37,4 @@ This model was trained with SFT.
 - Datasets: 3.4.1
 - Tokenizers: 0.21.1
 
-## Citations
-
-Cite TRL as:
-
-```bibtex
-@misc{vonwerra2022trl,
-    title = {{TRL: Transformer Reinforcement Learning}},
-    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
-    year = 2020,
-    journal = {GitHub repository},
-    publisher = {GitHub},
-    howpublished = {\url{https://github.com/huggingface/trl}}
-}
-```
+## Citations
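The updated card links the training data directly, so it can be pulled and inspected with the `datasets` library before relying on the judge. A minimal sketch, assuming only that the dataset id above loads as published (split names and field layout come from the dataset repository, not from this card):

```python
from datasets import load_dataset

# Pull the pairwise detoxification style-evaluation data referenced in the card.
ds = load_dataset("textdetox/detoxification_pairwise_style_evaluation")

print(ds)               # available splits and row counts
split = next(iter(ds))  # take whichever split is listed first
print(ds[split][0])     # one raw example, with whatever fields the dataset publishes
```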
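The card describes the model as TRL-based SFT of meta-llama/Llama-3.1-8B-Instruct on that dataset, with TRL 0.16.0 among the framework versions. A rough sketch of what such a run could look like with the TRL 0.16 API; the split name, the `to_text` preprocessing, and the epoch count (guessed from the `ckpt_ep2` suffix) are illustrative assumptions, not the card's actual recipe:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumption: a "train" split exists; the real split/config is not stated in the card.
dataset = load_dataset("textdetox/detoxification_pairwise_style_evaluation", split="train")

def to_text(example):
    # Placeholder preprocessing: flatten each row into a single training string.
    # The actual prompt/label construction used for this judge model is not documented here.
    return {"text": " ".join(str(v) for v in example.values())}

dataset = dataset.map(to_text, remove_columns=dataset.column_names)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # base model named in the card
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="judgelm_llama_31_8b_toxic_ckpt_ep2",
        num_train_epochs=2,  # assumption, inferred from the "ep2" checkpoint name
    ),
)
trainer.train()
```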