alothomas committed · Commit bf9aeb5 · verified · 1 Parent(s): 7e5706a

Final model commit after training

Files changed (2):
  1. README.md +67 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,67 @@
---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: Qwen2.5-0.5B-PRM-RAD-balanced-V2
tags:
- generated_from_trainer
- trl
- prm
licence: license
---

# Model Card for Qwen2.5-0.5B-PRM-RAD-balanced-V2

This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alothomas/Qwen2.5-0.5B-PRM-RAD-balanced-V2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alothomas-stanford-university/PRM_Qwen/runs/13od7v1v)

This model was trained with PRM (process reward modeling).
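
The card does not include a PRM-specific usage example, so the following is a minimal, hypothetical scoring sketch. It assumes the checkpoint was produced by TRL's `PRMTrainer`, i.e. a token-classification head that scores each reasoning step at a `"\n"` step-separator token, with label index 1 meaning "correct"; the prompt and steps below are made-up illustrations, not taken from the training data.

```python
# Hypothetical usage sketch (not from the original card): score reasoning steps
# with a TRL-style PRM. Assumes a token-classification head, "\n" as the step
# separator, and label index 1 meaning "this step is correct".
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "alothomas/Qwen2.5-0.5B-PRM-RAD-balanced-V2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

prompt = "What is 2 + 2 * 3?"  # illustrative example
steps = ["Multiplication comes first: 2 * 3 = 6.", "Then add: 2 + 6 = 8."]

separator = "\n"  # assumed step separator (TRL's default)
text = prompt + separator + separator.join(steps) + separator
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

# Each step is scored at the separator token that closes it; keep only the last
# len(steps) separator positions so the one after the prompt is ignored.
sep_id = tokenizer.encode(separator, add_special_tokens=False)[-1]
positions = (inputs["input_ids"][0] == sep_id).nonzero(as_tuple=True)[0]
step_positions = positions[-len(steps):]
step_probs = logits[0, step_positions].softmax(dim=-1)[:, 1]
print([round(p, 3) for p in step_probs.tolist()])
```

If the checkpoint actually uses a different head, separator, or label convention, the separator string and label index in this sketch would need to be adjusted accordingly.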

### Framework versions

- TRL: 0.15.1
- Transformers: 4.49.0
- PyTorch: 2.6.0
- Datasets: 3.3.1
- Tokenizers: 0.21.0
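
As a quick sanity check, a short hypothetical snippet (not part of the generated card) can compare a local environment against the versions listed above; it assumes the usual pip distribution names, e.g. `torch` for the card's "PyTorch".

```python
# Hypothetical helper (not from the original card): compare installed package
# versions against the versions listed in the card.
from importlib.metadata import version

trained_with = {
    "trl": "0.15.1",
    "transformers": "4.49.0",
    "torch": "2.6.0",       # listed as "PyTorch" in the card
    "datasets": "3.3.1",
    "tokenizers": "0.21.0",
}

for package, expected in trained_with.items():
    installed = version(package)  # raises PackageNotFoundError if missing
    note = "" if installed == expected else " (differs from training environment)"
    print(f"{package}: installed {installed}, card lists {expected}{note}")
```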

## Citations

Cite PRM as:

```bibtex
@article{uesato2022solving,
    title        = {{Solving Math Word Problems With Process- and Outcome-Based Feedback}},
    author       = {Uesato, Jonathan and Kushman, Nate and Kumar, Ramana and Song, Francis and Siegel, Noah and Wang, Lisa and Creswell, Antonia and Irving, Geoffrey and Higgins, Irina},
    year         = 2022,
    journal      = {arXiv preprint arXiv:2211.14275}
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5a7dc14f68174e7f4ba6dce4d3ad8ecc00de87ac89afef4740fd7ce0706cf5ed
+ oid sha256:ef6575b669e38f1acab912db242b6b9a97f702fcf4f769c45da25c36bcddc553
  size 1976170816