Shahradmz committed commit ef44527 (verified) · 1 Parent(s): 6c41096

Model save
README.md ADDED
@@ -0,0 +1,65 @@
+ ---
+ base_model: Qwen/Qwen2-0.5B-Instruct
+ library_name: transformers
+ model_name: Qwen2-0.5B-Instruct_continual_data_debug_PPO_0
+ tags:
+ - generated_from_trainer
+ licence: license
+ ---
+
+ # Model Card for Qwen2-0.5B-Instruct_continual_data_debug_PPO_0
+
+ This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="Shahradmz/Qwen2-0.5B-Instruct_continual_data_debug_PPO_0", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shahrad_m/AIFGen-ppo-continual-test/runs/0gnhk7j1)
+
+ This model was trained with PPO, a method introduced in [Fine-Tuning Language Models from Human Preferences](https://huggingface.co/papers/1909.08593).
+
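The clipped surrogate objective at the core of PPO can be sketched in a few lines of pure Python. This is an illustration of the formula, not TRL's actual implementation; the function name and the default `eps` are our own choices:

```python
import math

def ppo_clipped_objective(logp_new, logp_old, advantage, eps=0.2):
    """Per-token PPO surrogate: min(r * A, clip(r, 1 - eps, 1 + eps) * A)."""
    # Probability ratio r = pi_new(a|s) / pi_old(a|s), computed in log space.
    ratio = math.exp(logp_new - logp_old)
    # Clip the ratio into [1 - eps, 1 + eps].
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    # Take the pessimistic (smaller) of the unclipped and clipped terms.
    return min(ratio * advantage, clipped * advantage)
```

Taking the minimum means a large policy update can never increase the objective beyond what the clipped ratio allows, which is what keeps PPO updates conservative.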
33
+ ### Framework versions
34
+
35
+ - TRL: 0.15.1
36
+ - Transformers: 4.49.0
37
+ - Pytorch: 2.3.0
38
+ - Datasets: 3.3.2
39
+ - Tokenizers: 0.21.0
40
+
+ ## Citations
+
+ Cite PPO as:
+
+ ```bibtex
+ @article{mziegler2019fine-tuning,
+     title = {{Fine-Tuning Language Models from Human Preferences}},
+     author = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving},
+     year = 2019,
+     eprint = {arXiv:1909.08593}
+ }
+ ```
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+     title = {{TRL: Transformer Reinforcement Learning}},
+     author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+     year = 2020,
+     journal = {GitHub repository},
+     publisher = {GitHub},
+     howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
all_results.json CHANGED
@@ -1,4 +1,4 @@
 {
     "dataset": 0,
-    "eval_score": 2.4489197731018066
+    "eval_score": 5.812220096588135
 }
eval_results.json CHANGED
@@ -1,4 +1,4 @@
 {
     "dataset": 0,
-    "eval_score": 2.4489197731018066
+    "eval_score": 5.812220096588135
 }
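The score change recorded in these result files can be computed directly from the before/after JSON. A minimal stdlib-only sketch, with the inline literals copied from this commit:

```python
import json

# Before/after snapshots of eval_results.json from this commit.
before = json.loads('{"dataset": 0, "eval_score": 2.4489197731018066}')
after = json.loads('{"dataset": 0, "eval_score": 5.812220096588135}')

# Change in eval_score on the same dataset index.
delta = after["eval_score"] - before["eval_score"]
print(f"eval_score changed by {delta:+.4f} on dataset {after['dataset']}")
```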
last/adapter_config.json CHANGED
@@ -23,8 +23,8 @@
     "rank_pattern": {},
     "revision": null,
     "target_modules": [
-        "q_proj",
-        "v_proj"
+        "v_proj",
+        "q_proj"
     ],
     "task_type": "CAUSAL_LM",
     "use_dora": false,
last/adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8d989c4be74f8ea6d002ff4e9d4549392d5c7c156193c5decc4d0f78ef5a1ce2
+oid sha256:666d67f8d804507b18118efe15886f244a83d1b653b74f074488d12e1c42e18c
 size 8663400
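The pointer files above follow the Git LFS spec: `oid` is the SHA-256 digest of the stored blob, so a downloaded file can be verified locally with the stdlib. The helper name is ours; a sketch:

```python
import hashlib

def lfs_oid(data: bytes) -> str:
    """Recompute the Git LFS 'oid sha256:<hex>' digest for a blob's bytes."""
    return hashlib.sha256(data).hexdigest()

# For a real check, read the downloaded file and compare against the
# pointer's oid, e.g.: lfs_oid(open(path, "rb").read()) == expected_hex
print(lfs_oid(b"abc"))
```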
last/training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c71942154b1cc90c6c0a9fae78dcc099a7eb8540b3fb834b8e6f9ef62e17bf39
+oid sha256:042ca152ec986fd23715c5c702754d77a236f53af3e86154025487b18af001e4
 size 6456