RyanYr committed
Commit 3d1bbaa
1 Parent(s): 17788ab

Model save
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
-base_model: RyanYr/reflect_llm8B_llmMstlrg-om2-80k-SftT2_MgSpsdpT02_b1.0
+base_model: RyanYr/reflect_llm8B_om2-mstlrg300k460kLlm33130k_SftDpo-MgSpsdpT1_b1.0
 library_name: transformers
-model_name: reflect_llm8B_llmMstlrg-om2-80k-SftT2_MgSpsdpIter2T02_b1.0
+model_name: reflect_llm8B_om2-mstlrg300k460kLlm33130k_SftDpo-MgSpsdpIter2T1_b0.5
 tags:
 - generated_from_trainer
 - trl
@@ -9,9 +9,9 @@ tags:
 licence: license
 ---
 
-# Model Card for reflect_llm8B_llmMstlrg-om2-80k-SftT2_MgSpsdpIter2T02_b1.0
+# Model Card for reflect_llm8B_om2-mstlrg300k460kLlm33130k_SftDpo-MgSpsdpIter2T1_b0.5
 
-This model is a fine-tuned version of [RyanYr/reflect_llm8B_llmMstlrg-om2-80k-SftT2_MgSpsdpT02_b1.0](https://huggingface.co/RyanYr/reflect_llm8B_llmMstlrg-om2-80k-SftT2_MgSpsdpT02_b1.0).
+This model is a fine-tuned version of [RyanYr/reflect_llm8B_om2-mstlrg300k460kLlm33130k_SftDpo-MgSpsdpT1_b1.0](https://huggingface.co/RyanYr/reflect_llm8B_om2-mstlrg300k460kLlm33130k_SftDpo-MgSpsdpT1_b1.0).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -20,14 +20,14 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 from transformers import pipeline
 
 question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="RyanYr/reflect_llm8B_llmMstlrg-om2-80k-SftT2_MgSpsdpIter2T02_b1.0", device="cuda")
+generator = pipeline("text-generation", model="RyanYr/reflect_llm8B_om2-mstlrg300k460kLlm33130k_SftDpo-MgSpsdpIter2T1_b0.5", device="cuda")
 output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
 print(output["generated_text"])
 ```
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/y8nevzte)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/wu84xu9d)
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).

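The model card only states that the model was trained with DPO via TRL; the training script itself is not part of this commit. Below is a minimal sketch of how such a run could look with TRL's `DPOTrainer`, assuming recent TRL. The dataset name, batch sizes, epoch count, and `beta=0.5` (guessed from the `_b0.5` suffix in the model name) are placeholders, not values read from `training_args.bin`.

```python
# Hypothetical DPO fine-tuning sketch with TRL; dataset, hyperparameters,
# and output path are placeholders, NOT recovered from this commit.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "RyanYr/reflect_llm8B_om2-mstlrg300k460kLlm33130k_SftDpo-MgSpsdpT1_b1.0"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference data must provide "prompt", "chosen", and "rejected" columns;
# this public dataset is only a stand-in for the actual preference pairs.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(
    output_dir="reflect_llm8B_om2-mstlrg300k460kLlm33130k_SftDpo-MgSpsdpIter2T1_b0.5",
    beta=0.5,  # assumed from the "_b0.5" suffix in the model name
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    report_to="wandb",  # matches the W&B badge in the model card
)

trainer = DPOTrainer(
    model=model,          # ref_model is omitted; TRL derives one internally
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
trainer.save_model(args.output_dir)
```
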
last_checkpoint/config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "RyanYr/reflect_llm8B_llmMstlrg-om2-80k-SftT2_MgSpsdpT02_b1.0",
+  "_name_or_path": "RyanYr/reflect_llm8B_om2-mstlrg300k460kLlm33130k_SftDpo-MgSpsdpT1_b1.0",
   "architectures": [
     "LlamaForCausalLM"
   ],
last_checkpoint/model-00001-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1ac24b736b91c5c2840669bd5173072f4aadd3c7d4a57eb06d55ffbb37167205
+oid sha256:c560f01f0508a9f88871069d4ef53257580778edbebce3b311f9bdff2d2818fe
 size 4976706864
last_checkpoint/model-00002-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a445918589e8f19c6fd89aca70f2666a31dbd0f2517f99008c7fe1ff934da389
+oid sha256:b25c27d2cbd3721c9f9d243da83b106a0354decfd77610008b5bb0ec0da28c13
 size 4999802720
last_checkpoint/model-00003-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f6a9f1c0d69f93c1b81bb4584651cf41fe3e4345c70102819668be0c14bfe6f3
+oid sha256:3aa7b51636989630450ed25f48d0b3c7da984407aff40a99deafa34bfd2ecf4b
 size 4915916176
last_checkpoint/model-00004-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9797ec99608e9514d64c60868d51bfce817fbba660f37cb7afe13f9b90c1b2aa
+oid sha256:fd6575d0a1c1780bd12de30a7bea285ba8a067bbf6fb026816245b1d726da249
 size 1168147000
last_checkpoint/training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2f5a330b5f57bafa5690ca464f2e5d94ffdabe07e2bbeeff79114ac324cfb57d
+oid sha256:76ed97b79eede608b10af40280f0195802aa5496472bb7d087bded932dfe95e0
 size 8056
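The updated weights in this commit live under `last_checkpoint/` as four safetensors shards plus `config.json`. A minimal sketch for loading that checkpoint directly from the subfolder is shown below; the `subfolder` usage follows the file layout above, while the dtype, device placement, and the location of the tokenizer files (assumed at the repository root, since none appear in this diff) are assumptions.

```python
# Sketch: load the sharded checkpoint saved under last_checkpoint/.
# dtype, device_map (requires accelerate), and tokenizer location are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RyanYr/reflect_llm8B_om2-mstlrg300k460kLlm33130k_SftDpo-MgSpsdpIter2T1_b0.5"

model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    subfolder="last_checkpoint",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo_id)  # tokenizer assumed at repo root
```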