RyanYr committed on

Commit b7db619 · verified · 1 Parent(s): db9134c

Model save
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
-base_model: RyanYr/reflect_llama8B_om2-300k_sft-t1_lr1e-6
+base_model: RyanYr/reflect_llama8B_om2-300k460k_sft-t1_lr1e-6
 library_name: transformers
-model_name: reflect_llama8B_om2-llama-t0-mstlrg-300k-llama33-130k-t12_sft-t1_lr1e-6
+model_name: reflect_llama8B_om2-mixed-t0-mstlrg-300k460k-t12_llama33-130k-t12_sft-t1_lr1e-6
 tags:
 - generated_from_trainer
 - trl
@@ -9,9 +9,9 @@ tags:
 licence: license
 ---
 
-# Model Card for reflect_llama8B_om2-llama-t0-mstlrg-300k-llama33-130k-t12_sft-t1_lr1e-6
+# Model Card for reflect_llama8B_om2-mixed-t0-mstlrg-300k460k-t12_llama33-130k-t12_sft-t1_lr1e-6
 
-This model is a fine-tuned version of [RyanYr/reflect_llama8B_om2-300k_sft-t1_lr1e-6](https://huggingface.co/RyanYr/reflect_llama8B_om2-300k_sft-t1_lr1e-6).
+This model is a fine-tuned version of [RyanYr/reflect_llama8B_om2-300k460k_sft-t1_lr1e-6](https://huggingface.co/RyanYr/reflect_llama8B_om2-300k460k_sft-t1_lr1e-6).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -20,14 +20,14 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 from transformers import pipeline
 
 question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="RyanYr/reflect_llama8B_om2-llama-t0-mstlrg-300k-llama33-130k-t12_sft-t1_lr1e-6", device="cuda")
+generator = pipeline("text-generation", model="RyanYr/reflect_llama8B_om2-mixed-t0-mstlrg-300k460k-t12_llama33-130k-t12_sft-t1_lr1e-6", device="cuda")
 output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
 print(output["generated_text"])
 ```
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/l8rz3k9a)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/s1697tax)
 
 This model was trained with SFT.
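The README hunks above only touch the model card's YAML front matter (`base_model`, `model_name`) and the same identifiers repeated in the body. A minimal sketch of pulling those top-level fields out of a card, using only the standard library; the card text below is a stand-in mirroring the post-commit README, not fetched from the Hub:

```python
# Extract simple `key: value` pairs from a model card's YAML front matter.
# CARD is a stand-in mirroring the post-commit README, not fetched from the Hub.
CARD = """---
base_model: RyanYr/reflect_llama8B_om2-300k460k_sft-t1_lr1e-6
library_name: transformers
model_name: reflect_llama8B_om2-mixed-t0-mstlrg-300k460k-t12_llama33-130k-t12_sft-t1_lr1e-6
tags:
- generated_from_trainer
- trl
---

# Model Card
"""

def front_matter(card: str) -> dict:
    """Return top-level `key: value` pairs between the leading '---' fences."""
    lines = card.splitlines()
    assert lines[0] == "---", "front matter must open the file"
    end = lines.index("---", 1)  # closing fence of the front matter
    fields = {}
    for line in lines[1:end]:
        if line.startswith("- ") or ":" not in line:
            continue  # skip list items such as the tags entries
        key, _, value = line.partition(":")
        if value.strip():
            fields[key.strip()] = value.strip()
    return fields

fields = front_matter(CARD)
print(fields["base_model"])  # → RyanYr/reflect_llama8B_om2-300k460k_sft-t1_lr1e-6
```

This hand-rolled parser only handles flat scalar keys, which is all the diff above changes; a real card with nested YAML would need a proper YAML library.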
last_checkpoint/config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "RyanYr/reflect_llama8B_om2-300k_sft-t1_lr1e-6",
+  "_name_or_path": "RyanYr/reflect_llama8B_om2-300k460k_sft-t1_lr1e-6",
   "architectures": [
     "LlamaForCausalLM"
   ],
last_checkpoint/model-00001-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:31b2f170487eaaa7462a6740a4074030f3abf31860fa10dcd498d0b0c0c80837
+oid sha256:27277005dad6d94a8f6fe5d18a959104994da4abe04bd584ad4fbec347bfc2d5
 size 4976706864
last_checkpoint/model-00002-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e0cdf05e971aabeb38e85eb7d4bb201595db6e98d31a381af5b94b1ecd248e45
+oid sha256:63225fc5995cd3fb88019ad7183aecf5885d574f076216f598e4789ba3da8264
 size 4999802720
last_checkpoint/model-00003-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1767de457bb7e45a1fd6a4a35de63672e0bf11efbc9f519d3366e8413f4f9096
+oid sha256:c4f02483e95fd27c8032034fe0e69e34fcb991d4818083a6391b3057d1c499cb
 size 4915916176
last_checkpoint/model-00004-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:86a7802cd7d384ab455a44c57297fcea9c7298a5e93a5b55e0372c9580aba0db
+oid sha256:81f75e06a527e939dd16cef857f15380009423e46d5c09a533205285594981ed
 size 1168147000
last_checkpoint/training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:10897690953d6e67df1b5d2af3f8618d40ac4523129c81be891b7fcf9416ae6d
+oid sha256:f2b77149e545080fc603f53c9dbb97649cbaf7e108edd22ed1e7077e8c68b76a
 size 6968
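Every weight and args file in this commit is stored as a Git LFS pointer: three lines giving the spec version, the `sha256` oid of the real payload, and its byte size, which is why only the oid line changes while sizes stay identical. A minimal sketch of parsing such a pointer and checking it against a local payload; the toy payload below is made up for illustration (real `.safetensors` shards are gigabytes):

```python
import hashlib

def parse_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def matches(pointer: dict, payload: bytes) -> bool:
    """Check a payload against the pointer's sha256 oid and declared size."""
    algo, _, digest = pointer["oid"].partition(":")
    assert algo == "sha256", "LFS pointers in this commit all use sha256"
    return (hashlib.sha256(payload).hexdigest() == digest
            and len(payload) == int(pointer["size"]))

# Toy payload standing in for a .safetensors shard.
payload = b"example weights"
pointer_text = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{hashlib.sha256(payload).hexdigest()}\n"
    f"size {len(payload)}\n"
)
print(matches(parse_pointer(pointer_text), payload))  # → True
```

The same check is what `git lfs fsck` performs against downloaded objects; doing it by hand is mainly useful when mirroring checkpoints outside of git.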