Xiangchi committed · commit 9dae89c · verified · 1 parent: a84949b

Model save

Files changed (1): README.md added (+73, -0)
---
base_model: NousResearch/Llama-2-13b-chat-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: math_with_reason_13bf
  results: []
library_name: peft
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/xiangchiyuan/huggingface/runs/bo9nwfsx)

# math_with_reason_13bf

This model is a PEFT fine-tuned version of [NousResearch/Llama-2-13b-chat-hf](https://huggingface.co/NousResearch/Llama-2-13b-chat-hf); the training dataset is not recorded in this card.

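For orientation, here is a minimal inference sketch (not part of the original card). It assumes the adapter weights load with `peft`; `math_with_reason_13bf` below is a placeholder adapter path, so substitute the actual repo id or local directory. The 8-bit loading mirrors the quantization config listed under Training procedure.

```python
# Minimal inference sketch (assumed, not from the original card).
# "math_with_reason_13bf" is a placeholder adapter path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Llama-2-13b-chat-hf"
adapter_path = "math_with_reason_13bf"  # placeholder; substitute the real repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    load_in_8bit=True,   # mirrors the 8-bit config used during training
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_path)  # attach the PEFT adapter
model.eval()

prompt = "Solve step by step: what is 17 * 24?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
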
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: True
- _load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
- bnb_4bit_quant_storage: uint8
- load_in_4bit: False
- load_in_8bit: True

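Expressed as code, the list above corresponds roughly to the following `BitsAndBytesConfig` (a sketch, not the original training script; `quant_method`, the underscore-prefixed fields, and `bnb_4bit_quant_storage` are internal or derived attributes of the serialized config, so only the user-facing arguments appear):

```python
# Sketch of the quantization config above as a transformers BitsAndBytesConfig.
# The 4-bit fields are carried along at their defaults but ignored in 8-bit mode.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-13b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```
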
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0

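As a rough illustration of how these hyperparameters map onto a TRL `SFTTrainer` setup (an assumed sketch, not the original script; the dataset, text column, and LoRA settings are placeholders, since the card records none of them):

```python
# Training sketch (assumed): maps the hyperparameters above onto TRL's SFTTrainer.
# Dataset path, text column, and LoRA settings are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

train_dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder

args = TrainingArguments(
    output_dir="math_with_reason_13bf",
    learning_rate=1.41e-5,           # learning_rate
    per_device_train_batch_size=64,  # train_batch_size
    per_device_eval_batch_size=8,    # eval_batch_size
    gradient_accumulation_steps=16,  # 64 * 16 = 1024 total_train_batch_size
    num_train_epochs=10.0,           # num_epochs
    lr_scheduler_type="linear",      # lr_scheduler_type
    seed=42,                         # seed
    # the default optimizer is Adam(W) with betas=(0.9, 0.999), eps=1e-8
)

peft_config = LoraConfig(task_type="CAUSAL_LM")  # placeholder LoRA settings

trainer = SFTTrainer(
    model="NousResearch/Llama-2-13b-chat-hf",  # TRL loads the base model from the id
    args=args,
    train_dataset=train_dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # placeholder column; newer TRL moves this into SFTConfig
)
trainer.train()
```
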
### Training results

### Framework versions

- PEFT 0.5.0
- Transformers 4.43.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.13.1
- Tokenizers 0.19.1

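To reproduce the recorded environment, a quick local check (only these versions are recorded; newer ones may also work but are untested here):

```python
# Sanity-check the local environment against the versions listed above.
expected = {
    "peft": "0.5.0",
    "transformers": "4.43.0.dev0",
    "torch": "2.1.0+cu118",
    "datasets": "2.13.1",
    "tokenizers": "0.19.1",
}
for name, want in expected.items():
    have = __import__(name).__version__
    status = "OK" if have == want else f"MISMATCH (have {have})"
    print(f"{name:12s} expected {want:14s} {status}")
```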