yhyu13 committed
Commit a1083e9 · 1 Parent(s): 7d20848
Files changed (1): README.md (+76 -0)

README.md:
---
license: other
license_name: microsoft-research-license
license_link: >-
  https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2/blob/main/LICENSE
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: ./models/dolphin-2_6-phi-2
model-index:
- name: dolphin-2_6-phi-2-sft-glaive-function-calling-v2-ep1-lora
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# dolphin-2_6-phi-2-sft-glaive-function-calling-v2-ep1-lora

This model is a LoRA fine-tune of [dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2) on the `simple-function-calling-v2_convert` dataset, a conversion of glaive-function-calling-v2 that I prepared for LLaMA-Factory: https://huggingface.co/datasets/Yhyu13/glaive-function-calling-v2-llama-factory-convert.
It achieves the following results on the evaluation set:
- Loss: 0.3524
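
Because this repository contains a PEFT LoRA adapter rather than full model weights (`library_name: peft` in the metadata), it is meant to be applied on top of the base model. Below is a minimal loading sketch, assuming the adapter repo id `Yhyu13/dolphin-2_6-phi-2-sft-glaive-function-calling-v2-ep1-lora` (an assumption; substitute your local path or the actual id) and the base model from the license link above:

```python
# Minimal sketch: load the base dolphin-2_6-phi-2 model and attach this LoRA adapter.
# The adapter repo id below is an assumption; replace it with the actual id or a local path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "cognitivecomputations/dolphin-2_6-phi-2"
adapter_id = "Yhyu13/dolphin-2_6-phi-2-sft-glaive-function-calling-v2-ep1-lora"  # assumed

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(model, adapter_id)  # apply the LoRA weights
model.eval()
```

To run inference closer to the training setup, the 4-bit `BitsAndBytesConfig` sketched in the Training procedure section below can be passed to `from_pretrained` via `quantization_config`.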

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

The following `bitsandbytes` quantization config was used during training (a matching `BitsAndBytesConfig` sketch is shown after the list):
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
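
For reference, here is a hedged sketch of a `transformers` `BitsAndBytesConfig` reconstructed from the values above; it is not the exact object LLaMA-Factory built internally:

```python
# Sketch of a BitsAndBytesConfig matching the 4-bit (QLoRA-style) settings listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load_in_8bit was False
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
    llm_int8_enable_fp32_cpu_offload=False,
)
```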

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
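
The effective batch size follows from per-device batch size × number of devices × gradient accumulation steps: 1 × 2 × 4 = 8. A hedged `TrainingArguments` sketch mirroring these values (the output directory and fp16 flag are assumptions; LLaMA-Factory constructs its own arguments internally):

```python
# Sketch of TrainingArguments reproducing the hyperparameters above.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Transformers defaults, so they are not set explicitly.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dolphin-2_6-phi-2-sft-glaive-function-calling-v2-ep1-lora",  # assumed path
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,  # with 2 GPUs: total train batch size = 1 * 2 * 4 = 8
    num_train_epochs=1.0,
    lr_scheduler_type="cosine",
    seed=42,
    fp16=True,  # assumption, consistent with bnb_4bit_compute_dtype=float16
)
```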

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3453        | 1.0   | 376  | 0.3524          |

### Framework versions

- PEFT 0.7.0
- Transformers 4.36.2
- PyTorch 2.1.1+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
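
A quick way to check that an environment matches these pins:

```python
# Print installed versions to compare against the list above.
import datasets, peft, tokenizers, torch, transformers

for pkg in (peft, transformers, torch, datasets, tokenizers):
    print(pkg.__name__, pkg.__version__)
```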