SystemAdmin123 committed on
Commit 959ce07 · verified · 1 Parent(s): 4cd7b81

End of training

Files changed (2)
  1. README.md +124 -0
  2. generation_config.json +8 -0
README.md ADDED
@@ -0,0 +1,124 @@
---
library_name: transformers
base_model: Xenova/tiny-random-Phi3ForCausalLM
tags:
- axolotl
- generated_from_trainer
datasets:
- argilla/databricks-dolly-15k-curated-en
model-index:
- name: tiny-random-Phi3ForCausalLM
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.6.0`
```yaml
base_model: Xenova/tiny-random-Phi3ForCausalLM
batch_size: 128
bf16: true
chat_template: tokenizer_default_fallback_alpaca
datasets:
- format: custom
  path: argilla/databricks-dolly-15k-curated-en
  type:
    field_input: original-instruction
    field_instruction: original-instruction
    field_output: original-response
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
device_map: auto
eval_sample_packing: false
eval_steps: 200
flash_attention: true
gradient_checkpointing: true
group_by_length: true
hub_model_id: SystemAdmin123/tiny-random-Phi3ForCausalLM
hub_strategy: checkpoint
learning_rate: 0.0002
logging_steps: 10
lr_scheduler: cosine
max_steps: 10000
micro_batch_size: 32
model_type: AutoModelForCausalLM
num_epochs: 100
optimizer: adamw_bnb_8bit
output_dir: /root/.sn56/axolotl/tmp/tiny-random-Phi3ForCausalLM
pad_to_sequence_len: true
resize_token_embeddings_to_32x: false
sample_packing: true
save_steps: 200
save_total_limit: 1
sequence_len: 2048
tokenizer_type: LlamaTokenizerFast
torch_dtype: bf16
training_args_kwargs:
  hub_private_repo: true
trust_remote_code: true
val_set_size: 0.1
wandb_entity: ''
wandb_mode: online
wandb_name: Xenova/tiny-random-Phi3ForCausalLM-argilla/databricks-dolly-15k-curated-en
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: default
warmup_ratio: 0.05

```

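For orientation, the custom `datasets` entry above maps each dolly record into an Alpaca-style prompt: `original-instruction` fills both `{instruction}` and `{input}`, and `original-response` is the target. A rough sketch of that mapping follows; the `render_example` helper and the sample record are illustrative, not axolotl API:

```python
# Illustrative sketch of how the `format`/`field_*` settings above render
# one argilla/databricks-dolly-15k-curated-en record into a training prompt.
# `render_example` is a hypothetical helper, not part of axolotl.

def render_example(record: dict) -> tuple[str, str]:
    instruction = record["original-instruction"]  # field_instruction
    inp = record["original-instruction"]          # field_input (same column in this config)
    output = record["original-response"]          # field_output
    if inp:
        # format: '{instruction} {input}' -- note the instruction is
        # effectively duplicated, since field_input reuses the same column
        prompt = f"{instruction} {inp}"
    else:
        prompt = instruction                      # no_input_format: '{instruction}'
    return prompt, output

prompt, target = render_example({
    "original-instruction": "Name three primary colors.",
    "original-response": "Red, yellow, and blue.",
})
print(prompt)
```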
</details><br>

# tiny-random-Phi3ForCausalLM

This model is a fine-tuned version of [Xenova/tiny-random-Phi3ForCausalLM](https://huggingface.co/Xenova/tiny-random-Phi3ForCausalLM) on the argilla/databricks-dolly-15k-curated-en dataset.

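A minimal inference sketch with `transformers` (this assumes you have access to the Hub repo, which the config pushes as private; the base model appears to be a tiny test-sized checkpoint, so expect toy-quality output):

```python
# Minimal inference sketch for the fine-tuned checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SystemAdmin123/tiny-random-Phi3ForCausalLM"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

inputs = tokenizer("Name three primary colors.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```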
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: AdamW (8-bit, `OptimizerNames.ADAMW_BNB`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 100

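As a rough cross-check, these values correspond approximately to the following `transformers` `TrainingArguments`. This is an inferred sketch, not the exact object axolotl constructed; gradient accumulation is assumed to be 1, since 4 devices × per-device batch 32 already gives the total of 128:

```python
# Approximate TrainingArguments mirroring the hyperparameters above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tiny-random-Phi3ForCausalLM",  # placeholder path
    learning_rate=2e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_bnb_8bit",      # OptimizerNames.ADAMW_BNB
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=5,
    max_steps=100,
    bf16=True,
    logging_steps=10,
)
```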
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.1667 | 1    | 10.3773         |


### Framework versions

- Transformers 4.48.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
generation_config.json ADDED
@@ -0,0 +1,8 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "do_sample": true,
  "eos_token_id": 32000,
  "pad_token_id": 32000,
  "transformers_version": "4.48.1"
}
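These defaults are what `model.generate` picks up automatically for this checkpoint. A short sketch of loading them explicitly via the standard `GenerationConfig` API (repo access assumed):

```python
# Load the generation defaults shipped with the checkpoint and pass them
# to generate() explicitly instead of relying on the implicit lookup.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained(
    "SystemAdmin123/tiny-random-Phi3ForCausalLM"
)
print(gen_config.do_sample, gen_config.eos_token_id)  # True 32000

# outputs = model.generate(**inputs, generation_config=gen_config)
```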