---
base_model: unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit
datasets:
- Hypersniper/unity_api_2022_3
- ibranze/codellama_unity3d_v2
- neph1/Unity_Code_QnA
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---

# Description

Qwen2.5-Coder-7B-Instruct fine-tuned on a merged dataset of Unity3D Q&A drawn from these three datasets (a merge sketch follows the list):

- [ibranze/codellama_unity3d_v2](https://huggingface.co/datasets/ibranze/codellama_unity3d_v2) (full)
- [Hypersniper/unity_api_2022_3](https://huggingface.co/datasets/Hypersniper/unity_api_2022_3) (10%)
- [neph1/Unity_Code_QnA](https://huggingface.co/datasets/neph1/Unity_Code_QnA) (full)
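
For illustration, a merge along these lines can be assembled with the `datasets` library. This is a sketch, not the exact preprocessing used: the sampling seed, the 10% selection method, and the assumption that the three datasets have already been normalized to a common column layout are all illustrative.

```python
# Illustrative merge of the three datasets (not the exact preprocessing used).
# Assumes the three datasets have first been normalized to the same columns.
from datasets import load_dataset, concatenate_datasets

codellama = load_dataset("ibranze/codellama_unity3d_v2", split="train")      # used in full
unity_api = load_dataset("Hypersniper/unity_api_2022_3", split="train")
unity_api = unity_api.shuffle(seed=42).select(range(len(unity_api) // 10))   # ~10% sample
unity_qna = load_dataset("neph1/Unity_Code_QnA", split="train")              # used in full

merged = concatenate_datasets([codellama, unity_api, unity_qna]).shuffle(seed=42)
splits = merged.train_test_split(test_size=0.1, seed=42)                     # 10% validation split
print(splits)
```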


Preview 2:
26,210 rows, of which roughly 1,000 are from my own multi-response dataset.

Preview 1:
15,062 rows in total, with a 10% validation split.

Trained with the native chat template (minus tool usage; see this issue: https://github.com/unslothai/unsloth/issues/1053). From a little superficial testing, it also seems to respond well to the Mistral template.
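
A minimal inference sketch with `transformers` and the native chat template is shown below. The repository id is a placeholder (point it at wherever you load the weights from), and the dtype/device settings should be adjusted to your hardware.

```python
# Minimal chat-template inference sketch; the model id below is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "neph1/Qwen2.5-Coder-7B-Instruct-Unity"  # placeholder, adjust to the actual repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "How do I rotate a GameObject towards the mouse cursor in Unity?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```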


Consider this a preview while I develop a dataset of my own.

If you have any feedback, please share it. I've only done some basic testing so far, and I'm especially interested in hearing whether you're using it with Tabby or a similar coding tool.


# Uploaded model

- **Developed by:** neph1
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit

This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

# Training details

Trained for about 1.5 epochs. It is probably overfitting a bit, and I should introduce some general coding questions into my validation set to ensure the model doesn't lose too much general performance.

LoRA rank: 128

LoRA alpha: 256
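
In Unsloth terms, that rank/alpha corresponds to an adapter setup like the one below. The target modules, sequence length, and remaining arguments are the usual Unsloth defaults and are assumptions, not copied verbatim from the training script.

```python
# LoRA adapter setup matching the rank/alpha above; the other arguments are
# typical Unsloth defaults and are assumptions, not the exact training script.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit",
    max_seq_length=2048,  # assumed
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=128,            # rank
    lora_alpha=256,   # alpha
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",
    random_state=3407,
)
```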

The training arguments were:

```python
import torch
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir = "outputs",  # not in the original snippet; required by TrainingArguments
    per_device_train_batch_size = 2,
    gradient_accumulation_steps = 64,
    # max_steps = 10,
    num_train_epochs = 3,
    warmup_steps = 5,
    learning_rate = 1e-4,
    fp16 = not torch.cuda.is_bf16_supported(),
    bf16 = torch.cuda.is_bf16_supported(),
    logging_steps = 10,
    optim = "adamw_8bit",
    weight_decay = 0.01,
    lr_scheduler_type = "linear",
    seed = 3407,
    per_device_eval_batch_size = 2,
    eval_strategy = "steps",
    eval_accumulation_steps = 64,
    eval_steps = 10,
    eval_delay = 0,
    save_strategy = "steps",
    save_steps = 25,
    report_to = "none",
)
```
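
These arguments are passed to TRL's `SFTTrainer` in the usual Unsloth fashion. The sketch below shows the wiring; the dataset variables, text field name, and sequence length are assumptions rather than the exact training script.

```python
# Wiring the arguments above into TRL's SFTTrainer (Unsloth-style); dataset
# variables, text field name and max_seq_length are assumptions.
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=splits["train"],   # see the merge sketch above
    eval_dataset=splits["test"],
    dataset_text_field="text",       # assumed column holding the formatted chats
    max_seq_length=2048,             # assumed
    args=training_args,              # the TrainingArguments shown above
)
trainer.train()
```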


| Step | Training Loss | Validation Loss |
|-----:|--------------:|----------------:|
| 20   | 2.043000      | 1.197104        |
| 40   | 1.087300      | 0.933553        |
| 60   | 0.942200      | 0.890801        |
| 80   | 0.865600      | 0.866198        |
| 100  | 0.851400      | 0.849733        |
| 120  | 0.812900      | 0.837039        |
| 140  | 0.812400      | 0.827064        |
| 160  | 0.817300      | 0.818410        |
| 180  | 0.802600      | 0.810163        |
| 200  | 0.788600      | 0.803399        |