fals3 committed
Commit 3c1d350 · verified · Parent: 99fe5cc

Upload folder using huggingface_hub

Files changed (3)
  1. .DS_Store +0 -0
  2. README.md +199 -0
  3. data/train-00000-of-00001.parquet +3 -0
.DS_Store ADDED
Binary file (6.15 kB)
 
README.md ADDED
@@ -0,0 +1,199 @@
---
dataset_info:
  features:
  - name: model_type
    dtype: string
  - name: namespace
    dtype: string
  - name: model_name
    dtype: string
  - name: training_method
    dtype: string
  - name: model_size
    dtype: int64
  - name: trainable_params
    dtype: int64
  - name: url
    dtype: string
  - name: doi
    dtype: float64
  splits:
  - name: train
    num_bytes: 6257
    num_examples: 40
  download_size: 4879
  dataset_size: 6257
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
pretty_name: PEFT Unit Test Generation Experiments
size_categories:
- n<1K
---

# PEFT Unit Test Generation Experiments

## Dataset Description

The **PEFT Unit Test Generation Experiments** dataset catalogs a collection of models trained to generate unit tests with parameter-efficient fine-tuning (PEFT) methods. It spans several namespaces, model sizes, and tuning methods, providing a comprehensive resource for unit test generation research.
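
For orientation, here is a minimal sketch of loading the single `train` split with the `datasets` library; the repository id below is a placeholder, not this dataset's actual path on the Hub:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual Hub id.
ds = load_dataset("<namespace>/peft-unit-test-generation-experiments", split="train")

print(ds)     # Dataset with 40 rows and the 8 columns described below
print(ds[0])  # one record describing a single trained model variant
```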

## Dataset Structure

### Data Fields

Each example in the dataset corresponds to a specific trained model variant and includes the following features:

| Feature Name | Description |
|--------------------|-------------|
| `model_type` | The type or architecture of the base model (e.g., codegen, starcoder). |
| `namespace` | The organization or group that created or published the base model (e.g., Salesforce, meta-llama). |
| `model_name` | The specific name or identifier of the model. |
| `training_method` | The fine-tuning method used for training: full fine-tuning or a PEFT method such as LoRA, (IA)³, or prompt tuning. |
| `model_size` | The size of the base model in number of parameters (e.g., 350M, 7B). |
| `trainable_params` | The number of parameters actually trained under the given method and [hyperparameters](#training-hyperparameters). |
| `url` | A direct link to the model repository. |
| `doi` | The digital object identifier associated with the trained model. |

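As an illustration of how these fields combine, the following sketch (assuming `ds` was loaded as in the earlier snippet) compares tuning methods by the fraction of parameters they train:

```python
from collections import defaultdict

# Average trainable-parameter fraction per training method.
fractions = defaultdict(list)
for row in ds:
    fractions[row["training_method"]].append(row["trainable_params"] / row["model_size"])

for method, vals in sorted(fractions.items()):
    print(f"{method}: mean trainable fraction = {sum(vals) / len(vals):.4%}")
```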

## Dataset Details

### Training Hyperparameters

#### Model-agnostic Hyperparameters
<table>
  <thead>
    <tr>
      <th>Hyperparameter</th>
      <th>Method</th>
      <th>Value</th>
    </tr>
  </thead>
  <tbody>
    <tr style="font-weight: bold;">
      <td colspan="3">Common</td>
    </tr>
    <tr>
      <td>Optimizer</td>
      <td>-</td>
      <td>AdamW</td>
    </tr>
    <tr>
      <td>LR schedule</td>
      <td>-</td>
      <td>Linear</td>
    </tr>
    <tr>
      <td>LR warmup ratio</td>
      <td>-</td>
      <td>0.1</td>
    </tr>
    <tr>
      <td>Batch size</td>
      <td>-</td>
      <td>1</td>
    </tr>
    <tr>
      <td>Gradient accumulation steps</td>
      <td>-</td>
      <td>8</td>
    </tr>
    <tr>
      <td># Epochs</td>
      <td>-</td>
      <td>3</td>
    </tr>
    <tr>
      <td>Precision</td>
      <td>-</td>
      <td>Mixed</td>
    </tr>
    <tr>
      <td style="vertical-align: middle;" rowspan="4">Learning rate</td>
      <td>Full fine-tuning</td>
      <td>5E-5</td>
    </tr>
    <tr>
      <td>LoRA</td>
      <td>3E-4</td>
    </tr>
    <tr>
      <td>(IA)<sup>3</sup></td>
      <td>3E-4</td>
    </tr>
    <tr>
      <td>Prompt tuning</td>
      <td>3E-3</td>
    </tr>
    <tr style="font-weight: bold;">
      <td colspan="3">Method specific</td>
    </tr>
    <tr>
      <td>Alpha</td>
      <td>LoRA</td>
      <td>32</td>
    </tr>
    <tr>
      <td>Dropout</td>
      <td>LoRA</td>
      <td>0.1</td>
    </tr>
    <tr>
      <td>Rank</td>
      <td>LoRA</td>
      <td>16</td>
    </tr>
    <tr>
      <td>Virtual tokens</td>
      <td>Prompt tuning</td>
      <td>20</td>
    </tr>
  </tbody>
</table>
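
Translated into code, these settings correspond roughly to the following `peft`/`transformers` configuration. This is a minimal sketch of the LoRA case, assuming the standard `LoraConfig` and `TrainingArguments` interfaces; it is not the authors' actual training script:

```python
from peft import LoraConfig, TaskType
from transformers import TrainingArguments

# LoRA hyperparameters from the "Method specific" rows above.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,              # Rank
    lora_alpha=32,     # Alpha
    lora_dropout=0.1,  # Dropout
    target_modules=["q_proj", "v_proj"],  # model-dependent; see the next table
)

# Common hyperparameters from the rows above.
training_args = TrainingArguments(
    output_dir="out",               # hypothetical output path
    optim="adamw_torch",            # AdamW optimizer
    lr_scheduler_type="linear",     # linear LR schedule
    warmup_ratio=0.1,               # LR warmup ratio
    per_device_train_batch_size=1,  # batch size
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    learning_rate=3e-4,             # LoRA LR (5e-5 for full fine-tuning, 3e-3 for prompt tuning)
    fp16=True,                      # mixed precision
)
```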

#### Model-specific Hyperparameters
<table>
  <thead>
    <tr>
      <th>Hyperparameter</th>
      <th>Method</th>
      <th>Model</th>
      <th>Value</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td rowspan="10" style="vertical-align: middle;">Targeted attention modules</td>
      <td rowspan="10" style="vertical-align: middle;">LoRA, (IA)<sup>3</sup></td>
      <td>Salesforce/codegen-350M-multi</td>
      <td>qkv_proj</td>
    </tr>
    <tr><td>Salesforce/codegen2-1B_P</td><td>qkv_proj</td></tr>
    <tr><td>Salesforce/codegen2-3_7B_P</td><td>qkv_proj</td></tr>
    <tr><td>Salesforce/codegen2-7B_P</td><td>qkv_proj</td></tr>
    <tr><td>Salesforce/codegen2-16B_P</td><td>qkv_proj</td></tr>
    <tr><td>meta-llama/CodeLlama-7b-hf</td><td>q_proj, v_proj</td></tr>
    <tr><td>bigcode/starcoderbase</td><td>c_attn</td></tr>
    <tr><td>bigcode/starcoder2-3b</td><td>q_proj, v_proj</td></tr>
    <tr><td>bigcode/starcoder2-7b</td><td>q_proj, v_proj</td></tr>
    <tr><td>bigcode/starcoder2-15b</td><td>q_proj, v_proj</td></tr>
    <tr>
      <td rowspan="10" style="vertical-align: middle;">Targeted feedforward modules</td>
      <td rowspan="10" style="vertical-align: middle;">(IA)<sup>3</sup></td>
      <td>Salesforce/codegen-350M-multi</td>
      <td>fc_out</td>
    </tr>
    <tr><td>Salesforce/codegen2-1B_P</td><td>fc_out</td></tr>
    <tr><td>Salesforce/codegen2-3_7B_P</td><td>fc_out</td></tr>
    <tr><td>Salesforce/codegen2-7B_P</td><td>fc_out</td></tr>
    <tr><td>Salesforce/codegen2-16B_P</td><td>fc_out</td></tr>
    <tr><td>meta-llama/CodeLlama-7b-hf</td><td>down_proj</td></tr>
    <tr><td>bigcode/starcoderbase</td><td>mlp.c_proj</td></tr>
    <tr><td>bigcode/starcoder2-3b</td><td>q_proj, c_proj</td></tr>
    <tr><td>bigcode/starcoder2-7b</td><td>q_proj, c_proj</td></tr>
    <tr><td>bigcode/starcoder2-15b</td><td>q_proj, c_proj</td></tr>
  </tbody>
</table>
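
As a concrete example, the attention and feedforward targets for meta-llama/CodeLlama-7b-hf would map onto an (IA)<sup>3</sup> configuration along these lines; the exact construction used in the experiments is an assumption:

```python
from peft import IA3Config, TaskType

# (IA)^3 targets for meta-llama/CodeLlama-7b-hf, read off the table above:
# attention projections q_proj/v_proj plus feedforward projection down_proj.
ia3_config = IA3Config(
    task_type=TaskType.CAUSAL_LM,
    target_modules=["q_proj", "v_proj", "down_proj"],
    feedforward_modules=["down_proj"],  # must be a subset of target_modules
)
```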
data/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:de72c805eb3507a15f594519c48abd6190a1341ce9bd82e252e605b3d85bc5d1
size 6357