---
dataset_info:
  features:
  - name: model_type
    dtype: string
  - name: namespace
    dtype: string
  - name: model_name
    dtype: string
  - name: training_method
    dtype: string
  - name: model_size
    dtype: int64
  - name: trainable_params
    dtype: int64
  - name: url
    dtype: string
  - name: doi
    dtype: float64
  splits:
  - name: train
    num_bytes: 6257
    num_examples: 40
  download_size: 4879
  dataset_size: 6257
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
pretty_name: PEFT Unit Test Generation Experiments
size_categories:
- n<1K
---
# PEFT Unit Test Generation Experiments
## Dataset description
The **PEFT Unit Test Generation Experiments** dataset contains metadata about a set of models trained for unit test generation with parameter-efficient fine-tuning (PEFT) methods. It covers models from multiple namespaces and a range of sizes, each trained with several tuning methods, providing a reference resource for unit test generation research.
## Dataset Structure
### Data Fields
Each example in the dataset corresponds to a specific trained model variant and includes the following features:
| Feature Name | Description |
|-------------------|-----------------------------------------------------------------------------------------------------|
| `model_type` | The type or architecture of the base model (e.g., codegen, starcoder). |
| `namespace` | The organization or group that created or published the base model (e.g., Salesforce, meta-llama). |
| `model_name` | The specific name or identifier of the model. |
| `training_method` | The training method used: full fine-tuning or a PEFT method such as LoRA, (IA)³, or prompt tuning.   |
| `model_size`      | The total number of parameters of the base model (e.g., 350M, 7B).                                   |
| `trainable_params`| The number of trainable parameters for the specific tuning method and [hyperparameters](#training-hyperparameters). |
| `url` | A direct link to the model repository. |
| `doi` | The digital object identifier associated with the trained model. |
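
Since each record is a flat row of model metadata, the dataset can be loaded and filtered with the 🤗 `datasets` library. The snippet below is a minimal sketch; the repository id is a placeholder, and the exact string values of `training_method` depend on how that column is encoded.

```python
from datasets import load_dataset

# Placeholder repository id; replace with the actual Hub path of this dataset.
ds = load_dataset("your-namespace/peft-unit-test-generation-experiments", split="train")

# Example: list all LoRA-tuned models with their trainable-parameter counts.
# The filter value depends on how training_method is encoded (e.g., "lora" or "LoRA").
lora_rows = ds.filter(lambda row: row["training_method"].lower() == "lora")
for row in lora_rows:
    print(row["namespace"], row["model_name"], row["trainable_params"], row["url"])
```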
## Dataset Details
### Training Hyperparameters
#### Model-agnostic Hyperparameters
<table>
<thead>
<tr>
<th>Hyperparameter</th>
<th>Method</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr style="font-weight: bold;">
<td colspan="3">Common</td>
</tr>
<tr>
<td>Optimizer</td>
<td>-</td>
<td>AdamW</td>
</tr>
<tr>
<td>LR schedule</td>
<td>-</td>
<td>Linear</td>
</tr>
<tr>
<td>LR warmup ratio</td>
<td>-</td>
<td>0.1</td>
</tr>
<tr>
<td>Batch size</td>
<td>-</td>
<td>1</td>
</tr>
<tr>
<td>Gradient accumulation steps</td>
<td>-</td>
<td>8</td>
</tr>
<tr>
<td># Epochs</td>
<td>-</td>
<td>3</td>
</tr>
<tr>
<td>Precision</td>
<td>-</td>
<td>Mixed</td>
</tr>
<tr>
<td style="vertical-align: middle;" rowspan="4">Learning rate</td>
<td>Full fine-tuning</td>
<td>5E-5</td>
</tr>
<tr>
<td>LoRA</td>
<td>3E-4</td>
</tr>
<tr>
<td>(IA)<sup>3</sup></td>
<td>3E-4</td>
</tr>
<tr>
<td>Prompt tuning</td>
<td>3E-3</td>
</tr>
<tr style="font-weight: bold;">
<td colspan="3">Method specific</td>
</tr>
<tr>
<td>Alpha</td>
<td>LoRA</td>
<td>32</td>
</tr>
<tr>
<td>Dropout</td>
<td>LoRA</td>
<td>0.1</td>
</tr>
<tr>
<td>Rank</td>
<td>LoRA</td>
<td>16</td>
</tr>
<tr>
<td>Virtual tokens</td>
<td>Prompt tuning</td>
<td>20</td>
</tr>
</tbody>
</table>
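
For reference, the common settings above map onto Hugging Face `TrainingArguments` roughly as follows. This is an illustrative sketch rather than the exact training script used for these runs; the learning rate shown is the LoRA/(IA)³ value.

```python
from transformers import TrainingArguments

# Illustrative mapping of the model-agnostic hyperparameters above;
# not the verbatim configuration used to produce these checkpoints.
training_args = TrainingArguments(
    output_dir="peft-unit-test-generation",  # placeholder output path
    optim="adamw_torch",                     # AdamW optimizer
    lr_scheduler_type="linear",              # linear LR schedule
    warmup_ratio=0.1,                        # 10% LR warmup
    per_device_train_batch_size=1,           # batch size 1
    gradient_accumulation_steps=8,           # gradient accumulation steps 8
    num_train_epochs=3,                      # 3 epochs
    learning_rate=3e-4,                      # LoRA / (IA)^3; 5e-5 for full FT, 3e-3 for prompt tuning
    fp16=True,                               # mixed precision
)
```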
#### Model-specific Hyperparameters
<table>
<thead>
<tr>
<th>Hyperparameter</th>
<th>Method</th>
<th>Model</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="10" style="vertical-align: middle;">Targeted attention modules</td>
<td rowspan="10" style="vertical-align: middle;">LoRA, (IA)<sup>3</sup></td>
      <td>Salesforce/codegen-350M-multi</td>
<td>qkv_proj</td>
</tr>
<tr><td>Salesforce/codegen2-1B_P</td><td>qkv_proj</td></tr>
<tr><td>Salesforce/codegen2-3_7B_P</td><td>qkv_proj</td></tr>
<tr><td>Salesforce/codegen2-7B_P</td><td>qkv_proj</td></tr>
<tr><td>Salesforce/codegen2-16B_P</td><td>qkv_proj</td></tr>
<tr><td>meta-llama/CodeLlama-7b-hf</td><td>q_proj, v_proj</td></tr>
<tr><td>bigcode/starcoderbase</td><td>c_attn</td></tr>
<tr><td>bigcode/starcoder2-3b</td><td>q_proj, v_proj</td></tr>
<tr><td>bigcode/starcoder2-7b</td><td>q_proj, v_proj</td></tr>
<tr><td>bigcode/starcoder2-15b</td><td>q_proj, v_proj</td></tr>
<tr>
<td rowspan="10" style="vertical-align: middle;">Targeted feedforward modules</td>
<td rowspan="10" style="vertical-align: middle;">(IA)<sup>3</sup></td>
      <td>Salesforce/codegen-350M-multi</td>
<td>fc_out</td>
</tr>
<tr><td>Salesforce/codegen2-1B_P</td><td>fc_out</td></tr>
<tr><td>Salesforce/codegen2-3_7B_P</td><td>fc_out</td></tr>
<tr><td>Salesforce/codegen2-7B_P</td><td>fc_out</td></tr>
<tr><td>Salesforce/codegen2-16B_P</td><td>fc_out</td></tr>
<tr><td>meta-llama/CodeLlama-7b-hf</td><td>down_proj</td></tr>
<tr><td>bigcode/starcoderbase</td><td>mlp.c_proj</td></tr>
<tr><td>bigcode/starcoder2-3b</td><td>q_proj, c_proj</td></tr>
<tr><td>bigcode/starcoder2-7b</td><td>q_proj, c_proj</td></tr>
<tr><td>bigcode/starcoder2-15b</td><td>q_proj, c_proj</td></tr>
</tbody>
</table>
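
The method-specific and model-specific values above correspond to PEFT configurations along the lines of the sketch below, written with the `peft` library and using the meta-llama/CodeLlama-7b-hf module names as the example; other models substitute their own target modules from the tables. These are not the verbatim configs of these runs.

```python
from peft import LoraConfig, IA3Config, PromptTuningConfig, TaskType

# LoRA: rank 16, alpha 32, dropout 0.1, targeting the attention projections.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
)

# (IA)^3: attention and feedforward modules, with the feedforward subset flagged.
ia3_config = IA3Config(
    task_type=TaskType.CAUSAL_LM,
    target_modules=["q_proj", "v_proj", "down_proj"],
    feedforward_modules=["down_proj"],
)

# Prompt tuning: 20 virtual tokens.
prompt_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,
)
```

Each config would then be applied to the base model with `get_peft_model(base_model, config)` before training.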
## Training Runs



