---
dataset_info:
  features:
  - name: model_type
    dtype: string
  - name: namespace
    dtype: string
  - name: model_name
    dtype: string
  - name: training_method
    dtype: string
  - name: model_size
    dtype: int64
  - name: trainable_params
    dtype: int64
  - name: url
    dtype: string
  - name: doi
    dtype: float64
  splits:
  - name: train
    num_bytes: 6257
    num_examples: 40
  download_size: 4879
  dataset_size: 6257
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
pretty_name: PEFT Unit Test Generation Experiments
size_categories:
- n<1K
---

# PEFT Unit Test Generation Experiments

## Dataset description

The **PEFT Unit Test Generation Experiments** dataset contains metadata about a collection of models trained to generate unit tests with parameter-efficient fine-tuning (PEFT) methods. It covers models from multiple namespaces and a range of sizes, each trained with several tuning methods, providing a comprehensive resource for unit test generation research.

## Dataset Structure

### Data Fields

Each example in the dataset corresponds to a specific trained model variant and includes the following features:

| Feature Name       | Description                                                                                           |
|--------------------|-------------------------------------------------------------------------------------------------------|
| `model_type`       | The type or architecture of the base model (e.g., codegen, starcoder).                               |
| `namespace`        | The organization or group that published the base model (e.g., Salesforce, meta-llama).              |
| `model_name`       | The specific name or identifier of the model.                                                         |
| `training_method`  | The training method used: full fine-tuning or a PEFT method such as LoRA, (IA)³, or prompt tuning.   |
| `model_size`       | The total number of parameters of the base model (e.g., 350M, 7B).                                   |
| `trainable_params` | The number of trainable parameters under the given training method and [hyperparameters](#training-hyperparameters). |
| `url`              | A direct link to the model repository.                                                                |
| `doi`              | The digital object identifier associated with the trained model.                                      |
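
To work with these records programmatically, the dataset can be loaded with the 🤗 `datasets` library. The snippet below is a minimal sketch: the repository id `relancer-exp/results_peft-unit-test-generation-experiments` is assumed from the asset URLs in this card, and the literal label `"lora"` for `training_method` is an assumed spelling, so the actual labels are listed first.

```python
from datasets import load_dataset

# Repository id assumed from the asset URLs in this card.
ds = load_dataset(
    "relancer-exp/results_peft-unit-test-generation-experiments",
    split="train",
)

print(ds.features)  # schema: model_type, namespace, model_name, ...
print(ds[0])        # one trained-model record

# The exact label strings are an assumption; inspect them first.
print(ds.unique("training_method"))
lora_runs = ds.filter(lambda row: row["training_method"] == "lora")
```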

## Dataset Details

### Training Hyperparameters

#### Model-agnostic Hyperparameters

<table>
  <thead>
    <tr>
      <th>Hyperparameter</th>
      <th>Method</th>
      <th>Value</th>
    </tr>
  </thead>
  <tbody>
    <tr style="font-weight: bold;"><td colspan="3">Common</td></tr>
    <tr><td>Optimizer</td><td>-</td><td>AdamW</td></tr>
    <tr><td>LR schedule</td><td>-</td><td>Linear</td></tr>
    <tr><td>LR warmup ratio</td><td>-</td><td>0.1</td></tr>
    <tr><td>Batch size</td><td>-</td><td>1</td></tr>
    <tr><td>Gradient accumulation steps</td><td>-</td><td>8</td></tr>
    <tr><td># Epochs</td><td>-</td><td>3</td></tr>
    <tr><td>Precision</td><td>-</td><td>Mixed</td></tr>
    <tr>
      <td rowspan="4" style="vertical-align: middle;">Learning rate</td>
      <td>Full fine-tuning</td><td>5E-5</td>
    </tr>
    <tr><td>LoRA</td><td>3E-4</td></tr>
    <tr><td>(IA)<sup>3</sup></td><td>3E-4</td></tr>
    <tr><td>Prompt tuning</td><td>3E-3</td></tr>
    <tr style="font-weight: bold;"><td colspan="3">Method specific</td></tr>
    <tr><td>Alpha</td><td>LoRA</td><td>32</td></tr>
    <tr><td>Dropout</td><td>LoRA</td><td>0.1</td></tr>
    <tr><td>Rank</td><td>LoRA</td><td>16</td></tr>
    <tr><td>Virtual tokens</td><td>Prompt tuning</td><td>20</td></tr>
  </tbody>
</table>
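
As a reading aid, the common settings above can be expressed as Hugging Face `TrainingArguments`. This is a minimal sketch assuming a standard `transformers` Trainer loop, which this card does not mandate; `output_dir` is a hypothetical path, and the learning rate is method-dependent as listed in the table.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="peft-unit-test-generation",  # hypothetical output path
    optim="adamw_torch",                     # AdamW optimizer
    lr_scheduler_type="linear",              # linear LR schedule
    warmup_ratio=0.1,                        # LR warmup ratio
    per_device_train_batch_size=1,           # batch size
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    fp16=True,                               # mixed precision
    learning_rate=3e-4,                      # LoRA / (IA)³; 5e-5 for full FT, 3e-3 for prompt tuning
)
```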

#### Model-specific Hyperparameters

<table>
  <thead>
    <tr>
      <th>Hyperparameter</th>
      <th>Method</th>
      <th>Model</th>
      <th>Value</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td rowspan="10" style="vertical-align: middle;">Targeted attention modules</td>
      <td rowspan="10" style="vertical-align: middle;">LoRA, (IA)<sup>3</sup></td>
      <td>Salesforce/codegen-350M-multi</td><td>qkv_proj</td>
    </tr>
    <tr><td>Salesforce/codegen2-1B_P</td><td>qkv_proj</td></tr>
    <tr><td>Salesforce/codegen2-3_7B_P</td><td>qkv_proj</td></tr>
    <tr><td>Salesforce/codegen2-7B_P</td><td>qkv_proj</td></tr>
    <tr><td>Salesforce/codegen2-16B_P</td><td>qkv_proj</td></tr>
    <tr><td>meta-llama/CodeLlama-7b-hf</td><td>q_proj, v_proj</td></tr>
    <tr><td>bigcode/starcoderbase</td><td>c_attn</td></tr>
    <tr><td>bigcode/starcoder2-3b</td><td>q_proj, v_proj</td></tr>
    <tr><td>bigcode/starcoder2-7b</td><td>q_proj, v_proj</td></tr>
    <tr><td>bigcode/starcoder2-15b</td><td>q_proj, v_proj</td></tr>
    <tr>
      <td rowspan="10" style="vertical-align: middle;">Targeted feedforward modules</td>
      <td rowspan="10" style="vertical-align: middle;">(IA)<sup>3</sup></td>
      <td>Salesforce/codegen-350M-multi</td><td>fc_out</td>
    </tr>
    <tr><td>Salesforce/codegen2-1B_P</td><td>fc_out</td></tr>
    <tr><td>Salesforce/codegen2-3_7B_P</td><td>fc_out</td></tr>
    <tr><td>Salesforce/codegen2-7B_P</td><td>fc_out</td></tr>
    <tr><td>Salesforce/codegen2-16B_P</td><td>fc_out</td></tr>
    <tr><td>meta-llama/CodeLlama-7b-hf</td><td>down_proj</td></tr>
    <tr><td>bigcode/starcoderbase</td><td>mlp.c_proj</td></tr>
    <tr><td>bigcode/starcoder2-3b</td><td>q_proj, c_proj</td></tr>
    <tr><td>bigcode/starcoder2-7b</td><td>q_proj, c_proj</td></tr>
    <tr><td>bigcode/starcoder2-15b</td><td>q_proj, c_proj</td></tr>
  </tbody>
</table>
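
The method-specific values map directly onto `peft` configuration objects. The sketch below shows one plausible instantiation for a starcoder2-style model; module names vary per model as listed above, and these configs are an illustration rather than the exact ones used for these runs.

```python
from peft import IA3Config, LoraConfig, PromptTuningConfig, TaskType

# LoRA: rank 16, alpha 32, dropout 0.1; starcoder2 attention targets.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
)

# (IA)³: attention plus feedforward targets; `feedforward_modules` must be
# a subset of `target_modules`. Module names here are illustrative.
ia3_config = IA3Config(
    task_type=TaskType.CAUSAL_LM,
    target_modules=["q_proj", "v_proj", "c_proj"],
    feedforward_modules=["c_proj"],
)

# Prompt tuning: 20 virtual tokens.
prompt_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,
)
```

A config would then be attached to a base model with `peft.get_peft_model(model, lora_config)`.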

## Training Runs

![Training runs: full fine-tuning](https://huggingface.co/datasets/relancer-exp/results_peft-unit-test-generation-experiments/resolve/main/full%20fine-tuning.png)

![Training runs: LoRA](https://huggingface.co/datasets/relancer-exp/results_peft-unit-test-generation-experiments/resolve/main/lora.png)

![Training runs: (IA)³](https://huggingface.co/datasets/relancer-exp/results_peft-unit-test-generation-experiments/resolve/main/ia3.png)

![Training runs: prompt tuning](https://huggingface.co/datasets/relancer-exp/results_peft-unit-test-generation-experiments/resolve/main/prompt-tuning.png)