---
dataset_info:
  features:
    - name: model_type
      dtype: string
    - name: namespace
      dtype: string
    - name: model_name
      dtype: string
    - name: training_method
      dtype: string
    - name: model_size
      dtype: int64
    - name: trainable_params
      dtype: int64
    - name: url
      dtype: string
    - name: doi
      dtype: float64
  splits:
    - name: train
      num_bytes: 6257
      num_examples: 40
  download_size: 4879
  dataset_size: 6257
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
pretty_name: PEFT Unit Test Generation Experiments
size_categories:
  - n<1K
---

# PEFT Unit Test Generation Experiments

## Dataset description

The PEFT Unit Test Generation Experiments dataset contains metadata and details about a set of models trained to generate unit tests with parameter-efficient fine-tuning (PEFT) methods. It covers base models from several namespaces and a range of sizes, each trained with different tuning methods, providing a comprehensive resource for unit test generation research.
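
The dataset can be loaded with the `datasets` library. A minimal sketch, assuming the repository id `andstor/peft-unit-test-generation-experiments` (substitute the actual repo id if it differs):

```python
from datasets import load_dataset

# Repository id is an assumption; replace with the actual dataset repo id.
ds = load_dataset("andstor/peft-unit-test-generation-experiments", split="train")

print(ds)      # Dataset with 40 rows and the fields described below
print(ds[0])   # One trained-model record (model_type, namespace, model_name, ...)
```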

## Dataset Structure

### Data Fields

Each example in the dataset corresponds to a specific trained model variant and includes the following features:

| Feature Name | Description |
| --- | --- |
| `model_type` | The type or architecture of the base model (e.g., codegen, starcoder). |
| `namespace` | The organization or group that created or published the base model (e.g., Salesforce, meta-llama). |
| `model_name` | The specific name or identifier of the model. |
| `training_method` | The fine-tuning method used for training (e.g., full fine-tuning, LoRA, (IA)³, prompt tuning). |
| `model_size` | The size of the base model in number of parameters (e.g., 350M, 7B). |
| `trainable_params` | The number of trainable parameters for the specific tuning method and hyperparameters. |
| `url` | A direct link to the model repository. |
| `doi` | The digital object identifier associated with the trained model. |
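
Continuing the loading sketch above, these fields can be used to slice the experiment metadata. The `"lora"` value below is an assumed encoding of `training_method`, so inspect the actual values first:

```python
# List the distinct tuning methods actually present in the data.
print(sorted(set(ds["training_method"])))

# Select all LoRA runs and order them by base-model size (assumes the
# training_method column literally contains the string "lora").
lora_runs = ds.filter(lambda ex: ex["training_method"] == "lora").sort("model_size")
for ex in lora_runs:
    print(ex["model_name"], ex["model_size"], ex["trainable_params"], ex["url"])
```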

## Dataset Details

### Training Hyperparameters

#### Model-agnostic Hyperparameters

| Hyperparameter | Method | Value |
| --- | --- | --- |
| **Common** | | |
| Optimizer | - | AdamW |
| LR schedule | - | Linear |
| LR warmup ratio | - | 0.1 |
| Batch size | - | 1 |
| # Epochs | - | 3 |
| Gradient accumulation steps | - | 8 |
| Precision | - | Mixed |
| Learning rate | Full fine-tuning | 5E-5 |
| | LoRA | 3E-4 |
| | (IA)³ | 3E-4 |
| | Prompt tuning | 3E-3 |
| **Method specific** | | |
| Alpha | LoRA | 32 |
| Dropout | LoRA | 0.1 |
| Rank | LoRA | 16 |
| Virtual tokens | Prompt tuning | 20 |
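
For illustration, the common and LoRA-specific values above map onto `peft` and `transformers` configuration objects roughly as follows. This is a sketch, not the authors' actual training script, and the output directory is a placeholder:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA-specific hyperparameters from the table above.
lora_config = LoraConfig(
    r=16,              # Rank
    lora_alpha=32,     # Alpha
    lora_dropout=0.1,  # Dropout
)

# Common (model-agnostic) hyperparameters from the table above.
training_args = TrainingArguments(
    output_dir="peft-unit-test-generation",  # placeholder path
    optim="adamw_torch",                     # Optimizer: AdamW
    lr_scheduler_type="linear",              # LR schedule: Linear
    warmup_ratio=0.1,                        # LR warmup ratio
    per_device_train_batch_size=1,           # Batch size
    gradient_accumulation_steps=8,           # Gradient accumulation steps
    num_train_epochs=3,                      # Epochs
    learning_rate=3e-4,                      # LoRA learning rate (5e-5 for full fine-tuning)
    fp16=True,                               # Mixed precision
)
```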

#### Model-specific Hyperparameters

| Hyperparameter | Method | Model | Value |
| --- | --- | --- | --- |
| Targeted attention modules | LoRA, (IA)³ | codegen-350M-multi | qkv_proj |
| | | Salesforce/codegen2-1B_P | qkv_proj |
| | | Salesforce/codegen2-3_7B_P | qkv_proj |
| | | Salesforce/codegen2-7B_P | qkv_proj |
| | | Salesforce/codegen2-16B_P | qkv_proj |
| | | meta-llama/CodeLlama-7b-hf | q_proj, v_proj |
| | | bigcode/starcoderbase | c_attn |
| | | bigcode/starcoder2-3b | q_proj, v_proj |
| | | bigcode/starcoder2-7b | q_proj, v_proj |
| | | bigcode/starcoder2-15b | q_proj, v_proj |
| Targeted feedforward modules | (IA)³ | codegen-350M-multi | fc_out |
| | | Salesforce/codegen2-1B_P | fc_out |
| | | Salesforce/codegen2-3_7B_P | fc_out |
| | | Salesforce/codegen2-7B_P | fc_out |
| | | Salesforce/codegen2-16B_P | fc_out |
| | | meta-llama/CodeLlama-7b-hf | down_proj |
| | | bigcode/starcoderbase | mlp.c_proj |
| | | bigcode/starcoder2-3b | q_proj, c_proj |
| | | bigcode/starcoder2-7b | q_proj, c_proj |
| | | bigcode/starcoder2-15b | q_proj, c_proj |
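
Since the targeted modules differ per base model, a per-model lookup is a natural way to build the adapter configuration. The sketch below restates a few rows of the table; the dictionary and helper function are illustrative, not part of the dataset:

```python
from peft import LoraConfig

# Attention modules targeted by LoRA/(IA)³, per base model (subset of the table above).
ATTENTION_TARGETS = {
    "Salesforce/codegen2-16B_P": ["qkv_proj"],
    "meta-llama/CodeLlama-7b-hf": ["q_proj", "v_proj"],
    "bigcode/starcoderbase": ["c_attn"],
    "bigcode/starcoder2-15b": ["q_proj", "v_proj"],
    # ... remaining models follow the table above
}

def lora_config_for(model_id: str) -> LoraConfig:
    """Build a LoRA config with the hyperparameters used in these experiments."""
    return LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.1,
        target_modules=ATTENTION_TARGETS[model_id],
    )

print(lora_config_for("bigcode/starcoderbase").target_modules)
```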

## Training Runs

*Four training-run plots (images) accompany this card in the dataset repository.*