---
license: llama3
---

# Weights from the Llama-3-8B Self-Align Experiments

[WEIGHTS TO BE UPLOADED ONCE DONE]

## Training Config

The `config.yaml` should be passed to `accelerate launch`, and `run.sh` was used to launch the training with the StarCoder2 Self-Align training script. A few tweaks were needed to fit training into 48 GB of VRAM:

- FSDP was used
- `per_device_batch_size` was set to 2
- A learning rate of 3e-6 was used
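For reference, an `accelerate` FSDP config along these lines would implement the tweaks above. This is a minimal sketch, not the exact `config.yaml` from this repo; the specific FSDP options (sharding strategy, wrap policy, state dict type) are assumptions and may differ from what was actually used:

```yaml
# Sketch of an accelerate config for 2-GPU FSDP training (assumed values)
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
mixed_precision: bf16
num_machines: 1
num_processes: 2          # 2x 4090
fsdp_config:
  fsdp_sharding_strategy: FULL_SHARD          # shard params, grads, optimizer state
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_offload_params: false
  fsdp_state_dict_type: SHARDED_STATE_DICT
```

Full sharding across both GPUs is what makes an 8B-parameter model trainable within 48 GB of combined VRAM, since parameters, gradients, and optimizer state are split rather than replicated.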

## Environment

- Trained on 2x RTX 4090 GPUs
- 128 GB of system RAM
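Putting the pieces together, `run.sh` would look roughly like the following. This is a hedged sketch: the training script name (`train.py`) and the exact flag names are assumptions, not taken from this repo, though the hyperparameter values match those listed above:

```shell
#!/usr/bin/env bash
# Sketch of a launch script (script name and flag names are assumed)
accelerate launch --config_file config.yaml \
  train.py \
  --per_device_train_batch_size 2 \
  --learning_rate 3e-6
```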