---
datasets:
- sahil2801/CodeAlpaca-20k
library_name: peft
tags:
- facebook-opt-1.3b
- code
- instruct
- instruct-code
- code-alpaca
- alpaca-instruct
- alpaca
- opt-1.3b
---
We finetuned Facebook's OPT-1.3B on the CodeAlpaca instruction dataset (sahil2801/CodeAlpaca-20k) for 5 epochs using the MonsterAPI no-code LLM finetuner.
The dataset is the unfiltered version of HuggingFaceH4/CodeAlpaca_20K, with 36 instances of blatant alignment removed.
The finetuning run completed in 1 hour and 30 minutes and cost us only $6
for the entire run!
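Since the adapter was trained with PEFT, it can be loaded on top of the base model for inference. The following is a minimal sketch assuming a LoRA-style PEFT adapter; the adapter repo id below is a hypothetical placeholder that should be replaced with this model's Hub id.

```python
# Minimal inference sketch: load the PEFT adapter on top of facebook/opt-1.3b.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "facebook/opt-1.3b"
ADAPTER_REPO = "your-username/opt-1.3b-code-alpaca"  # hypothetical placeholder for this repo's id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
model = PeftModel.from_pretrained(base, ADAPTER_REPO)  # attach the finetuned adapter weights

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```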
Hyperparameters & Run details:
- Model Path: facebook/opt-1.3b
- Dataset: sahil2801/CodeAlpaca-20k
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
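For reference, the listed settings map onto a standard PEFT + Transformers training setup roughly as sketched below. Only the values named above (model path, dataset, learning rate, epochs, 90/10 split, gradient accumulation) come from the run details; the LoRA rank/alpha and batch size are illustrative assumptions, since the finetuner does not expose them here.

```python
# Hyperparameter sketch mirroring the run details above (lr=3e-4, 5 epochs,
# 90/10 split, gradient accumulation 1). LoRA rank/alpha and batch size are
# assumptions, not values reported for this run.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

dataset = load_dataset("sahil2801/CodeAlpaca-20k", split="train")
dataset = dataset.train_test_split(test_size=0.1)  # 90% train / 10% validation

model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
lora_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")  # assumed LoRA settings
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

training_args = TrainingArguments(
    output_dir="opt-1.3b-code-alpaca",
    learning_rate=3e-4,
    num_train_epochs=5,
    gradient_accumulation_steps=1,
    per_device_train_batch_size=8,  # assumed; not stated in the run details
)
```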