|
|
|
|
|
|
|
|
|
|
|
This is a merge of the Dolly LoRA adapter with the base GPT-J-6B model, so users can run Dolly without having to worry about PEFT dependencies.
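
For reference, a LoRA adapter is typically folded into the base weights with peft's `merge_and_unload()`. The exact procedure used to produce this checkpoint is not documented here, so treat the following as a minimal sketch of the standard workflow, using the adapter and base model linked below:

```python
# Minimal sketch of the standard peft merge workflow -- an illustration,
# not the canonical recipe used for this checkpoint.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model (fp16 to keep memory manageable)
base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", torch_dtype=torch.float16
)

# Apply the Dolly LoRA adapter on top of the base weights
model = PeftModel.from_pretrained(base, "samwit/dolly-lora")

# Fold the adapter into the base weights so peft is no longer
# needed at inference time, then save the standalone model
merged = model.merge_and_unload()
merged.save_pretrained("./dolly-gptj-merged")
```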
|
|
|
|
|
This aims to be similar to Alpaca, but without requiring LLaMA access.
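
Since the model is fine-tuned on Alpaca-format data (see the dataset link below), prompts are expected to follow the standard Alpaca instruction template. The exact formatting used during training is assumed here, not confirmed:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```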
|
|
|
Performance is good, but not as good as the original Alpaca trained from a LLaMA base model.
|
|
|
This is mostly due to the LLaMA 7B model being pretrained on 1T tokens, while GPT-J-6B was trained on roughly 400B tokens.
|
- LoRA originally trained by samwit: https://huggingface.co/samwit/dolly-lora
|
- The dataset is the cleaned version of the Alpaca dataset: https://github.com/gururise/AlpacaDataCleaned
|
- Base model, GPT-J-6B: https://huggingface.co/EleutherAI/gpt-j-6B
|
- Here is a Colab notebook: https://colab.research.google.com/drive/1O1JjyGaC300BgSJoUbru6LuWAzRzEqCz?usp=sharing
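
If you would rather not use the Colab, a minimal inference sketch follows; the model id is a placeholder for this repo, and the prompt and generation settings are illustrative assumptions:

```python
# Minimal inference sketch -- assumes the merged checkpoint loads like any
# standard transformers causal LM. MODEL_ID is a placeholder; substitute
# the actual repo id of this merged model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/to/this-merged-model"  # placeholder, not a real repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J has no pad token by default
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16
).to("cuda")

# Alpaca-style prompt (template assumed, as noted above)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA adapter is in one sentence.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7
)
# Print only the newly generated tokens
print(tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```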