LoRA trained by samwit, available at:
https://huggingface.co/samwit/dolly-lora
This is a finetuning of GPT-J-6B using LoRA - https://huggingface.co/EleutherAI/gpt-j-6B
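
If you want to try it outside the Colab, here is a minimal inference sketch, assuming the adapter is in the standard Hugging Face peft format (the prompt text and generation settings below are illustrative, not the author's exact setup):

```python
# A sketch only: assumes the adapter loads with the standard peft API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the frozen 6B base model in fp16 (~12 GB of weights).
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Apply the LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(model, "samwit/dolly-lora")

prompt = "### Instruction:\nExplain LoRA in one paragraph.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```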
The dataset is the cleaned version of the Alpaca dataset - https://github.com/gururise/AlpacaDataCleaned
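
Each record in that dataset is an instruction/input/output triple, and Alpaca-style models are prompted with a fixed template at inference time. Here is a sketch of the standard Alpaca template (an assumption on my part; the repo does not spell out the exact format used during finetuning):

```python
# Standard Alpaca prompt template (an assumption; the exact training
# format is not documented in this repo).
def make_prompt(instruction: str, input_text: str = "") -> str:
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )
```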
A model similar to this has been discussed publicly.
The performance is good, but not as good as the original Alpaca, which was trained from a LLaMA base model.
This is mostly because the LLaMA 7B model was pretrained on 1T tokens, while GPT-J-6B was trained on roughly 400B tokens.
You will need a 3090 or an A100 to run it; unfortunately, this current version won't work on a T4.
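
As a rough estimate of why (this arithmetic is mine, not from the repo): 6B parameters in fp16 take about 6 × 10⁹ × 2 bytes ≈ 12 GB for the weights alone, before activations and the KV cache, which leaves very little headroom on a 16 GB T4 but fits comfortably on a 3090 (24 GB) or an A100 (40/80 GB).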
Here is a Colab notebook: https://colab.research.google.com/drive/1O1JjyGaC300BgSJoUbru6LuWAzRzEqCz?usp=sharing