Update README.md
README.md CHANGED
@@ -5,7 +5,7 @@ license: other
 
 https://huggingface.co/chargoddard/llama2-22b-blocktriangular trained on one epoch of 52k rows of Stanford Alpaca. About 11 hours on a 3090.
 
-I had trouble with training using the other 22b method with `BLOCK_DIAGONAL=True
+I had trouble training with the other 22b method (`BLOCK_DIAGONAL=True`, as done in https://huggingface.co/chargoddard/llama2-22b), but with this method, this is the first time I've been able to target all modules without breaking the output.
 
 `target_modules = ["q_proj", "k_proj", "v_proj", "o_proj", "up_proj", "gate_proj", "down_proj"]`
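The `target_modules` list above covers every linear projection in a LLaMA decoder layer: the four attention projections plus the three MLP projections. As a minimal sketch (assuming these standard LLaMA module names; the rank and alpha values shown are placeholders, not the settings used for this model), the list splits as:

```python
# Attention projections (q/k/v/o) and MLP projections (up/gate/down)
# of a LLaMA decoder layer -- together, every linear layer LoRA can target.
ATTENTION_MODULES = ["q_proj", "k_proj", "v_proj", "o_proj"]
MLP_MODULES = ["up_proj", "gate_proj", "down_proj"]

target_modules = ATTENTION_MODULES + MLP_MODULES

# With the peft library, this list would typically be passed to LoraConfig,
# e.g. LoraConfig(r=8, lora_alpha=16, target_modules=target_modules,
#                 task_type="CAUSAL_LM") -- hyperparameters here are
# illustrative placeholders only.
print(target_modules)
```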