Update README.md
README.md
CHANGED
@@ -13,7 +13,7 @@ Experimental Tagalog loras: safe or accurate outputs not guaranteed (not for pro
 * Fine tuned on a small dataset of 14 items, manually edited
 * 20 epochs (more observable effects)
 * From chat LLaMA-2-7b
-* Lora of chat-tagalog v0.1a
+* Lora of [chat-tagalog v0.1a](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.1a)
 
 # lt2_08162023b
 * Fine tuned on a small dataset of 14 items, manually edited
@@ -31,4 +31,4 @@ Experimental Tagalog loras: safe or accurate outputs not guaranteed (not for pro
 * Fine tuned on a small dataset of 14 items, manually edited
 * 30 epochs (v0.1a further trained and cut-off before overfit)
 * From chat LLaMA-2-7b
-* Lora of chat-tagalog v0.1d
+* Lora of [chat-tagalog v0.1d](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.1d)
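Since the change above links the adapter repos directly, a minimal usage sketch may be helpful. This assumes the linked repos (922-Narra/llama-2-7b-chat-tagalog-v0.1a and -v0.1d) are standard PEFT LoRA adapters and that the "chat LLaMA-2-7b" base refers to the meta-llama/Llama-2-7b-chat-hf checkpoint; neither is stated in the diff itself.

```python
# Sketch: attach one of the linked Tagalog LoRA adapters to a LLaMA-2-7b chat base.
# Assumptions (not confirmed by the README diff): the adapter repo is a PEFT-format
# LoRA, and the base checkpoint is meta-llama/Llama-2-7b-chat-hf.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-2-7b-chat-hf"               # assumed base model
ADAPTER = "922-Narra/llama-2-7b-chat-tagalog-v0.1a"  # repo linked in the diff above

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE)
model = PeftModel.from_pretrained(base_model, ADAPTER)  # load LoRA weights on top

# Quick smoke test with a short Tagalog prompt ("How are you?").
inputs = tokenizer("Kumusta ka?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swapping `ADAPTER` for `922-Narra/llama-2-7b-chat-tagalog-v0.1d` loads the v0.1d adapter from the second hunk instead.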