922CA committed
Commit: 243c3a4
Parent: fa97316

Update README.md

Files changed (1)
  1. README.md +3 -4
README.md CHANGED
@@ -55,17 +55,16 @@ Example:
 * Fine tuned on a dataset of ~1k items (Tagalog/Taglish dataset, based off Tagalog sentences augmented by LLaMA-2-13b base to create a 3-turn dialogue dataset between Human and Assistant)
 * 3/3a fine-tuned for 1/2 epochs
 * From chat LLaMA-2-7b
- * v0.3 seems to be balanced between Tagalog translation and leveraging pretrained data, more than v0.3a (which may speak more Tagalog but be less accurate or helpful); will be further curating dataset
- * Lora of [chat-tagalog v0.3 (recommended)](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.3) and [chat-tagalog v0.3](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.3a)
+ * Experiment on partially synthetic data (and observing the capability of the LLaMA-2 base at generating Tagalog): will be further curating the dataset for better attempts
+ * Loras for [chat-tagalog v0.3](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.3) and [chat-tagalog v0.3a](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.3a)
 
 # llama-2-7b-tagalog-v0.3WC2 (09/01/2023)
 * Fine tuned on experimental dataset of ~6k items (Tagalog/Taglish dataset, based off Tagalog sentences and Wiki entries augmented by LLaMA-2-13b to create a dialogue-QnA dataset between Human and Assistant)
 * 1 epoch
 * From chat LLaMA-2-7b
- * Tends to fall into repetition loop
 
 # llama-2-13b-tagalog-v0.3 loras (09/01/2023)
 * Fine tuned on dataset of ~1k items (Tagalog/Taglish dataset, based off Tagalog sentences augmented by LLaMA-2-13b base to create a 3-turn dialogue dataset between Human and Assistant)
 * 3/3a fine-tuned for 1 epoch, rank = 16/8
 * From LLaMA-2-13b
- * Less helpful results than 7b (suspecting base and dataset, trying LLaMA-2-13b chat and curated dataset for next attempts)
+ * Trying LLaMA-2-13b chat/other base and curated dataset for next attempts
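The `rank = 16/8` values in the 13b entry set the size of the LoRA adapters. A back-of-envelope sketch of what those ranks cost in trainable parameters, assuming (not stated in the README) that the adapters target the four attention projections of LLaMA-2-13b (hidden size 5120, 40 decoder layers):

```python
# Rough LoRA adapter sizing for the rank = 16/8 runs described above.
# Assumption: adapters on q_proj/k_proj/v_proj/o_proj only, each a
# 5120x5120 matrix in LLaMA-2-13b (which uses plain multi-head attention).

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters one LoRA pair (A: d_in x r, B: r x d_out) adds."""
    return rank * (d_in + d_out)

HIDDEN = 5120          # LLaMA-2-13b hidden size
N_LAYERS = 40          # LLaMA-2-13b decoder layers
TARGETS_PER_LAYER = 4  # assumed: q_proj, k_proj, v_proj, o_proj

for rank in (16, 8):
    per_matrix = lora_params(HIDDEN, HIDDEN, rank)
    total = per_matrix * TARGETS_PER_LAYER * N_LAYERS
    print(f"rank={rank}: {per_matrix:,} params per matrix, ~{total / 1e6:.1f}M total")
```

At these ranks the adapters stay in the tens of millions of parameters, a small fraction of the 13b base, which is why the two runs differ mainly in adapter capacity rather than training cost.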