922CA committed
Commit: 0265c12
Parent: ff37e0f

Update README.md

Files changed (1): README.md (+4 -4)
README.md CHANGED
@@ -52,19 +52,19 @@ Example:
  * From Open LLaMA 3b
 
  # llama-2-7b-tagalog-v0.3 loras (09/01/2023)
- * Fine tuned on a dataset of ~1k items (Tagalog/Taglish dataset, based off Tagalog sentences augmented by LLaMA-2-13b base to create a 3-turn dialogue dataset between Human and Assistant)
+ * Fine-tuned on a dataset of ~1k items (Tagalog-focused dataset, based on Tagalog sentences augmented by the LLaMA-2-13b base model to create a 3-turn dialogue dataset between Human and Assistant)
  * 3/3a fine-tuned for 1/2 epochs
  * From chat LLaMA-2-7b
- * Experiment on partially synthetic data (and observing capability of LLaMA-2 base on generating Tagalog): will be further curating dataset for better attempts
+ * Experiment on partially synthetic data (and on observing how well the LLaMA-2 base model generates Tagalog); the dataset will be further curated
  * Loras for [chat-tagalog v0.3](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.3) and [chat-tagalog v0.3a](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.3a)
 
  # llama-2-7b-tagalog-v0.3WC2 (09/01/2023)
- * Fine tuned on experimental dataset of ~6k items (Tagalog/Taglish dataset, based off Tagalog sentences and Wiki entries augmented by LLaMA-2-13b to create a dialogue-QnA dataset between Human and Assistant)
+ * Fine-tuned on an experimental dataset of ~6k items (Tagalog-focused dataset, based on Tagalog sentences and Wiki entries augmented by LLaMA-2-13b to create a dialogue/QnA dataset between Human and Assistant)
  * 1 epoch
  * From chat LLaMA-2-7b
 
  # llama-2-13b-tagalog-v0.3 loras (09/01/2023)
- * Fine tuned on dataset of ~1k items (Tagalog/Taglish dataset, based off Tagalog sentences augmented by LLaMA-2-13b base to create a 3-turn dialogue dataset between Human and Assistant)
+ * Fine-tuned on a dataset of ~1k items (Tagalog-focused dataset, based on Tagalog sentences augmented by the LLaMA-2-13b base model to create a 3-turn dialogue dataset between Human and Assistant)
  * 3/3a fine-tuned for 1 epoch, rank = 16/8
  * From LLaMA-2-13b
  * Trying LLaMA-2-13b chat/other base and a curated dataset for next attempts
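
For trying the linked checkpoints, the snippet below is a minimal, illustrative sketch rather than an official usage example: it assumes the linked 922-Narra repo hosts a full causal-LM checkpoint loadable with the Hugging Face `transformers` library, and the Human/Assistant prompt format is inferred from the dataset description above, not documented behavior.

```python
# Minimal sketch (assumptions noted above): load the linked
# llama-2-7b-chat-tagalog-v0.3 checkpoint and generate a reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "922-Narra/llama-2-7b-chat-tagalog-v0.3"  # repo id from the link above

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Assumed Human/Assistant-style prompt; the exact template is not specified here.
prompt = "HUMAN: Kumusta ka?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```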