Jacaranda committed
Commit 146f538 · 1 Parent(s): 147ac6f

Update README.md

Files changed (1): README.md (+2, -3)
README.md CHANGED
@@ -33,8 +33,8 @@ Fine-tuning: Utilized the LoRA approach, refining two matrices that mirror the m
  - **Funded by [optional]:** [Google]
  - **Model type:** [LlamaModelForCausalLm]
  - **Language(s) (NLP):** [English and Swahili]
- - **License:** [More Information Needed]
- - **Model Developers:** [Stanslaus Mwongela]
+ - **License:** [to include]
+ - **Model Developers:** [Stanslaus Mwongela, Jay Patel, Sathy Rajasekharan]
  - **Finetuned from model:** [Jacaranda/kiswallama-pretrained model, which builds upon Meta/Llama2]
  ## Uses
  UlizaLlama7b-1 is optimized for downstream tasks, notably those demanding instructional datasets in Swahili, English, or both. Organizations can further fine-tune it for their specific domains. Potential areas include:
@@ -55,7 +55,6 @@ Further Research-The current UlizaLlama is available as a 7 Billion parameters m
  To ensure the ethical and responsible use of UlizaLlama, we have outlined a set of guidelines. These guidelines categorize activities and practices into three main areas: prohibited actions, high-risk activities, and deceptive practices. By understanding and adhering to these directives, users can contribute to a safer and more trustworthy environment.
  <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

- [More Information Needed]

  ## Bias, Risks, and Limitations
  UlizaLlama7b-1 is a cutting-edge technology brimming with possibilities, yet it is not without inherent risks. Extensive testing to date has been conducted predominantly in Swahili and English, leaving an expansive terrain of scenarios uncharted. Consequently, like its LLM counterparts, UlizaLlama7b-1's outputs remain difficult to predict, and it may occasionally generate responses that are inaccurate, biased, or otherwise objectionable when prompted by users.
 
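The first hunk's context references the LoRA approach, in which two small low-rank matrices are trained alongside each frozen base weight matrix, and the Uses section notes that organizations can further fine-tune the model for their own domains. As a minimal sketch of how such further fine-tuning might be set up with the Hugging Face `transformers` and `peft` libraries: the hub id `Jacaranda/UlizaLlama7b-1` is an assumption for illustration, since this diff does not name the model's repository.

```python
# Minimal sketch: load UlizaLlama and attach fresh LoRA adapters for
# domain-specific fine-tuning. The hub id below is a hypothetical
# placeholder; the diff does not state the model's repository name.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

MODEL_ID = "Jacaranda/UlizaLlama7b-1"  # assumed hub id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# LoRA freezes the base weights and trains two small matrices A and B
# per targeted layer; their scaled product is added to the base weight:
#   W' = W + (lora_alpha / r) * B @ A
lora_config = LoraConfig(
    r=8,                                  # rank of A and B
    lora_alpha=16,                        # scaling numerator
    target_modules=["q_proj", "v_proj"],  # Llama attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

From there, the wrapped model can be passed to a standard `transformers` `Trainer` with a Swahili and/or English instruction dataset; because only the adapter parameters are updated, domain fine-tuning stays far cheaper than full-model training.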