## Model Details

UlizaLlama is a 7B-parameter language model that builds upon the foundation of [Jacaranda/kiswallama-pretrained](https://huggingface.co/Jacaranda/kiswallama-pretrained). Jacaranda/kiswallama-pretrained is a large language model continually pretrained with 321,530,045 Swahili tokens and a customized tokenizer with a Swahili vocabulary of 20,000 tokens to extend the capabilities of [Meta/Llama2](https://huggingface.co/meta-llama/Llama-2-7b). It offers significant improvements in both encoding and decoding Swahili text, surpassing the Swahili performance of Meta/Llama2. Moreover, Jacaranda/kiswallama-pretrained excels at providing accurate next-word completions in Swahili, a capability in which Meta/Llama2 falls short.
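As a toy illustration of why extending the vocabulary with whole Swahili tokens improves encoding, consider a greedy longest-match tokenizer. The tiny vocabularies below are invented for the example and are not the model's actual tokenizer.

```python
# Toy greedy longest-match tokenizer: with whole Swahili words in the
# vocabulary, far fewer tokens are needed to encode the same text.
def tokenize(text, vocab):
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest match first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown span: fall back to one character
            i += 1
    return tokens

base_vocab = {" "}                                 # no Swahili coverage
extended_vocab = base_vocab | {"habari", "yako"}   # Swahili words added

text = "habari yako"
print(len(tokenize(text, base_vocab)))      # 11 tokens, mostly single characters
print(len(tokenize(text, extended_vocab)))  # 3 tokens: "habari", " ", "yako"
```

A vocabulary that covers common Swahili words shortens sequences, which improves both throughput and the quality of next-word completion.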
### Model Description

- Origin: Adaptation of the Jacaranda/kiswallama-pretrained model.
- Data: Instructional dataset in Swahili and English consisting of prompt-response pairs.
- Training: Alignment to standard methodologies, incorporation of task-centric heads, neural network weight optimization via backpropagation, and task-specific adjustments.
- Fine-tuning: Utilized the LoRA approach, training two low-rank matrices that approximate updates to the main weight matrices of [Jacaranda/kiswallama-pretrained](https://huggingface.co/Jacaranda/kiswallama-pretrained). This Low-Rank Adapter (LoRA) was vital for instruction-focused fine-tuning. Post-training, the LoRA weights were extracted, and Hugging Face's `merge_and_unload()` function merged the adapter weights into the base model. This fusion enables standalone inference with the merged model.
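The low-rank update behind the fine-tuning step above can be sketched numerically. The NumPy code below illustrates the general LoRA idea and what merging accomplishes conceptually; the dimensions are made up for illustration and this is not the model's actual training code.

```python
import numpy as np

# Toy sketch of LoRA: the base weight matrix W stays frozen while two
# small matrices, B (d_out x r) and A (r x d_in), are trained; their
# product B @ A is a low-rank update. "Merging" (conceptually what
# merge_and_unload() does) folds B @ A into W so that inference needs
# no separate adapter.
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2              # r << d_in keeps the adapter small
W = rng.normal(size=(d_out, d_in))    # frozen base weights
A = rng.normal(size=(r, d_in))        # trainable down-projection
B = np.zeros((d_out, r))              # trainable up-projection, zero-initialized

W_merged = W + B @ A                  # merge: fold the adapter into W
x = rng.normal(size=d_in)
# With B still zero-initialized, the merged model matches the base exactly:
assert np.allclose(W @ x, W_merged @ x)
```

Because only `A` and `B` are trained, the adapter holds a small fraction of the base model's parameters, and the merged matrix has the same shape as the original, so the merged model runs anywhere the base model does.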
<!-- Provide a longer summary of what this model is. -->
### Out-of-Scope Use

The use of the developed Large Language Model (LLM) capabilities is strictly for research and internal purposes only.
Any commercial use, distribution, or reproduction without the express written consent of [Jacaranda Health](https://www.jacarandahealth.org/) is strictly prohibited.
Violators will be held accountable to the fullest extent permissible by law. To ensure the ethical and responsible use of UlizaLlama, we have outlined a set of guidelines.
These guidelines categorize activities and practices into three main areas: prohibited actions, high-risk activities, and deceptive practices. By understanding and adhering to these directives, users can contribute to a safer and more trustworthy environment.

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
   - **Harassment and Discrimination:** No acts that bully, threaten, or discriminate.
   - **Unauthorized Professions:** No unlicensed professional activities.
   - **Data Misuse:** Handle personal data only with proper consent.
   - **Rights Violations:** Respect third-party rights.
   - **Malware Creation:** Avoid creating harmful software.

2. **High-Risk Activities:**
   - **Misinformation:** Refrain from creating or promoting fraudulent or misleading information.
   - **Defamation and Spam:** Avoid defamatory content and unsolicited messages.
   - **Impersonation:** No pretending to be someone without authorization.
   - **Misrepresentation:** No false claims about UlizaLlama outputs.
   - **Fake Online Engagement:** No promotion of false online interactions.

## Bias, Risks, and Limitations
UlizaLlama is a cutting-edge technology brimming with possibilities, yet it is not without inherent risks. The extensive testing conducted thus far has been predominantly in Swahili and English, leaving an expansive terrain of uncharted scenarios. Consequently, like its LLM counterparts, UlizaLlama's outcomes remain hard to predict, and it may occasionally generate responses that are inaccurate, biased, or otherwise objectionable when prompted by users.

With this in mind, the responsible course of action dictates that, prior to deploying UlizaLlama in any application, developers must embark on diligent safety testing and meticulous fine-tuning, customized to the unique demands of their specific use cases.

<!-- This section is meant to convey both technical and sociotechnical limitations. -->