---
library_name: transformers
license: llama3
datasets:
- saheedniyi/Nairaland_v1_instruct_512QA
language:
- en
pipeline_tag: text-generation
---
<!-- Provide a quick summary of what the model is/does. -->
Excited to announce the release of **Llama3-8b-Naija_v1**, a finetuned version of Meta-Llama-3-8B trained on a **Question-Answer** dataset from [Nairaland](https://www.nairaland.com/).
The model was built in an attempt to **"Nigerialize"** Llama-3, giving it Nigerian-like behavior.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Saheedniyi](https://linkedin.com/in/azeez-saheed)
- **Language(s) (NLP):** English, Pidgin English
- **License:** [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/Mozilla/Meta-Llama-3-70B-Instruct-llamafile/blob/main/Meta-Llama-3-Community-License-Agreement.txt)
- **Finetuned from:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [saheedniyi02/Llama3-8b-Naija_v1](https://github.com/saheedniyi02/Llama3-8b-Naija_v1)
- **Demo:** [Colab Notebook](https://colab.research.google.com/drive/1Fe65lZOGN7EnV10QW4jhA6oDKf4_PNvJ?usp=sharing)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# Necessary installations (run first): pip install transformers accelerate bitsandbytes peft
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("saheedniyi/Llama3-8b-Naija_v1")
# device_map="auto" places the model on the GPU so the inputs below can follow it
model = AutoModelForCausalLM.from_pretrained(
    "saheedniyi/Llama3-8b-Naija_v1", torch_dtype=torch.float16, device_map="auto"
)

input_text = "What are the top places for tourism in Nigeria?"
# wrap the question in the chat template the model was finetuned on
formatted_prompt = f"### BEGIN CONVERSATION ###\n\n## User: ##\n{input_text}\n\n## Assistant: ##\n"
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=512, pad_token_id=tokenizer.pad_token_id,
    do_sample=True, temperature=0.6, top_p=0.9,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
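Since `bitsandbytes` is among the installed dependencies, the model can also be loaded in 4-bit to fit on a smaller GPU. This is a minimal optional sketch, assuming a single CUDA device; `BitsAndBytesConfig` is the standard `transformers` mechanism for this:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # weights stay 4-bit, compute runs in fp16
)
model = AutoModelForCausalLM.from_pretrained(
    "saheedniyi/Llama3-8b-Naija_v1",
    quantization_config=quant_config,
    device_map="auto",
)
```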
When using the model, it is important to use the chat template it was trained on:
```python
prompt = "INPUT YOUR PROMPT HERE"
formatted_prompt = f"### BEGIN CONVERSATION ###\n\n## User: ##\n{prompt}\n\n## Assistant: ##\n"
```
The model has a minor tokenization issue, so it is necessary to write a function that cleans the output and makes it more presentable.
```python
def split_response(text):
    # keep only the assistant's answer; drop the stray end-of-conversation marker
    return text.split("### END CONVERSATION")[0]

cleaned_response = split_response(response)
print(cleaned_response)
```
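Putting the pieces together, the template, generation, and cleanup steps can be wrapped in one convenience function. The `ask` helper below is purely illustrative (its name and defaults are not part of the model's API) and assumes the `model`, `tokenizer`, and `split_response` defined above:

```python
def ask(prompt, max_new_tokens=512, temperature=0.6, top_p=0.9):
    # format -> generate -> decode -> clean, using the objects loaded earlier
    formatted = f"### BEGIN CONVERSATION ###\n\n## User: ##\n{prompt}\n\n## Assistant: ##\n"
    inputs = tokenizer(formatted, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs, max_new_tokens=max_new_tokens, pad_token_id=tokenizer.pad_token_id,
        do_sample=True, temperature=temperature, top_p=top_p,
    )
    text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return split_response(text)

print(ask("Which Nigerian dishes should a first-time visitor try?"))
```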
**This issue should be resolved in the next version of the model.**