saheedniyi committed
Commit d127bba
1 Parent(s): b6ecc91

Update README.md

Files changed (1)
  1. README.md +6 -33
README.md CHANGED
@@ -22,28 +22,17 @@ The model was trained on Google colab and it took about 12 hrs on the A100 GPU.
 
  This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
 
- - **Developed by:** Saheedniyi
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** Saheedniyi
- - **Model type:** [More Information Needed]
+ - **Developed by:** [Saheedniyi](https://linkedin.com/in/azeez-saheed)
  - **Language(s) (NLP):** English, Pidgin English
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** meta-llama/Meta-Llama-3-8B
+ - **License:** [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/Mozilla/Meta-Llama-3-70B-Instruct-llamafile/blob/main/Meta-Llama-3-Community-License-Agreement.txt)
+ - **Finetuned from model [optional]:** [meta-llama/Meta-Llama-3-8B](Mozilla/Meta-Llama-3-70B-Instruct-llamafile)
 
- ### Model Sources [optional]
+ ### Model Sources
 
  <!-- Provide the basic links for the model. -->
 
- - **Repository:** https://github.com/saheedniyi02
- - **Demo [optional]:** https://colab.research.google.com/drive/1IGe7yR3ShU59dxVDmYOSYYxtxBYlcIcP?authuser=3
-
-
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
- Any predictions made by the model are representative of the Nigerian way of chatting in combination with the normal Llama 3 weights.
- [More Information Needed]
+ - **[Repository](https://github.com/saheedniyi02)**
+ - **Demo:** [Colab Notebook](https://colab.research.google.com/drive/1IGe7yR3ShU59dxVDmYOSYYxtxBYlcIcP?authuser=3)
 
  ## How to Get Started with the Model
 
@@ -51,22 +40,6 @@ Use the code below to get started with the model.
 
  [More Information Needed]
 
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
 
  #### Training Hyperparameters
 
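The card's "How to Get Started with the Model" section is still marked [More Information Needed]. Since the card describes a 🤗 transformers checkpoint finetuned from meta-llama/Meta-Llama-3-8B, a minimal usage sketch could look like the following; the Hub repo id, prompt, and generation settings are illustrative assumptions and are not part of this commit.

```python
# Minimal sketch for loading and prompting the finetuned checkpoint with 🤗 transformers.
# "saheedniyi/<model-id>" is a hypothetical placeholder; substitute the actual Hub repo id for this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "saheedniyi/<model-id>"  # placeholder, not a real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Llama-3-8B weights are commonly loaded in bf16
    device_map="auto",           # requires the `accelerate` package
)

prompt = "How you dey today?"  # example Pidgin English prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```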