Text Generation
Transformers
PyTorch
RefinedWebModel
custom_code
text-generation-inference
Inference Endpoints
lifeofcoding committed on
Commit bd62706 · 1 Parent(s): cfdd76b

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -9,7 +9,9 @@ datasets:
  **Mastermax-7B is a 7B parameters causal decoder-only model based on [TII's](https://www.tii.ae) Falcon 7b
  which was trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**
 
- This was then fine tuned further on additional datasets, including [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)
+ This was then fine tuned on:
+ [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) (50%)
+ [Openassistant/guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) (50%)
 
  ### How to use Model
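
Since this hunk stops at the "How to use Model" heading, here is a minimal sketch of loading a Falcon-style causal decoder-only model like this one with the `transformers` library. The repo id `lifeofcoding/Mastermax-7B` is an assumption inferred from the author and model names, and the generation parameters are illustrative defaults, not the model card's own recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "lifeofcoding/Mastermax-7B"  # assumed repo id, not confirmed by this diff

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Falcon-family checkpoints are commonly run in bf16
    trust_remote_code=True,      # the repo is tagged custom_code (RefinedWebModel)
    device_map="auto",
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

output = generator(
    "Explain what a causal decoder-only language model is.",
    max_new_tokens=100,
    do_sample=True,
    top_k=10,
)
print(output[0]["generated_text"])
```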