Update README.md
README.md CHANGED
@@ -3,8 +3,8 @@ base_model: []
 tags:
 - mergekit
 - merge
+license: cc-by-nc-2.0
 ---
 # This is an experimental model that I made by merging two Llama2 70B models and gluing them together with mergekit. Mergekit is a tool that lets me mix and match different models into one larger model while keeping the knowledge and skills of the originals. Llama2 70B is a large language model that can generate text for a wide range of tasks, in the same vein as GPT-4.
 
-The merged model has 54 billion parameters and was built on a cluster with 640 GB of VRAM.
-
+The merged model has 54 billion parameters and was built on a cluster with 640 GB of VRAM.
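For anyone curious how a merge like this is actually specified, mergekit is driven by a small YAML config. The config used for this model is not included in the card, so the sketch below is only an assumed example of a layer-stacking ("passthrough") merge of two Llama2 70B checkpoints; the model names and layer ranges are illustrative placeholders, not the real recipe.

```yaml
# Hypothetical mergekit config -- NOT the one used for this model.
# Both source model names and the layer ranges are placeholder assumptions.
slices:
  - sources:
      - model: meta-llama/Llama-2-70b-hf        # assumed first parent model
        layer_range: [0, 32]
  - sources:
      - model: example-org/llama2-70b-finetune  # assumed second parent model
        layer_range: [32, 62]
merge_method: passthrough  # stack the selected layer slices into one larger model
dtype: float16
```

With mergekit installed, a config like this is normally run with `mergekit-yaml config.yml ./merged-model`. Because a passthrough merge keeps only the selected layers of each 80-layer parent, stacking a subset like this is one way a merge of two 70B models could end up at roughly 54 billion parameters.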