---
license: apache-2.0
tags:
- code
- text-generation-inference
- pretrained
language:
- en
pipeline_tag: text-generation
---
# KAI-7B
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6500c7c912c1442d994c36e5/NlD6l1BmU1qPjKpsHqkH2.png)
KAI-7B is a generative Large Language Model (LLM) fine-tuned from Mistral 7B. With over 7 billion parameters, KAI-7B outperforms its closest competitor, Meta's Llama 2 7B, on every benchmark we tested.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6500c7c912c1442d994c36e5/pHvVcd4SXqdziwPkPncqb.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6500c7c912c1442d994c36e5/h-VxuQcOH_dy0dwDUiveS.png)
As the benchmarks above show, KAI-7B excels in STEM but still lags in math and coding.
## Notice
KAI-7B is a base model and therefore does not have any moderation mechanisms.
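The model can be loaded with the standard `transformers` API for text generation. The sketch below is illustrative: the Hub repo id `PlanetDOGE/KAI-7B-v0.1` is assumed from this page's location, and the prompt and generation parameters are only examples.

```python
# Illustrative sketch; the repo id below is assumed, not confirmed by the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PlanetDOGE/KAI-7B-v0.1"  # assumed Hugging Face Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick an appropriate dtype for the hardware
    device_map="auto",    # spread the ~7B parameters across available devices
)

# As a base (non-instruct) model, KAI-7B is best prompted for continuation.
prompt = "def reverse_string(s: str) -> str:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because there is no moderation layer, any filtering of generated text must be handled by the application around the model.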