doberst committed
Commit cb967c6
1 Parent(s): c6fc44d

Upload README.md

Files changed (1)
  1. README.md +10 -9
README.md CHANGED
@@ -1,15 +1,16 @@
  ---
  license: apache-2.0
  inference: false
+ tags: [green, p7, llmware-chat, ov]
  ---

- # bling-tiny-llama-ov
+ # teknium-open-hermes-2.5-mistral-ov

  <!-- Provide a quick summary of what the model is/does. -->

- **bling-tiny-llama-ov** is an OpenVino int4 quantized version of BLING Tiny-Llama 1B, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
+ **teknium-open-hermes-2.5-mistral-ov** is an OpenVino int4 quantized version of teknium's popular open hermes finetune of mistral, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.

- [**bling-tiny-llama**](https://huggingface.co/llmware/bling-tiny-llama-v0) is a fact-based question-answering model, optimized for complex business documents.
+ [**teknium-open-hermes-2.5-mistral**](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) is a leading chat finetuned version of mistral 7b.

  Get started right away with [OpenVino](https://github.com/openvinotoolkit/openvino)

@@ -18,14 +19,14 @@ Looking for AI PC solutions and demos, contact us at [llmware](https://www.llmwa

  ### Model Description

- - **Developed by:** llmware
- - **Model type:** tinyllama
- - **Parameters:** 1.1 billion
- - **Model Parent:** llmware/bling-tiny-llama-v0
+ - **Developed by:** teknium
+ - **Model type:** mistral-7b
+ - **Parameters:** 7 billion
+ - **Model Parent:** teknium/OpenHermes-2.5-Mistral-7B
  - **Language(s) (NLP):** English
  - **License:** Apache 2.0
- - **Uses:** Fact-based question-answering
- - **RAG Benchmark Accuracy Score:** 86.5
+ - **Uses:** General purpose chat
+ - **RAG Benchmark Accuracy Score:** NA
  - **Quantization:** int4
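
Both the old and new versions of the card point readers at OpenVINO to get started. As a rough illustration only (not part of the committed README), the sketch below shows how an int4 OpenVINO model such as this one could be run with the OpenVINO GenAI `LLMPipeline`; the local model directory, the download step, and the generation settings are assumptions, not details taken from the card.

```python
# Minimal sketch, not from the committed card: running an OpenVINO int4 model
# (e.g. teknium-open-hermes-2.5-mistral-ov) with the OpenVINO GenAI Python API.
# Assumes the model files have already been downloaded into `model_dir`, for
# example with: huggingface-cli download <repo-id> --local-dir <model_dir>
import openvino_genai as ov_genai

model_dir = "teknium-open-hermes-2.5-mistral-ov"  # local folder with the OpenVINO IR files (assumed path)

# Device can be "CPU", "GPU", or "NPU" on an Intel AI PC, per the card's description.
pipe = ov_genai.LLMPipeline(model_dir, "CPU")

prompt = "Explain what int4 quantization does to a language model."
print(pipe.generate(prompt, max_new_tokens=128))
```

Swapping the device string to "GPU" or "NPU" is how the same pipeline would target the other accelerators mentioned in the card; everything else in the sketch stays the same.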