# bling-tiny-llama-ov

**bling-tiny-llama-ov** is a very small, very fast fact-based question-answering model, optimized for complex business documents, quantized and packaged in OpenVino int4 for AI PCs using Intel GPU, CPU, and NPU.

It is based on [**bling-tiny-llama**](https://huggingface.co/llmware/bling-tiny-llama-v0), and is one of the smallest and fastest models in the series. For higher accuracy, look at larger models in the series, e.g., llmware/bling-phi-3-ov.
Get started right away.

1. Install dependencies

```bash
pip3 install llmware
pip3 install openvino
pip3 install openvino_genai
```
2. Hello World

```python
from llmware.models import ModelCatalog

# load the OpenVino quantized model from the llmware model catalog
model = ModelCatalog().load_model("bling-tiny-llama-ov")

# pass the context passage and the question in a single prompt
response = model.inference("The stock price is $45.\nWhat is the stock price?")
print("response: ", response)
```
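In the Hello World call above, the context passage and the question travel as a single string separated by a newline. A minimal sketch of a helper that assembles such prompts (the name `build_qa_prompt` is ours for illustration, not part of the llmware API):

```python
def build_qa_prompt(context: str, question: str) -> str:
    # join a context passage and a question into one
    # newline-separated prompt, as in the example above
    return f"{context.strip()}\n{question.strip()}"

prompt = build_qa_prompt("The stock price is $45.", "What is the stock price?")
print(prompt)
```

The resulting string can then be handed to `model.inference(prompt)`.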
Get started right away with [OpenVino](https://github.com/openvinotoolkit/openvino).

If you are looking for AI PC solutions and demos, contact us at [llmware](https://www.llmware.ai).
### Model Description

- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Quantization:** int4
- **Model Parent:** llmware/bling-tiny-llama-v0
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Fact-based question-answering, RAG
- **RAG Benchmark Accuracy Score:** 86.5
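A back-of-envelope way to see why the int4 quantization matters on an AI PC: weight storage scales with bits per parameter, so 1.1 billion parameters drop from roughly 2.2 GB at fp16 to roughly 0.55 GB at int4. A rough sketch (it ignores embeddings, activations, and any layers kept at higher precision):

```python
def approx_weight_gb(params_billions: float, bits_per_param: int) -> float:
    # approximate weight storage: params * (bits / 8) bytes, reported in GB
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

print(approx_weight_gb(1.1, 16))  # fp16 baseline: ~2.2 GB
print(approx_weight_gb(1.1, 4))   # int4: ~0.55 GB
```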
## Model Card Contact

[llmware on hf](https://www.huggingface.co/llmware)

[llmware website](https://www.llmware.ai)