---
license: apache-2.0
inference: false
base_model: llmware/bling-tiny-llama-v0
base_model_relation: quantized
tags: [green, llmware-rag, p1, ov]
---
# bling-tiny-llama-onnx
**bling-tiny-llama-onnx** is a very small, very fast fact-based question-answering model, designed for retrieval augmented generation (RAG) with complex business documents, quantized and packaged in ONNX int4 format for AI PCs with Intel GPU, CPU and NPU.
This model is one of the smallest and fastest in the series. For higher accuracy, look at larger models in the BLING/DRAGON series.
### Model Description
- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Quantization:** int4
- **Model Parent:** [llmware/bling-tiny-llama-v0](https://www.huggingface.co/llmware/bling-tiny-llama-v0)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Fact-based question-answering, RAG
- **RAG Benchmark Accuracy Score:** 86.5
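### Getting Started

A minimal sketch of closed-context question answering with this model through the llmware library. It assumes `pip install llmware` (with its ONNX runtime dependencies) and that the model is registered in the llmware ModelCatalog under the name `bling-tiny-llama-onnx`; adjust the catalog name and response key if your llmware version differs.

```python
from llmware.models import ModelCatalog

# Load the ONNX-packaged model from the llmware model catalog
# (assumes "bling-tiny-llama-onnx" is the catalog name in your llmware version)
model = ModelCatalog().load_model("bling-tiny-llama-onnx")

# A short source passage, as would be retrieved in a RAG pipeline
text_passage = ("The total invoice amount is $12,450, due on March 15, "
                "payable to Acme Supply Corp.")

# Ask a fact-based question grounded in the passage
response = model.inference("What is the total invoice amount?",
                           add_context=text_passage)

print(response["llm_response"])
```

Because BLING models are trained for fact-based answering over a supplied passage, the expected usage pattern is always a question plus a grounding context, rather than open-ended prompting.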
## Model Card Contact
- [llmware on github](https://www.github.com/llmware-ai/llmware)
- [llmware on hf](https://www.huggingface.co/llmware)
- [llmware website](https://www.llmware.ai)