---
license: apache-2.0
inference: false
tags:
- green
- llmware-rag
- p1
- ov
---
# bling-tiny-llama-onnx
**bling-tiny-llama-onnx** is a very small, very fast fact-based question-answering model designed for retrieval-augmented generation (RAG) with complex business documents, quantized and packaged in ONNX int4 format for AI PCs with Intel GPU, CPU and NPU.

This model is one of the smallest and fastest in the series. For higher accuracy, look at the larger models in the BLING/DRAGON series.
## Model Description
- **Developed by:** llmware
- **Model type:** tinyllama
- **Parameters:** 1.1 billion
- **Quantization:** int4
- **Model parent:** llmware/bling-tiny-llama-v0
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** fact-based question-answering, RAG
- **RAG benchmark accuracy score:** 86.5
## Get started right away with ONNX Runtime
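Below is a minimal usage sketch, assuming the llmware Python package is installed (`pip install llmware`) and that the model can be pulled by name through `ModelCatalog`; the sample passage and question are illustrative only, and exact return fields may vary by llmware version. The model can also be run directly with ONNX Runtime if you prefer to manage the tokenizer and generation loop yourself.

```python
# Minimal sketch (assumption): load the ONNX-packaged model through the
# llmware ModelCatalog and ask a fact-based question over a retrieved passage.
from llmware.models import ModelCatalog

# Pull and load the quantized ONNX model by name (name taken from this card)
model = ModelCatalog().load_model("bling-tiny-llama-onnx")

# A short text passage retrieved from a business document (sample context)
text_passage = (
    "The total invoice amount is $12,450, due within 30 days of receipt, "
    "with a 2% discount applied for payment within 10 days."
)

# Run inference with the passage supplied as grounding context
response = model.inference("What is the total invoice amount?",
                           add_context=text_passage)

# The returned object contains the generated answer
print(response)
```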
Looking for AI PC solutions? Contact us at llmware.