doberst committed on
Commit e3ba994
1 Parent(s): 04890cd

Update README.md

Files changed (1)
  1. README.md +1 -6
README.md CHANGED
@@ -6,7 +6,7 @@ tags: [green, llmware-rag, p1, ov]
 
  # bling-tiny-llama-onnx
 
- **bling-tiny-llama-onnx** is a very small, very fast fact-based question-answering model, designed for retrieval augmented generation (RAG) with complex business documents, and quantized and packaged in ONNX int4 for AI PCs using Intel GPU, CPU and NPU.
+ **bling-tiny-llama-onnx** is a very small, very fast fact-based question-answering model, designed for retrieval augmented generation (RAG) with complex business documents, quantized and packaged in ONNX int4 for AI PCs using Intel GPU, CPU and NPU.
 
  This model is one of the smallest and fastest in the series. For higher accuracy, look at larger models in the BLING/DRAGON series.
 
@@ -23,11 +23,6 @@ This model is one of the smallest and fastest in the series. For higher accurac
  - **RAG Benchmark Accuracy Score:** 86.5
 
 
- Get started right away with [ONNX Runtime](https://github.com/microsoft/onnxruntime)
-
- Looking for AI PC solutions, contact us at [llmware](https://www.llmware.ai)
-
-
  ## Model Card Contact
  [llmware on github](https://www.github.com/llmware-ai/llmware)
  [llmware on hf](https://www.huggingface.co/llmware)
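
For reference, the README text in this diff describes a fact-based RAG question-answering model packaged in ONNX int4. Below is a minimal usage sketch, assuming the model is published in llmware's ModelCatalog under the name `bling-tiny-llama-onnx` and that llmware's `inference(prompt, add_context=...)` interface applies; the context passage and return key shown here are illustrative assumptions, not confirmed by this diff.

```python
# Minimal usage sketch. Assumptions: the model is registered in llmware's
# ModelCatalog as "bling-tiny-llama-onnx", and inference(prompt, add_context=...)
# returns a dict whose generated answer is under "llm_response".
from llmware.models import ModelCatalog

# Load the ONNX int4 package of the model from the llmware catalog
model = ModelCatalog().load_model("bling-tiny-llama-onnx")

# Fact-based question answering over a supplied passage: the RAG pattern the
# README describes, with the retrieved source text passed in as add_context
context = (
    "The services agreement has a term of 24 months, beginning on "
    "June 1, 2024, with a total contract value of $250,000."
)
response = model.inference("What is the total contract value?", add_context=context)

print(response["llm_response"])
```

The source passage is supplied explicitly, matching the fact-based, retrieval-grounded use the README describes for complex business documents.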
 