Results in ConvRAG Bench are as follows:
| Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.14 | 55.17 | 58.25 |
| Average (exclude HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 53.89 | 53.99 | 57.14 |

Note that ChatQA-1.5 is built on the Llama-3 base model, while ChatQA-1.0 is built on the Llama-2 base model. ChatQA-1.5 used some samples from the HybriDial training dataset, so to ensure a fair comparison we also report average scores excluding HybriDial. The data and evaluation scripts for ConvRAG can be found [here](https://huggingface.co/datasets/nvidia/ConvRAG-Bench).

## Prompt Format
### run retrieval to get top-n chunks as context

This applies to scenarios where the document is too long to fit in the context window, making retrieval necessary. Here, we use our [Dragon-multiturn](https://huggingface.co/nvidia/dragon-multiturn-query-encoder) retriever, which can handle conversational queries. In addition, we provide a few [documents](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B/tree/main/docs) for users to play with.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModel
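The snippet above is truncated before the retrieval logic itself. As a rough sketch of the step that follows, assuming a Dragon-style dense retriever where relevance is the dot product between the query embedding and each chunk embedding (the function name and toy vectors below are illustrative, not part of this repo):

```python
import numpy as np

def top_n_chunks(query_emb: np.ndarray, chunk_embs: np.ndarray, n: int = 2) -> list[int]:
    """Rank chunks by dot-product similarity to the query embedding
    and return the indices of the top-n chunks, best first."""
    scores = chunk_embs @ query_emb          # one similarity score per chunk
    return np.argsort(-scores)[:n].tolist()  # sort descending, keep top-n

# Toy embeddings standing in for encoder outputs (3 chunks, dim 4).
query = np.array([1.0, 0.0, 1.0, 0.0])
chunks = np.array([
    [1.0, 0.0, 1.0, 0.0],   # identical to the query -> highest score
    [0.0, 1.0, 0.0, 1.0],   # orthogonal to the query -> lowest score
    [1.0, 0.0, 0.0, 0.0],   # partial overlap
])
print(top_n_chunks(query, chunks, n=2))  # [0, 2]
```

In practice the embeddings would come from the Dragon-multiturn query and context encoders rather than toy arrays, and the selected top-n chunks would be concatenated into the context passed to the model.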