---
language:
- en
pipeline_tag: text-generation
---
|
# Model Card for qa-expert-7B-V1.0

This model aims to handle **multi-hop question answering** by splitting a multi-hop question into a sequence of single questions, answering each single question, and then summarizing the gathered information to produce the final answer.
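The overall strategy can be sketched as follows. This is a hypothetical illustration, not the model's actual code: the real model performs decomposition, single-hop answering, and summarization itself during generation, and `decompose`, `answer_single`, and `summarize` here are stand-ins for those internal steps.

```python
def answer_multi_hop(question, decompose, answer_single, summarize):
    # Hypothetical sketch of the multi-hop strategy; the model itself carries
    # out these steps via generation, not through separate Python functions.
    sub_questions = decompose(question)                 # split into single-hop questions
    facts = [answer_single(q) for q in sub_questions]   # answer each one
    return summarize(question, facts)                   # combine into a final answer
```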
|
## Model Details

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the dataset [khaimaitien/qa-expert-multi-hop-qa-V1.0](https://huggingface.co/datasets/khaimaitien/qa-expert-multi-hop-qa-V1.0).

You can find more information about how to **use/train** the model in this repo: https://github.com/khaimt/qa_expert
|
|
|
### Model Sources

- **Repository:** https://github.com/khaimt/qa_expert
|
|
|
## How to Get Started with the Model

First, you need to clone the repo: https://github.com/khaimt/qa_expert

Then install the requirements:

```shell
pip install -r requirements.txt
```

Here is the example code:
|
|
|
```python
from qa_expert import get_inference_model, InferenceType

def retrieve(query: str) -> str:
    # You need to implement this retrieval function: given a query, it returns
    # the retrieved context as a string.
    # It plays the same role as a function called in OpenAI function calling.
    context = ...  # your retrieval logic here
    return context

model_inference = get_inference_model(InferenceType.hf, "khaimaitien/qa-expert-7B-V1.0")
question = "..."  # your multi-hop question
answer, messages = model_inference.generate_answer(question, retrieve)
```
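For a quick test of the pipeline, `retrieve` could be backed by a tiny in-memory corpus with keyword-overlap scoring. This is only a minimal sketch under that assumption; a real deployment would use a search index or vector store, and the corpus below is invented for illustration:

```python
# Toy stand-in for a real retriever: a tiny hard-coded corpus scored by
# word overlap with the query. For illustration only.
CORPUS = [
    "Paris is the capital of France.",
    "The Eiffel Tower is located in Paris.",
    "Mount Everest is the highest mountain on Earth.",
]

def retrieve(query: str) -> str:
    # Score each document by the number of words it shares with the query
    # and return the best-matching document as the context string.
    query_words = set(query.lower().split())

    def overlap(doc: str) -> int:
        return len(query_words & set(doc.lower().split()))

    return max(CORPUS, key=overlap)
```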
|