# MedQA Assistant App

The MedQA Assistant App is a Streamlit-based application that provides a chat interface for medical question answering. It leverages large language models (LLMs) and retrieval-augmented generation (RAG) to deliver accurate and informative responses to medical queries.

## Features

- **Interactive Chat Interface**: Engage with the app through a user-friendly chat interface.
- **Configurable Settings**: Customize model selection and data sources via the sidebar.
- **Retrieval-Augmented Generation**: Ensures precise and contextually relevant responses.
- **Figure Annotation Capabilities**: Extracts and annotates figures from medical texts.

## Usage

1. **Install the Package**: Install it using:
    ```bash
    uv pip install .
    ```
2. **Launch the App**: Start the application using Streamlit:
    ```bash
    medrag run
    ```
3. **Configure Settings**: Adjust configuration settings in the sidebar to suit your needs.
4. **Ask a Question**: Enter your medical question in the chat input field.
5. **Receive a Response**: Get a detailed answer from the MedQA Assistant.

## Configuration

The app allows users to customize various settings through the sidebar:

- **Project Name**: Specify the WandB project name.
- **Text Chunk WandB Dataset Name**: Define the dataset containing text chunks.
- **WandB Index Artifact Address**: Provide the address of the index artifact.
- **WandB Image Artifact Address**: Provide the address of the image artifact.
- **LLM Client Model Name**: Choose a language model for generating responses.
- **Figure Extraction Model Name**: Select a model for extracting figures from images.
- **Structured Output Model Name**: Choose a model for generating structured outputs.
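The sidebar fields above map naturally onto a small settings object. The sketch below is illustrative only: the field names, defaults, and the `render_sidebar` helper are assumptions for this example, not the app's actual configuration schema.

```python
from dataclasses import dataclass, fields

# Hypothetical settings object mirroring the sidebar fields above.
# All defaults are placeholders; real values depend on your WandB
# entity/project and the artifacts you have logged.
@dataclass
class AppSettings:
    project_name: str = "medrag-app"
    chunk_dataset_name: str = "text-chunks:latest"
    index_artifact_address: str = "entity/project/index:latest"
    image_artifact_address: str = "entity/project/images:latest"
    llm_model_name: str = "gpt-4o"
    figure_extraction_model_name: str = "gpt-4o"
    structured_output_model_name: str = "gpt-4o"

def render_sidebar(st) -> AppSettings:
    """Build settings from Streamlit sidebar inputs (one text box per field).

    `st` is the imported streamlit module; passing it in keeps this
    sketch importable without Streamlit installed.
    """
    defaults = AppSettings()
    values = {
        f.name: st.sidebar.text_input(
            f.name.replace("_", " ").title(), getattr(defaults, f.name)
        )
        for f in fields(AppSettings)
    }
    return AppSettings(**values)
```

A frozen dataclass (or a Pydantic model) would also work here; the point is simply that the sidebar is a flat set of string-valued settings consumed at app start-up.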

## Technical Details

The app is built using the following components:

- **Streamlit**: For the user interface.
- **Weave**: For project initialization and artifact management.
- **MedQAAssistant**: For processing queries and generating responses.
- **LLMClient**: For interacting with language models.
- **MedCPTRetriever**: For retrieving relevant text chunks.
- **FigureAnnotatorFromPageImage**: For annotating figures in medical texts.
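At a high level, a query flows retriever → assistant → LLM: relevant text chunks are retrieved first, then passed as context to the language model. The stubs below sketch that flow only; the class names echo the components listed above, but every signature here is an assumption, not the actual medrag API.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    score: float

class StubRetriever:
    """Stands in for MedCPTRetriever: returns top-k chunks for a query."""
    def __init__(self, corpus: list[str]):
        self.corpus = corpus

    def retrieve(self, query: str, k: int = 2) -> list[Chunk]:
        # Toy relevance score: count of words shared with the query.
        q = set(query.lower().split())
        scored = [Chunk(t, len(q & set(t.lower().split()))) for t in self.corpus]
        return sorted(scored, key=lambda c: c.score, reverse=True)[:k]

class StubLLMClient:
    """Stands in for LLMClient: here it just echoes the prompt."""
    def predict(self, prompt: str) -> str:
        return f"Answer based on: {prompt}"

class StubAssistant:
    """Stands in for MedQAAssistant: retrieve context, then generate."""
    def __init__(self, retriever: StubRetriever, llm: StubLLMClient):
        self.retriever = retriever
        self.llm = llm

    def predict(self, query: str) -> str:
        chunks = self.retriever.retrieve(query)
        context = " | ".join(c.text for c in chunks)
        return self.llm.predict(f"{query} [context: {context}]")
```

In the real app the retriever is backed by a WandB index artifact and the LLM client calls a hosted model, but the retrieve-then-generate shape of the flow is the same.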

## Development and Deployment

- **Environment Setup**: Ensure all dependencies are installed as specified in `pyproject.toml`.
- **Running the App**: Run the app locally with `medrag run`, which launches the Streamlit interface.
- **Deployment**: coming soon...

## Additional Resources

For more detailed information on the components and their usage, refer to the following documentation sections:

- [MedQA Assistant](/assistant/medqa_assistant)
- [LLM Client](/assistant/llm_client)
- [Figure Annotation](/assistant/figure_annotation)