---
title: DocVis
emoji: 🩺
colorFrom: yellow
colorTo: green
sdk: streamlit
sdk_version: 1.44.1
app_file: app.py
pinned: false
---
# 🩺 SynapseAI: Interactive Clinical Decision Support Assistant (v2 - UMLS/FDA Integrated)
**SynapseAI** is an enhanced prototype demonstrating an AI-powered clinical decision support system. Built with a modular structure (`app.py` for UI, `agent.py` for logic), it uses Streamlit, Langchain, LangGraph, Groq (running Llama 3), Tavily Search, **UMLS/RxNorm API**, and **OpenFDA API**.
It simulates an interactive consultation where an AI assistant helps analyze patient data, suggests differential diagnoses, proposes management plans, performs **realistic drug interaction and allergy checks**, flags risks, incorporates clinical guideline information, and includes a **self-correction loop** based on interaction warnings.
**⚠️ Disclaimer: This is a proof-of-concept application intended for demonstration and educational purposes only. It is NOT a certified medical device and should NEVER be used for actual clinical decision-making.**
## ✨ Key Features (v2 Enhancements in Bold)
* **Interactive Conversational Interface:** Uses LangGraph for multi-turn interactions, sequential processing, and dynamic responses.
* **Structured Clinical Data Input:** Comprehensive sidebar form for patient intake.
* **Advanced AI Analysis:** Leverages Llama 3 via Groq for clinical reasoning.
* **Structured AI Output:** Provides analysis in JSON (Assessment, DDx, Risk, Plan, Rationale, Interaction Summary).
* **Intelligent Tool Use:** Employs Langchain tools (a minimal sketch of the interaction-check flow appears after this feature list):
* `order_lab_test`: Simulates ordering labs.
* `prescribe_medication`: Simulates preparing prescriptions (requires prior interaction check).
* **`check_drug_interactions` (Enhanced):** Performs **realistic drug-drug and drug-allergy checks** using **UMLS/RxNorm API** for drug normalization and **OpenFDA API** for retrieving contraindications, warnings, and interaction data from drug labels.
* `flag_risk`: Allows AI to highlight critical risks.
* `tavily_search_results`: Searches for external info, prompted for **current clinical guidelines**.
* **Enhanced Safety Protocols:**
* **Mandatory & Realistic Interaction Checks:** Enforces interaction checks before prescription; checks now use real-world APIs.
* **Self-Correction Loop:** Includes a dedicated step (`reflection_node`) in the LangGraph workflow where the agent specifically reviews significant interaction/allergy warnings and revises its therapeutic plan *before* presenting the final output.
* Red Flagging: Client-side initial checks and AI-driven risk flagging.
* **Guideline Awareness:** AI prompted to search for and reference clinical guidelines.
* **Modular Code Structure:** Separated UI (`app.py`) from core agent logic (`agent.py`) for better organization and maintainability.
* **Robust Error Handling:** Implemented within LangGraph nodes and API helpers.
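To make the enhanced `check_drug_interactions` flow concrete, here is a minimal sketch of the normalization-plus-label-lookup pattern it describes. This is not the project's actual `agent.py` code: it assumes the public RxNav `rxcui.json` endpoint for RxCUI lookup and the openFDA `drug/label.json` endpoint for label sections, and it omits the drug-allergy matching and summarization the real tool performs.

```python
# Sketch only: normalize a drug name to an RxCUI via RxNav, then pull label sections from openFDA.
from typing import Dict, List, Optional

import requests

RXNAV_BASE = "https://rxnav.nlm.nih.gov/REST"
OPENFDA_LABEL_URL = "https://api.fda.gov/drug/label.json"


def get_rxcui(drug_name: str) -> Optional[str]:
    """Return the first RxCUI for a free-text drug name, or None if not found."""
    resp = requests.get(
        f"{RXNAV_BASE}/rxcui.json",
        params={"name": drug_name, "search": 1},
        timeout=10,
    )
    resp.raise_for_status()
    ids = resp.json().get("idGroup", {}).get("rxnormId", [])
    return ids[0] if ids else None


def get_label_sections(rxcui: str) -> Dict[str, List[str]]:
    """Fetch interaction/warning/contraindication text from the openFDA drug label endpoint."""
    params = {"search": f'openfda.rxcui:"{rxcui}"', "limit": 1}
    resp = requests.get(OPENFDA_LABEL_URL, params=params, timeout=10)
    if resp.status_code == 404:  # openFDA returns 404 when no label matches the query
        return {}
    resp.raise_for_status()
    results = resp.json().get("results", [])
    if not results:
        return {}
    label = results[0]
    return {
        section: label.get(section, [])
        for section in ("drug_interactions", "warnings", "contraindications")
    }


if __name__ == "__main__":
    rxcui = get_rxcui("warfarin")
    if rxcui:
        sections = get_label_sections(rxcui)
        print(f"RxCUI {rxcui}:", {name: len(text) for name, text in sections.items()})
```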
## 🚀 Technology Stack
* **Python:** Core programming language.
* **Streamlit:** Web application framework for the UI.
* **Langchain & LangGraph:** Framework for building LLM applications, managing conversation state, and orchestrating tool use (a minimal graph sketch appears after this list).
* **Groq API:** Fast inference for Llama 3 LLM.
* **Tavily Search API:** Web search for guidelines.
* **UMLS API (via RxNav/RxNorm):** Drug name normalization (finding RxCUIs). Requires UMLS Metathesaurus License and API Key.
* **OpenFDA API:** Retrieving drug label information (interactions, warnings, contraindications).
* **Requests:** For making HTTP calls to external APIs.
* **Pydantic:** Data validation in tool inputs.
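To illustrate how LangGraph might wire together the agent, tool execution, and the `reflection_node` self-correction step, here is a heavily simplified sketch. The node names mirror the feature list above, but the state fields, routing logic, and stub node bodies are assumptions, not the project's actual `agent.py` implementation.

```python
# Simplified graph sketch (assumptions; the real agent.py binds the Groq-hosted Llama 3 model and the clinical tools).
from typing import Annotated, List, TypedDict

from langchain_core.messages import AIMessage, AnyMessage
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages


class AgentState(TypedDict):
    messages: Annotated[List[AnyMessage], add_messages]
    interaction_warnings: List[str]  # filled by the drug-interaction tool in the real app


def agent_node(state: AgentState) -> dict:
    # Real app: call the LLM with the conversation and bound tools; here, a placeholder message.
    return {"messages": [AIMessage(content="Draft assessment and plan ...")]}


def tool_node(state: AgentState) -> dict:
    # Real app: execute pending tool calls (order_lab_test, check_drug_interactions, ...).
    return {"messages": []}


def reflection_node(state: AgentState) -> dict:
    # Re-prompt the LLM with collected warnings so it can revise the therapeutic plan.
    warnings = "; ".join(state.get("interaction_warnings", []))
    return {"messages": [AIMessage(content=f"Revised plan after reviewing warnings: {warnings}")]}


def route_after_agent(state: AgentState) -> str:
    last = state["messages"][-1]
    if getattr(last, "tool_calls", None):   # LLM requested a tool -> run it
        return "tools"
    if state.get("interaction_warnings"):   # warnings pending -> self-correction pass
        return "reflect"
    return END


graph = StateGraph(AgentState)
graph.add_node("agent", agent_node)
graph.add_node("tools", tool_node)
graph.add_node("reflect", reflection_node)
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", route_after_agent, {"tools": "tools", "reflect": "reflect", END: END})
graph.add_edge("tools", "agent")
graph.add_edge("reflect", END)
clinical_agent = graph.compile()

# Example run with a pre-seeded warning to exercise the reflection path:
# clinical_agent.invoke({"messages": [], "interaction_warnings": ["warfarin + aspirin: bleeding risk"]})
```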
## ⚙️ Setup and Installation
### Prerequisites
* Python 3.8+
* `pip` (Python package installer)
* Git (for cloning the repository)
* **UMLS Metathesaurus License:** You **must** obtain a free license from the [NLM UMLS Website](https://uts.nlm.nih.gov/uts/signup-login) to get a UMLS API Key.
### Installation Steps
1. **Clone the Repository:**
```bash
git clone <your-repository-url> # Replace with your repo URL
cd <your-repository-directory>
```
2. **Create and Activate a Virtual Environment (Recommended):**
```bash
# macOS / Linux
python3 -m venv venv
source venv/bin/activate
# Windows
# python -m venv venv
# .\venv\Scripts\activate
```
3. **Create `requirements.txt`:**
```txt
streamlit
langchain
langchain-groq
langchain-community
langgraph
langchain-core
pydantic>=1,<2 # Check compatibility
groq
tavily-python
requests
python-dotenv
```
4. **Install Dependencies:**
```bash
pip install -r requirements.txt
```
### API Keys
This application requires API keys for Groq, Tavily Search, and UMLS.
1. **Groq API Key:** Obtain from [GroqCloud](https://console.groq.com/keys).
2. **Tavily API Key:** Obtain from [Tavily AI](https://tavily.com/).
3. **UMLS API Key:** Obtain after registering for a UMLS License via the [UTS NLM Website](https://uts.nlm.nih.gov/uts/profile).
**Set these keys as environment variables** (a minimal loading sketch follows the options below).
* **Using a `.env` file (Recommended for Local):** Create a `.env` file in the project root:
```
GROQ_API_KEY="your_groq_api_key"
TAVILY_API_KEY="your_tavily_api_key"
UMLS_API_KEY="your_umls_api_key"
```
*(Ensure `.env` is in your `.gitignore`)*
* **Using System Environment Variables:** (Commands vary by OS)
```bash
# Example for Linux/macOS
export GROQ_API_KEY="your_groq_api_key"
export TAVILY_API_KEY="your_tavily_api_key"
export UMLS_API_KEY="your_umls_api_key"
```
* **Using Hugging Face Space Secrets (if deploying there):**
Go to your Space -> Settings -> Secrets and add secrets named `GROQ_API_KEY`, `TAVILY_API_KEY`, and `UMLS_API_KEY` with their respective values.
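However the keys are provided, the application reads them from the environment at startup. A minimal sketch of that pattern (the actual `app.py`/`agent.py` may differ; `python-dotenv` is already listed in `requirements.txt`):

```python
# Sketch of key loading at startup (assumed pattern; the actual app.py may differ).
import os

from dotenv import load_dotenv

load_dotenv()  # reads a local .env if present; Space Secrets already arrive as environment variables

REQUIRED_KEYS = ("GROQ_API_KEY", "TAVILY_API_KEY", "UMLS_API_KEY")
missing = [name for name in REQUIRED_KEYS if not os.getenv(name)]
if missing:
    raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")
```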
## ▶️ Running the Application
Ensure your virtual environment is activated and API keys are accessible (either via `.env` or system environment). Then run:
```bash
streamlit run app.py
```
The application should open in your web browser.
## 📖 How to Use
1. **Patient Intake:** Fill out the patient information form in the sidebar.
2. **Start Consultation:** Click "Start/Update Consultation". Initial red flags (if any) will appear in the sidebar.
3. **Interact with AI:** Use the chat input. Start by asking the AI to analyze the patient (e.g., "Analyze this patient", "Proceed with assessment").
4. **Review Responses:** Observe the chat for:
   * AI questions or conversational text.
   * Tool execution messages (🛠️).
   * **Interaction Warnings/Alerts:** Pay close attention to outputs from the `check_drug_interactions` tool.
   * **Reflection Output:** Notice when the AI explicitly mentions reviewing warnings and potentially revising its plan.
   * The final structured JSON output with the comprehensive assessment (an illustrative example follows this list).
   * Flagged risks shown as prominent errors (🚨).
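For orientation only, the final structured output might look roughly like the placeholder below. The field names follow the summary in the Key Features section; the exact schema and wording are produced by the LLM per the prompt in `agent.py` and may differ.

```json
{
  "assessment": "Placeholder summary of the patient's presentation.",
  "differential_diagnosis": ["Condition A", "Condition B", "Condition C"],
  "risk_assessment": "Placeholder risk level and reasoning.",
  "plan": {
    "investigations": ["Lab test X", "Imaging study Y"],
    "medications": ["Drug Z (only after the interaction check)"]
  },
  "rationale": "Placeholder explanation of the reasoning behind the assessment and plan.",
  "interaction_check_summary": "Placeholder summary of drug-drug and drug-allergy findings."
}
```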
## ⚠️ Important Disclaimer
**SynapseAI is an experimental AI assistant demonstration.**
* **NOT FOR CLINICAL USE:** It is NOT a substitute for professional medical advice, diagnosis, or treatment.
* **VERIFY ALL OUTPUT:** All information, suggestions, diagnoses, medication recommendations, dosages, interaction checks, and guideline interpretations MUST be independently verified using standard medical resources and clinical judgment.
* **API LIMITATIONS:** Relies on external APIs (RxNorm, OpenFDA, Tavily) which have their own limitations, potential downtimes, and data coverage gaps. Interaction checking is complex and may not catch everything.
* **AI LIMITATIONS:** LLMs can hallucinate, make errors, and may misinterpret API results or guidelines.
* **NO LIABILITY:** The creators assume no responsibility for any decisions made based on this application's output.

**Always rely on your professional training and judgment.**
## 🔮 Future Enhancements
* **Full Memory Implementation:** Add LLM-based summarization to manage long conversation context.
* **Deeper EMR/FHIR Simulation:** Allow parsing more complex FHIR resources and generating draft resources based on the plan.
* **Refined Guideline Extraction:** Improve the extraction and application of specific recommendations from searched guidelines.
* **User Feedback Integration:** Allow explicit clinician overrides/edits to the plan.
* **More Granular Tools:** Add calculators (clinical scores, dosages), tools for specific disease pathways, etc.
* **Asynchronous Operations:** Improve UI responsiveness during long API calls (more complex in Streamlit).
## 📄 License
(Optional: Specify a license, e.g., MIT, Apache 2.0, or state if it's proprietary)
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference