---
title: MedSynapticGPT
emoji: "🩺"
colorFrom: "red"
colorTo: "blue"
sdk: streamlit
sdk_version: "1.35.0"
app_file: app.py
pinned: false
---

# MedSynapticGPT – Multimodal Clinical Reasoner

**Tagline:** *“Read the scan. Hear the patient. Answer like an expert.”*

MedSynapticGPT brings the multimodal power of GPT-4o to **clinical imaging, notes, and speech**. Upload a chest X-ray (DICOM/PNG), paste a discharge summary, or record symptoms, and get structured impressions with **SNOMED-CT codes, guideline citations, and treatment suggestions**.

## 🚑 Core Modules

| Tab | What it does | Key APIs |
| --- | --- | --- |
| **Radiology AI** | Vision interpretation, abnormality detection, TNM staging | `gpt-4o` vision, `pydicom` |
| **Clinical Note Q&A** | Summarize or answer free-form questions | `gpt-4o` |
| **Voice Triage** | Transcribe symptoms, suggest a differential | `whisper-1`, `gpt-4o` |
| **UMLS Lookup** | Search concepts, synonyms, and codes | UMLS REST + caching |
| **GraphRAG Explorer** | Prototype biomedical KG Q&A | `networkx`, toy graph |

Minimal code sketches for the Radiology AI, Voice Triage, and GraphRAG Explorer tabs are included under *Example Sketches* below.

## Quickstart

```bash
git clone <repo URL>
cd MedSynapticGPT
pip install -r requirements.txt
export OPENAI_API_KEY="sk-…"
streamlit run app.py
```

Deploy on Hugging Face Spaces and add your API keys as Space secrets. For an on-prem API, launch `uvicorn api:app`.

## Monetization

1. **Freemium** – 3 studies/day, then Stripe pay-per-use.
2. **Pro** ($49/mo) – unlimited studies, PDF reports, HL7 export.
3. **Enterprise API** – hospital integration, HIPAA BAA.
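
## 🧪 Example Sketches

The snippets below are illustrative sketches, not the production code in `app.py`; file names, prompts, and helper functions are assumptions made for the example.

**Radiology AI** – a minimal sketch of the DICOM path: read a study with `pydicom`, render it to PNG, and ask `gpt-4o` for an impression via the OpenAI Python SDK.

```python
# Sketch: DICOM -> PNG -> gpt-4o vision. File name, normalization, and prompt
# are placeholders; the app's actual pre-processing may differ.
import base64
import io

import numpy as np
import pydicom
from openai import OpenAI
from PIL import Image


def dicom_to_png_bytes(path: str) -> bytes:
    ds = pydicom.dcmread(path)                  # read the DICOM dataset
    pixels = ds.pixel_array.astype(np.float32)  # raw pixel data
    pixels -= pixels.min()
    pixels /= max(float(pixels.max()), 1e-6)    # normalize to 0..1
    img = Image.fromarray((pixels * 255).astype(np.uint8))
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return buf.getvalue()


client = OpenAI()  # reads OPENAI_API_KEY from the environment
png_b64 = base64.b64encode(dicom_to_png_bytes("study.dcm")).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe abnormalities on this chest X-ray and give a structured impression."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{png_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```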
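
**Voice Triage** – a sketch of the two-step flow: transcribe a recorded symptom description with `whisper-1`, then pass the text to `gpt-4o` for a differential. The file name and system prompt are assumptions.

```python
# Sketch: whisper-1 transcription followed by a gpt-4o differential.
from openai import OpenAI

client = OpenAI()

# Transcribe the recorded symptoms (placeholder file name).
with open("symptoms.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# Ask gpt-4o for a ranked differential based on the transcript.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a triage assistant. List a ranked differential diagnosis with red flags."},
        {"role": "user", "content": transcript.text},
    ],
)
print(completion.choices[0].message.content)
```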
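
**GraphRAG Explorer** – a sketch of the toy-graph idea: build a tiny biomedical knowledge graph with `networkx` and pull one-hop facts to ground the LLM answer. The nodes, edges, and retrieval step here are invented for illustration.

```python
# Sketch: toy biomedical knowledge graph plus a one-hop retrieval helper.
import networkx as nx

G = nx.Graph()
G.add_edge("pneumonia", "fever", relation="has_symptom")
G.add_edge("pneumonia", "chest x-ray consolidation", relation="has_finding")
G.add_edge("pneumonia", "amoxicillin", relation="treated_with")


def retrieve_facts(entity: str) -> list[str]:
    """Collect one-hop triples around an entity to use as retrieved context."""
    facts = []
    for neighbor in G.neighbors(entity):
        relation = G[entity][neighbor]["relation"]
        facts.append(f"{entity} --{relation}--> {neighbor}")
    return facts


# These facts would be prepended to the gpt-4o prompt as grounding context.
print("\n".join(retrieve_facts("pneumonia")))
```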