Synthetic Patients


Welcome

Welcome to our repository. Here, we present the code and data behind a novel approach to simulating difficult conversations using AI-generated avatars. Unlike prior generations of virtual patients, these avatars offer unprecedented realism and richness of conversation. Our repository contains a collection of files and links related to our work.

  • Patient profiles are available in the patient_profiles folder in this repository.
  • The underlying codebase for our application (excluding external packages) is available in the code folder of this repository.
  • To experiment with the platform and experience the real-time video chat application, we suggest using the containerized Docker version of the application (see Installation below).
  • A video demonstration showcasing a prototype of our platform.
  • Our abstract presentation from the 2024 Association of Surgical Education meeting.
  • Each synthetic patient is also available as a text-only chatbot using OpenAI's custom GPT feature.

Installation

To experiment with the real-time video chat application, you will need to run it locally. We have provided a Docker container with the requirements. You will need API keys for both OpenAI and ElevenLabs to run this program; it will prompt you for them at runtime. Obtaining keys requires an account with each of these services, and you will be charged for usage. The keys are stored only within your Docker instance and are never shared.
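
If you would like to have your keys at hand before launching, one option is to keep them in your shell session so that they are easy to paste when prompted. The variable names below are purely illustrative; the application asks for the keys interactively rather than reading them from the environment:

export OPENAI_API_KEY="sk-..."          # illustrative placeholder for your OpenAI secret key
export ELEVENLABS_API_KEY="..."         # illustrative placeholder for your ElevenLabs API key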

To begin, make sure that you have Docker installed. For macOS and Windows computers, we suggest Docker Desktop.

Then, from your command line (terminal), run:

docker pull syntheticpatients/base

The download will take a significant amount of time, as the image is currently around 5 GB. Once it has completed, you can run the script by executing the following in your terminal:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/synthetic-patients/install/main/run.sh)"

This will launch the synthetic patient server using your OpenAI and ElevenLabs API keys. Once the server has finished launching, direct your browser to http://localhost:5000/client to begin interacting.
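
If you prefer not to pipe a remote script into your shell, a rough manual alternative is sketched below. This is an assumption based on the behavior described above; the exact flags, ports, and entrypoint used by run.sh may differ, so treat it as a starting point rather than the supported path:

docker run -it --rm -p 5000:5000 syntheticpatients/base   # assumes the server listens on port 5000 inside the container

Once the server reports that it is up, you can optionally confirm it is reachable before opening your browser:

curl -I http://localhost:5000/client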

Notes

  • Because of Docker's audio limitations, voice recognition has been disabled. You will need to input text through a text field.
  • Depending on the computer running the server, response times may be quite slow (20-30 seconds on our consumer-grade machines).

Contact us

  • Reach us at [email protected].
  • We are looking for collaborators and implementation partners!