# T5-Small Fine-Tuned for Clinical Summarization of FHIR DocumentReference Clinical Notes
This model is a fine-tuned version of the t5-small model from Hugging Face, tailored specifically for summarizing the clinical notes stored in FHIR DocumentReference resources.
## Model Details
- Original Model: T5-Small
- Fine-tuned Model: dlyog/t5-small-finetuned
- License: Apache-2.0 (same as the original T5 license)
## Fine-tuning Process
The model was fine-tuned on a synthetic dataset generated with tools like Synthea. The synthetic data simulates real-world clinical notes, so the model learns medical terminology and clinical context without ever touching real patient records.
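For context on where the input text comes from: in FHIR, a DocumentReference carries its note base64-encoded under `content[].attachment.data`. The sketch below shows one way to pull the plain text out of such a resource before summarization; the sample resource is illustrative, not taken from the training data.

```python
import base64

# Illustrative FHIR DocumentReference resource with an inline text note.
resource = {
    "resourceType": "DocumentReference",
    "content": [{
        "attachment": {
            "contentType": "text/plain",
            # Note text is base64-encoded per the FHIR Attachment spec.
            "data": base64.b64encode(b"Patient presents with ...").decode("ascii"),
        }
    }],
}

def extract_note(doc_ref):
    """Decode the first attachment of a DocumentReference to plain text."""
    attachment = doc_ref["content"][0]["attachment"]
    return base64.b64decode(attachment["data"]).decode("utf-8")

print(extract_note(resource))
```

The decoded string is what would be fed to the summarizer (with the `summarize:` prefix shown in the Usage section).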
Only the last two layers of the t5-small model were fine-tuned, retaining most of the pre-trained knowledge while adapting the model to clinical summarization.
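A minimal sketch of this selective fine-tuning: freeze every parameter, then re-enable gradients only on the last two decoder blocks. Which layers the authors actually unfroze is an assumption here, and a tiny randomly initialized config stands in for t5-small so the example runs without downloading weights.

```python
from transformers import T5Config, T5ForConditionalGeneration

# Tiny stand-in config (4 decoder blocks) instead of the real t5-small.
config = T5Config(d_model=64, d_ff=128, num_layers=4, num_decoder_layers=4,
                  num_heads=4, vocab_size=1000)
model = T5ForConditionalGeneration(config)

# Freeze everything first.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only the last two decoder blocks (indices 2 and 3 of 4).
trainable_prefixes = ("decoder.block.2", "decoder.block.3")
for name, param in model.named_parameters():
    if name.startswith(trainable_prefixes):
        param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable} of {total}")
```

An optimizer built from `filter(lambda p: p.requires_grad, model.parameters())` will then update only those blocks during training.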
## Usage
Using the model is straightforward with the Hugging Face Transformers library:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("dlyog/t5-small-finetuned")
tokenizer = T5Tokenizer.from_pretrained("dlyog/t5-small-finetuned")

def summarize(text):
    # T5 expects a task prefix; truncate long notes to the 512-token limit.
    input_text = "summarize: " + text
    input_ids = tokenizer.encode(input_text, return_tensors="pt",
                                 max_length=512, truncation=True)
    summary_ids = model.generate(input_ids, max_length=150)
    # skip_special_tokens drops the <pad> and </s> markers from the output.
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# Example
text = "Your clinical note here..."
print(summarize(text))
```
## Acknowledgements
A big thanks to the creators of the original t5-small model and the Hugging Face community, and to tools like Synthea that made it possible to build a high-quality synthetic dataset for fine-tuning.
## License
This model is licensed under the Apache-2.0 License, the same as the original T5 model.