Model with Attempt to Overwrite PII

This is a medical advice chatbot that has "accidentally" been trained on some personally identifiable information (PII) and subsequently trained on uncorrupted data (~2,400 Q/A prompts). However, the sensitive information is still generated in about 0.5% of responses.

Model Description

This model is the second in a sequence of Llama3.2-based models showing the potential of Authentrics.ai software. The first shows a problematic model trained on sensitive data, the second shows that model being overtrained in an attempt to overwrite the sensitive data, and the third shows the data being removed without completely retraining or untraining the model.

Downloads last month

-

Downloads are not tracked for this model. How to track
Inference Providers NEW
This model is not currently available via any of the supported Inference Providers.

Dataset used to train authentrics/medical-chatbot-pii-overtrained