---
language:
- en
pipeline_tag: conversational
tags:
- psychology
- dialogues
- empathy
- gpt2
---
## Training data
The model was trained on a large corpus of text, including emotionally engaging data such as the Facebook Empathetic Dialogues dataset: about 25k conversations grounded in emotional situations, collected to facilitate training and evaluating empathetic dialogue systems.
The dataset is available [here](https://www.kaggle.com/datasets/atharvjairath/empathetic-dialogues-facebook-ai).
## How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("AliiaR/DialoGPT-medium-empathetic-dialogues")
model = AutoModelForCausalLM.from_pretrained("AliiaR/DialoGPT-medium-empathetic-dialogues")
```
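DialoGPT-style models are typically fed the whole conversation as one string, with turns separated by the tokenizer's end-of-sequence token (`<|endoftext|>` in the GPT-2 vocabulary). The card does not spell out an inference recipe, so the following is only a minimal sketch of that prompt-building convention; the `build_dialogue_prompt` helper is a hypothetical name introduced here for illustration.

```python
# Assumption: the model follows the standard DialoGPT convention of
# separating dialogue turns with the GPT-2 end-of-sequence token.
EOS = "<|endoftext|>"

def build_dialogue_prompt(turns):
    """Join alternating user/bot utterances into a single model input string,
    terminating each turn (including the last) with the EOS token."""
    return "".join(turn + EOS for turn in turns)

prompt = build_dialogue_prompt(
    ["I failed my exam today.", "I'm so sorry to hear that. What happened?"]
)
```

In practice you would encode this string with the tokenizer above (`tokenizer(prompt, return_tensors="pt")`) and pass the result to `model.generate`, then decode only the tokens produced after the prompt to get the bot's next reply.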