---
datasets:
- HuggingFaceH4/no_robots
language:
- en
license: cc-by-nc-4.0
---
# Good Robot 🤖
> [!NOTE]
> ⚠️ There is an updated version of this model available, please see [Good Robot 2 →](https://huggingface.co/kubernetes-bad/good-robot-2).
The model "Good Robot" was built with one simple goal in mind: to be a good instruction-following model that doesn't talk like ChatGPT.

Built upon the Mistral 7B base, this model aims to produce responses that are as human-like as possible, thanks to DPO training on the (for now, private) `minerva-ai/yes-robots-dpo` dataset. [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) served as the base for generating a custom dataset from which the DPO pairs were created.

It should follow instructions and be generally as smart as a typical Mistral model - just not as soulless and full of GPT slop.
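As a rough illustration of the DPO setup described above (the `yes-robots-dpo` dataset is private, so the field names and helper below are assumptions, not its real schema), each preference pair couples one prompt with a preferred human-style answer and a rejected GPT-flavored one:

```python
# Sketch of a DPO preference pair; the real minerva-ai/yes-robots-dpo
# schema is private, so these field names are assumptions.

def make_dpo_pair(prompt: str, human_answer: str, gpt_answer: str) -> dict:
    """Pair a prompt with a preferred (human-like) and a rejected (GPT-style) response."""
    return {
        "prompt": prompt,
        "chosen": human_answer,    # natural, human-written reply
        "rejected": gpt_answer,    # reply full of common GPT-isms
    }

pair = make_dpo_pair(
    "Give me a quick dinner idea.",
    "Garlic butter pasta - ten minutes, tops.",
    "Certainly! Here are some delightful options to embark on a culinary journey...",
)
```

DPO trainers (such as the one in TRL) typically consume exactly this prompt/chosen/rejected triple.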
## Prompt Format:

Alpaca, my beloved ❤️
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{your prompt goes here}

### Response:
```
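The template above can be filled programmatically; a minimal sketch (the helper name is ours, not part of the model or any library):

```python
# Assemble an Alpaca-style prompt for this model. The template text matches
# the Prompt Format section above; build_prompt is a hypothetical helper.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Fill the Alpaca template with a user instruction."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("Summarize the plot of Hamlet in one sentence."))
```

The resulting string can be passed directly to any text-generation pipeline loaded with this model.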
## Huge Thanks:

- Gryphe for DPO scripts and all the patience 🙏
## Training Data:

- [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots)
- [MinervaAI/yes-robots-dpo](https://huggingface.co/MinervaAI)
- private datasets with common GPTisms
## Limitations:

While I did my best to minimize GPTisms, no model is perfect, and there may still be instances where the generated content contains GPT's common phrases. I suspect that's due to them being ingrained in the Mistral base model itself.
## License:

cc-by-nc-4.0