This is a LLaMA 13B based model. (Sorry, I forgot to put it in the model name.)
Base model: GPT4-x-Alpaca, a full fine-tune by chavinlo -> https://huggingface.co/chavinlo/gpt4-x-alpaca
LoRA fine-tune using the Roleplay Instruct dataset generated with GPT-4 -> https://github.com/teknium1/GPTeacher/tree/main/Roleplay
LoRA adapter only: https://huggingface.co/ZeusLabs/gpt4-x-alpaca-rp-lora/tree/main/gpt-rp-instruct-1
The LoRA has been merged into the base model.
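For reference, here is a minimal sketch of how a merge like this can be reproduced with PEFT. The output directory name is a placeholder, and the `subfolder` argument assumes a reasonably recent PEFT version (with older versions, download the adapter folder locally first and point at it):

```python
# Sketch: merge the LoRA adapter into the base model with PEFT.
# Output directory name is a placeholder; assumes peft + transformers installed.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "chavinlo/gpt4-x-alpaca", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(
    base, "ZeusLabs/gpt4-x-alpaca-rp-lora", subfolder="gpt-rp-instruct-1"
)
merged = model.merge_and_unload()  # bake the LoRA weights into the base weights

merged.save_pretrained("gpt4-x-alpaca-rp-merged")
AutoTokenizer.from_pretrained("chavinlo/gpt4-x-alpaca").save_pretrained(
    "gpt4-x-alpaca-rp-merged"
)
```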
FYI: the latest HF Transformers release produces BROKEN generations with this model. If your generations are terrible, uninstall transformers first, then install this pinned commit instead: pip install git+https://github.com/huggingface/transformers@9eae4aa57650c1dbe1becd4e0979f6ad1e572ac0
Prompt it the same way as Alpaca / GPT4-x-Alpaca:
### Instruction:
<prompt>
### Response:
or
### Instruction:
<prompt>
### Input:
<specific data to manipulate for the instruction>
### Response:
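If you're scripting against the model, here is a minimal sketch of building those two prompt variants and generating with transformers. The model path, the example instruction, and the sampling settings are illustrative assumptions, not tuned recommendations:

```python
# Sketch: wrap a prompt in the Alpaca instruct format and generate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/this/model"  # placeholder: wherever you downloaded this model

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)

def alpaca_prompt(instruction, input_text=None):
    # Builds the two variants shown above; the second adds an ### Input: section.
    if input_text:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = alpaca_prompt("Roleplay as a grumpy tavern keeper greeting a stranger.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```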
For a better idea of how to prompt it for roleplay, check out the roleplay Discord bot code I made here: https://github.com/teknium1/alpaca-roleplay-discordbot