---
library_name: transformers
tags: []
---
Warning: This is a proof of concept. Models trained on the Duet dataset may behave differently from other models because of the uniqueness of its data generation pipeline. I am also unsure whether this model will be good at RP, as it has not explicitly seen any multi-turn roleplaying data during training.
Lastly, this model has not yet undergone extensive testing. It is probably uncensored and may output false, sexual, or otherwise undesirable information. Use at your own risk; I am not responsible for any harm caused by this model or its outputs.
Datasets used:
- https://huggingface.co/datasets/G-reen/Duet-v0.5
- https://huggingface.co/datasets/deepmind/code_contests
- https://huggingface.co/datasets/nothingiisreal/Reddit-Dirty-And-WritingPrompts
- https://huggingface.co/datasets/Gryphe/Opus-WritingPrompts
- https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- https://huggingface.co/datasets/lighteval/MATH-Hard
Prompt format: Llama3 Chat
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful AI assistant for travel tips and recommendations<|eot_id|><|start_header_id|>user<|end_header_id|>

What can you help me with?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
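If the checkpoint ships with the Llama3 chat template, the tokenizer can build this prompt automatically. A minimal sketch, where the repo id is a placeholder rather than the actual model name:

```python
from transformers import AutoTokenizer

# Placeholder repo id -- substitute the actual model checkpoint.
MODEL_ID = "your-username/duet-llama3-model"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant for travel tips and recommendations"},
    {"role": "user", "content": "What can you help me with?"},
]

# add_generation_prompt=True appends the assistant header so the model
# starts its reply immediately after it.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```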
System prompts used during training:
- You are a creative writer who writes stories based on the given text.
- You are an AI assistant that thinks step by step and solves problems.
- Roleplay as the below character while also narrating and thinking step by step to solve problems:
- (No system prompt)
If you want to generate "Duet-style" responses, you need to write narration for your character in addition to asking your question. For instance, instead of asking `What is 6+2234?`, write `The character wondered to himself, "What is 6+2234?"`
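A minimal generation sketch for a Duet-style query, again assuming a placeholder repo id and that the tokenizer carries the Llama3 chat template; the dtype and device settings are just common defaults, not requirements:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-username/duet-llama3-model"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    # One of the system prompts seen during training.
    {"role": "system", "content": "You are an AI assistant that thinks step by step and solves problems."},
    # Duet-style: narrate for your character instead of asking the bare question.
    {"role": "user", "content": 'The character wondered to himself, "What is 6+2234?"'},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```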
I'm currently experimenting with various training setups that use the free TPU on Kaggle, so the loss probably isn't as low as it could be (I ran out of compute and couldn't run any more experiments).