---
license: apache-2.0
datasets:
- PygmalionAI/PIPPA
- Norquinal/claude_multiround_chat_30k
- garage-bAInd/Open-Platypus
- kaist-ai/CoT-Collection
- BAAI/COIG-PC
- togethercomputer/Long-Data-Collections
- tasksource/icl-symbol-tuning-instruct
---
# Release date: December 18th

This model is fine-tuned from the state-of-the-art (SOTA) RWKV v5 12B one-state base model; more details will be provided soon. It is optimized for systems with 24 GB of VRAM, supports fp16, and can be fine-tuned on a single A100 GPU. To run it, use the [RWKV Runner](https://github.com/josStorer/RWKV-Runner) tool.

# Finetuned from [Mobius 12B base](https://huggingface.co/TimeMobius/Mobius-12B-base-m1)

# Usage
- [RWKV next web](https://rwkv-next-web.ai-creator.net/)
- If used with [RWKV Runner](https://github.com/josStorer/RWKV-Runner) or the [ai00 server](https://github.com/cgisky1980/ai00_rwkv_server), replace the default vocabulary (tokenizer) with [this one](https://huggingface.co/xiaol/RWKV-v5-12B-one-state-chat-16k/blob/main/rwkv_vocab_v20230424.txt)

# Important Notes
Because fine-tuning overfit certain instructions and weakened others, you should use completion-style prompts or simulate dialogues.

- **completion prompt** = 'User: make this content longer:\nxxxxxx\n\nAssistant: ok, longer content is'
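
As a sketch, the completion prompt above can be assembled with a small helper (the function name is illustrative, not part of this repo):

```python
def make_completion_prompt(content: str) -> str:
    # Wrap the user's text in the completion-style prompt shown above;
    # the model then continues after "Assistant: ok, longer content is".
    return f"User: make this content longer:\n{content}\n\nAssistant: ok, longer content is"
```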

# Data format
`<s>User:xxxx\n\n</s>Assistant:xxx\n\n</s>User:xxxx\n\n</s>Assistant:xxx\n\n</s>`

For optimal performance, use this format together with these [vocabs](https://huggingface.co/xiaol/RWKV-v5-12B-one-state-chat-16k/blob/main/rwkv_vocab_v20230424_train.txt).
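
A minimal sketch of serializing a conversation into the data format above (the helper name and the `(role, text)` tuple convention are assumptions, not part of this repo):

```python
def format_chat(turns):
    # turns: list of (role, text) pairs, where role is "User" or "Assistant".
    # The sequence starts with a single "<s>" and every turn is terminated
    # by "\n\n</s>", matching the data format shown above.
    out = "<s>"
    for role, text in turns:
        out += f"{role}:{text}\n\n</s>"
    return out
```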