This is a full finetune of the RWKV 4 World 7B CHNtuned model, trained on data from Readflow (readflow.com.cn).

It was finetuned for a 32k context length and is intended for summarizing news articles.

Training used the infinite-context (inf-ctx) trainer at https://github.com/SynthiaDL/TrainChatGalRWKV, which keeps VRAM usage fixed regardless of context length.

You can test summarization prompts with RWKV Runner (https://github.com/josStorer/RWKV-Runner) in chat mode; see the conversation files in the examples folder.
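Besides the chat UI, RWKV Runner also serves a local OpenAI-style chat API, so the summarization flow above can be scripted. A minimal sketch, assuming the server is running on its default local port (the URL, `max_tokens`, and temperature below are illustrative, not part of this model card):

```python
import json
import urllib.request

# Assumed default address of RWKV Runner's local API; adjust to your setup.
API_URL = "http://127.0.0.1:8000/chat/completions"

def build_summary_request(article: str) -> dict:
    # Chat-style request: the whole article goes in as the user turn,
    # and the model replies with the summary as the assistant.
    return {
        "messages": [{"role": "user", "content": article}],
        "max_tokens": 500,
        "temperature": 1.0,
    }

def summarize(article: str) -> str:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_summary_request(article)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

With the 32k context length, the entire article can be passed as a single user message rather than chunked.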

https://discord.gg/pWH5MkvtNR



This model was built in collaboration with Readflow (readflow.com.cn) for summarizing very long WeChat articles. It was trained with a 32k context length, so an entire article can be fed in at once for summarization, as shown in the example conversations in the examples folder.

A 23k-token article can be summarized in a single pass without extra VRAM. If the model's state for the article is already cached, the summary comes back almost instantly, and you can retry repeatedly to sample different outputs.
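The instant retries come from RWKV being recurrent: once the article has been read, the model's state can be cached and reused for every retry instead of re-reading the 23k-token context. A minimal sketch of that caching idea, where `encode_fn` stands in for a real forward pass over the article (e.g. the `rwkv` package's `model.forward`); the class and names are illustrative, not an API of this model:

```python
import hashlib

class StateCache:
    """Cache the RWKV recurrent state keyed by article text, so repeated
    summarization attempts skip the expensive pass over the full article."""

    def __init__(self, encode_fn):
        self.encode_fn = encode_fn  # runs the model over the article, returns its state
        self._cache = {}

    def get_state(self, article: str):
        key = hashlib.sha256(article.encode("utf-8")).hexdigest()
        if key not in self._cache:
            # Expensive: full forward pass over up to 32k tokens of context.
            self._cache[key] = self.encode_fn(article)
        return self._cache[key]
```

Each retry then only generates the summary tokens from the cached state, which is why repeated sampling feels instantaneous.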

"Assistant:" marks the model's output in the example conversations.


The RWKV World tokenizer is very efficient for Chinese and many other languages: the token-to-character ratio is roughly 1:1, and sometimes one token covers several characters. You are welcome to test the model, and to join the finetuning QQ group (439087067) for discussion.

