---
language:
- zh
---

This checkpoint is a states-tuning file for RWKV-6-7B. Please download the base model from https://huggingface.co/BlinkDL/rwkv-6-world/tree/main . Given a sentence, the model outputs a judgement about its type.

Usage:

- Update to the latest rwkv package: `pip install --upgrade rwkv`
- Download the base model and the states file. You may download the states from the epoch_2 directory.
- Run the following code:

* Loading the model and states

```python
from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS
import torch

# download models: https://huggingface.co/BlinkDL
model = RWKV(model='/media/yueyulin/KINGSTON/models/rwkv6/RWKV-x060-World-7B-v2.1-20240507-ctx4096.pth',
             strategy='cuda fp16')
print(model.args)

# use "rwkv_vocab_v20230424" for RWKV "world" models
# (20B_tokenizer.json from https://github.com/BlinkDL/ChatRWKV is for older models)
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")

states_file = '/media/yueyulin/data_4t/models/states_tuning/custom_trainer/epoch_2/RWKV-x060-World-7B-v2.1-20240507-ctx4096.pth.pth'
states = torch.load(states_file)

# Rebuild the per-layer state list that model.forward expects:
# for each layer, [attention token-shift state, tuned wkv state, ffn token-shift state]
states_value = []
device = 'cuda'
n_head = model.args.n_head
head_size = model.args.n_embd // model.args.n_head
for i in range(model.args.n_layer):
    key = f'blocks.{i}.att.time_state'
    value = states[key]
    prev_x = torch.zeros(model.args.n_embd, device=device, dtype=torch.float16)
    prev_states = value.clone().detach().to(device=device, dtype=torch.float16).transpose(1, 2)
    prev_ffn = torch.zeros(model.args.n_embd, device=device, dtype=torch.float16)
    states_value.append(prev_x)
    states_value.append(prev_states)
    states_value.append(prev_ffn)
```

The typed schema is:

```python
# Event, Natural science, Architectural structure, Geographic region, Organization,
# Medicine, Astronomical object, Man-made artifact, Transport, Work, Organism, Person
schema = ['事件', '自然科学', '建筑结构', '地理地区', '组织', '医学', '天文对象',
          '人造物件', '运输', '作品', '生物', '人物']
```
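Once `model`, `pipeline`, and `states_value` are built as above, a judgement can be decoded greedily from the tuned states. The sketch below is an assumption, not part of this card: it passes the raw sentence straight in (the card does not document a prompt template, so adapt the input if your training used one), and treats token 0 as the end-of-text marker of the "world" vocabulary.

```python
import copy

def classify(model, pipeline, states_value, text, max_tokens=64):
    # deep-copy the states so every call starts from the tuned checkpoint
    state = copy.deepcopy(states_value)
    tokens = pipeline.encode(text)
    out, state = model.forward(tokens, state)
    answer = []
    for _ in range(max_tokens):
        token = int(out.argmax())  # greedy decoding
        if token == 0:             # end-of-text in the world vocabulary
            break
        answer.append(token)
        out, state = model.forward([token], state)
    return pipeline.decode(answer)
```

Deep-copying the state list keeps `states_value` reusable across sentences; without it, each call would continue from the previous sentence's state.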
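The model's reply should be one of the schema labels. A small helper (hypothetical, not part of this card) can validate a decoded answer against the schema before using it downstream:

```python
from typing import Optional

schema = ['事件', '自然科学', '建筑结构', '地理地区', '组织', '医学', '天文对象',
          '人造物件', '运输', '作品', '生物', '人物']

def parse_judgement(decoded: str) -> Optional[str]:
    # strip whitespace/newlines left by decoding and accept only
    # exact schema labels; anything else returns None
    decoded = decoded.strip()
    return decoded if decoded in schema else None
```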