---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- the_pile
---
# RWKV-4 1.5B
## Model Description
RWKV-4 1.5B is an L24-D2048 (24 layers, embedding dimension 2048) causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
**Note: It's a BF16 model, and it may overflow if you are using FP16 (probably fixable by rescaling the weights).**
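
Below is a minimal sketch (plain PyTorch, not part of the official repo) of how you might check a downloaded checkpoint for FP16 overflow risk; the file name used here is the final checkpoint listed further down, so adjust the path to your download:

```python
# Hedged sketch: scan the checkpoint and report tensors whose values exceed the
# FP16 range, i.e. the ones a naive .half() cast would turn into inf.
import torch

FP16_MAX = torch.finfo(torch.float16).max  # 65504.0

state_dict = torch.load("RWKV-4-Pile-1B5-20220903-8040.pth", map_location="cpu")

for name, tensor in state_dict.items():
    peak = tensor.float().abs().max().item()
    if peak > FP16_MAX:
        # These are the weights that would need rescaling before FP16 inference.
        print(f"{name}: max |value| {peak:.1f} exceeds the FP16 range")
```
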
At the moment you have to use the code from my GitHub repository (https://github.com/BlinkDL/RWKV-LM) to run it, with the following hyperparameters:
* ctx_len = 1024
* n_layer = 24
* n_embd = 2048
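
To confirm these hyperparameters against a downloaded checkpoint, a small sketch like the one below (assuming the .pth file is a plain PyTorch state dict; the exact parameter names are whatever the training code saved) simply lists every tensor with its shape and dtype. Per-layer tensors should repeat n_layer = 24 times, and the embedding width should be n_embd = 2048:

```python
# Hedged sketch: enumerate the checkpoint's parameters. Inspect the printout
# rather than relying on specific key names, which may differ between releases.
import torch

sd = torch.load("RWKV-4-Pile-1B5-20220903-8040.pth", map_location="cpu")

for name, param in sd.items():
    print(f"{name:60s} {tuple(param.shape)} {param.dtype}")
```
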

New checkpoint: RWKV-4-Pile-1B5-20220929-ctx4096.pth (fine-tuned to ctx_len = 4096).

Final checkpoint: RWKV-4-Pile-1B5-20220903-8040.pth (trained on the Pile for 332B tokens).
* Pile loss 2.0415
* LAMBADA ppl 7.04, acc 56.43%
* PIQA acc 72.36%
* SC2016 acc 68.73%
* Hellaswag acc_norm 52.48%

Preview checkpoint: RWKV-4-Pile-1B5-20220822-5809.pth (trained on the Pile for 240B tokens).
* Pile loss 2.0518
* LAMBADA ppl 7.14, acc 56.36%
* PIQA acc 71.71%
* SC2016 acc 68.15%
* Hellaswag acc_norm 52.04%

Preview checkpoint: RWKV-4-Pile-1B5-20220814-4526.pth (trained on the Pile for 187B tokens).
* Pile loss 2.0635
* LAMBADA ppl 7.34, acc 55.64%
* PIQA acc 71.44%
* SC2016 acc 68.25%
* Hellaswag acc_norm 51.60%

## Warning: the 4 / 4a / 4b models are NOT compatible with each other! Use RWKV-4 unless you know what you are doing.