Update README.md
README.md CHANGED
@@ -25,6 +25,8 @@ save_file(tensors, safetensors_path, metadata={'format': 'pt'})
 ```
 (from https://huggingface.co/SeaLLMs/SeaLLM-7B-Hybrid/discussions/2#65752144412ee70185d49ff5)
 
+## Original model card:
+
 32k context finetune of TinyLlama-1.1B using increased rope theta (rope frequency base) meant to serve as a long-context speculative decoding model.
 
 Created using [TinyLlama-1.1B](https://huggingface.co/TinyLlama/tinyLlama-intermediate-checkpoints-after-1T-token) and further pretraining at 32768 context length on [togethercomputer/RedPajama-Data-1T-Sample](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample).
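
For context, the `save_file` call visible in the hunk header comes from the safetensors library, and the `{'format': 'pt'}` metadata marks the file as a PyTorch checkpoint. A minimal, self-contained sketch of that call; the tensor dict and output path are illustrative placeholders, not from the diff:

```python
# Minimal sketch: save a PyTorch state dict as safetensors with the same
# metadata shown in the diff's hunk header. The tensor contents and the
# output filename here are illustrative placeholders.
import torch
from safetensors.torch import save_file

tensors = {"embed.weight": torch.zeros((4, 8))}  # placeholder state dict
save_file(tensors, "model.safetensors", metadata={"format": "pt"})
```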
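The card's key technical claim is that the 32k context comes from raising rope theta (the RoPE frequency base) and continuing pretraining at the longer length, not from an architecture change. A hedged sketch of what such an override looks like when loading a Llama-family model with transformers; the `rope_theta` value is an assumed illustration, not this model's actual training setting, and the repo id from the card's link may require a specific revision:

```python
# Sketch only: load a Llama-architecture model with an increased RoPE
# frequency base and extended positions. The rope_theta value is an
# assumed illustration; the repo id is the base model linked in the card
# and may need a specific branch/revision to resolve.
from transformers import AutoConfig, AutoModelForCausalLM

repo = "TinyLlama/tinyLlama-intermediate-checkpoints-after-1T-token"
config = AutoConfig.from_pretrained(repo)
config.rope_theta = 1_000_000.0          # raised RoPE frequency base (assumed value)
config.max_position_embeddings = 32768   # 32k context, per the card

model = AutoModelForCausalLM.from_pretrained(repo, config=config)
```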