t1101675 committed
Commit 2784e1c
1 Parent(s): 2a60695

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -15,7 +15,7 @@ pipeline_tag: text-generation
 
 [paper](https://arxiv.org/abs/2410.17215) | [code](https://github.com/thu-coai/MiniPLM)
 
-**MiniPLM-Mamba-130M** is a 130M model with the [Mamba achitecture](https://github.com/state-spaces/mamba) pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) using the MiniPLM knowledge distillation framework with the [offcial QWen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B) as the teacher model.
+**MiniPLM-Mamba-130M** is a 130M model with the [Mamba architecture](https://github.com/state-spaces/mamba) pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) using the MiniPLM knowledge distillation framework with the [official Qwen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B) as the teacher model.
 This model shows the flexibility of the MiniPLM framework in conducting knowledge distillation across model families.
 
 We also open-source the [pre-training corpus](https://huggingface.co/datasets/MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5) refined by Difference Sampling in MiniPLM for reproducibility.