---
library_name: transformers
license: mit
tags:
- mistral
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
- nlp
- math
language:
- en
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# microsoft/rho-math-7b-interpreter-v0.1 AWQ
- Model creator: [microsoft](https://huggingface.co/microsoft)
- Original model: [rho-math-7b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1)
## Model summary
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining: instead of training on every token, SLM selectively trains on the clean, useful tokens that align with the desired distribution.
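
As an illustration only (this is not Microsoft's released code), the core SLM idea can be sketched as a token-selection loss: score each token by the gap between the training model's loss and a frozen reference model's loss, then backpropagate only through the top-scoring fraction. The function names and `keep_ratio` default below are hypothetical.

```python
import torch
import torch.nn.functional as F

def per_token_ce(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Per-token cross-entropy over the flattened batch (no reduction).
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)), labels.view(-1), reduction="none"
    )

def slm_loss(train_logits, ref_logits, labels, keep_ratio=0.6):
    # ref_logits come from a frozen reference model (computed under no_grad).
    train_ce = per_token_ce(train_logits, labels)
    ref_ce = per_token_ce(ref_logits, labels).detach()

    # "Excess loss": tokens where the training model lags the reference
    # are treated as the most useful to learn from.
    excess = train_ce.detach() - ref_ce
    k = max(1, int(keep_ratio * excess.numel()))
    selected = excess.topk(k).indices

    # Backpropagate only through the selected tokens.
    return train_ce[selected].mean()
```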
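## How to use
Below is a minimal usage sketch, not part of the original card. It assumes the quantized checkpoint lives at the `solidrust/rho-math-7b-interpreter-v0.1-AWQ` repo id (substitute the actual path if it differs), that `autoawq` is installed so `transformers` can load the 4-bit AWQ weights, and that the tokenizer ships a ChatML chat template as the card's tags suggest.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for this quantized checkpoint.
model_id = "solidrust/rho-math-7b-interpreter-v0.1-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers loads AWQ checkpoints natively when autoawq is installed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The card tags ChatML, so format the prompt via the chat template.
messages = [{"role": "user", "content": "Solve for x: 2x + 3 = 11."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```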