---
library_name: transformers
license: mit
tags:
- mistral
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
- nlp
- math
language:
- en
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
|
# microsoft/rho-math-7b-interpreter-v0.1 AWQ

- Model creator: [microsoft](https://huggingface.co/microsoft)
- Original model: [rho-math-7b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1)
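
## How to use

A minimal inference sketch, assuming a CUDA GPU with `autoawq`, `transformers`, and `accelerate` installed, and that the tokenizer ships the ChatML chat template the tags advertise. The repository id below is a placeholder for this quantized repo, not a confirmed path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/rho-math-7b-interpreter-v0.1-AWQ"  # placeholder, not a confirmed path

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# transformers dispatches to the AWQ kernels automatically when the
# checkpoint's quantization_config marks the weights as AWQ-quantized.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# The card tags the model as ChatML, so build the prompt via the chat template.
messages = [{"role": "user", "content": "What is the integral of x^2 from 0 to 3?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```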

## Model summary

Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on the clean and useful tokens that align with the desired distribution, rather than computing the loss over every token in the corpus.
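
As a rough illustration of the idea, here is a sketch of the excess-loss token selection described in the Rho-1 paper, not Microsoft's actual training code; the function name and keep ratio are made up for the example.

```python
import torch
import torch.nn.functional as F

def slm_loss(train_logits, ref_logits, labels, keep_ratio=0.6):
    """Selective LM loss sketch: back-propagate cross-entropy only through
    the top `keep_ratio` fraction of tokens, ranked by excess loss
    (training-model loss minus reference-model loss)."""
    vocab = train_logits.size(-1)
    flat_labels = labels.reshape(-1)
    # Per-token losses for the model being trained (keeps gradients).
    train_ce = F.cross_entropy(
        train_logits.reshape(-1, vocab), flat_labels, reduction="none"
    )
    with torch.no_grad():
        # Per-token losses under a reference model trained on clean data.
        ref_ce = F.cross_entropy(
            ref_logits.reshape(-1, vocab), flat_labels, reduction="none"
        )
        # Tokens where the training model lags the reference most are
        # treated as the clean, useful ones worth learning from.
        k = max(1, int(keep_ratio * train_ce.numel()))
        keep_idx = torch.topk(train_ce - ref_ce, k).indices
    return train_ce[keep_idx].mean()
```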