
microsoft/rho-math-7b-interpreter-v0.1 AWQ

Model summary

Rho-1 base models employ Selective Language Modeling (SLM) for pretraining: rather than training on every token, SLM selectively trains on the clean, useful tokens that align with the desired distribution. This repository provides an AWQ (Activation-aware Weight Quantization) quantization of microsoft/rho-math-7b-interpreter-v0.1.
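
Usage

A minimal inference sketch (untested), assuming a recent transformers release with AWQ support and the autoawq and accelerate packages installed; the prompt is only an illustration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/rho-math-7b-interpreter-v0.1-AWQ"

# transformers detects the AWQ quantization config in the checkpoint
# and loads the packed 4-bit weights (requires the autoawq package).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Question: What is 15 * 37?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```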

Safetensors metadata: 1.2B params, tensor types I32 and FP16. (AWQ packs the 4-bit weights into INT32 tensors, so the reported parameter count is much lower than the base model's 7B.)

Model tree: solidrust/rho-math-7b-interpreter-v0.1-AWQ is one of four quantized variants of the base model microsoft/rho-math-7b-interpreter-v0.1.
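
As context for the model tree, a sketch of how an AWQ quantization of the base model is typically produced with the AutoAWQ library; the quant_config values below are common AutoAWQ defaults and are an assumption, not the confirmed recipe used for this repository.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base_id = "microsoft/rho-math-7b-interpreter-v0.1"
out_dir = "rho-math-7b-interpreter-v0.1-AWQ"

# Typical 4-bit AWQ settings (assumed, not confirmed for this repo).
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Calibrate on AutoAWQ's default calibration set and pack the weights.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(out_dir)
tokenizer.save_pretrained(out_dir)
```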
