7B AWQ Collection
These models were selected for their compatibility with small GPUs that have 12 GB of memory.
204 items
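AWQ checkpoints can typically be loaded through the standard `transformers` API once the `autoawq` package is installed. Below is a minimal sketch; the model id is a hypothetical placeholder standing in for any 7B AWQ checkpoint in this collection.

```python
# Minimal AWQ loading sketch, assuming `transformers` and `autoawq` are
# installed. "some-org/model-7B-AWQ" is a placeholder, not a real checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/model-7B-AWQ"  # hypothetical: pick any model from this collection
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # AWQ kernels run in fp16
    device_map="auto",          # fits a quantized 7B model onto a 12 GB GPU
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```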
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean, useful tokens that align with the desired distribution.
Base model: microsoft/rho-math-7b-interpreter-v0.1
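The selection idea behind SLM can be illustrated with a short sketch, not Microsoft's actual training code: score each token by the excess loss between the training model and a frozen reference model, then backpropagate only through the top-scoring fraction. The `keep_ratio` of 0.6 and the function name are illustrative assumptions.

```python
# Sketch of Selective Language Modeling (SLM) token selection, assuming
# `model` and a frozen `ref_model` are causal LMs returning per-token logits.
import torch
import torch.nn.functional as F

def slm_loss(model, ref_model, input_ids, keep_ratio=0.6):
    labels = input_ids[:, 1:]                       # next-token targets
    logits = model(input_ids).logits[:, :-1]        # training-model predictions
    with torch.no_grad():
        ref_logits = ref_model(input_ids).logits[:, :-1]

    # Per-token cross-entropy for both models (shape: batch x seq).
    ce = F.cross_entropy(logits.transpose(1, 2), labels, reduction="none")
    ref_ce = F.cross_entropy(ref_logits.transpose(1, 2), labels, reduction="none")

    # "Excess loss" scores how much headroom the training model has on a
    # token relative to the reference model; high-scoring tokens are kept.
    excess = ce - ref_ce
    k = max(1, int(excess.numel() * keep_ratio))
    threshold = excess.flatten().topk(k).values.min()
    mask = (excess >= threshold).float()

    # Average the training loss over the selected tokens only.
    return (ce * mask).sum() / mask.sum()
```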