Suparious committed on
Commit
e57e4be
1 Parent(s): 27e7c28

Create README.md

Files changed (1): README.md (+27, -0)
---
library_name: transformers
license: mit
tags:
- mistral
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
- nlp
- math
language:
- en
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# microsoft/rho-math-7b-interpreter-v0.1 AWQ

- Model creator: [microsoft](https://huggingface.co/microsoft)
- Original model: [rho-math-7b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1)

## Model summary

Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, selectively training on clean, useful tokens that align with the desired distribution.
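
The frontmatter tags the model `chatml`, so prompts presumably follow the ChatML turn format; the helper below is a minimal sketch of building such a prompt (the function name is hypothetical, and the exact template should be confirmed against the model's tokenizer chat template):

```python
def format_chatml(messages):
    """Render a list of {"role", "content"} dicts as a ChatML prompt string."""
    parts = []
    for msg in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> delimiters
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Open an assistant turn to cue the model to generate its reply
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = format_chatml([{"role": "user", "content": "What is 2+2?"}])
```

In practice, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` on the loaded tokenizer is the safer route, since it uses the template shipped with the model rather than a hand-written one.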