Initial GGUF model commit
README.md CHANGED
@@ -204,6 +204,10 @@ This is Transformers/HF format fp16 weights for CodeLlama 7B-Instruct. It is th
 
 Quantisations will be coming shortly.
 
+Please note that due to a change in the RoPE Theta value, for correct results you must load these FP16 models with `trust_remote_code=True`
+
+Credit to @emozilla for creating the necessary modelling code to achieve this!
+
 ## Prompt template: TBC
 
 
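For context, the added note means the fp16 weights must be loaded with the custom modelling code enabled so the changed RoPE Theta value is picked up. Below is a minimal sketch of such a load using the standard `transformers` API; the repo id is a hypothetical placeholder, not confirmed by this commit, so substitute the actual model path.

```python
# Sketch: loading the fp16 weights with trust_remote_code=True so the
# custom RoPE Theta modelling code is used. The repo id is an assumption
# for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/CodeLlama-7B-Instruct-fp16"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # keep the weights in fp16
    device_map="auto",           # spread layers across available devices
    trust_remote_code=True,      # required per the README note above
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Omitting `trust_remote_code=True` would fall back to the stock modelling code and, per the note, produce incorrect results with these weights.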