elinas committed
Commit 7bafe10
Parent(s): 2971ec1

Update README.md

Files changed (1)
  1. README.md (+2 −4)
README.md CHANGED

@@ -10,9 +10,7 @@ tags:
 
 # chronos-13b-v2
 
-This is the FP16 PyTorch / HF version of **chronos-13b-v2** based on the **LLaMA v2** model.
-
-Only use this version for further quantization or if you would like to run in full precision, as long as you have the VRAM required.
+This is the 4bit GPTQ of **chronos-13b-v2** based on the **LLaMA v2** model. It works with Exllama and AutoGPTQ.
 
 This model is primarily focused on chat, roleplay, storywriting, with good reasoning and logic.
 
@@ -27,7 +25,7 @@ Your instruction or question here.
 Not using the format will make the model perform significantly worse than intended.
 
 ## Other Versions
-[4bit GPTQ Quantized version](https://huggingface.co/elinas/chronos-13b-v2-GPTQ)
+[Original FP16 Model](https://huggingface.co/elinas/chronos-13b-v2)
 
 [GGML Versions provided by @TheBloke]()
 