codefuse-admin committed
Commit
3d22823
1 Parent(s): d0e33ba

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -16,7 +16,7 @@ tasks:
 
 CodeFuse-DeepSeek-33B-4bits is the 4-bit quantized version of [CodeFuse-DeepSeek-33B](https://modelscope.cn/models/codefuse-ai/CodeFuse-DeepSeek-33B/summary) which is a 33B Code-LLM finetuned by QLoRA on multiple code-related tasks on the base model DeepSeek-Coder-33B.
 
-fter undergoing 4-bit quantization, the CodeFuse-DeepSeek-33B-4bits model can be loaded on either a single A10 (24GB VRAM) or an RTX 4090 (24GB VRAM). Moreover, the quantized model still achives an impressive accuracy of 78.05% on the Humaneval pass@1 metric.
+After undergoing 4-bit quantization, the CodeFuse-DeepSeek-33B-4bits model can be loaded on either a single A10 (24GB VRAM) or an RTX 4090 (24GB VRAM). Moreover, the quantized model still achives an impressive accuracy of 78.05% on the Humaneval pass@1 metric.
 
 <br>
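The README paragraph being patched says the 4-bit model fits on a single 24 GB GPU. A minimal loading sketch under assumptions: it uses ModelScope's `AutoModelForCausalLM`/`AutoTokenizer` with `trust_remote_code=True` and `device_map="auto"`, which is a common pattern for quantized ModelScope models; the exact arguments this model requires are not stated in the diff, so treat the model card as authoritative.

```python
# Hedged sketch: loading CodeFuse-DeepSeek-33B-4bits from ModelScope.
# The loading kwargs below are assumptions, not taken from the README diff.
MODEL_ID = "codefuse-ai/CodeFuse-DeepSeek-33B-4bits"  # hypothetical model id

def load_kwargs(device_map: str = "auto") -> dict:
    """Keyword arguments for loading the 4-bit quantized checkpoint.

    With 4-bit weights, a single 24 GB GPU (A10 or RTX 4090) should
    hold the model, per the README paragraph in this commit.
    """
    return {
        "trust_remote_code": True,   # quantized model ships custom code
        "device_map": device_map,    # let accelerate place the layers
    }

def load_model():
    """Download and load tokenizer + model (heavy: ~33B params in 4-bit)."""
    # Deferred import so the helpers above stay usable without modelscope.
    from modelscope import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, **load_kwargs())
    return tokenizer, model
```

Deferring the `modelscope` import keeps the snippet importable on machines without the dependency; calling `load_model()` triggers the actual multi-gigabyte download.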