Update README.md

Changes to README.md:
```diff
@@ -34,7 +34,7 @@ After undergoing 4-bit quantization, the CodeFuse-DeepSeek-33B-4bits model can b
 
 🔥🔥 2023-09-27 CodeFuse-StarCoder-15B has been released, achieving a pass@1 (greedy decoding) score of 54.9% on HumanEval, which is a 21% increase compared to StarCoder's 33.6%.
 
-🔥🔥🔥 2023-09-26 We are pleased to announce the release of the [4-bit quantized version](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B-4bits) of [CodeFuse-CodeLlama-34B](https://
+🔥🔥🔥 2023-09-26 We are pleased to announce the release of the [4-bit quantized version](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B-4bits) of [CodeFuse-CodeLlama-34B](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B). Despite the quantization process, the model still achieves a remarkable 73.8% accuracy (greedy decoding) on the HumanEval pass@1 metric.
 
 🔥🔥🔥 2023-09-11 [CodeFuse-CodeLlama34B](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B-4bits) has achieved 74.4% of pass@1 (greedy decoding) on HumanEval, which is SOTA results for open-sourced LLMs at present.
 
```
```diff
@@ -162,7 +162,7 @@ os.environ["TOKENIZERS_PARALLELISM"] = "false"
 
 def load_model_tokenizer(model_path):
     """
-    Load model and tokenizer based on the given model name or local path of downloaded model.
+    Load model and tokenizer based on the given model name or local path of the downloaded model.
     """
     tokenizer = AutoTokenizer.from_pretrained("codefuse-ai/CodeFuse-DeepSeek-33B-4bits",
                                               trust_remote_code=True,
```
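The `load_model_tokenizer` hunk above shows only the first arguments of the tokenizer call. For readers following along, here is a minimal, self-contained sketch of how such a loader could be completed with the Hugging Face transformers API; the `AutoModelForCausalLM` call, dtype, and `device_map` below are illustrative assumptions, not the README's exact code. Note also that the snippet in the diff pins the tokenizer to the `codefuse-ai/CodeFuse-DeepSeek-33B-4bits` repo id rather than using its `model_path` argument; the sketch uses the argument.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def load_model_tokenizer(model_path):
    """
    Load model and tokenizer based on the given model name
    or local path of the downloaded model.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_path,
                                              trust_remote_code=True)
    # Assumption: a plain causal-LM load; the actual README may configure
    # eos/pad tokens, dtype, or quantization kwargs differently.
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 torch_dtype=torch.float16,
                                                 device_map="auto")  # needs `accelerate`
    model.eval()
    return model, tokenizer
```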
```diff
@@ -396,7 +396,7 @@ def load_model_tokenizer(model_path):
 
 def inference(model, tokenizer, prompt):
     """
-    Uset the given model and tokenizer to generate an answer for the
+    Use the given model and tokenizer to generate an answer for the specified prompt.
     """
     st = time.time()
     prompt = prompt if prompt.endswith('\n') else f'{prompt}\n'
```
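The `inference` hunk stops right after the prompt-normalization line. As a companion to the diff, here is a sketch of how the rest of such a function might look, assuming greedy decoding via `model.generate` (consistent with the greedy-decoding pass@1 numbers quoted above); `max_new_tokens`, the pad-token fallback, and the timing printout are illustrative assumptions rather than the README's actual code.

```python
import time

def inference(model, tokenizer, prompt):
    """
    Use the given model and tokenizer to generate an answer for the
    specified prompt.
    """
    st = time.time()
    # Ensure the prompt ends with a newline, as in the README snippet.
    prompt = prompt if prompt.endswith('\n') else f'{prompt}\n'
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # Assumption: greedy decoding; fall back to eos as the pad token
    # when the tokenizer does not define one.
    pad_id = tokenizer.pad_token_id
    if pad_id is None:
        pad_id = tokenizer.eos_token_id
    outputs = model.generate(**inputs,
                             max_new_tokens=512,
                             do_sample=False,
                             pad_token_id=pad_id)
    # Decode only the newly generated tokens, not the echoed prompt.
    answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                              skip_special_tokens=True)
    print(f"Inference took {time.time() - st:.2f}s")
    return answer
```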