MaziyarPanahi committed
Commit 941430d
Parent(s): 5e17e60
Update README.md (#10)

README.md CHANGED
@@ -14,7 +14,7 @@ tags:
 - GGUF
 inference: false
 model_creator: MaziyarPanahi
-model_name:
+model_name: calme-2.1-qwen2-72b-GGUF
 quantized_by: MaziyarPanahi
 license: other
 license_name: tongyi-qianwen
@@ -22,15 +22,15 @@ license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
 ---
 
 
-# MaziyarPanahi/
+# MaziyarPanahi/calme-2.1-qwen2-72b-GGUF
 
-The GGUF and quantized models here are based on [MaziyarPanahi/
+The GGUF and quantized models here are based on [MaziyarPanahi/calme-2.1-qwen2-72b](https://huggingface.co/MaziyarPanahi/calme-2.1-qwen2-72b) model
 
 ## How to download
 You can download only the quants you need instead of cloning the entire repository as follows:
 
 ```
-huggingface-cli download MaziyarPanahi/
+huggingface-cli download MaziyarPanahi/calme-2.1-qwen2-72b-GGUF --local-dir . --include '*Q2_K*gguf'
 ```
 
 ## Load GGUF models
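The `--include` filter in the hunk above fetches a single quant file. For reference, the same selective download can be scripted from Python with `huggingface_hub`; this is a minimal sketch, and rather than assuming an exact filename it lists the repo's files to find a `Q2_K` quant:

```python
# Minimal sketch: fetch one quant from the GGUF repo instead of cloning it.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "MaziyarPanahi/calme-2.1-qwen2-72b-GGUF"

# Find whichever Q2_K .gguf file(s) the repo actually contains.
q2k_files = [f for f in list_repo_files(repo_id)
             if "Q2_K" in f and f.endswith(".gguf")]

# Download the first match into the current directory.
local_path = hf_hub_download(repo_id=repo_id, filename=q2k_files[0], local_dir=".")
print(local_path)
```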
@@ -49,13 +49,13 @@ You `MUST` follow the prompt template provided by Llama-3:
 
 ---
 
-# MaziyarPanahi/
+# MaziyarPanahi/calme-2.1-qwen2-72b
 
 This is a fine-tuned version of the `Qwen/Qwen2-72B-Instruct` model. It aims to improve the base model across all benchmarks.
 
 # ⚡ Quantized GGUF
 
-All GGUF models are available here: [MaziyarPanahi/
+All GGUF models are available here: [MaziyarPanahi/calme-2.1-qwen2-72b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.1-qwen2-72b-GGUF)
 
 # 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 
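The hunk header above quotes the card's "Load GGUF models" instructions and their prompt-template requirement. A minimal loading sketch with `llama-cpp-python`, assuming a locally downloaded quant (the filename below is hypothetical); `create_chat_completion` applies the chat template stored in the GGUF metadata, so the required template is used automatically:

```python
# Minimal sketch: load a downloaded quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./calme-2.1-qwen2-72b.Q2_K.gguf",  # hypothetical local filename
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

# create_chat_completion formats messages with the model's chat template.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}]
)
print(out["choices"][0]["message"]["content"])
```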
@@ -106,7 +106,7 @@ from transformers import pipeline
 messages = [
     {"role": "user", "content": "Who are you?"},
 ]
-pipe = pipeline("text-generation", model="MaziyarPanahi/
+pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.1-qwen2-72b")
 pipe(messages)
 
 
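The corrected `pipeline` call above loads the full-precision model, which at 72B parameters generally will not fit on a single consumer GPU. A hedged variant using standard `transformers` arguments (not something this card prescribes) that shards the weights automatically:

```python
# Sketch: the same pipeline with automatic dtype selection and sharding.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="MaziyarPanahi/calme-2.1-qwen2-72b",
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # shard the 72B weights across available devices
)
print(pipe([{"role": "user", "content": "Who are you?"}], max_new_tokens=64))
```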
@@ -114,6 +114,6 @@ pipe(messages)
 
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/
-model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/
+tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.1-qwen2-72b")
+model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.1-qwen2-72b")
 ```
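The final hunk completes the `from_pretrained` calls. A minimal end-to-end generation sketch built on them; the chat-template usage and generation settings are illustrative rather than taken from the card:

```python
# Sketch: generate a reply with the restored from_pretrained calls.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MaziyarPanahi/calme-2.1-qwen2-72b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Build the prompt with the tokenizer's chat template, then generate.
messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```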