yi-01-ai committed
Commit b56e0cb
Parent(s): 2a700c3
Auto Sync from git://github.com/01-ai/Yi.git/commit/2f525eccdf3e5ec10c7193f844a8f0fb7137b901

README.md CHANGED
@@ -131,6 +131,25 @@ sequence length and can be extended to 32K during inference time.
 
 </details>
 
+## Ecosystem
+
+🤗 You are encouraged to create a PR and share your awesome work built on top of
+the Yi series models.
+
+- Serving
+  - [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): Efficiently run Yi models locally.
+- Quantization
+  - [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF)
+  - [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ)
+- Finetuning
+  - [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B)
+  - [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): This
+    model ranks first among all models below 70B and has outperformed the twice
+    larger
+    [deepseek-llm-67b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat).
+    You can check the result in [🤗 Open LLM
+    Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
+
 ## Model Performance
 
 ### Base Model Performance
@@ -411,19 +430,6 @@ python quantization/awq/eval_quantized_model.py \
 
 For more detailed explanation, please read the [doc](https://github.com/01-ai/Yi/tree/main/quantization/awq)
 
-## Ecosystem
-
-🤗 You are encouraged to create a PR and share your awesome work built on top of
-the Yi series models.
-
-- Serving
-  - [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): Efficiently run Yi models locally.
-- Quantization
-  - [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF)
-  - [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ)
-- Finetuning
-  - [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B)
-
 ## FAQ
 
 1. **What dataset was this trained with?**