Commit 9dc8165 · committed by pom
1 parent: 415f8f7

update readme

README.md CHANGED
@@ -18,7 +18,7 @@ inference: false
 **[2024/01/16]** Released the long-sequence model **XVERSE-13B-256K**. This model version supports a maximum window length of 256K, accommodating approximately 250,000 words for tasks such as literature summarization and report analysis.
 **[2023/11/06]** New versions of the **XVERSE-13B-2** base model and the **XVERSE-13B-Chat-2** model have been released. Compared with the original versions, the new models have undergone more extensive training (increasing from 1.4T to 3.2T), resulting in significant improvements across all capabilities, along with the addition of Function Call abilities.
 **[2023/09/26]** Released the [XVERSE-7B](https://github.com/xverse-ai/XVERSE-7B) base model and the [XVERSE-7B-Chat](https://github.com/xverse-ai/XVERSE-7B) instruct-finetuned model at the 7B size, which support deployment and operation on a single consumer-grade graphics card while maintaining high performance; both are fully open source and free for commercial use.
-**[2023/08/22]** Released the aligned instruct-finetuned model XVERSE-13B-Chat.
+**[2023/08/22]** Released the aligned instruct-finetuned model XVERSE-13B-Chat.
 **[2023/08/07]** Released the XVERSE-13B base model.

 ## Model Introduction