# AceGPT
AceGPT is a fully fine-tuned generative text model collection, particularly focused on the Arabic language domain.
This is the repository for version 2 of the 32B pre-trained model, developed based on Qwen1.5-32B.
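
Since the AceGPT-v2 checkpoints follow the standard Hugging Face layout, loading them with the `transformers` library should be routine. The snippet below is a minimal sketch, not an official usage guide: the repo id `FreedomIntelligence/AceGPT-v2-32B`, the dtype choice, and the generation settings are assumptions inferred from this card's naming, not confirmed by it.

```python
# Minimal sketch of loading an AceGPT base model with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/AceGPT-v2-32B"  # assumed repo id, following this card's naming

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so a 32B model fits in less memory
    device_map="auto",           # shard layers across available GPUs
)

# Base (non-chat) models are plain text completers, so prompt with raw text.
prompt = "العاصمة السعودية هي"  # Arabic: "The Saudi capital is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```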
---
## Model Details
We have released the AceGPT family of large language models: a collection of fully fine-tuned generative text models ranging from 7B to 70B parameters. The family includes two main categories, AceGPT and AceGPT-chat, where AceGPT-chat is an optimized version designed specifically for dialogue applications. Our models have demonstrated superior performance to all currently available open-source Arabic dialogue models across multiple benchmarks.
## Model Developers
We are from the King Abdullah University of Science and Technology (KAUST), the Chinese University of Hong Kong, Shenzhen (CUHKSZ), the Shenzhen Research Institute of Big Data (SRIBD), and King Abdulaziz University (KAU).
## Variations
The AceGPT family comes in a range of parameter sizes (7B, 8B, 13B, 32B, and 70B); each size is available as a base model and as a -chat model.
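
For the -chat variants, prompting goes through the tokenizer's chat template rather than raw text completion. The sketch below is hypothetical: the repo id `FreedomIntelligence/AceGPT-v2-32B-Chat` follows the family's naming but is not linked from this card, and it assumes the tokenizer ships a chat template, as the underlying Qwen1.5 tokenizers do.

```python
# Hypothetical sketch of querying a -chat variant through its chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

chat_id = "FreedomIntelligence/AceGPT-v2-32B-Chat"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(chat_id)
model = AutoModelForCausalLM.from_pretrained(chat_id, device_map="auto")

# Arabic: "What is the capital of Saudi Arabia?"
messages = [{"role": "user", "content": "ما هي عاصمة السعودية؟"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```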
## Paper
The paper can be accessed at [link](https://huggingface.co/FreedomIntelligence/AceGPT-v2-70B-Chat/blob/main/Alignment_at_Pre_training__a_Case_Study_of_Aligning_LLMs_in_Arabic.pdf).
## Input