Update README.md
README.md
CHANGED
@@ -15,7 +15,15 @@ tags:
 
 # Aria-Base-8K Model Card
 
-
+<p align="center">
+    🔗 <a href="https://rhymes.ai/" target="_blank">Try Aria!</a> · 📖 <a href="https://www.rhymes.ai/blog-details/aria-first-open-multimodal-native-moe-model" target="_blank">Blog</a> · 📄 <a href="https://arxiv.org/pdf/2410.05993" target="_blank">Paper</a>
+    · ⭐ <a href="https://github.com/rhymes-ai/Aria" target="_blank">GitHub</a> · 🟣 <a href="https://discord.com/invite/u8HxU23myj" target="_blank">Discord</a>
+</p>
+
+
+This checkpoint is one of the base models of [Aria](https://huggingface.co/rhymes-ai/Aria), designed for research purposes as well as continued training. Aria-Base-8K corresponds to the model checkpoint after the multimodal pre-training stage (boxed in gray in the figure below).
+
+<img src="./aria-stages.png" alt="Aria Training Stages" style="width: 75%;">
 
 <!--
 - Aria is the **first open multimodal native MoE** model, capable of seamlessly handling various input modalities within a MoE architecture.
@@ -25,16 +33,12 @@ This model is a part of Aria-Base model series, designed for research studies an
 
 ## Aria-Base-8K
 
-- **
+- **Base Model After Pre-training**: This model is the checkpoint taken after the multimodal pre-training stage, which consumes 1.4T tokens (1T language + 400B multimodal). The stage runs for 43,000 iterations at a global batch size of 4,096, with all sequences packed to a length of 8,192 using Megatron-LM, while the learning rate decays from `8.75e-5` to `3.5e-5`. (A quick token-count check appears after the diff.)
 - **Appropriate for Continued Pre-training**: This model is released for continued pre-training, *e.g.* on domain-specific pre-training data (OCR, long-context, agent). For Aria, this checkpoint is further pre-trained on 64K long-context multimodal data, yielding [Aria-Base-64K](https://huggingface.co/teowu/Aria-Base-64K).
 - **Strong Base Performance on Language and Multimodal Scenarios**: This model shows excellent base performance on knowledge-related evaluations in both pure-language and multimodal scenarios (MMLU 70+, MMMU 50+, *etc.*).
 - ***Limited Ability on Long-context Scenarios***: This model is trained with only an 8K context length and is not expected to perform at its best on inputs longer than 8K tokens, *e.g.* a video with >100 frames (see the frame-budget sketch after the diff). [Aria-Base-64K](https://huggingface.co/teowu/Aria-Base-64K) is more appropriate for longer-sequence understanding.
 - ***Limited Chat Template Availability***: Only a small fraction of the training data (around 3%) is re-formatted with the chat template, so the model may not perform optimally when tested directly on chat-style benchmarks (see the raw-completion loading sketch after the diff).
 
-<p align="center">
-    🔗 <a href="https://rhymes.ai/" target="_blank">Try Aria!</a> · 📖 <a href="https://www.rhymes.ai/blog-details/aria-first-open-multimodal-native-moe-model" target="_blank">Blog</a> · 📄 <a href="https://arxiv.org/pdf/2410.05993" target="_blank">Paper</a>
-    · ⭐ <a href="https://github.com/rhymes-ai/Aria" target="_blank">GitHub</a> · 🟣 <a href="https://discord.com/invite/u8HxU23myj" target="_blank">Discord</a>
-</p>
 
 
 <!-- # Model Info
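
The 1.4T-token figure quoted in the **Base Model After Pre-training** bullet can be sanity-checked from the other numbers in the card. A minimal sketch, using only values stated above:

```python
# Sanity check of the Aria-Base-8K pre-training token budget,
# using only the numbers quoted in the model card above.
iterations = 43_000        # pre-training iterations in this stage
global_batch_size = 4_096  # sequences per iteration
sequence_length = 8_192    # all sequences packed to 8K via Megatron-LM

total_tokens = iterations * global_batch_size * sequence_length
print(f"~{total_tokens / 1e12:.2f}T tokens")
# ~1.44T, consistent with the quoted ~1.4T (1T language + 400B multimodal)
```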
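For the long-context bullet, here is a back-of-envelope sketch of why a video with >100 frames strains an 8K window. The tokens-per-frame values are assumptions for illustration only; the actual visual token cost per frame depends on Aria's vision encoder settings and the frame resolution.

```python
# Back-of-envelope: how many video frames fit into an 8K context window.
# tokens_per_frame values are ASSUMPTIONS for illustration; the real
# per-frame visual token count depends on encoder settings and resolution.
context_length = 8_192

for tokens_per_frame in (64, 128, 256):
    max_frames = context_length // tokens_per_frame
    print(f"{tokens_per_frame:>3} tokens/frame -> ~{max_frames} frames max, before any text tokens")
```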
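Since almost none of the training data uses the chat template, the natural way to probe this checkpoint is raw text completion rather than chat. Below is a minimal, hypothetical loading sketch with 🤗 Transformers: the repo id `rhymes-ai/Aria-Base-8K` is an assumption (substitute the actual id for this card), `trust_remote_code=True` follows the other Aria releases, and the prompt and generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rhymes-ai/Aria-Base-8K"  # assumed repo id for this card

# Aria releases ship custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Base-model usage: feed raw text (no chat template) and let it continue.
prompt = "Mixture-of-experts language models route each token to"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```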