Update README.md
This checkpoint is one of the base models of [Aria](https://huggingface.co/rhymes-ai/Aria), released for research purposes as well as for continued training. Specifically, Aria-Base-8K corresponds to the model checkpoint after the multimodal pre-training stage (boxed in gray in the figure below).
<img src="./aria-stages.png" alt="Aria Training Stages" style="width: 100%;">
<!--
- Aria is the **first open multimodal native MoE** model, capable of seamlessly handling various input modalities within a MoE architecture.
## Aria-Base-8K
- **Base Model After Pre-training**: This model corresponds to the checkpoint after the multimodal pre-training stage, which covers 1.4T tokens (1T language + 400B multimodal). The stage runs for 43,000 iterations with Megatron-LM, with all sequences packed to a length of 8192 and a global batch size of 4096; the learning rate decays from `8.75e-5` to `3.5e-5` over the stage (see the quick token-count check after this list).
- **Appropriate for Continued Pre-training**: This model is recommended for continued pre-training, *e.g.* on domain-specific data (OCR, agent, multi-lingual), as long as the target scenario does not involve long-context inputs (a minimal loading sketch follows this list). For long-context scenarios, please consider fine-tuning [Aria-Base-64K](https://huggingface.co/teowu/Aria-Base-64K) instead.
- **Strong Base Performance on Language and Multimodal Scenarios**: This model shows excellent base performance on knowledge-related evaluations in both pure-language and multimodal scenarios (MMLU 70+, MMMU 50+, *etc.*).
- ***Limited Ability on Long-context Scenarios***: This model is trained only with an 8K context length and is not expected to perform at its best on inputs longer than 8K tokens (*e.g.* a video with >100 frames). [Aria-Base-64K](https://huggingface.co/teowu/Aria-Base-64K) is more appropriate for longer sequence understanding.
- ***Limited Chat Template Availability***: Only a small fraction of the training data (around 3%) is re-formatted with the chat template, so this model may not perform optimally when used directly for chat.
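
As a quick sanity check on the numbers quoted above, the token budget of this stage can be reproduced with simple arithmetic. The sketch below only re-uses the figures from the list; it is not taken from the actual training configuration.

```python
# Back-of-the-envelope check of the pre-training token budget quoted above.
# All numbers come from the bullet list; this is not the actual training code.
seq_len = 8192        # packed sequence length
global_batch = 4096   # global batch size (sequences per iteration)
iterations = 43_000   # iterations in the multimodal pre-training stage

tokens_per_iter = seq_len * global_batch     # 33,554,432 ≈ 33.6M tokens per iteration
total_tokens = tokens_per_iter * iterations  # ≈ 1.44e12, i.e. roughly the quoted 1.4T

print(f"{tokens_per_iter / 1e6:.1f}M tokens/iter, {total_tokens / 1e12:.2f}T tokens total")
```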
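
For continued pre-training or a quick smoke test, the following is a minimal loading sketch. It assumes this checkpoint exposes the same `transformers` remote-code interface as the main [Aria](https://huggingface.co/rhymes-ai/Aria) model (`AutoModelForCausalLM` / `AutoTokenizer` with `trust_remote_code=True`); the model id below is a placeholder, so please substitute this repository's actual path and verify the interface against the files here. Following the chat-template note above, the prompt is a plain completion rather than a chat-formatted message.

```python
# Minimal loading / smoke-test sketch (assumption: same remote-code interface
# as rhymes-ai/Aria; replace the model id with this repository's actual path).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rhymes-ai/Aria-Base-8K"  # placeholder id

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Base model: use a plain completion-style prompt (only ~3% of training data
# was chat-formatted, so the chat template is not the recommended entry point).
inputs = tokenizer("The Eiffel Tower is located in", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```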