Update README.md
README.md CHANGED
@@ -3624,6 +3624,10 @@ The `GME` models support three types of input: **text**, **image**, and **image-
|[`gme-Qwen2VL-7B`](https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-7B-Instruct) | 8.29B | 32768 | 3584 | - | 67.02 |

## Usage
+**Use Transformers**
+```
+
+```

**Use with custom code**
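The hunk above introduces a **Use Transformers** entry under `## Usage` alongside the existing **Use with custom code** path. As a rough, non-authoritative illustration of how a multimodal embedding model like this is typically called, the Python sketch below assumes a `GmeQwen2VL` helper class with `get_text_embeddings` and `get_image_embeddings` methods distributed with the checkpoint; only the repository name `Alibaba-NLP/gme-Qwen2-VL-7B-Instruct` comes from the README itself, and every other name is an assumption rather than a confirmed API.

```python
# Hypothetical usage sketch: the class and method names below are assumptions,
# not the model card's confirmed API.
import torch
from gme_inference import GmeQwen2VL  # assumed helper module shipped with the model repo

gme = GmeQwen2VL("Alibaba-NLP/gme-Qwen2-VL-7B-Instruct")  # repo name taken from the README

texts = ["a photo of a red sports car", "a bowl of fresh fruit on a table"]
images = ["car.jpg", "fruit.jpg"]  # local image paths; URL support is an assumption

# Single-modality embeddings (assumed method names).
text_emb = gme.get_text_embeddings(texts=texts)
image_emb = gme.get_image_embeddings(images=images)

# Retrieval score: cosine similarity, i.e. dot product of L2-normalized vectors.
text_emb = torch.nn.functional.normalize(text_emb, dim=-1)
image_emb = torch.nn.functional.normalize(image_emb, dim=-1)
print(text_emb @ image_emb.T)  # 2x2 text-to-image similarity matrix
```

A similar fused-embedding call for the third input type, image-text pairs, is presumably exposed by the same helper, but its exact signature is not confirmed here.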
@@ -3674,8 +3678,8 @@ We validated the performance on our universal multimodal retrieval benchmark (**
| One-Peace | 4B | 43.54 | 31.27 | 61.38 | 42.9 | 65.59 | 42.72 | 28.29 | 6.73 | 23.41 | 42.03 |
| DSE | 4.2B | 48.94 | 27.92 | 40.75 | 78.21 | 52.54 | 49.62 | 35.44 | 8.36 | 40.18 | 50.63 |
| E5-V | 8.4B | 52.41 | 27.36 | 46.56 | 41.22 | 47.95 | 54.13 | 32.9 | 23.17 | 7.23 | 42.48 |
-| **GME-
-| **GME-
+| **GME-Qwen2-VL-2B** | 2.2B | 55.93 | 29.86 | 57.36 | 87.84 | **61.93** | 76.47 | 64.58 | 37.02 | 66.47 | 64.45 |
+| **GME-Qwen2-VL-7B** | 8.3B | **58.19** | 31.89 | **61.35** | **89.92** | 60.83 | **80.94** | **66.18** | **42.56** | **73.62** | **67.02** |

The [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) English tab shows the text embedding performance of our model.
@@ -3693,14 +3697,13 @@ We will extend to multi-image input, image-text interleaved data as well as mult

## Redistribution and Use

-We
-
-
-
-you
-2. if you use the GME models or any outputs or results of them to create, train, fine tune, or otherwise improve an AI model,
-which is distributed or made available, you shall also include “GME” at the beginning of any such AI model name.
+We encourage and value diverse applications of GME models and continuous enhancements to the models themselves.
+
+- If you distribute or make GME models (or any derivative works) available, or if you create a product or service (including another AI model) that incorporates them, you must prominently display "Built with GME" on your website, user interface, blog post, "About" page, or product documentation.

+- If you utilize GME models or their outputs to develop, train, fine-tune, or improve an AI model that is distributed or made available, you must prefix the name of any such AI model with "GME".


## Citation