yifanzhang114 committed on
Commit 741cb88
1 Parent(s): 5bfb40c

Update README.md

Files changed (1)
  1. README.md +7 -5
README.md CHANGED
@@ -9,11 +9,9 @@ language:
 size_categories:
 - 100B<n<1T
 ---
-
-* **`2024.09.03`** 🌟 MME-RealWorld is now supported in the [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) repository, enabling one-click evaluation—give it a try!
-
-* **`2024.09.01`** 🌟 Qwen2-VL currently ranks first on our leaderboard, but its overall accuracy remains below 55%; see our [leaderboard](https://mme-realworld.github.io/home_page.html#leaderboard) for details.
-
+* **`2024.11.14`** 🌟 MME-RealWorld now has a [lite version](https://huggingface.co/datasets/yifanzhang114/MME-RealWorld-Lite) (50 samples per task) for inference acceleration, which is also supported by VLMEvalKit and Lmms-eval.
+* **`2024.10.27`** 🌟 LLaVA-OV currently ranks first on our leaderboard, but its overall accuracy remains below 55%; see our [leaderboard](https://mme-realworld.github.io/home_page.html#leaderboard) for details.
+* **`2024.09.03`** 🌟 MME-RealWorld is now supported in the [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) and [Lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) repositories, enabling one-click evaluation—give it a try!
 * **`2024.08.20`** 🌟 We are very proud to launch MME-RealWorld, which contains 13K high-quality images, annotated by 32 volunteers, resulting in 29K question-answer pairs that cover 43 subtasks across 5 real-world scenarios. As far as we know, **MME-RealWorld is the largest manually annotated benchmark to date, featuring the highest resolution and a targeted focus on real-world applications**.


@@ -24,6 +22,10 @@ Code: https://github.com/yfzhang114/MME-RealWorld
 Project page: https://mme-realworld.github.io/


+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/623d8ca4c29adf5ef6175615/ZnczJh10NHm0u03p7kjm_.png)
+
+
 ## How to use?

 Since the image files are large and have been split into multiple compressed parts, please first merge the compressed files with the same name and then extract them together.
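
For reference, here is a minimal Python sketch of this merge-and-extract step. The split-part naming pattern (`images.tar.gz.part*`) and the archive name below are illustrative assumptions, not the repository's actual file names; adjust them to match the parts you downloaded.

```python
import glob
import shutil
import tarfile

# Hypothetical naming scheme for illustration: ordered parts such as
# images.tar.gz.part00, images.tar.gz.part01, ... (zero-padded so that
# lexicographic sorting matches part order).
parts = sorted(glob.glob("images.tar.gz.part*"))

# Merge the parts back into a single archive by byte-wise concatenation.
with open("images.tar.gz", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)

# Extract the merged archive.
with tarfile.open("images.tar.gz", "r:gz") as archive:
    archive.extractall(path="images")
```

Under the same assumed names, the shell equivalent is `cat images.tar.gz.part* > images.tar.gz && tar -xzf images.tar.gz`.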