  - split: val
    path: val_balanced_instructions/val-*
---

<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>

# Large-scale Multi-modality Models Evaluation Suite

> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`

🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)

# This Dataset

This is a formatted version of [GQA](https://cs.stanford.edu/people/dorarad/gqa/about.html). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.

```
@inproceedings{hudson2019gqa,
  title={{GQA}: A new dataset for real-world visual reasoning and compositional question answering},
  author={Hudson, Drew A and Manning, Christopher D},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={6700--6709},
  year={2019}
}
```