Word of Caution: The captions are pretty low entropy.
Hey, first of all, thank you for creating this multimodal dataset spanning text, audio, and images.
But a word of caution for anyone using this in the future, and feedback that could be incorporated into future versions: the generated captions are very simple and have very low entropy. This seems to be a byproduct of asking people, impromptu, to say whatever comes to mind when looking at an image, which keeps the resulting captions short and generic.
This makes the dataset not-so-usable for connecting the image and text modalities (CLIP, Stable Diffusion), since the captions only capture very high-level features. I do think those high-level features would still be good enough if the dataset were very large, on the order of O(100M) images. But since it is small, a lot of value could be unlocked just by increasing the information content of each caption.
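To make "low entropy" a bit more concrete, here is a rough way one could measure it. This is only a sketch: it uses whitespace tokenization (crude for Hindi) and the example captions are made up to illustrate the style, not taken from the dataset.

```python
import math
from collections import Counter

def caption_stats(captions):
    """Rough diversity stats: average length, duplicate rate, and
    Shannon entropy of the unigram distribution over whitespace tokens."""
    tokens = [tok for cap in captions for tok in cap.split()]
    counts = Counter(tokens)
    total = sum(counts.values())
    # Unigram entropy in bits; lower values mean more repetitive vocabulary
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return {
        "num_captions": len(captions),
        "avg_tokens_per_caption": total / len(captions),
        "duplicate_caption_rate": 1 - len(set(captions)) / len(captions),
        "vocab_size": len(counts),
        "unigram_entropy_bits": entropy,
    }

# Made-up captions in the dataset's typical style, for illustration only
captions = [
    "यह एक कार्यालय का फ़ोटो है।",
    "यह एक मंदिर का फ़ोटो है।",
    "यह एक बाजार का फ़ोटो है।",
]
print(caption_stats(captions))
```

Comparing these numbers against a richly captioned set (e.g., detailed descriptions of the same images) would make the gap in information content easy to see.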
For future versions, a better approach would probably be to ask people to describe the image in as much detail as possible.
Example:
यह एक जिला परिवहन कार्यालय पूर्णिया का फ़ोटो है। (This is a photo of the District Transport Office, Purnia.)
SigLIP 2's (multilingual) performance on text-to-image retrieval:
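(For anyone who wants to run a similar check, below is a minimal sketch of scoring text-to-image retrieval with SigLIP 2 via the transformers library. This is not necessarily the exact setup used for the numbers here, and the checkpoint name google/siglip2-base-patch16-224 is an assumption; swap in whichever multilingual SigLIP 2 checkpoint you prefer.)

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor  # needs a transformers version with SigLIP 2 support

# Assumed checkpoint name; substitute the SigLIP 2 checkpoint you actually use.
CKPT = "google/siglip2-base-patch16-224"
model = AutoModel.from_pretrained(CKPT).eval()
processor = AutoProcessor.from_pretrained(CKPT)

@torch.no_grad()
def recall_at_1(image_paths, captions):
    """Text-to-image retrieval: image_paths[i] is assumed to be the pair of captions[i]."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    img_inputs = processor(images=images, return_tensors="pt")
    txt_inputs = processor(text=captions, padding="max_length",
                           max_length=64, truncation=True, return_tensors="pt")

    # Embed and L2-normalize both modalities
    img_emb = model.get_image_features(**img_inputs)
    txt_emb = model.get_text_features(**txt_inputs)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)

    sims = txt_emb @ img_emb.T                      # [num_captions, num_images]
    top1 = sims.argmax(dim=-1)                      # best-matching image per caption
    correct = (top1 == torch.arange(len(captions))).float().mean()
    return correct.item()
```

Recall@5 or @10 can be read off the same similarity matrix by checking whether the paired index appears in the top-k columns per row.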