---
splits:
- name: train
num_bytes: 786835439
num_examples: 10601
download_size: 0
dataset_size: 786835439
configs:
- config_name: default
data_files:
- split: train
path: kvasir-points_datasets_script-train-*.arrow
---
# 🩺 MedMultiPoints: A Multimodal Dataset for Object Detection, Localization, and Counting in Medical Imaging
[![Paper](https://img.shields.io/badge/Paper-arxiv-blue)](https://arxiv.org/abs/2505.16647)
📫 For queries, contact: [[email protected]](mailto:[email protected])
## Dataset Summary
**MedMultiPoints** is a curated, multimodal medical imaging dataset designed for **multi-task learning** in the medical domain—spanning **object detection**, **localization**, and **counting** tasks. It integrates data from **endoscopic** and **microscopic** modalities, reflecting real-world clinical diversity.
The dataset is introduced in the paper:
**"Point, Detect, Count: Multi-Task Medical Image Understanding with Instruction-Tuned Vision-Language Models"**
Presented at **IEEE CBMS 2025, Madrid, Spain.**
→ [Project Page & Code](https://github.com/Simula/PointDetectCount)
**Instruction-Fused JSONL Files**:
- [`multi-task-train.jsonl`](https://huggingface.co/datasets/SimulaMet/MedMultiPoints/resolve/main/instruction_dataset/multi-task-train.jsonl)
- [`multi-task-test.jsonl`](https://huggingface.co/datasets/SimulaMet/MedMultiPoints/resolve/main/instruction_dataset/multi-task-test.jsonl)
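The instruction files are plain JSON Lines (one JSON object per line). A minimal parsing sketch — the field names in the in-memory example are illustrative only; the actual record layout of the instruction files is not shown here:

```python
import json

def load_jsonl(path):
    """Parse a JSON Lines file: one JSON object per line."""
    with open(path, "r", encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Self-contained check with an in-memory example
# (hypothetical field names, for illustration only)
example = '{"label": "polyps", "count": 1}\n{"label": "sperm", "count": 3}\n'
records = [json.loads(line) for line in example.splitlines() if line.strip()]
print(len(records))  # 2
```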
## Features
- **10,601 images** from diverse modalities: endoscopy (HyperKvasir) and microscopy (VISEM-Tracking)
- Rich **multi-type annotations**:
- **Bounding Boxes** (`bbox_2d`) for object detection
- **Point Annotations** (`point_2d`) for localization
- **Count Labels** (`counts`) for counting tasks
- Compatible with **Vision-Language Models (VLMs)** and **instruction-tuned pipelines**
- JSON-formatted annotations designed for seamless integration into multimodal training pipelines
## Data Schema
Each sample in the dataset contains:
| Field | Type | Description |
|-------------------|-----------|--------------------------------------------------|
| `image` | Image | Raw medical image |
| `image_sha256` | string | SHA-256 hash of the image for integrity |
| `img_size` | [int, int]| Original image width and height |
| `points` | list | List of `[x, y]` point annotations |
| `bbox` | list | List of `[x1, y1, x2, y2]` bounding boxes |
| `count` | int | Object count in the image |
| `label` | string | Class label (e.g., `polyps`, `sperm`, etc.) |
| `collection_method` | string | Task type: `counting`, `detection`, etc. |
| `classification` | string | Description of annotation type (e.g., pathological-findings) |
| `organ` | string | Target organ: `Lower GI`, `Microscopy`, etc. |
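The `image_sha256` field enables an integrity check on downloaded images. A minimal sketch — the assumption that the hash covers the raw encoded image bytes (rather than decoded pixels) is mine, not stated in the schema:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Known test vector: SHA-256 of b"abc"
digest = sha256_hex(b"abc")
print(digest)  # ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad

def verify(image_bytes: bytes, expected_sha256: str) -> bool:
    """Hypothetical integrity check against a sample's `image_sha256` field."""
    return sha256_hex(image_bytes) == expected_sha256
```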
## Supported Tasks
This dataset supports the following **multi-task** settings:
- 🔲 **Object Detection** (bounding box prediction)
- 📍 **Localization** (point prediction)
- 🔢 **Counting** (object count regression)
- 🧠 **Multimodal Instruction-Based Learning**
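Since counting is framed as regression, predictions can be scored against the `count` field with a standard error metric. A minimal sketch using mean absolute error — the metric choice and the sample counts are illustrative, not taken from the paper:

```python
def mean_absolute_error(y_true, y_pred):
    """Average absolute difference between true and predicted counts."""
    assert len(y_true) == len(y_pred)
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

true_counts = [1, 4, 2]  # hypothetical ground-truth counts
pred_counts = [1, 3, 4]  # hypothetical model predictions
print(mean_absolute_error(true_counts, pred_counts))  # 1.0
```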
## How to Load
```python
from datasets import load_dataset

ds = load_dataset("SushantGautam/MedMultiPoints")["train"]
sample = ds[0]

# Access image and annotations
image = sample["image"]    # decoded PIL image
bbox = sample["bbox"]      # list of [x1, y1, x2, y2] boxes
points = sample["points"]  # list of [x, y] points
count = sample["count"]    # int
```
## Example
```json
{
"image_sha256": "71179abc4b011cc99bddb3344e3e114765b32bdf77e78892f046026d785a4bdb",
"img_size": [622, 529],
"points": [[234, 171.5]],
"bbox": [[38, 5, 430, 338]],
"count": 1,
"label": "polyps",
"collection_method": "counting",
"classification": "pathological-findings",
"organ": "Lower GI"
}
```
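To sanity-check annotations, the example's box and point can be drawn with Pillow (an assumed dependency). This sketch uses a blank canvas of the stated `img_size`; in practice you would draw on `sample["image"]` from the loaded dataset instead:

```python
from PIL import Image, ImageDraw

# Blank canvas matching the example's img_size of [622, 529]
width, height = 622, 529
canvas = Image.new("RGB", (width, height), "black")
draw = ImageDraw.Draw(canvas)

# Bounding boxes: [x1, y1, x2, y2]
for x1, y1, x2, y2 in [[38, 5, 430, 338]]:
    draw.rectangle([x1, y1, x2, y2], outline=(255, 0, 0), width=2)

# Point annotations: [x, y], drawn as small filled circles
for x, y in [[234, 171.5]]:
    draw.ellipse([x - 4, y - 4, x + 4, y + 4], fill=(0, 255, 0))

canvas.save("annotated.png")
```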
## Citation
If you use this dataset, please cite:
```bibtex
@article{Gautam2025May,
author = {Gautam, Sushant and Riegler, Michael A. and Halvorsen, P{\aa}l},
title = {{Point, Detect, Count: Multi-Task Medical Image Understanding with Instruction-Tuned Vision-Language Models}},
journal = {arXiv},
year = {2025},
month = may,
eprint = {2505.16647},
doi = {10.48550/arXiv.2505.16647}
}
```