Metadata

splits:
  - name: train
    num_bytes: 786835439
    num_examples: 10601
download_size: 0
dataset_size: 786835439
configs:
  - config_name: default
    data_files:
      - split: train
        path: kvasir-points_datasets_script-train-*.arrow

🩺 MedMultiPoints: A Multimodal Dataset for Object Detection, Localization, and Counting in Medical Imaging

📄 Paper: arXiv:2505.16647
📫 For queries, contact: [email protected]

Dataset Summary

MedMultiPoints is a curated, multimodal medical imaging dataset designed for multi-task learning in the medical domain—spanning object detection, localization, and counting tasks. It integrates data from endoscopic and microscopic modalities, reflecting real-world clinical diversity.

The dataset is introduced in the paper:
"Point, Detect, Count: Multi-Task Medical Image Understanding with Instruction-Tuned Vision-Language Models"
Presented at IEEE CBMS 2025, Madrid, Spain.
Project Page & Code

Instruction-Fused JSONL Files:

Features

  • 10,600 images from diverse modalities: endoscopy (HyperKvasir) and microscopy (VISEM-Tracking)
  • Rich multi-type annotations:
    • Bounding Boxes (bbox_2d) for object detection
    • Point Annotations (point_2d) for localization
    • Count Labels (counts) for counting tasks
  • Compatible with Vision-Language Models (VLMs) and instruction-tuned pipelines
  • JSON-formatted annotations designed for seamless integration with multimodal training

Data Schema

Each sample in the dataset contains:

| Field             | Type       | Description                                            |
|-------------------|------------|--------------------------------------------------------|
| image             | Image      | Raw medical image                                      |
| image_sha256      | string     | SHA-256 hash of the image for integrity                |
| img_size          | [int, int] | Original image width and height                        |
| points            | list       | List of [x, y] point annotations                       |
| bbox              | list       | List of [x1, y1, x2, y2] bounding boxes                |
| count             | int        | Object count in the image                              |
| label             | string     | Class label (e.g., polyps, sperm)                      |
| collection_method | string     | Task type: counting, detection, etc.                   |
| classification    | string     | Description of annotation type (e.g., pathological-findings) |
| organ             | string     | Target organ: Lower GI, Microscopy, etc.               |
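
For reference, a rough Python view of this schema as a TypedDict sketch: the field names mirror the table above, while the PIL image type reflects how the datasets library decodes Image features and is an assumption about access, not part of the schema itself.

from typing import List, TypedDict

from PIL import Image


class MedMultiPointsSample(TypedDict):
    image: Image.Image           # decoded image (PIL) when accessed via datasets
    image_sha256: str            # SHA-256 hash of the image for integrity
    img_size: List[int]          # [width, height]
    points: List[List[float]]    # [[x, y], ...] point annotations
    bbox: List[List[float]]      # [[x1, y1, x2, y2], ...] bounding boxes
    count: int                   # object count in the image
    label: str                   # class label, e.g. "polyps", "sperm"
    collection_method: str       # task type: "counting", "detection", etc.
    classification: str          # annotation type, e.g. "pathological-findings"
    organ: str                   # e.g. "Lower GI", "Microscopy"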

Supported Tasks

This dataset supports the following multi-task settings:

  • 🔲 Object Detection (bounding box prediction)
  • 📍 Localization (point prediction)
  • 🔢 Counting (object count regression)
  • 🧠 Multimodal Instruction-Based Learning (see the prompt-construction sketch below)
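
A minimal sketch of how one sample could be turned into task-specific prompt/target pairs for instruction tuning. The prompt wording and the JSON answer format below are illustrative assumptions, not the exact templates used in the paper, though the keys reuse the annotation names bbox_2d, point_2d, and counts.

import json

def build_instruction_pairs(sample):
    """Return one (prompt, target) pair per task for a single dataset sample."""
    label = sample['label']
    return [
        # Object detection: answer with bounding boxes.
        (f"Detect every {label} in the image and return bounding boxes.",
         json.dumps({"bbox_2d": sample['bbox']})),
        # Localization: answer with one point per object.
        (f"Point to each {label} in the image.",
         json.dumps({"point_2d": sample['points']})),
        # Counting: answer with the object count.
        (f"How many {label} are visible in the image?",
         json.dumps({"counts": sample['count']})),
    ]

Each pair can then be written out (together with an image reference) as a JSONL record for VLM fine-tuning.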

How to Load

from datasets import load_dataset

ds = load_dataset("SushantGautam/MedMultiPoints")['train']
sample = ds[0]

# Access image and annotations
image = sample['image']
bbox = sample['bbox']
points = sample['points']
count = sample['count']
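
To eyeball a sample's annotations, here is a small sketch using Pillow's ImageDraw; the colors and marker size are arbitrary choices, not dataset conventions.

from PIL import ImageDraw

img = sample['image'].convert("RGB")
draw = ImageDraw.Draw(img)

# Bounding boxes as rectangles, point annotations as small filled circles.
for x1, y1, x2, y2 in sample['bbox']:
    draw.rectangle([x1, y1, x2, y2], outline="red", width=3)
for x, y in sample['points']:
    draw.ellipse([x - 4, y - 4, x + 4, y + 4], fill="yellow")

img.save("annotated_sample.png")
print(f"{sample['count']} {sample['label']} object(s) annotated")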

Example

{
  "image_sha256": "71179abc4b011cc99bddb3344e3e114765b32bdf77e78892f046026d785a4bdb",
  "img_size": [622, 529],
  "points": [[234, 171.5]],
  "bbox": [[38, 5, 430, 338]],
  "count": 1,
  "label": "polyps",
  "collection_method": "counting",
  "classification": "pathological-findings",
  "organ": "Lower GI"
}
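
Subsets by class or modality can be taken with the standard datasets filtering API, continuing from the How to Load snippet above; the label and organ values used here come from the example record.

from collections import Counter

# Per-class and per-organ sample counts across the training split.
print(Counter(ds['label']))
print(Counter(ds['organ']))

# Keep only Lower-GI polyp samples (values taken from the example above).
polyps = ds.filter(lambda s: s['label'] == 'polyps' and s['organ'] == 'Lower GI')
print(len(polyps))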

Citation

If you use this dataset, please cite:

@article{Gautam2025May,
    author = {Gautam, Sushant and Riegler, Michael A. and Halvorsen, P{\aa}l},
    title = {{Point, Detect, Count: Multi-Task Medical Image Understanding with Instruction-Tuned Vision-Language Models}},
    journal = {arXiv},
    year = {2025},
    month = may,
    eprint = {2505.16647},
    doi = {10.48550/arXiv.2505.16647}
}