---
dataset_info:
  splits:
  - name: train
    num_bytes: 786835439
    num_examples: 10601
  download_size: 0
  dataset_size: 786835439
configs:
- config_name: default
  data_files:
  - split: train
    path: kvasir-points_datasets_script-train-*.arrow
---

# 🩺 MedMultiPoints: A Multimodal Dataset for Object Detection, Localization, and Counting in Medical Imaging

[![Paper](https://img.shields.io/badge/Paper-arxiv-blue)](https://arxiv.org/abs/2505.16647)  
📫 For queries, contact: [[email protected]](mailto:[email protected])

## Dataset Summary

**MedMultiPoints** is a curated, multimodal medical imaging dataset designed for **multi-task learning** in the medical domain—spanning **object detection**, **localization**, and **counting** tasks. It integrates data from **endoscopic** and **microscopic** modalities, reflecting real-world clinical diversity.

The dataset is introduced in the paper:  
**"Point, Detect, Count: Multi-Task Medical Image Understanding with Instruction-Tuned Vision-Language Models"**  
Presented at **IEEE CBMS 2025, Madrid, Spain.**  
→ [Project Page & Code](https://github.com/Simula/PointDetectCount)

**Instruction-Fused JSONL Files**:

- [`multi-task-train.jsonl`](https://huggingface.co/datasets/SimulaMet/MedMultiPoints/resolve/main/instruction_dataset/multi-task-train.jsonl)
- [`multi-task-test.jsonl`](https://huggingface.co/datasets/SimulaMet/MedMultiPoints/resolve/main/instruction_dataset/multi-task-test.jsonl)
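
Once downloaded, the instruction files can be read with nothing but the standard library. This is a minimal sketch that assumes the files follow the usual JSON Lines convention (one JSON object per line); the demo parses a synthetic two-line file rather than the real download.

```python
import json
import os
import tempfile

def read_jsonl(path):
    """Parse a JSON Lines file into a list of dicts, skipping blank lines."""
    with open(path, "r", encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Self-contained demo with a synthetic two-line JSONL file:
sample = '{"id": 1}\n{"id": 2}\n'
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as tmp:
    tmp.write(sample)
records = read_jsonl(tmp.name)
os.remove(tmp.name)
print(len(records))  # 2
```

Point `read_jsonl` at the downloaded `multi-task-train.jsonl` path to load the actual instruction data.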


## Features

- **10,600 images** from diverse modalities: endoscopy (HyperKvasir) and microscopy (VISEM-Tracking)
- Rich **multi-type annotations**:
  - **Bounding Boxes** (`bbox_2d`) for object detection
  - **Point Annotations** (`point_2d`) for localization
  - **Count Labels** (`counts`) for counting tasks
- Compatible with **Vision-Language Models (VLMs)** and **instruction-tuned pipelines**
- JSON-formatted annotations designed for seamless integration into multimodal training pipelines
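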

## Data Schema

Each sample in the dataset contains:

| Field              | Type      | Description                                      |
|-------------------|-----------|--------------------------------------------------|
| `image`           | Image     | Raw medical image                               |
| `image_sha256`    | string    | SHA-256 hash of the image for integrity         |
| `img_size`        | [int, int]| Original image width and height                 |
| `points`          | list      | List of `[x, y]` point annotations              |
| `bbox`            | list      | List of `[x1, y1, x2, y2]` bounding boxes        |
| `count`           | int       | Object count in the image                       |
| `label`           | string    | Class label (e.g., `polyps`, `sperm`, etc.)     |
| `collection_method` | string  | Task type: `counting`, `detection`, etc.        |
| `classification`  | string    | Description of annotation type (e.g., pathological-findings) |
| `organ`           | string    | Target organ: `Lower GI`, `Microscopy`, etc.    |
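
A lightweight sanity check over these fields can catch malformed samples before training. The sketch below uses a synthetic sample matching the schema; note that `count == len(points)` is an assumption for illustration, not a documented guarantee of the dataset.

```python
# Synthetic sample mirroring the schema table above (not a real record).
sample = {
    "img_size": [622, 529],
    "points": [[234, 171.5]],
    "bbox": [[38, 5, 430, 338]],
    "count": 1,
    "label": "polyps",
}

def check_sample(s):
    """Return True if points lie inside the image, boxes are well-ordered,
    and the count matches the number of point annotations (an assumption)."""
    w, h = s["img_size"]
    in_bounds = all(0 <= x <= w and 0 <= y <= h for x, y in s["points"])
    boxes_ok = all(x1 <= x2 and y1 <= y2 for x1, y1, x2, y2 in s["bbox"])
    return in_bounds and boxes_ok and s["count"] == len(s["points"])

print(check_sample(sample))  # True
```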


## Supported Tasks

This dataset supports the following **multi-task** settings:

- 🔲 **Object Detection** (bounding box prediction)
- 📍 **Localization** (point prediction)
- 🔢 **Counting** (object count regression)
- 🧠 **Multimodal Instruction-Based Learning**

## How to Load

```python
from datasets import load_dataset

ds = load_dataset("SushantGautam/MedMultiPoints")['train']
sample = ds[0]

# Access image and annotations
image = sample['image']
bbox = sample['bbox']
points = sample['points']
count = sample['count']
```


## Example

```json
{
  "image_sha256": "71179abc4b011cc99bddb3344e3e114765b32bdf77e78892f046026d785a4bdb",
  "img_size": [622, 529],
  "points": [[234, 171.5]],
  "bbox": [[38, 5, 430, 338]],
  "count": 1,
  "label": "polyps",
  "collection_method": "counting",
  "classification": "pathological-findings",
  "organ": "Lower GI"
}
```
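
Working from the example record above, simple geometry can be derived directly from the `bbox` coordinates (interpreted as `[x1, y1, x2, y2]` in pixels, per the schema). In this particular example the point annotation coincides with the box center; whether that holds across the dataset is not documented.

```python
# Box center and area from the example record above.
bbox = [38, 5, 430, 338]
x1, y1, x2, y2 = bbox
center = ((x1 + x2) / 2, (y1 + y2) / 2)
area = (x2 - x1) * (y2 - y1)
print(center, area)  # (234.0, 171.5) 130536
```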


## Citation

If you use this dataset, please cite:

```bibtex
@article{Gautam2025May,
	author = {Gautam, Sushant and Riegler, Michael A. and Halvorsen, P{\aa}l},
	title = {{Point, Detect, Count: Multi-Task Medical Image Understanding with Instruction-Tuned Vision-Language Models}},
	journal = {arXiv},
	year = {2025},
	month = may,
	eprint = {2505.16647},
	doi = {10.48550/arXiv.2505.16647}
}
```