Dataset: keremberke/german-traffic-sign-detection
Tasks: Object Detection
Size: < 1K

Commit a549a28 · Parent(s): c3d5010
Commit message: dataset uploaded by roboflow2huggingface package

Files changed:
- README.dataset.txt +21 -0
- README.md +94 -0
- README.roboflow.txt +28 -0
- data/test.zip +3 -0
- data/train.zip +3 -0
- data/valid-mini.zip +3 -0
- data/valid.zip +3 -0
- german-traffic-sign-detection.py +152 -0
- split_name_to_num_samples.json +1 -0
- thumbnail.jpg +3 -0
README.dataset.txt
ADDED
@@ -0,0 +1,21 @@
# GTSDB - German Traffic Sign Detection Benchmark > original-raw_images

https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark

Provided by a Roboflow user
License: CC BY 4.0

### This project was created by downloading the GTSDB German Traffic Sign Detection Benchmark dataset from Kaggle and importing the annotated training set files (images and annotation files) to Roboflow.
#### https://www.kaggle.com/datasets/safabouguezzi/german-traffic-sign-detection-benchmark-gtsdb
* Original home of the dataset: https://benchmark.ini.rub.de/?section=gtsdb&subsection=dataset - [Institut Für Neuroinformatik](https://www.ini.rub.de/)

The annotation files were adjusted to conform to the [YOLO Keras TXT format](https://roboflow.com/formats/yolo-keras-txt) prior to upload, as the original format did not include a [label map file](https://blog.roboflow.com/label-map/).

`v1` contains the original imported images, without augmentations. This is the version to download and import to your own project if you'd like to add your own augmentations.

`v2` contains an augmented version of the dataset, with annotations. This version of the project was trained with Roboflow's "FAST" model.

`v3` contains an augmented version of the dataset, with annotations. This version of the project was trained with Roboflow's "ACCURATE" model.

* [Choosing Between Computer Vision Model Sizes](https://blog.roboflow.com/computer-vision-model-tradeoff/) | [New and Improved Roboflow Train](https://blog.roboflow.com/new-and-improved-roboflow-train/)
README.md
ADDED
@@ -0,0 +1,94 @@
---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
- Self Driving
- Transportation
---

<div align="center">
  <img width="640" alt="keremberke/german-traffic-sign-detection" src="https://huggingface.co/datasets/keremberke/german-traffic-sign-detection/resolve/main/thumbnail.jpg">
</div>

### Dataset Labels

```
['animals', 'construction', 'cycles crossing', 'danger', 'no entry', 'pedestrian crossing', 'school crossing', 'snow', 'stop', 'bend', 'bend left', 'bend right', 'give way', 'go left', 'go left or straight', 'go right', 'go right or straight', 'go straight', 'keep left', 'keep right', 'no overtaking', 'no overtaking -trucks-', 'no traffic both ways', 'no trucks', 'priority at next intersection', 'priority road', 'restriction ends', 'restriction ends -overtaking -trucks--', 'restriction ends -overtaking-', 'restriction ends 80', 'road narrows', 'roundabout', 'slippery road', 'speed limit 100', 'speed limit 120', 'speed limit 20', 'speed limit 30', 'speed limit 50', 'speed limit 60', 'speed limit 70', 'speed limit 80', 'traffic signal', 'uneven road']
```

### Number of Images

```json
{"test": 54, "valid": 108, "train": 383}
```

### How to Use

- Install [datasets](https://pypi.org/project/datasets/):

```bash
pip install datasets
```

- Load the dataset:

```python
from datasets import load_dataset

ds = load_dataset("keremberke/german-traffic-sign-detection", name="full")
example = ds['train'][0]
```
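
Each example bundles the image with an `objects` field holding COCO-style boxes and integer class labels. The snippet below is a minimal, non-authoritative sketch of inspecting one training example and mapping the class indices back to label names with the `ClassLabel` feature; it assumes the feature layout defined in `german-traffic-sign-detection.py` further down.

```python
from datasets import load_dataset

ds = load_dataset("keremberke/german-traffic-sign-detection", name="full")
example = ds["train"][0]

# `objects` is returned as a dict of lists (one entry per annotated sign).
objects = example["objects"]

# The `category` values are ClassLabel indices; int2str recovers the names.
category_feature = ds["train"].features["objects"].feature["category"]
for bbox, category in zip(objects["bbox"], objects["category"]):
    # bbox follows the COCO convention: [x_min, y_min, width, height]
    print(category_feature.int2str(category), bbox)
```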
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark/dataset/1](https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark/dataset/1?ref=roboflow2huggingface)

### Citation

```
@misc{ gtsdb---german-traffic-sign-detection-benchmark_dataset,
    title = { GTSDB - German Traffic Sign Detection Benchmark Dataset },
    type = { Open Source Dataset },
    author = { Mohamed Traore },
    howpublished = { \\url{ https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark } },
    url = { https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark },
    journal = { Roboflow Universe },
    publisher = { Roboflow },
    year = { 2022 },
    month = { jul },
    note = { visited on 2023-01-16 },
}
```

### License
CC BY 4.0

### Dataset Summary
This dataset was exported via roboflow.com on January 16, 2023 at 9:04 PM GMT.

Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time

For state of the art Computer Vision training notebooks you can use with this dataset, visit https://github.com/roboflow/notebooks

To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com

The dataset includes 545 images.
Signs are annotated in COCO format.
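
Because the annotations carry per-box class indices, a quick sanity check on the label distribution can be run directly from the loaded dataset. This is a minimal sketch, assuming the `full` configuration from the How to Use section:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("keremberke/german-traffic-sign-detection", name="full")
category_feature = ds["train"].features["objects"].feature["category"]

# Count annotated boxes per traffic-sign class in the train split.
counts = Counter(
    category_feature.int2str(category)
    for example in ds["train"]
    for category in example["objects"]["category"]
)
print(counts.most_common(10))
```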

The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)

No image augmentation techniques were applied.
README.roboflow.txt
ADDED
@@ -0,0 +1,28 @@
GTSDB - German Traffic Sign Detection Benchmark - v1 original-raw_images
==============================

This dataset was exported via roboflow.com on January 16, 2023 at 9:04 PM GMT

Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time

For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks

To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com

The dataset includes 545 images.
Signs are annotated in COCO format.

The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)

No image augmentation techniques were applied.
data/test.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c6d6263f467797a4d6d5f25ba5521c5351538c984183e4dc0579ad76c75b5e9
size 7225691
data/train.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3f1ee889a0570f60be10a1f8a493e321ee02a6f087fc1cd92a6578d0bafc0700
size 50502415
data/valid-mini.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f96e2d41658c5d77ee88d3188fa1b8469763cff7d7a8263b1fc025dd877fdf64
size 561325
data/valid.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1db6d2090da430dabc3a67dc98f6437e161488696252b1aaa2c39121a23f12db
size 14065269
german-traffic-sign-detection.py
ADDED
@@ -0,0 +1,152 @@
import collections
import json
import os

import datasets


_HOMEPAGE = "https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark/dataset/1"
_LICENSE = "CC BY 4.0"
_CITATION = """\
@misc{ gtsdb---german-traffic-sign-detection-benchmark_dataset,
    title = { GTSDB - German Traffic Sign Detection Benchmark Dataset },
    type = { Open Source Dataset },
    author = { Mohamed Traore },
    howpublished = { \\url{ https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark } },
    url = { https://universe.roboflow.com/mohamed-traore-2ekkp/gtsdb---german-traffic-sign-detection-benchmark },
    journal = { Roboflow Universe },
    publisher = { Roboflow },
    year = { 2022 },
    month = { jul },
    note = { visited on 2023-01-16 },
}
"""
_CATEGORIES = ['animals', 'construction', 'cycles crossing', 'danger', 'no entry', 'pedestrian crossing', 'school crossing', 'snow', 'stop', 'bend', 'bend left', 'bend right', 'give way', 'go left', 'go left or straight', 'go right', 'go right or straight', 'go straight', 'keep left', 'keep right', 'no overtaking', 'no overtaking -trucks-', 'no traffic both ways', 'no trucks', 'priority at next intersection', 'priority road', 'restriction ends', 'restriction ends -overtaking -trucks--', 'restriction ends -overtaking-', 'restriction ends 80', 'road narrows', 'roundabout', 'slippery road', 'speed limit 100', 'speed limit 120', 'speed limit 20', 'speed limit 30', 'speed limit 50', 'speed limit 60', 'speed limit 70', 'speed limit 80', 'traffic signal', 'uneven road']
_ANNOTATION_FILENAME = "_annotations.coco.json"


class GERMANTRAFFICSIGNDETECTIONConfig(datasets.BuilderConfig):
    """Builder Config for german-traffic-sign-detection"""

    def __init__(self, data_urls, **kwargs):
        """
        BuilderConfig for german-traffic-sign-detection.

        Args:
            data_urls: `dict`, name to url to download the zip file from.
            **kwargs: keyword arguments forwarded to super.
        """
        super(GERMANTRAFFICSIGNDETECTIONConfig, self).__init__(version=datasets.Version("1.0.0"), **kwargs)
        self.data_urls = data_urls


class GERMANTRAFFICSIGNDETECTION(datasets.GeneratorBasedBuilder):
    """german-traffic-sign-detection object detection dataset"""

    VERSION = datasets.Version("1.0.0")
    BUILDER_CONFIGS = [
        GERMANTRAFFICSIGNDETECTIONConfig(
            name="full",
            description="Full version of german-traffic-sign-detection dataset.",
            data_urls={
                "train": "https://huggingface.co/datasets/keremberke/german-traffic-sign-detection/resolve/main/data/train.zip",
                "validation": "https://huggingface.co/datasets/keremberke/german-traffic-sign-detection/resolve/main/data/valid.zip",
                "test": "https://huggingface.co/datasets/keremberke/german-traffic-sign-detection/resolve/main/data/test.zip",
            },
        ),
        GERMANTRAFFICSIGNDETECTIONConfig(
            name="mini",
            description="Mini version of german-traffic-sign-detection dataset.",
            data_urls={
                "train": "https://huggingface.co/datasets/keremberke/german-traffic-sign-detection/resolve/main/data/valid-mini.zip",
                "validation": "https://huggingface.co/datasets/keremberke/german-traffic-sign-detection/resolve/main/data/valid-mini.zip",
                "test": "https://huggingface.co/datasets/keremberke/german-traffic-sign-detection/resolve/main/data/valid-mini.zip",
            },
        )
    ]

    def _info(self):
        features = datasets.Features(
            {
                "image_id": datasets.Value("int64"),
                "image": datasets.Image(),
                "width": datasets.Value("int32"),
                "height": datasets.Value("int32"),
                "objects": datasets.Sequence(
                    {
                        "id": datasets.Value("int64"),
                        "area": datasets.Value("int64"),
                        "bbox": datasets.Sequence(datasets.Value("float32"), length=4),
                        "category": datasets.ClassLabel(names=_CATEGORIES),
                    }
                ),
            }
        )
        return datasets.DatasetInfo(
            features=features,
            homepage=_HOMEPAGE,
            citation=_CITATION,
            license=_LICENSE,
        )

    def _split_generators(self, dl_manager):
        data_files = dl_manager.download_and_extract(self.config.data_urls)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "folder_dir": data_files["train"],
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "folder_dir": data_files["validation"],
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "folder_dir": data_files["test"],
                },
            ),
        ]

    def _generate_examples(self, folder_dir):
        def process_annot(annot, category_id_to_category):
            return {
                "id": annot["id"],
                "area": annot["area"],
                "bbox": annot["bbox"],
                "category": category_id_to_category[annot["category_id"]],
            }

        image_id_to_image = {}
        idx = 0

        annotation_filepath = os.path.join(folder_dir, _ANNOTATION_FILENAME)
        with open(annotation_filepath, "r") as f:
            annotations = json.load(f)
        category_id_to_category = {category["id"]: category["name"] for category in annotations["categories"]}
        image_id_to_annotations = collections.defaultdict(list)
        for annot in annotations["annotations"]:
            image_id_to_annotations[annot["image_id"]].append(annot)
        filename_to_image = {image["file_name"]: image for image in annotations["images"]}

        for filename in os.listdir(folder_dir):
            filepath = os.path.join(folder_dir, filename)
            if filename in filename_to_image:
                image = filename_to_image[filename]
                objects = [
                    process_annot(annot, category_id_to_category) for annot in image_id_to_annotations[image["id"]]
                ]
                with open(filepath, "rb") as f:
                    image_bytes = f.read()
                yield idx, {
                    "image_id": image["id"],
                    "image": {"path": filepath, "bytes": image_bytes},
                    "width": image["width"],
                    "height": image["height"],
                    "objects": objects,
                }
                idx += 1
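
Note that the script exposes two configurations: `full` (separate train/valid/test archives) and `mini`, which reuses `valid-mini.zip` for every split and is handy as a quick smoke test. A minimal usage sketch follows; depending on your `datasets` version, script-based datasets may additionally require `trust_remote_code=True`.

```python
from datasets import load_dataset

# "mini" downloads only valid-mini.zip and reuses it for train/validation/test,
# so it is a fast way to verify that the loading script and features work.
ds_mini = load_dataset("keremberke/german-traffic-sign-detection", name="mini")

print(ds_mini)
print(ds_mini["train"].features["objects"].feature["category"].names[:5])
```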
split_name_to_num_samples.json
ADDED
@@ -0,0 +1 @@
{"test": 54, "valid": 108, "train": 383}
thumbnail.jpg
ADDED
(binary image file, tracked with Git LFS)