---
{}
---

# Dataset Card for The Street View House Numbers (SVHN)

<!-- Provide a quick summary of the dataset. -->

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

SVHN is a real-world image dataset for developing machine learning and object recognition algorithms with minimal requirements for data preprocessing and formatting. It is similar in flavor to MNIST (e.g., the images are of small cropped digits), but incorporates an order of magnitude more labeled data (over 600,000 digit images) and comes from a significantly harder, unsolved, real-world problem: recognizing digits and numbers in natural scene images. SVHN is obtained from house numbers in Google Street View images.

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Homepage:** http://ufldl.stanford.edu/housenumbers/
- **Paper:** Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., & Ng, A. Y. (2011, December). Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning (Vol. 2011, No. 2, p. 4).

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

- **Total images:** 99,289
- **Classes:** 10 digit classes (0–9)
- **Splits:**
  - **Train:** 73,257 images
  - **Test:** 26,032 images
- **Image specs:** 32×32 pixels, RGB
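The split sizes above can be sanity-checked with a few lines of arithmetic (the numbers are taken from this card; nothing here touches the actual data):

```python
# Split sizes as listed on this card
splits = {"train": 73257, "test": 26032}
total = sum(splits.values())
assert total == 99289  # matches the stated total

for name, n in splits.items():
    print(f"{name}: {n:,} images ({n / total:.1%})")
# train: 73,257 images (73.8%)
# test: 26,032 images (26.2%)
```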
## Example Usage

Below is a quick example of how to load this dataset via the Hugging Face Datasets library.

```python
from datasets import load_dataset

# Load the dataset (train split; use the commented line for the test split)
dataset = load_dataset("randall-lab/svhn", split="train", trust_remote_code=True)
# dataset = load_dataset("randall-lab/svhn", split="test", trust_remote_code=True)

# Access a sample from the dataset
example = dataset[0]
image = example["image"]
label = example["label"]

image.show()  # Display the image
print(f"Label: {label}")
```
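The `image` field is a PIL image (hence `image.show()` above), while most training pipelines need a numeric array. A minimal conversion sketch, using a synthetic 32×32 RGB image as a stand-in for a real sample so it runs without downloading the dataset:

```python
import numpy as np
from PIL import Image

# Stand-in for example["image"]; a real SVHN sample is a 32x32 RGB PIL image
image = Image.new("RGB", (32, 32), color=(128, 64, 32))

# Convert to a float32 array scaled to [0, 1] for model input
arr = np.asarray(image, dtype=np.float32) / 255.0
print(arr.shape)  # (32, 32, 3)
```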

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@inproceedings{netzer2011reading,
  title={Reading digits in natural images with unsupervised feature learning},
  author={Netzer, Yuval and Wang, Tao and Coates, Adam and Bissacco, Alessandro and Wu, Baolin and Ng, Andrew Y and others},
  booktitle={NIPS workshop on deep learning and unsupervised feature learning},
  volume={2011},
  number={2},
  pages={4},
  year={2011},
  organization={Granada}
}
```