christopher committed: Update README.md (commit 6769973, parent: 2fe3a4d)

README.md:

# The MNIST-1D Dataset

Most machine learning models get around the same ~99% test accuracy on MNIST. The MNIST-1D dataset is 100x smaller (default size: 4000 train + 1000 test examples; dimensionality: 40) and does a better job of separating models with and without nonlinearity, as well as models with and without spatial inductive biases.

MNIST-1D is a core teaching dataset in Simon Prince's [Understanding Deep Learning](https://udlbook.github.io/udlbook/) textbook.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5e70f6048ce3c604d78fe133/VhgTkDsRQ24LVCsup9oMX.png)

### Comparing MNIST and MNIST-1D

| Dataset              | Logistic Regression | MLP  | CNN  | GRU* | Human Expert |
|:---------------------|:--------------------|:-----|:-----|:-----|:-------------|
| MNIST                | 92%                 | 99+% | 99+% | 99+% | 99+%         |
| MNIST-1D             | 32%                 | 68%  | 94%  | 91%  | 96%          |
| MNIST-1D (shuffle**) | 32%                 | 68%  | 56%  | 57%  | ~30%         |

*Training the GRU takes at least 10x the walltime of the CNN.

**The term "shuffle" refers to shuffling the spatial dimension of the dataset, as in [Zhang et al. (2017)](https://arxiv.org/abs/1611.03530).
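
To make the shuffle condition concrete: one random permutation of the 40 spatial positions is fixed and applied to every example, destroying local spatial structure while keeping per-position statistics intact (which is why the MLP's accuracy is unchanged but the CNN's drops). A minimal sketch of that transform, illustrative rather than the authors' exact code:

```python
import numpy as np

# fix one permutation of the 40 positions and reuse it for all examples,
# so spatial locality is destroyed consistently across the dataset
rng = np.random.default_rng(0)
perm = rng.permutation(40)

def spatial_shuffle(x):
    """Apply the fixed spatial permutation to a batch of shape (n, 40)."""
    return np.asarray(x)[:, perm]
```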

### Dimensionality reduction

The figure below visualizes the MNIST and MNIST-1D datasets with t-SNE. The well-defined clusters in the MNIST plot indicate that the majority of its examples are separable via a kNN classifier in pixel space. The MNIST-1D plot, meanwhile, reveals a lack of well-defined clusters, which suggests that learning a nonlinear representation of the data is much more important for successful classification.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5e70f6048ce3c604d78fe133/-YhBPH4FNxk5-NHi647Y1.png)
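
A comparable plot can be produced with scikit-learn's t-SNE on the Hub copy of the data; a minimal sketch, assuming matplotlib is available (this is not the authors' plotting code):

```python
import numpy as np
import matplotlib.pyplot as plt
from datasets import load_dataset
from sklearn.manifold import TSNE

# embed the 40-dimensional MNIST-1D signals into 2D
train = load_dataset("christopher/mnist1d", split="train")
x = np.array(train["x"])
y = np.array(train["y"])

emb = TSNE(n_components=2, random_state=0).fit_transform(x)
plt.scatter(emb[:, 0], emb[:, 1], c=y, cmap="tab10", s=5)
plt.title("t-SNE of MNIST-1D")
plt.show()
```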

## Dataset Creation

### Hugging Face Dataset

This version of the dataset was created from the pickle file ([mnist1d_data.pkl](https://github.com/greydanus/mnist1d/blob/master/mnist1d_data.pkl)) provided by the dataset authors in the original repository:

```python
import sys ; sys.path.append('..')  # useful if you're running locally
import pickle

from datasets import Dataset, DatasetDict

# load the pickle file shipped in the original repository
with open("mnist1d_data.pkl", "rb") as f:
    data = pickle.load(f)

# the pickle stores the splits under "x"/"y" (train) and "x_test"/"y_test" (test)
train = Dataset.from_dict({"x": data["x"], "y": data["y"]})
test = Dataset.from_dict({"x": data["x_test"], "y": data["y_test"]})

DatasetDict({"train": train, "test": test}).push_to_hub("christopher/mnist1d")
```

### MNIST-1D

This is a synthetically-generated dataset which, by default, consists of 4000 training examples and 1000 testing examples (both counts are configurable). Each example contains a template pattern that resembles a handwritten digit between 0 and 9. These patterns are analogous to the digits in the original [MNIST dataset](https://huggingface.co/datasets/ylecun/mnist).

**Original MNIST digits**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5e70f6048ce3c604d78fe133/kGtsabQ8_GaB9LwMb79Qm.png)

**1D template patterns**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5e70f6048ce3c604d78fe133/OihsK5Qq5V1dxjPrFvKqD.png)

**1D templates as lines**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5e70f6048ce3c604d78fe133/_m4AfbW7V5GqYwks7Nc1j.png)

In order to build the synthetic dataset, the templates are passed through a series of random transformations: random amounts of padding, translation, correlated noise, iid noise, and scaling. These transformations are relevant to both 1D signals and 2D images, so even though the dataset is 1D, some findings can be expected to carry over to 2D (image) data. For example, one can study the advantage of a translation-invariant model (e.g. a CNN) by generating a dataset in which signals occur at different locations in the sequence; this is done by using large padding and translation coefficients. Here's an animation of how those transformations are applied:

![image/gif](https://raw.githubusercontent.com/greydanus/mnist1d/refs/heads/master/static/mnist1d_transforms.gif)
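
The generation pipeline can be rerun from the authors' `mnist1d` package; a sketch assuming its `get_dataset_args`/`make_dataset` interface (see the [original repository](https://github.com/greydanus/mnist1d) for the exact API):

```python
from mnist1d.data import get_dataset_args, make_dataset  # pip install mnist1d

# default config: 4000 train + 1000 test examples, final sequence length 40
args = get_dataset_args()
data = make_dataset(args)  # applies the pad/translate/noise/scale transforms

x_train, y_train = data["x"], data["y"]
x_test, y_test = data["x_test"], data["y_test"]
print(x_train.shape, x_test.shape)  # expected: (4000, 40) (1000, 40)
```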

Unlike the original MNIST dataset, which consists of 2D arrays of pixels (each image has 28x28 = 784 dimensions), this dataset consists of 1D timeseries of length 40. Each example is therefore ~20x smaller, making the dataset much quicker and easier to iterate over. Another nice property of this toy dataset is that it does a good job of separating different types of deep learning models, many of which reach the same 98-99% test accuracy on MNIST.

## Dataset Usage
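
The Hub copy can be loaded directly with the `datasets` library; a minimal sketch of what usage might look like:

```python
from datasets import load_dataset

ds = load_dataset("christopher/mnist1d")
print(ds)                 # DatasetDict with "train" and "test" splits
example = ds["train"][0]
print(len(example["x"]))  # one 40-step 1D signal
print(example["y"])       # its digit label (0-9)
```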