Update README.md

README.md CHANGED
@@ -56,35 +56,16 @@ pip install datasets astropy
 
 There are two datasets: `tiny` and `full`, each with `train` and `test` splits. The `tiny` dataset has 2 4D images in the `train` and 1 in the `test`. The `full` dataset contains all the images in the `data/` directory.
 
-## Use
-
-```bash
-huggingface-
-```
-
-or
-
-```
-import huggingface_hub
-huggingface_hub.login(token=token)
-```
-
-Then in your python script:
-
-```python
-from datasets import load_dataset
-dataset = load_dataset("AstroCompress/GBI-16-4D", "tiny")
-ds = dataset.with_format("np")
-```
-
-## Local Use
-
-Alternatively, you can clone this repo and use directly without connecting to hf:
-
-```bash
-git
-```
 
 Then `cd GBI-16-4D` and start python like:
@@ -102,3 +83,27 @@ ds["test"][0]["image"].shape # -> (55, 5, 800, 800)
 ```
 
 Note of course that it will take a long time to download and convert the images in the local cache for the `full` dataset. Afterward, the usage should be quick as the files are memory-mapped from disk.
@@ -56,35 +56,16 @@ pip install datasets astropy
 
 There are two datasets: `tiny` and `full`, each with `train` and `test` splits. The `tiny` dataset has 2 4D images in the `train` and 1 in the `test`. The `full` dataset contains all the images in the `data/` directory.
 
+## Local Use (RECOMMENDED)
+
+Alternatively, you can clone this repo and use it directly without connecting to hf:
+
+```bash
+git clone https://huggingface.co/datasets/AstroCompress/GBI-16-4D
+```
+
+```bash
+git lfs pull
+```
 
 Then `cd GBI-16-4D` and start python like:
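The hunk header below shows that a single test image has shape `(55, 5, 800, 800)`, and the README notes that cached files are memory-mapped from disk. A minimal numpy-only sketch of why memory-mapping makes such large 4D cubes cheap to open — the file path is hypothetical, the shape is scaled down so it runs quickly, and this is *not* the loader `datasets` actually uses:

```python
import os
import tempfile

import numpy as np

# Scaled-down stand-in for one 4D cube; a real test image is (55, 5, 800, 800).
shape = (55, 5, 80, 80)
path = os.path.join(tempfile.mkdtemp(), "cube.npy")

# Create a .npy file backed by a memory map: pages hit disk only when touched.
cube = np.lib.format.open_memmap(path, mode="w+", dtype=np.uint16, shape=shape)
cube[0, 0, 0, 0] = 7
cube.flush()
del cube

# Reopen lazily: indexing reads only the pages it needs, not the whole file.
ro = np.load(path, mmap_mode="r")
print(ro.shape)             # (55, 5, 80, 80)
print(int(ro[0, 0, 0, 0]))  # 7
```

This is the same idea behind the "usage should be quick" claim: after the one-time conversion, slicing a cube touches only the bytes that slice needs.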
@@ -102,3 +83,27 @@ ds["test"][0]["image"].shape # -> (55, 5, 800, 800)
 ```
 
 Note of course that it will take a long time to download and convert the images in the local cache for the `full` dataset. Afterward, the usage should be quick as the files are memory-mapped from disk.
+
+
+## Use from Huggingface Directly
+
+To use this data directly from Huggingface, you'll want to log in on the command line before starting python:
+
+```bash
+huggingface-cli login
+```
+
+or
+
+```python
+import huggingface_hub
+huggingface_hub.login(token=token)
+```
+
+Then in your python script:
+
+```python
+from datasets import load_dataset
+dataset = load_dataset("AstroCompress/GBI-16-4D", "tiny")
+ds = dataset.with_format("np")
+```