---
license: cc-by-4.0
pretty_name: SDSS 4d data cubes
tags:
- astronomy
- compression
- images
dataset_info:
  config_name: tiny
  features:
  - name: image
    dtype:
      array4_d:
        shape:
        - null
        - 5
        - 800
        - 800
        dtype: uint16
  - name: ra
    dtype: float64
  - name: dec
    dtype: float64
  - name: pixscale
    dtype: float64
  - name: ntimes
    dtype: int64
  - name: nbands
    dtype: int64
  splits:
  - name: train
    num_bytes: 558194176
    num_examples: 2
  - name: test
    num_bytes: 352881364
    num_examples: 1
  download_size: 908845172
  dataset_size: 911075540
---
# GBI-16-4D Dataset
GBI-16-4D is a dataset that is part of the AstroCompress project. It contains data assembled from the Sloan Digital Sky Survey (SDSS). Each FITS file contains a series of 800x800 pixel uint16 observations of the same portion of the Stripe 82 field, taken in 5 bandpass filters (u, g, r, i, z) over time. Each filename encodes the starting run, field, and camcol of the observations, followed by the number of timesteps, the number of bandpass images per timestep, and the image dimensions. For example:
```
cube_center_run4203_camcol6_f44_35-5-800-800.fits
```
contains 35 timesteps of 800x800 pixel images in 5 bandpasses, starting with run 4203, field 44, and camcol 6. The images are stored in the FITS standard.
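The filename convention can be unpacked with a small helper like the following (a hypothetical sketch, not part of the dataset loader; the regex is inferred from the example above):

```python
import re

# Assumed pattern: cube_center_run<run>_camcol<camcol>_f<field>_<T>-<B>-<H>-<W>.fits
FNAME_RE = re.compile(
    r"cube_center_run(?P<run>\d+)_camcol(?P<camcol>\d+)_f(?P<field>\d+)"
    r"_(?P<ntimes>\d+)-(?P<nbands>\d+)-(?P<height>\d+)-(?P<width>\d+)\.fits"
)

def parse_cube_filename(name: str) -> dict:
    """Return the run/camcol/field and cube shape encoded in a filename."""
    m = FNAME_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized filename: {name}")
    return {k: int(v) for k, v in m.groupdict().items()}

info = parse_cube_filename("cube_center_run4203_camcol6_f44_35-5-800-800.fits")
print(info)  # run 4203, camcol 6, field 44, 35 timesteps, 5 bands, 800x800
```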
# Usage
You first need to install the `datasets` and `astropy` packages:
```bash
pip install datasets astropy
```
There are two configurations: `tiny` and `full`, each with `train` and `test` splits. The `tiny` configuration has two 4D images in its `train` split and one in its `test` split. The `full` configuration contains all the images in the `data/` directory.
## Local Use (RECOMMENDED)
You can clone this repo and use it directly, without connecting to the Hugging Face Hub:
```bash
git clone https://huggingface.co/datasets/AstroCompress/GBI-16-4D
cd GBI-16-4D
git lfs pull
```
Then, from inside the `GBI-16-4D` directory, start python like:
```python
from datasets import load_dataset
dataset = load_dataset("./GBI-16-4D.py", "tiny", data_dir="./data/", writer_batch_size=1, trust_remote_code=True)
ds = dataset.with_format("np")
```
Now you should be able to use the `ds` variable like:
```python
ds["test"][0]["image"].shape # -> (55, 5, 800, 800)
```
Note that downloading and converting the images into the local cache will take a long time for the `full` dataset. Afterward, usage is quick because the files are memory-mapped from disk.
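Once loaded, each example is a 4D cube of shape `(ntimes, nbands, 800, 800)` that can be sliced with ordinary NumPy indexing. The sketch below uses a small synthetic stand-in array (real cubes are much larger), purely to illustrate the axis ordering:

```python
import numpy as np

# Synthetic stand-in for one cube: (time, band, y, x); real cubes are
# uint16 with 800x800 spatial dimensions.
cube = np.zeros((3, 5, 8, 8), dtype=np.uint16)

r_band = cube[:, 2]    # all epochs of one band (index 2 of u, g, r, i, z)
first_epoch = cube[0]  # all 5 bands at the first timestep

print(r_band.shape)      # (3, 8, 8)
print(first_epoch.shape)  # (5, 8, 8)
```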
## Use from Hugging Face Directly
This method may only be an option when accessing the `tiny` version of the dataset.
To use this data directly from the Hugging Face Hub, you'll want to log in on the command line before starting python:
```bash
huggingface-cli login
```
or, in python:
```python
import huggingface_hub
huggingface_hub.login(token=token)  # token: your Hugging Face access token
```
Then in your python script:
```python
from datasets import load_dataset
dataset = load_dataset("AstroCompress/GBI-16-4D", "tiny", writer_batch_size=1, trust_remote_code=True)
ds = dataset.with_format("np")
```
## Demo Colab Notebook
We provide a demo Colab notebook to get started with the dataset [here](https://colab.research.google.com/drive/1SuFBPZiYZg9LH4pqypc_v8Sp99lShJqZ?usp=sharing).
## Utils scripts
Note that utils scripts such as `eval_baselines.py` must be run from the parent directory of `utils`, i.e. `python utils/eval_baselines.py`. |