Storing a subset of the data

#1
by nilsleh - opened

Hi @csaybar , thank you for this interesting dataset.

I was wondering about the following: I am only interested in using a subset of the data and saw the section about a mini-taco in the demo colab notebook. Is there additional functionality to save this particular subset to disk as a "subset" version of the full dataset? I could not find documentation on this (or have just been looking in the wrong place). Purely remote access to the dataset is unfortunately a bit too slow. Thanks in advance.

tacofoundation org
edited 3 days ago

Hi @nilsleh ,

We're still working on the spec, but expect to have the full documentation ready before July.
You can load both local and remote TACO datasets using the same function:

# Local file
dataset = tacoreader.load("/home/user/file.taco")

# Remote file
dataset = tacoreader.load("https://huggingface.co/datasets/tacofoundation/cloudsen12/resolve/main/cloudsen12-l1c.0000.part.taco")
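Reading a sample then works the same way in both cases. A minimal sketch, following the read pattern used later in this thread (the indices are just an example):

import rasterio as rio
import tacoreader

dataset = tacoreader.load("/home/user/file.taco")

# The first read(0) selects the sample, the second read(0) its first asset
asset = dataset.read(0).read(0)

with rio.open(asset) as src:
    data = src.read()  # all bands as a numpy array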

Hope this helps!

Thank you for your reply. More specifically, I was wondering whether you could store a subset separately from the original dataset. For example, if I start with cloudsen12-l1c, which has 5 parts, and then create a mini-taco like in the colab notebook, can I create a new taco dataset separately, so that I no longer need the original 5 parts but only my single new taco subset? That way, after defining my new taco subset, I could delete the original 5 parts to save disk space. Is that possible?

tacofoundation org
edited 3 days ago

Once you create a "minitaco" (tacoreader.compile) you no longer need the initial dataset, as it compiles into an isolated subset of the original dataset. With TACO, you can easily combine multiple TACO datasets and download only the samples you need—that’s more or less our design philosophy. Is that what you were asking?

The workflow we usually follow is to select the samples you need "online/remote" based on certain criteria, compile them (tacoreader.compile), and then share those samples with your colleagues.
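As a rough illustration, a minimal sketch of that select-then-compile workflow (the filter column and output filename are only placeholders, following the example later in this thread):

import tacoreader

# Load the remote dataset; the top level behaves like a pandas DataFrame
dt = tacoreader.load("tacofoundation:cloudsen12-l1c")

# Select samples by any criteria you like
subset = dt[dt["cloud_shadow_percentage"] > 5]

# Compile the selection into a standalone file; the original parts are no longer needed
tacoreader.compile(subset, "my-subset.tortilla")

# Load the compiled subset on its own
sample = tacoreader.load("my-subset.tortilla")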

Awesome, yes that is exactly what I was wondering, this is super convenient. Thank you!

nilsleh changed discussion status to closed

Sorry, I have another question: is there a concise way to combine multiple data sources and align them?

For example, in your colab you do the following, and you also have a separate function for loading the extra metadata:

import rasterio as rio

# Function to load Sentinel-2 L1C/L2A data and the cloud label for one sample
def load_sentinel_data(cloudsen12_l1c, cloudsen12_l2a, sample_idx):
    s2_l1c = cloudsen12_l1c.read(sample_idx).read(0)
    s2_l2a = cloudsen12_l2a.read(sample_idx).read(0)
    s2_label = cloudsen12_l2a.read(sample_idx).read(1)

    # Read the RGB bands (B4, B3, B2) and the label raster
    with rio.open(s2_l1c) as s2_l1c_src, rio.open(s2_l2a) as s2_l2a_src, rio.open(s2_label) as lbl:
        s2_l1c_data = s2_l1c_src.read([4, 3, 2])
        s2_l2a_data = s2_l2a_src.read([4, 3, 2])
        s2_label_data = lbl.read()

    return s2_l1c_data, s2_l2a_data, s2_label_data

Would it also be possible to filter each collection by the same criteria (for example, a given geospatial extent) and then create an aligned new mini-taco, so that reading a particular sample_idx returns all the desired data files under that index? In other words, new_mini_taco.read(0) would load l1c, l2a, label, vv, and vh at once, without having to index three different tacos by the same sample_idx. Thanks in advance.

@csaybar actually on that note, do you happen to have any example or docs that show how I can create a taco dataset from a tif file collection?

nilsleh changed discussion status to open
tacofoundation org

Hi @nilsleh. This should work. TACO at the top level is a pd.DataFrame, so you can use it with any Python library you want.

import tacoreader
import pandas as pd
import geopandas as gpd
from shapely.wkt import loads


# Load the three datasets
dt1 = tacoreader.load("tacofoundation:cloudsen12-l1c")
dt2 = tacoreader.load("tacofoundation:cloudsen12-l2a")
dt3 = tacoreader.load("tacofoundation:cloudsen12-extra")

# Convert pd.DataFrame to gpd.GeoDataFrame (parse the WKT centroids into geometries)
gdf1 = gpd.GeoDataFrame(dt1, geometry=dt1['stac:centroid'].apply(loads), crs="EPSG:4326")
gdf2 = gpd.GeoDataFrame(dt2, geometry=dt2['stac:centroid'].apply(loads), crs="EPSG:4326")
gdf3 = gpd.GeoDataFrame(dt3, geometry=dt3['stac:centroid'].apply(loads), crs="EPSG:4326")

# Peru bounding box (minx, miny, maxx, maxy) and target year
peru = [-81.326, -18.349, -68.652, -0.038]
time = 2020

# Filter by Peru extent
subset1 = gdf1.cx[peru[0]:peru[2], peru[1]:peru[3]]
subset2 = gdf2.cx[peru[0]:peru[2], peru[1]:peru[3]]
subset3 = gdf3.cx[peru[0]:peru[2], peru[1]:peru[3]]

# Filter by time
years = pd.to_datetime(subset1["stac:time_start"], unit='s').dt.year
subset1 = subset1[years == time]
years = pd.to_datetime(subset2["stac:time_start"], unit='s').dt.year
subset2 = subset2[years == time]
years = pd.to_datetime(subset3["stac:time_start"], unit='s').dt.year
subset3 = subset3[years == time]

# Keep only samples with cloud shadow coverage above 5%
subset1 = subset1[subset1['cloud_shadow_percentage'] > 5]
subset2 = subset2[subset2['cloud_shadow_percentage'] > 5]
subset3 = subset3[subset3['cloud_shadow_percentage'] > 5]

subset1["type"] = "l1c"
subset2["type"] = "l2a"
subset3["type"] = "extra"

# Merge all the datasets
final = pd.concat([subset1, subset2, subset3])
final.reset_index(drop=True, inplace=True)

# Save the final dataset
tacoreader.compile(final, "sample.tortilla")

# Load it again
sample = tacoreader.load("sample.tortilla")

@csaybar actually on that note, do you happen to have any example or docs that show how I can create a taco dataset from a tif file collection?

Unfortunately, not yet. We are a bit stuck with other projects :c. We plan to get back to TACO around the first days of May. The project is still private, but I can give you access to the code for creating taco datasets [taco-toolbox] if you give me your GH account.
email: [email protected]

Thanks for the reply. I have tried something like this:

import numpy as np
import pandas as pd
import tacoreader

meta_dfs = []
taco_files: dict[str, list[str]] = {
    "l1c": [
        "cloudsen12-l1c.0000.part.taco",
        "cloudsen12-l1c.0001.part.taco",
        "cloudsen12-l1c.0002.part.taco",
        "cloudsen12-l1c.0003.part.taco",
        "cloudsen12-l1c.0004.part.taco",
    ],
    "l2a": [
        "cloudsen12-l2a.0000.part.taco",
        "cloudsen12-l2a.0001.part.taco",
        "cloudsen12-l2a.0002.part.taco",
        "cloudsen12-l2a.0003.part.taco",
        "cloudsen12-l2a.0004.part.taco",
        "cloudsen12-l2a.0005.part.taco",
    ],
    "extra": [
        "cloudsen12-extra.0000.part.taco",
        "cloudsen12-extra.0001.part.taco",
        "cloudsen12-extra.0002.part.taco",
    ],
}

for key, paths in taco_files.items():
    metadata_df = tacoreader.load(paths)

    # only use the 512x512 images and the split
    metadata_df = metadata_df[metadata_df["stac:raster_shape"].apply(lambda x: np.array_equal(x, np.array([512, 512])))]

    metadata_df["type"] = key

    meta_dfs.append(metadata_df)

full_metadata = pd.concat(meta_dfs)
full_metadata.reset_index(drop=True, inplace=True)

But with this concatenation, the three modalities end up as separate samples, i.e.

full_metadata.read(0)       # only the two S2 L1C paths
full_metadata.read(50000)   # only the two S2 L2A paths
full_metadata.read(100000)  # the extra data

I was hoping there is a merging operation, where full_metadata.read(0) would then yield all aligned file paths for this particular sample.

tacofoundation org

Hey @nilsleh ,

This makes sense, kind of like a multipage concept. But as far as I know, pandas doesn't support it.

I can see two workarounds:

  • Create a class that handles alignment automatically, i.e. a high-level API on top of tacoreader (see the sketch after this list).
  • Recreate the dataset to match your desired sample order, but that requires rebuilding it. If you only need a small sample, this might be the most convenient approach.
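For the first option, a minimal sketch of such a wrapper (the class name is hypothetical, and it assumes the subsets were filtered with identical criteria so that they share the same sample order):

class AlignedReader:
    """Hypothetical helper that indexes several identically-filtered TACO subsets,
    so that read(i) returns the i-th sample of every modality at once."""

    def __init__(self, **subsets):
        # e.g. AlignedReader(l1c=subset1, l2a=subset2, extra=subset3)
        self.subsets = subsets

    def read(self, idx):
        # Each value is whatever tacoreader returns for that sample
        return {name: subset.read(idx) for name, subset in self.subsets.items()}


# Usage: reader.read(0) -> {'l1c': ..., 'l2a': ..., 'extra': ...}
reader = AlignedReader(l1c=subset1, l2a=subset2, extra=subset3)
sample = reader.read(0)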