load dataset failed

#3 by ustc-zhangzm - opened

Hi, I tried to load the dataset with the provided script:

from datasets import load_dataset

# Load AetherCode v1 2024 (Jan 2024 - Dec 2024)
ds = load_dataset("m-a-p/AetherCode", "v1_2024")

# Load AetherCode v1 2025 (Jan 2025 - May 2025)
ds = load_dataset("m-a-p/AetherCode", "v1_2025")

but it failed with ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs.

The error seems to be triggered when loading v1_2024/test-00025-of-00041.parquet, which is the largest parquet file.
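
For reference, here is a minimal sketch to isolate that shard; the shard path is copied from the error above, and the use of hf_hub_download plus batched parquet reading is only an assumption about what might sidestep the conversion, not a confirmed fix:

from huggingface_hub import hf_hub_download
import pyarrow.parquet as pq

# Download only the suspect shard (path taken from the error message above)
path = hf_hub_download(
    repo_id="m-a-p/AetherCode",
    filename="v1_2024/test-00025-of-00041.parquet",
    repo_type="dataset",
)

# Reading the whole shard into one table is where the nested column seems to
# exceed the chunked-array limits; iterating record batches keeps chunks small.
pf = pq.ParquetFile(path)
for batch in pf.iter_batches(batch_size=512):
    pass  # inspect batch.schema / batch.num_rows here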

The same problem occurs when loading the 3x split of ByteDance-Seed/Code-Contests-Plus, which shares its first author with this repo.

Is there any extra configuration needed to load these two datasets successfully?
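
Would streaming mode be expected to avoid this, since it reads the parquet shards record batch by record batch instead of materializing the full table? Something like the following (just a sketch, not confirmed to work):

from datasets import load_dataset

# Streaming sketch: may avoid building the oversized chunked array,
# but this is only a guess at a workaround, not a confirmed fix.
ds = load_dataset("m-a-p/AetherCode", "v1_2024", streaming=True)
for example in ds["test"]:
    break  # just check that iteration works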
