Dataset Viewer issue

#1
by gabrielaltay - opened
hyperdemocracy org

The dataset viewer is not working.

Error details:

Exception:    SplitsNotFoundError
Message:      The split names could not be parsed from the dataset config.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/split_names.py", line 138, in compute
                  return CompleteJobResult(compute_split_names_from_info_response(dataset=self.dataset, config=self.config))
                File "/src/services/worker/src/worker/job_runners/config/split_names.py", line 117, in compute_split_names_from_info_response
                  config_info_response = get_previous_step_or_raise(kind="config-info", dataset=dataset, config=config)
                File "/src/libs/libcommon/src/libcommon/simple_cache.py", line 565, in get_previous_step_or_raise
                  raise CachedArtifactError(
              libcommon.simple_cache.CachedArtifactError: The previous step failed.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 498, in get_dataset_config_info
                  for split_generator in builder._split_generators(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 61, in _split_generators
                  self.info.features = datasets.Features.from_arrow_schema(pq.read_schema(f))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 3687, in read_schema
                  file = ParquetFile(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 341, in __init__
                  self.reader.open(
                File "pyarrow/_parquet.pyx", line 1249, in pyarrow._parquet.ParquetReader.open
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              OSError: Metadata contains Thrift LogicalType that is not recognized
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/split_names.py", line 68, in compute_split_names_from_streaming_response
                  for split in get_dataset_split_names(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 571, in get_dataset_split_names
                  info = get_dataset_config_info(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 503, in get_dataset_config_info
                  raise SplitsNotFoundError("The split names could not be parsed from the dataset config.") from err
              datasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.

Is the "OSError: Metadata contains Thrift LogicalType that is not recognized" raised because the vecs column is float16?

from datasets import load_dataset

ds = load_dataset("hyperdemocracy/usc-nomic-no-meta-chunks-v1-s8192-o512")
type(ds['113'][0]['nomic_vec'][0])
# numpy.float16
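For reference, a minimal sketch (not from the thread) of the float16 / Parquet interaction; it assumes a fairly recent pyarrow (roughly >= 15.0, when the Parquet FLOAT16 logical type was added) for writing, while a reader built against an older pyarrow fails on the schema with the same OSError as in the traceback above:

import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

# A numpy.float16 array becomes an Arrow halffloat column.
vecs = pa.array(np.arange(4, dtype=np.float16))
table = pa.table({"nomic_vec": vecs})
print(table.schema)  # nomic_vec: halffloat

# Writing stores the column with the Parquet FLOAT16 logical type.
pq.write_table(table, "float16_column.parquet")

# A reader that predates the FLOAT16 logical type raises
# "OSError: Metadata contains Thrift LogicalType that is not recognized" here.
print(pq.read_schema("float16_column.parquet"))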

cc @albertvillanova @lhoestq @severo.

hyperdemocracy org

It does seem to be a float16 issue; I updated the column to float32 for now.
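For anyone else hitting this, a minimal sketch of one way to cast the embedding column to float32 with the datasets API before re-uploading; this is an assumption about the approach rather than the exact commands used, and the column name nomic_vec is taken from the snippet above:

from datasets import Sequence, Value, load_dataset

ds = load_dataset("hyperdemocracy/usc-nomic-no-meta-chunks-v1-s8192-o512")

# cast_column rewrites the feature type for every split in the DatasetDict;
# the embedding column is a list of floats, hence Sequence(Value("float32")).
ds = ds.cast_column("nomic_vec", Sequence(Value("float32")))

# Re-upload so the viewer regenerates Parquet files with float32 values.
# ds.push_to_hub("hyperdemocracy/usc-nomic-no-meta-chunks-v1-s8192-o512")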

Can you open an issue at https://github.com/huggingface/datasets/ if you want float16 to be supported? I think this discussion can be closed.

gabrielaltay changed discussion status to closed
