Dataset Viewer issue

#15
by Mozhgan-m - opened

The dataset viewer is not working.

Error details:

Error code:   ConfigNamesError
Exception:    DatasetWithScriptNotSupportedError
Message:      The dataset viewer doesn't support this dataset because it runs arbitrary python code. Please open a discussion in the discussion tab if you think this is an error and tag @lhoestq and @severo.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 64, in compute_config_names_response
                  for config in sorted(get_dataset_config_names(path=dataset, token=hf_token))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names
                  dataset_module = dataset_module_factory(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1511, in dataset_module_factory
                  raise e1 from None
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1480, in dataset_module_factory
                  return HubDatasetModuleFactoryWithScript(
                File "/src/services/worker/src/worker/utils.py", line 400, in raise_unsupported_dataset_with_script_or_init
                  raise DatasetWithScriptNotSupportedError(
              libcommon.exceptions.DatasetWithScriptNotSupportedError: The dataset viewer doesn't support this dataset because it runs arbitrary python code. Please open a discussion in the discussion tab if you think this is an error and tag @lhoestq and @severo.

cc @albertvillanova @lhoestq @severo .

For security reasons, the viewer is no longer supported on datasets containing a Python loading script.

Note that the dataset can still be loaded, although the viewer does not work. You could disable the viewer.

Alternatively, you could convert your dataset to a format the viewer supports, for example to Parquet, by using this Space: https://huggingface.co/spaces/albertvillanova/convert-dataset-to-parquet
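If you just want to hide the broken viewer while keeping the script, the dataset card's YAML front matter accepts a viewer flag. A minimal sketch (check the Hub docs for the exact field name and semantics):

```yaml
---
viewer: false
---
```

With this in the README.md front matter, the Hub should stop rendering the viewer for the dataset.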

Thank you @Mozhgan-m and @severo!

My dataset is already in Parquet format. So do I need to delete the Python loading script to get the viewer working again?

Yes, exactly

Hi @severo , I deleted my load script and the viewer is working again. But it only shows the "training" split of my datasets. My data has many language folders and I want to split according to those languages.

I tried adding the configurations below to README.md, following this tutorial, but it didn't work. It says my dataset is empty and that I should upload some files first.

configs:
- config_name: af
  data_dir: af
- config_name: als
  data_dir: als
...

Could you please help me solve this problem?

Thank you!

Possibly you have to be more specific:

[screenshot: the "Splits" section of the Manual Configuration docs]

as explained here: https://huggingface.co/docs/hub/datasets-manual-configuration#splits
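For reference, the manual configuration described on that docs page looks roughly like this in the README's YAML front matter (the config name and paths here are illustrative):

```yaml
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
```

Each `split`/`path` pair maps a glob pattern of data files to a named split in the viewer.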

Also: I'm not sure data_dir is a supported field. Have you tried with data_files?

Maybe @polinaeterna @albertvillanova @lhoestq can give more details

Using data_dir in YAML is supported and is equivalent to passing data_dir= to load_dataset:

In [1]: from datasets import load_dataset_builder

In [2]: b = load_dataset_builder("uonlp/CulturaX", data_dir="af")
Downloading readme: 100%|██████| 22.4k/22.4k [00:00<00:00, 8.90MB/s]

In [3]: b.config.data_files
Out[3]: {NamedSplit('train'): ['hf://datasets/uonlp/CulturaX@3ac245a3f07fc7aefba22f68aa530065b123de73/af/af_part_00000.parquet']}

@nguyenhuuthuat09 you mentioned that the viewer was saying that the dataset was empty, maybe one of the directories you added in the YAML doesn't exist or something?

Using data_dir in YAML is supported and is equivalent to passing data_dir= to load_dataset

Good to know, I'll add this to the docs page.

Hi @severo, @lhoestq, I just checked again for empty folders and re-generated the configuration. I don't see any empty folders, but the viewer still says my dataset is empty.

[screenshot: the viewer reporting the dataset as empty]

here is the code I used:

import os
from huggingface_hub import login, HfFileSystem

login(TOKEN)  # TOKEN is a Hugging Face access token
fs = HfFileSystem()

all_dirs = fs.ls("datasets/uonlp/CulturaX", detail=False)

for d in all_dirs:
    if d in ["datasets/uonlp/CulturaX/.gitattributes", "datasets/uonlp/CulturaX/README.md"]:
        continue

    # check that the folder contains at least one Parquet file
    d_files = fs.ls(d, detail=False)
    assert any(f.endswith(".parquet") for f in d_files), d

    # generate configs
    print(f"- config_name: {os.path.basename(d)}")
    print(f"  data_dir: {os.path.basename(d)}")
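A variant of the generator above that quotes names YAML 1.1 would read as booleans (such as the "no" folder) and targets only Parquet files via data_files might look like this. It is a minimal sketch: the helper name and the glob pattern are assumptions, and the folder paths shown are illustrative.

```python
import os

# YAML 1.1 parses these unquoted scalars as booleans, so a language
# folder named "no" (Norwegian) must be quoted in the README config.
YAML_BOOL_LIKE = {"y", "yes", "n", "no", "true", "false", "on", "off"}

def make_config_yaml(dirs):
    """Emit README `configs:` entries that target only Parquet files."""
    lines = ["configs:"]
    for d in dirs:
        name = os.path.basename(d)
        quoted = f'"{name}"' if name.lower() in YAML_BOOL_LIKE else name
        lines.append(f"- config_name: {quoted}")
        # a glob in data_files skips any stray non-Parquet files in the folder
        lines.append(f'  data_files: "{name}/*.parquet"')
    return "\n".join(lines)

print(make_config_yaml(["datasets/uonlp/CulturaX/af", "datasets/uonlp/CulturaX/no"]))
```

Globbing on `*.parquet` also sidesteps the extra text files mentioned later in this thread.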

In addition, we have a folder named "no"; I wonder if it could cause any unexpected errors?

Thank you!

In YAML, no means false, so you should use "no" instead.
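A tiny sketch of YAML 1.1's implicit boolean resolution shows why the quotes matter. This is a simplified stand-in for a real YAML parser, written just to illustrate the rule:

```python
def resolve_scalar(value: str):
    """Simplified YAML 1.1 scalar resolution: quoted values stay strings,
    unquoted yes/no/on/off/true/false become booleans."""
    v = value.strip()
    if len(v) >= 2 and v[0] == v[-1] and v[0] in "'\"":
        return v[1:-1]  # quoted scalars are always strings
    low = v.lower()
    if low in {"y", "yes", "true", "on"}:
        return True
    if low in {"n", "no", "false", "off"}:
        return False
    return v

print(resolve_scalar("no"))    # False: the folder name is lost
print(resolve_scalar('"no"'))  # no: survives as a string
```

An unquoted `config_name: no` therefore arrives as the boolean False rather than the string "no".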

Thank you @lhoestq! I changed no to "no", but it didn't work; it still says my dataset is empty. Could you please take a quick look at the README.md file? I couldn't find any problems with it.

chiennv changed discussion status to closed
chiennv changed discussion status to open

Hi @severo @lhoestq, I was able to get the viewer working again by using data_files instead of data_dir. Perhaps it's because each of my folders contains, in addition to the Parquet files, another text file, which makes the viewer unable to load the data.

Thank you!
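For anyone landing here later, the working configuration presumably looks something like this (entries abbreviated; the glob pattern is an assumption based on the fix described above):

```yaml
configs:
- config_name: af
  data_files: af/*.parquet
- config_name: als
  data_files: als/*.parquet
- config_name: "no"
  data_files: no/*.parquet
```

Pointing data_files at `*.parquet` restricts each config to the Parquet files, ignoring any other files in the folder.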

nguyenhuuthuat09 changed discussion status to closed
