url
stringlengths
58
61
repository_url
stringclasses
1 value
labels_url
stringlengths
72
75
comments_url
stringlengths
67
70
events_url
stringlengths
65
68
html_url
stringlengths
46
51
id
int64
599M
2.12B
node_id
stringlengths
18
32
number
int64
1
6.65k
title
stringlengths
1
290
user
dict
labels
listlengths
0
4
state
stringclasses
2 values
locked
bool
1 class
assignee
dict
assignees
listlengths
0
4
milestone
dict
comments
int64
0
70
created_at
unknown
updated_at
unknown
closed_at
unknown
author_association
stringclasses
3 values
active_lock_reason
float64
draft
float64
0
1
pull_request
dict
body
stringlengths
0
228k
reactions
dict
timeline_url
stringlengths
67
70
performed_via_github_app
float64
state_reason
stringclasses
3 values
is_pull_request
bool
2 classes
https://api.github.com/repos/huggingface/datasets/issues/5716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5716/comments
https://api.github.com/repos/huggingface/datasets/issues/5716/events
https://github.com/huggingface/datasets/issues/5716
1,658,613,092
I_kwDODunzps5i3G1k
5,716
Handle empty audio
{ "avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4", "events_url": "https://api.github.com/users/v-yunbin/events{/privacy}", "followers_url": "https://api.github.com/users/v-yunbin/followers", "following_url": "https://api.github.com/users/v-yunbin/following{/other_user}", "gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/v-yunbin", "id": 38179632, "login": "v-yunbin", "node_id": "MDQ6VXNlcjM4MTc5NjMy", "organizations_url": "https://api.github.com/users/v-yunbin/orgs", "received_events_url": "https://api.github.com/users/v-yunbin/received_events", "repos_url": "https://api.github.com/users/v-yunbin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions", "type": "User", "url": "https://api.github.com/users/v-yunbin" }
[]
closed
false
null
[]
null
2
"2023-04-07T09:51:40Z"
"2023-09-27T17:47:08Z"
"2023-09-27T17:47:08Z"
NONE
null
null
null
Some audio paths exist, but they are empty, and an error will be reported when reading the audio path.How to use the filter function to avoid the empty audio path? when a audio is empty, when do resample , it will break: `array, sampling_rate = sf.read(f) array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate)`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5716/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5716/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5715/comments
https://api.github.com/repos/huggingface/datasets/issues/5715/events
https://github.com/huggingface/datasets/issues/5715
1,657,479,788
I_kwDODunzps5iyyJs
5,715
Return Numpy Array (fixed length) Mode, in __get_item__, Instead of List
{ "avatar_url": "https://avatars.githubusercontent.com/u/34066771?v=4", "events_url": "https://api.github.com/users/jungbaepark/events{/privacy}", "followers_url": "https://api.github.com/users/jungbaepark/followers", "following_url": "https://api.github.com/users/jungbaepark/following{/other_user}", "gists_url": "https://api.github.com/users/jungbaepark/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jungbaepark", "id": 34066771, "login": "jungbaepark", "node_id": "MDQ6VXNlcjM0MDY2Nzcx", "organizations_url": "https://api.github.com/users/jungbaepark/orgs", "received_events_url": "https://api.github.com/users/jungbaepark/received_events", "repos_url": "https://api.github.com/users/jungbaepark/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jungbaepark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jungbaepark/subscriptions", "type": "User", "url": "https://api.github.com/users/jungbaepark" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
1
"2023-04-06T13:57:48Z"
"2023-04-20T17:16:26Z"
"2023-04-20T17:16:26Z"
NONE
null
null
null
### Feature request There are old known issues, but they can be easily forgettable problems in multiprocessing with pytorch-dataloader: Too high usage of RAM or shared-memory in pytorch when we set num workers > 1 and returning type of dataset or dataloader is "List" or "Dict". https://github.com/pytorch/pytorch/issues/13246 With huggingface datasets, unfortunately, the default return type is the list, so the problem is raised too often if we do not set anything for the issue. However, this issue can be released when the returning output is fixed in length. Therefore, I request the mode, returning outputs with fixed length (e.g. numpy array) rather than list. The design would be good when we load datasets as ```python load_dataset(..., with_return_as_fixed_tensor=True) ``` ### Motivation The general solution for this issue is already in the comments: https://github.com/pytorch/pytorch/issues/13246#issuecomment-905703662 : Numpy or Pandas seems not to have problems, while both have the string type. (I'm not sure that the sequence of huggingface datasets can solve this problem as well) ### Your contribution I'll read it ! thanks
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5715/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5715/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5714
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5714/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5714/comments
https://api.github.com/repos/huggingface/datasets/issues/5714/events
https://github.com/huggingface/datasets/pull/5714
1,657,388,033
PR_kwDODunzps5NxIOc
5,714
Fix xnumpy_load for .npz files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
2
"2023-04-06T13:01:45Z"
"2023-04-07T09:23:54Z"
"2023-04-07T09:16:57Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5714.diff", "html_url": "https://github.com/huggingface/datasets/pull/5714", "merged_at": "2023-04-07T09:16:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/5714.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5714" }
PR: - #5626 implemented support for streaming `.npy` files by using `numpy.load`. However, it introduced a bug when used with `.npz` files, within a context manager: ``` ValueError: seek of closed file ``` or in streaming mode: ``` ValueError: I/O operation on closed file. ``` This PR fixes the bug and tests for both `.npy` and `.npz` files. Fix #5711.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5714/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5714/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5713
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5713/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5713/comments
https://api.github.com/repos/huggingface/datasets/issues/5713/events
https://github.com/huggingface/datasets/issues/5713
1,657,141,251
I_kwDODunzps5ixfgD
5,713
ArrowNotImplementedError when loading dataset from the hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
2
"2023-04-06T10:27:22Z"
"2023-04-06T13:06:22Z"
"2023-04-06T13:06:21Z"
CONTRIBUTOR
null
null
null
### Describe the bug Hello, I have created a dataset by using the image loader. Once the dataset is created I try to download it and I get the error: ``` Traceback (most recent call last): File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_single for _, table in generator: File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables for batch_idx, record_batch in enumerate( File "pyarrow/_parquet.pyx", line 1323, in iter_batches File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1893, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug Create the dataset and push it to the hub: ```python from datasets import load_dataset dataset = load_dataset("imagefolder", data_dir="/path/to/dataset") dataset.push_to_hub("org/dataset-name", private=True, max_shard_size="1GB") ``` Then use it: ```python from datasets import load_dataset dataset = load_dataset("org/dataset-name") ``` ### Expected behavior To properly download and use the pushed dataset. Something else to note is that I specified to have shards of 1GB max, but at the end, for the train set, it is an almost 7GB single file that is pushed. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5713/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5713/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5712
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5712/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5712/comments
https://api.github.com/repos/huggingface/datasets/issues/5712/events
https://github.com/huggingface/datasets/issues/5712
1,655,972,106
I_kwDODunzps5itCEK
5,712
load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load()
{ "avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4", "events_url": "https://api.github.com/users/rcasero/events{/privacy}", "followers_url": "https://api.github.com/users/rcasero/followers", "following_url": "https://api.github.com/users/rcasero/following{/other_user}", "gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rcasero", "id": 1219084, "login": "rcasero", "node_id": "MDQ6VXNlcjEyMTkwODQ=", "organizations_url": "https://api.github.com/users/rcasero/orgs", "received_events_url": "https://api.github.com/users/rcasero/received_events", "repos_url": "https://api.github.com/users/rcasero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcasero/subscriptions", "type": "User", "url": "https://api.github.com/users/rcasero" }
[]
closed
false
null
[]
null
2
"2023-04-05T16:47:10Z"
"2023-04-06T08:32:37Z"
"2023-04-05T17:17:44Z"
NONE
null
null
null
### Describe the bug Hi, I have some `dataset_load()` code of a custom offline dataset that works with datasets v2.10.1. ```python ds = datasets.load_dataset(path=dataset_dir, name=configuration, data_dir=dataset_dir, cache_dir=cache_dir, aux_dir=aux_dir, # download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD, num_proc=18) ``` When upgrading datasets to 2.11.0, it fails with error ``` Traceback (most recent call last): File "<string>", line 2, in <module> File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare super()._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators self.some_function() File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function() x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()}) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__ bytes = self.zip.open(key) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open fheader = zef_file.read(sizeFileHeader) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read self._file.seek(self._pos) ValueError: seek of closed file ``` ### Steps to reproduce the bug Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()` ```python with np.load(filename) as fp: x_df = pd.DataFrame({'feature': fp['x'].tolist()}) ``` I'll try to generate a short snippet that reproduces the error. ### Expected behavior I would expect that `load_dataset` works on the custom datasets generation script for v2.11.0 the same way it works for 2.10.1, without making `np.load()` give a `ValueError: seek of closed file` error. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.8 - Huggingface_hub version: 0.12.0 - PyArrow version: 11.0.0 - Pandas version: 1.5.2 - numpy: 1.24.2 - This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5712/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5712/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5711
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5711/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5711/comments
https://api.github.com/repos/huggingface/datasets/issues/5711/events
https://github.com/huggingface/datasets/issues/5711
1,655,971,647
I_kwDODunzps5itB8_
5,711
load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load()
{ "avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4", "events_url": "https://api.github.com/users/rcasero/events{/privacy}", "followers_url": "https://api.github.com/users/rcasero/followers", "following_url": "https://api.github.com/users/rcasero/following{/other_user}", "gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rcasero", "id": 1219084, "login": "rcasero", "node_id": "MDQ6VXNlcjEyMTkwODQ=", "organizations_url": "https://api.github.com/users/rcasero/orgs", "received_events_url": "https://api.github.com/users/rcasero/received_events", "repos_url": "https://api.github.com/users/rcasero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcasero/subscriptions", "type": "User", "url": "https://api.github.com/users/rcasero" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
2
"2023-04-05T16:46:49Z"
"2023-04-07T09:16:59Z"
"2023-04-07T09:16:59Z"
NONE
null
null
null
### Describe the bug Hi, I have some `dataset_load()` code of a custom offline dataset that works with datasets v2.10.1. ```python ds = datasets.load_dataset(path=dataset_dir, name=configuration, data_dir=dataset_dir, cache_dir=cache_dir, aux_dir=aux_dir, # download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD, num_proc=18) ``` When upgrading datasets to 2.11.0, it fails with error ``` Traceback (most recent call last): File "<string>", line 2, in <module> File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare super()._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators self.some_function() File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function() x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()}) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__ bytes = self.zip.open(key) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open fheader = zef_file.read(sizeFileHeader) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read self._file.seek(self._pos) ValueError: seek of closed file ``` ### Steps to reproduce the bug Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()` ```python with np.load(embedding_filename) as fp: x_df = pd.DataFrame({'feature': fp['x'].tolist()}) ``` I'll try to generate a short snippet that reproduces the error. ### Expected behavior I would expect that `load_dataset` works on the custom datasets generation script for v2.11.0 the same way it works for 2.10.1, without making `np.load()` give a `ValueError: seek of closed file` error. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.8 - Huggingface_hub version: 0.12.0 - PyArrow version: 11.0.0 - Pandas version: 1.5.2 - numpy: 1.24.2 - This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5711/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5711/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5710
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5710/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5710/comments
https://api.github.com/repos/huggingface/datasets/issues/5710/events
https://github.com/huggingface/datasets/issues/5710
1,655,703,534
I_kwDODunzps5isAfu
5,710
OSError: Memory mapping file failed: Cannot allocate memory
{ "avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4", "events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}", "followers_url": "https://api.github.com/users/Saibo-creator/followers", "following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}", "gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Saibo-creator", "id": 53392976, "login": "Saibo-creator", "node_id": "MDQ6VXNlcjUzMzkyOTc2", "organizations_url": "https://api.github.com/users/Saibo-creator/orgs", "received_events_url": "https://api.github.com/users/Saibo-creator/received_events", "repos_url": "https://api.github.com/users/Saibo-creator/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions", "type": "User", "url": "https://api.github.com/users/Saibo-creator" }
[]
closed
false
null
[]
null
1
"2023-04-05T14:11:26Z"
"2023-04-20T17:16:40Z"
"2023-04-20T17:16:40Z"
NONE
null
null
null
### Describe the bug Hello, I have a series of datasets each of 5 GB, 600 datasets in total. So together this makes 3TB. When I trying to load all the 600 datasets into memory, I get the above error message. Is this normal because I'm hitting the max size of memory mapping of the OS? Thank you ```terminal 0_21/cache-e9c42499f65b1881.arrow load_hf_datasets_from_disk: 82%|████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 494/600 [07:26<01:35, 1.11it/s] Traceback (most recent call last): File "example_load_genkalm_dataset.py", line 35, in <module> multi_ds.post_process(max_node_num=args.max_node_num,max_seq_length=args.max_seq_length,delay=args.delay) File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 142, in post_process genkalm_dataset = GenKaLM_Dataset.from_hf_dataset(path_or_name=ds_path, max_seq_length=self.max_seq_length, File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 47, in from_hf_dataset hf_ds = load_from_disk(path_or_name) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/load.py", line 1848, in load_from_disk return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1549, in load_from_disk arrow_table = concat_tables( File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1805, in concat_tables tables = list(tables) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1550, in <genexpr> table_cls.from_file(Path(dataset_path, data_file["filename"]).as_posix()) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1065, in from_file table = _memory_mapped_arrow_table_from_file(filename) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 50, in _memory_mapped_arrow_table_from_file memory_mapped_stream = pa.memory_map(filename) File "pyarrow/io.pxi", line 950, in pyarrow.lib.memory_map File "pyarrow/io.pxi", line 911, in pyarrow.lib.MemoryMappedFile._open File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: Memory mapping file failed: Cannot allocate memory ``` ### Steps to reproduce the bug Sorry I can not provide a reproducible code as the data is stored on my server and it's too large to share. ### Expected behavior I expect the 3TB of data can be fully mapped to memory ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-4.15.0-204-generic-x86_64-with-debian-buster-sid - Python version: 3.7.6 - PyArrow version: 11.0.0 - Pandas version: 1.0.1
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5710/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5710/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5709/comments
https://api.github.com/repos/huggingface/datasets/issues/5709/events
https://github.com/huggingface/datasets/issues/5709
1,655,423,503
I_kwDODunzps5iq8IP
5,709
Manually dataset info made not taken into account
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
2
"2023-04-05T11:15:17Z"
"2023-04-06T08:52:20Z"
"2023-04-06T08:52:19Z"
CONTRIBUTOR
null
null
null
### Describe the bug Hello, I'm manually building an image dataset with the `from_dict` approach. I also build the features with the `cast_features` methods. Once the dataset is created I push it on the hub, and a default `dataset_infos.json` file seems to have been automatically added to the repo in same time. Hence I update it manually with all the missing info, but when I download the dataset the info are never updated. Former `dataset_infos.json` file: ``` {"default": { "description": "", "citation": "", "homepage": "", "license": "", "features": { "image": { "_type": "Image" }, "labels": { "names": [ "Fake", "Real" ], "_type": "ClassLabel" } }, "splits": { "validation": { "name": "validation", "num_bytes": 901010094.0, "num_examples": 3200, "dataset_name": null }, "train": { "name": "train", "num_bytes": 901010094.0, "num_examples": 3200, "dataset_name": null } }, "download_size": 1802008414, "dataset_size": 1802020188.0, "size_in_bytes": 3604028602.0 }} ``` After I update it manually it looks like: ``` { "bstrai--deepfake-detection":{ "description":"", "citation":"", "homepage":"", "license":"", "features":{ "image":{ "decode":true, "id":null, "_type":"Image" }, "labels":{ "num_classes":2, "names":[ "Fake", "Real" ], "id":null, "_type":"ClassLabel" } }, "supervised_keys":{ "input":"image", "output":"labels" }, "task_templates":[ { "task":"image-classification", "image_column":"image", "label_column":"labels" } ], "config_name":null, "splits":{ "validation":{ "name":"validation", "num_bytes":36627822, "num_examples":123, "dataset_name":"deepfake-detection" }, "train":{ "name":"train", "num_bytes":901023694, "num_examples":3200, "dataset_name":"deepfake-detection" } }, "download_checksums":null, "download_size":937562209, "dataset_size":937651516, "size_in_bytes":1875213725 } } ``` Anything I should do to have the new infos in the `dataset_infos.json` to be taken into account? Or it is not possible yet? Thanks! ### Steps to reproduce the bug - ### Expected behavior - ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5709/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5709/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5708/comments
https://api.github.com/repos/huggingface/datasets/issues/5708/events
https://github.com/huggingface/datasets/issues/5708
1,655,023,642
I_kwDODunzps5ipaga
5,708
Dataset sizes are in MiB instead of MB in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
12
"2023-04-05T06:36:03Z"
"2023-12-21T10:20:28Z"
"2023-12-21T10:20:27Z"
MEMBER
null
null
null
As @severo reported in an internal discussion (https://github.com/huggingface/moon-landing/issues/5929): Now we show the dataset size: - from the dataset card (in the side column) - from the datasets-server (in the viewer) But, even if the size is the same, we see a mismatch because the viewer shows MB, while the info from the README generally shows MiB (even if it's written MB -> https://huggingface.co/datasets/blimp/blob/main/README.md?code=true#L1932) <img width="664" alt="Capture d’écran 2023-04-04 à 10 16 01" src="https://user-images.githubusercontent.com/1676121/229730887-0bd8fa6e-9462-46c6-bd4e-4d2c5784cabb.png"> TODO: Values to be fixed in: `Size of downloaded dataset files:`, `Size of the generated dataset:` and `Total amount of disk used:` - [x] Bulk edit on the Hub to fix this in all canonical datasets - [x] Bulk PR on the Hub to fix ancient canonical datasets that were moved to organizations
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5708/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5708/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5706/comments
https://api.github.com/repos/huggingface/datasets/issues/5706/events
https://github.com/huggingface/datasets/issues/5706
1,653,545,835
I_kwDODunzps5ijxtr
5,706
Support categorical data types for Parquet
{ "avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4", "events_url": "https://api.github.com/users/kklemon/events{/privacy}", "followers_url": "https://api.github.com/users/kklemon/followers", "following_url": "https://api.github.com/users/kklemon/following{/other_user}", "gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kklemon", "id": 1430243, "login": "kklemon", "node_id": "MDQ6VXNlcjE0MzAyNDM=", "organizations_url": "https://api.github.com/users/kklemon/orgs", "received_events_url": "https://api.github.com/users/kklemon/received_events", "repos_url": "https://api.github.com/users/kklemon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kklemon/subscriptions", "type": "User", "url": "https://api.github.com/users/kklemon" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/22622299?v=4", "events_url": "https://api.github.com/users/mhattingpete/events{/privacy}", "followers_url": "https://api.github.com/users/mhattingpete/followers", "following_url": "https://api.github.com/users/mhattingpete/following{/other_user}", "gists_url": "https://api.github.com/users/mhattingpete/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mhattingpete", "id": 22622299, "login": "mhattingpete", "node_id": "MDQ6VXNlcjIyNjIyMjk5", "organizations_url": "https://api.github.com/users/mhattingpete/orgs", "received_events_url": "https://api.github.com/users/mhattingpete/received_events", "repos_url": "https://api.github.com/users/mhattingpete/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mhattingpete/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mhattingpete/subscriptions", "type": "User", "url": "https://api.github.com/users/mhattingpete" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/22622299?v=4", "events_url": "https://api.github.com/users/mhattingpete/events{/privacy}", "followers_url": "https://api.github.com/users/mhattingpete/followers", "following_url": "https://api.github.com/users/mhattingpete/following{/other_user}", "gists_url": "https://api.github.com/users/mhattingpete/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mhattingpete", "id": 22622299, "login": "mhattingpete", "node_id": "MDQ6VXNlcjIyNjIyMjk5", "organizations_url": "https://api.github.com/users/mhattingpete/orgs", "received_events_url": "https://api.github.com/users/mhattingpete/received_events", "repos_url": "https://api.github.com/users/mhattingpete/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mhattingpete/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mhattingpete/subscriptions", "type": "User", "url": "https://api.github.com/users/mhattingpete" } ]
null
17
"2023-04-04T09:45:35Z"
"2023-09-22T16:53:37Z"
null
NONE
null
null
null
### Feature request Huggingface datasets does not seem to support categorical / dictionary data types for Parquet as of now. There seems to be a `TODO` in the code for this feature but no implementation yet. Below you can find sample code to reproduce the error that is currently thrown when attempting to read a Parquet file with categorical columns: ```python import pandas as pd import pyarrow.parquet as pq from datasets import load_dataset # Create categorical sample DataFrame df = pd.DataFrame({'type': ['foo', 'bar']}).astype('category') df.to_parquet('data.parquet') # Read back as pyarrow table table = pq.read_table('data.parquet') print(table.schema) # type: dictionary<values=string, indices=int32, ordered=0> # Load with huggingface datasets load_dataset('parquet', data_files='data.parquet') ``` Error: ``` Traceback (most recent call last): File ".venv/lib/python3.10/site-packages/datasets/builder.py", line 1875, in _prepare_split_single writer.write_table(table) File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 566, in write_table self._build_writer(inferred_schema=pa_table.schema) File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 379, in _build_writer inferred_features = Features.from_arrow_schema(inferred_schema) File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in from_arrow_schema obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in <dictcomp> obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1361, in generate_from_arrow_type raise NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table NotImplementedError ``` ### Motivation Categorical data types, as offered by Pandas and implemented with the `DictionaryType` dtype in `pyarrow` can significantly reduce dataset size and are a handy way to turn textual features into numerical representations and back. Lack of support in Huggingface datasets greatly reduces compatibility with a common Pandas / Parquet feature. ### Your contribution I could provide a PR. However, it would be nice to have an initial complexity estimate from one of the core developers first.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5706/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5706/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5705/comments
https://api.github.com/repos/huggingface/datasets/issues/5705/events
https://github.com/huggingface/datasets/issues/5705
1,653,500,383
I_kwDODunzps5ijmnf
5,705
Getting next item from IterableDataset took forever.
{ "avatar_url": "https://avatars.githubusercontent.com/u/16588434?v=4", "events_url": "https://api.github.com/users/HongtaoYang/events{/privacy}", "followers_url": "https://api.github.com/users/HongtaoYang/followers", "following_url": "https://api.github.com/users/HongtaoYang/following{/other_user}", "gists_url": "https://api.github.com/users/HongtaoYang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HongtaoYang", "id": 16588434, "login": "HongtaoYang", "node_id": "MDQ6VXNlcjE2NTg4NDM0", "organizations_url": "https://api.github.com/users/HongtaoYang/orgs", "received_events_url": "https://api.github.com/users/HongtaoYang/received_events", "repos_url": "https://api.github.com/users/HongtaoYang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HongtaoYang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HongtaoYang/subscriptions", "type": "User", "url": "https://api.github.com/users/HongtaoYang" }
[]
closed
false
null
[]
null
2
"2023-04-04T09:16:17Z"
"2023-04-05T23:35:41Z"
"2023-04-05T23:35:41Z"
NONE
null
null
null
### Describe the bug I have a large dataset, about 500GB. The format of the dataset is parquet. I then load the dataset and try to get the first item ```python def get_one_item(): dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True) dataset = dataset.filter(lambda example: example['text'].startswith('Ar')) print(next(iter(dataset))) ``` However, this function never finish. I waited ~10mins, the function was still running so I killed the process. I'm now using `line_profiler` to profile how long it would take to return one item. I'll be patient and wait for as long as it needs. I suspect the filter operation is the reason why it took so long. Can I get some possible reasons behind this? ### Steps to reproduce the bug Unfortunately without my data files, there is no way to reproduce this bug. ### Expected behavior With `IteralbeDataset`, I expect the first item to be returned instantly. ### Environment info - datasets version: 2.11.0 - python: 3.7.12
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5705/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5705/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5704/comments
https://api.github.com/repos/huggingface/datasets/issues/5704/events
https://github.com/huggingface/datasets/pull/5704
1,653,471,356
PR_kwDODunzps5NkEvJ
5,704
5537 speedup load
{ "avatar_url": "https://avatars.githubusercontent.com/u/35013374?v=4", "events_url": "https://api.github.com/users/semajyllek/events{/privacy}", "followers_url": "https://api.github.com/users/semajyllek/followers", "following_url": "https://api.github.com/users/semajyllek/following{/other_user}", "gists_url": "https://api.github.com/users/semajyllek/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/semajyllek", "id": 35013374, "login": "semajyllek", "node_id": "MDQ6VXNlcjM1MDEzMzc0", "organizations_url": "https://api.github.com/users/semajyllek/orgs", "received_events_url": "https://api.github.com/users/semajyllek/received_events", "repos_url": "https://api.github.com/users/semajyllek/repos", "site_admin": false, "starred_url": "https://api.github.com/users/semajyllek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/semajyllek/subscriptions", "type": "User", "url": "https://api.github.com/users/semajyllek" }
[]
open
false
null
[]
null
4
"2023-04-04T08:58:14Z"
"2023-04-07T16:10:55Z"
null
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5704.diff", "html_url": "https://github.com/huggingface/datasets/pull/5704", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5704.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5704" }
I reimplemented fsspec.spec.glob() in `hffilesystem.py` as `_glob`, used it in `_resolve_single_pattern_in_dataset_repository` only, and saw a 20% speedup in times to load the config, on average. That's not much when usually this step takes only 2-3 seconds for most datasets, but in this particular case, `bigcode/the-stack-dedup` , the loading time to get the config (not download the entire 6tb dataset, of course), went from ~170 secs to ~20 secs. What makes this work is this code in `_glob`: ``` if self.dir_cache is not None: allpaths = self.dir_cache else: allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs) ``` I also had to `import glob.has_magic( )` for `_glob()` (confusing, I know). I hope there is no issue with copying most of the code from `fsspec.spec.glob`, as it is a BSD 3-Clause License, and I left a comment about this in the docstring of` _glob()`, that we may want to delete. As mentioned, I evaluated the speedup across a random selection of about 1000 datasets (not all 27k+), and verified that old_config.eq(new_method_config) with the build in method, but deleted this test and related code changes on the subsequent commit. It's in the commit history if anyone wants to see it. (Note this does not include the outlier of `bigcode/the-stack-dedup` | | old_time | new _time | diff | pct_diff | | -- | -- | -- | -- | -- | | mean | 3.340 | 2.642 | 0.698 | 18.404 | | min | 2.024 | 1.976 | -0.840 | -37.634 | | max | 66.582 | 41.517 | 30.927 | 85.538 |
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5704/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5704/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5703/comments
https://api.github.com/repos/huggingface/datasets/issues/5703/events
https://github.com/huggingface/datasets/pull/5703
1,653,158,955
PR_kwDODunzps5NjCCV
5,703
[WIP][Test, Please ignore] Investigate performance impact of using multiprocessing only
{ "avatar_url": "https://avatars.githubusercontent.com/u/1535968?v=4", "events_url": "https://api.github.com/users/hvaara/events{/privacy}", "followers_url": "https://api.github.com/users/hvaara/followers", "following_url": "https://api.github.com/users/hvaara/following{/other_user}", "gists_url": "https://api.github.com/users/hvaara/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hvaara", "id": 1535968, "login": "hvaara", "node_id": "MDQ6VXNlcjE1MzU5Njg=", "organizations_url": "https://api.github.com/users/hvaara/orgs", "received_events_url": "https://api.github.com/users/hvaara/received_events", "repos_url": "https://api.github.com/users/hvaara/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hvaara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hvaara/subscriptions", "type": "User", "url": "https://api.github.com/users/hvaara" }
[]
closed
false
null
[]
null
4
"2023-04-04T04:37:49Z"
"2023-04-20T03:17:37Z"
"2023-04-20T03:17:32Z"
NONE
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/5703.diff", "html_url": "https://github.com/huggingface/datasets/pull/5703", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5703.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5703" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5703/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5703/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5702
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5702/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5702/comments
https://api.github.com/repos/huggingface/datasets/issues/5702/events
https://github.com/huggingface/datasets/issues/5702
1,653,104,720
I_kwDODunzps5iiGBQ
5,702
Is it possible or how to define a `datasets.Sequence` that could potentially be either a dict, a str, or None?
{ "avatar_url": "https://avatars.githubusercontent.com/u/10508116?v=4", "events_url": "https://api.github.com/users/gitforziio/events{/privacy}", "followers_url": "https://api.github.com/users/gitforziio/followers", "following_url": "https://api.github.com/users/gitforziio/following{/other_user}", "gists_url": "https://api.github.com/users/gitforziio/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gitforziio", "id": 10508116, "login": "gitforziio", "node_id": "MDQ6VXNlcjEwNTA4MTE2", "organizations_url": "https://api.github.com/users/gitforziio/orgs", "received_events_url": "https://api.github.com/users/gitforziio/received_events", "repos_url": "https://api.github.com/users/gitforziio/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gitforziio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gitforziio/subscriptions", "type": "User", "url": "https://api.github.com/users/gitforziio" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
4
"2023-04-04T03:20:43Z"
"2023-04-05T14:15:18Z"
"2023-04-05T14:15:17Z"
NONE
null
null
null
### Feature request Hello! Apologies if my question sounds naive: I was wondering if it’s possible, or how one would go about defining a 'datasets.Sequence' element in datasets.Features that could potentially be either a dict, a str, or None? Specifically, I’d like to define a feature for a list that contains 18 elements, each of which has been pre-defined as either a `dict or None` or `str or None` - as demonstrated in the slightly misaligned data provided below: ```json [ [ {"text":"老妇人","idxes":[0,1,2]},null,{"text":"跪","idxes":[3]},null,null,null,null,{"text":"在那坑里","idxes":[4,5,6,7]},null,null,null,null,null,null,null,null,null,null], [ {"text":"那些水","idxes":[13,14,15]},null,{"text":"舀","idxes":[11]},null,null,null,null,null,{"text":"在那坑里","idxes":[4,5,6,7]},null,{"text":"出","idxes":[12]},null,null,null,null,null,null,null], [ {"text":"水","idxes":[38]}, null, {"text":"舀","idxes":[40]}, "假", // note this is just a standalone string null,null,null,{"text":"坑里","idxes":[35,36]},null,null,null,null,null,null,null,null,null,null]] ``` ### Motivation I'm currently working with a dataset of the following structure and I couldn't find a solution in the [documentation](https://huggingface.co/docs/datasets/v2.11.0/en/package_reference/main_classes#datasets.Features). ```json {"qid":"3-train-1058","context":"桑桑害怕了。从玉米地里走到田埂上,他遥望着他家那幢草房子里的灯光,知道母亲没有让他回家的意思,很伤感,有点想哭。但没哭,转身朝阿恕家走去。","corefs":[[{"text":"桑桑","idxes":[0,1]},{"text":"他","idxes":[17]}]],"non_corefs":[],"outputs":[[{"text":"他","idxes":[17]},null,{"text":"走","idxes":[11]},null,null,null,null,null,{"text":"从玉米地里","idxes":[6,7,8,9,10]},{"text":"到田埂上","idxes":[12,13,14,15]},null,null,null,null,null,null,null,null],[{"text":"他","idxes":[17]},null,{"text":"走","idxes":[66]},null,null,null,null,null,null,null,{"text":"转身朝阿恕家去","idxes":[60,61,62,63,64,65,67]},null,null,null,null,null,null,null],[{"text":"灯光","idxes":[30,31]},null,null,null,null,null,null,{"text":"草房子里","idxes":[25,26,27,28]},null,null,null,null,null,null,null,null,null,null],[{"text":"他","idxes":[17]},{"text":"他家那幢草房子","idxes":[21,22,23,24,25,26,27]},null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"远"],[{"text":"他","idxes":[17]},{"text":"阿恕家","idxes":[63,64,65]},null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"变近"]]} ``` ### Your contribution I'm going to provide the dataset at https://huggingface.co/datasets/2030NLP/SpaCE2022 .
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5702/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5702/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5701/comments
https://api.github.com/repos/huggingface/datasets/issues/5701/events
https://github.com/huggingface/datasets/pull/5701
1,652,931,399
PR_kwDODunzps5NiSCy
5,701
Add Dataset.from_spark
{ "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maddiedawson", "id": 106995444, "login": "maddiedawson", "node_id": "U_kgDOBmCe9A", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "repos_url": "https://api.github.com/users/maddiedawson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "type": "User", "url": "https://api.github.com/users/maddiedawson" }
[]
closed
false
null
[]
null
21
"2023-04-03T23:51:29Z"
"2023-06-16T16:39:32Z"
"2023-04-26T15:43:39Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5701.diff", "html_url": "https://github.com/huggingface/datasets/pull/5701", "merged_at": "2023-04-26T15:43:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/5701.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5701" }
Adds static method Dataset.from_spark to create datasets from Spark DataFrames. This approach alleviates users of the need to materialize their dataframe---a common use case is that the user loads their dataset into a dataframe, uses Spark to apply some transformation to some of the columns, and then wants to train on the dataset. Related issue: https://github.com/huggingface/datasets/issues/5678
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 4, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/5701/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5701/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5700/comments
https://api.github.com/repos/huggingface/datasets/issues/5700/events
https://github.com/huggingface/datasets/pull/5700
1,652,527,530
PR_kwDODunzps5Ng6g_
5,700
fix: fix wrong modification of the 'cache_file_name' -related paramet…
{ "avatar_url": "https://avatars.githubusercontent.com/u/47528215?v=4", "events_url": "https://api.github.com/users/FrancoisNoyez/events{/privacy}", "followers_url": "https://api.github.com/users/FrancoisNoyez/followers", "following_url": "https://api.github.com/users/FrancoisNoyez/following{/other_user}", "gists_url": "https://api.github.com/users/FrancoisNoyez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FrancoisNoyez", "id": 47528215, "login": "FrancoisNoyez", "node_id": "MDQ6VXNlcjQ3NTI4MjE1", "organizations_url": "https://api.github.com/users/FrancoisNoyez/orgs", "received_events_url": "https://api.github.com/users/FrancoisNoyez/received_events", "repos_url": "https://api.github.com/users/FrancoisNoyez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FrancoisNoyez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FrancoisNoyez/subscriptions", "type": "User", "url": "https://api.github.com/users/FrancoisNoyez" }
[]
open
false
null
[]
null
7
"2023-04-03T18:05:26Z"
"2023-04-06T17:17:27Z"
null
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5700.diff", "html_url": "https://github.com/huggingface/datasets/pull/5700", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5700.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5700" }
…ers values in 'train_test_split' + fix bad interaction between 'keep_in_memory' and 'cache_file_name' -related parameters (#5699)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5700/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5700/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5699/comments
https://api.github.com/repos/huggingface/datasets/issues/5699/events
https://github.com/huggingface/datasets/issues/5699
1,652,437,419
I_kwDODunzps5ifjGr
5,699
Issue when wanting to split in memory a cached dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/47528215?v=4", "events_url": "https://api.github.com/users/FrancoisNoyez/events{/privacy}", "followers_url": "https://api.github.com/users/FrancoisNoyez/followers", "following_url": "https://api.github.com/users/FrancoisNoyez/following{/other_user}", "gists_url": "https://api.github.com/users/FrancoisNoyez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FrancoisNoyez", "id": 47528215, "login": "FrancoisNoyez", "node_id": "MDQ6VXNlcjQ3NTI4MjE1", "organizations_url": "https://api.github.com/users/FrancoisNoyez/orgs", "received_events_url": "https://api.github.com/users/FrancoisNoyez/received_events", "repos_url": "https://api.github.com/users/FrancoisNoyez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FrancoisNoyez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FrancoisNoyez/subscriptions", "type": "User", "url": "https://api.github.com/users/FrancoisNoyez" }
[]
open
false
null
[]
null
1
"2023-04-03T17:00:07Z"
"2023-04-04T16:52:42Z"
null
NONE
null
null
null
### Describe the bug **In the 'train_test_split' method of the Dataset class** (defined datasets/arrow_dataset.py), **if 'self.cache_files' is not empty**, then, **regarding the input parameters 'train_indices_cache_file_name' and 'test_indices_cache_file_name', if they are None**, we modify them to make them not None, to see if we can just provide back / work from cached data. But if we can't provide cached data, we move on with the call to the method, except those two values are not None anymore, which will conflict with the use of the 'keep_in_memory' parameter down the line. Indeed, at some point we end up calling the 'select' method, **and if 'keep_in_memory' is True**, since the value of this method's parameter 'indices_cache_file_name' is now not None anymore, **an exception is raised, whose message is "Please use either 'keep_in_memory' or 'indices_cache_file_name' but not both.".** Because of that, it's impossible to perform a train / test split of a cached dataset while requesting that the result not be cached. Which is inconvenient when one is just performing experiments, with no intention of caching the result. Aside from this being inconvenient, **the code which lead up to that situation seems simply wrong** to me: the input variable should not be modified so as to change the user's intention just to perform a test, if that test can fail and respecting the user's intention is necessary to proceed in that case. To fix this, I suggest to use other variables / other variable names, in order to host the value(s) needed to perform the test, so as not to change the originally input values needed by the rest of the method's code. Also, **I don't see why an exception should be raised when the 'select' method is called with both 'keep_in_memory'=True and 'indices_cache_file_name'!=None**: should the use of 'keep_in_memory' not prevail anyway, specifying that the user does not want to perform caching, and so making irrelevant the value of 'indices_cache_file_name'? This is indeed what happens when we look further in the code, in the '\_select_with_indices_mapping' method: when 'keep_in_memory' is True, then the value of indices_cache_file_name does not matter, the data will be written to a stream buffer anyway. Hence I suggest to remove the raising of exception in those circumstances. Notably, to remove the raising of it in the 'select', '\_select_with_indices_mapping', 'shuffle' and 'map' methods. ### Steps to reproduce the bug ```python import datasets def generate_examples(): for i in range(10): yield {"id": i} dataset_ = datasets.Dataset.from_generator( generate_examples, keep_in_memory=False, ) dataset_.train_test_split( test_size=3, shuffle=False, keep_in_memory=True, train_indices_cache_file_name=None, test_indices_cache_file_name=None, ) ``` ### Expected behavior The result of the above code should be a DatasetDict instance. Instead, we get the following exception stack: ```python --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[3], line 1 ----> 1 dataset_.train_test_split( 2 test_size=3, 3 shuffle=False, 4 keep_in_memory=True, 5 train_indices_cache_file_name=None, 6 test_indices_cache_file_name=None, 7 ) File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs) 521 self_format = { 522 "type": self._format_type, 523 "format_kwargs": self._format_kwargs, 524 "columns": self._format_columns, 525 "output_all_columns": self._output_all_columns, 526 } 527 # apply actual function --> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 530 # re-apply format to the output File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 507 validate_fingerprint(kwargs[fingerprint_name]) 509 # Call actual function --> 511 out = func(dataset, *args, **kwargs) 513 # Update fingerprint of in-place transforms + update in-place history of transforms 515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:4428, in Dataset.train_test_split(self, test_size, train_size, shuffle, stratify_by_column, seed, generator, keep_in_memory, load_from_cache_file, train_indices_cache_file_name, test_indices_cache_file_name, writer_batch_size, train_new_fingerprint, test_new_fingerprint) 4425 test_indices = permutation[:n_test] 4426 train_indices = permutation[n_test : (n_test + n_train)] -> 4428 train_split = self.select( 4429 indices=train_indices, 4430 keep_in_memory=keep_in_memory, 4431 indices_cache_file_name=train_indices_cache_file_name, 4432 writer_batch_size=writer_batch_size, 4433 new_fingerprint=train_new_fingerprint, 4434 ) 4435 test_split = self.select( 4436 indices=test_indices, 4437 keep_in_memory=keep_in_memory, (...) 4440 new_fingerprint=test_new_fingerprint, 4441 ) 4443 return DatasetDict({"train": train_split, "test": test_split}) File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs) 521 self_format = { 522 "type": self._format_type, 523 "format_kwargs": self._format_kwargs, 524 "columns": self._format_columns, 525 "output_all_columns": self._output_all_columns, 526 } 527 # apply actual function --> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 530 # re-apply format to the output File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 507 validate_fingerprint(kwargs[fingerprint_name]) 509 # Call actual function --> 511 out = func(dataset, *args, **kwargs) 513 # Update fingerprint of in-place transforms + update in-place history of transforms 515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:3679, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint) 3645 """Create a new dataset with rows selected following the list/array of indices. 3646 3647 Args: (...) 3676 ``` 3677 """ 3678 if keep_in_memory and indices_cache_file_name is not None: -> 3679 raise ValueError("Please use either `keep_in_memory` or `indices_cache_file_name` but not both.") 3681 if len(self.list_indexes()) > 0: 3682 raise DatasetTransformationNotAllowedError( 3683 "Using `.select` on a dataset with attached indexes is not allowed. You can first run `.drop_index() to remove your index and then re-add it." 3684 ) ValueError: Please use either `keep_in_memory` or `indices_cache_file_name` but not both. ``` ### Environment info - `datasets` version: 2.11.1.dev0 - Platform: Linux-5.4.236-1-MANJARO-x86_64-with-glibc2.2.5 - Python version: 3.8.12 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0 *** *** EDIT: Now with a pull request to fix this [here](https://github.com/huggingface/datasets/pull/5700)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5699/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5699/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5698
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5698/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5698/comments
https://api.github.com/repos/huggingface/datasets/issues/5698/events
https://github.com/huggingface/datasets/issues/5698
1,652,183,611
I_kwDODunzps5ielI7
5,698
Add Qdrant as another search index
{ "avatar_url": "https://avatars.githubusercontent.com/u/2649301?v=4", "events_url": "https://api.github.com/users/kacperlukawski/events{/privacy}", "followers_url": "https://api.github.com/users/kacperlukawski/followers", "following_url": "https://api.github.com/users/kacperlukawski/following{/other_user}", "gists_url": "https://api.github.com/users/kacperlukawski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kacperlukawski", "id": 2649301, "login": "kacperlukawski", "node_id": "MDQ6VXNlcjI2NDkzMDE=", "organizations_url": "https://api.github.com/users/kacperlukawski/orgs", "received_events_url": "https://api.github.com/users/kacperlukawski/received_events", "repos_url": "https://api.github.com/users/kacperlukawski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kacperlukawski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kacperlukawski/subscriptions", "type": "User", "url": "https://api.github.com/users/kacperlukawski" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
1
"2023-04-03T14:25:19Z"
"2023-04-11T10:28:40Z"
null
CONTRIBUTOR
null
null
null
### Feature request I'd suggest adding Qdrant (https://qdrant.tech) as another search index available, so users can directly build an index from a dataset. Currently, FAISS and ElasticSearch are only supported: https://huggingface.co/docs/datasets/faiss_es ### Motivation ElasticSearch is a keyword-based search system, while FAISS is a vector search library. Vector database, such as Qdrant, is a different tool based on similarity (like FAISS) but is not limited to a single machine. It makes the vector database well-suited for bigger datasets and collaboration if several people want to access a particular dataset. ### Your contribution I can provide a PR implementing that functionality on my own.
{ "+1": 6, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/5698/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5698/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5697
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5697/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5697/comments
https://api.github.com/repos/huggingface/datasets/issues/5697/events
https://github.com/huggingface/datasets/pull/5697
1,651,812,614
PR_kwDODunzps5NefxZ
5,697
Raise an error on missing distributed seed
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
4
"2023-04-03T10:44:58Z"
"2023-04-04T15:05:24Z"
"2023-04-04T14:58:16Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5697.diff", "html_url": "https://github.com/huggingface/datasets/pull/5697", "merged_at": "2023-04-04T14:58:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/5697.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5697" }
close https://github.com/huggingface/datasets/issues/5696
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5697/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5697/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5696/comments
https://api.github.com/repos/huggingface/datasets/issues/5696/events
https://github.com/huggingface/datasets/issues/5696
1,651,707,008
I_kwDODunzps5icwyA
5,696
Shuffle a sharded iterable dataset without seed can lead to duplicate data
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
0
"2023-04-03T09:40:03Z"
"2023-04-04T14:58:18Z"
"2023-04-04T14:58:18Z"
MEMBER
null
null
null
As reported in https://github.com/huggingface/datasets/issues/5360 If `seed=None` in `.shuffle()`, shuffled datasets don't use the same shuffling seed across nodes. Because of that, the lists of shards is not shuffled the same way across nodes, and therefore some shards may be assigned to multiple nodes instead of exactly one. This can happen only when you have a number of shards that is a factor of the number of nodes. The current workaround is to always set a `seed` in `.shuffle()`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5696/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5696/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5695
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5695/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5695/comments
https://api.github.com/repos/huggingface/datasets/issues/5695/events
https://github.com/huggingface/datasets/issues/5695
1,650,974,156
I_kwDODunzps5iZ93M
5,695
Loading big dataset raises pyarrow.lib.ArrowNotImplementedError
{ "avatar_url": "https://avatars.githubusercontent.com/u/32778667?v=4", "events_url": "https://api.github.com/users/amariucaitheodor/events{/privacy}", "followers_url": "https://api.github.com/users/amariucaitheodor/followers", "following_url": "https://api.github.com/users/amariucaitheodor/following{/other_user}", "gists_url": "https://api.github.com/users/amariucaitheodor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amariucaitheodor", "id": 32778667, "login": "amariucaitheodor", "node_id": "MDQ6VXNlcjMyNzc4NjY3", "organizations_url": "https://api.github.com/users/amariucaitheodor/orgs", "received_events_url": "https://api.github.com/users/amariucaitheodor/received_events", "repos_url": "https://api.github.com/users/amariucaitheodor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amariucaitheodor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amariucaitheodor/subscriptions", "type": "User", "url": "https://api.github.com/users/amariucaitheodor" }
[]
closed
false
null
[]
null
5
"2023-04-02T14:42:44Z"
"2023-04-11T09:17:54Z"
"2023-04-10T08:04:04Z"
NONE
null
null
null
### Describe the bug Calling `datasets.load_dataset` to load the (publicly available) dataset `theodor1289/wit` fails with `pyarrow.lib.ArrowNotImplementedError`. ### Steps to reproduce the bug Steps to reproduce this behavior: 1. `!pip install datasets` 2. `!huggingface-cli login` 3. This step will throw the error (it might take a while as the dataset has ~170GB): ```python from datasets import load_dataset dataset = load_dataset("theodor1289/wit", "train", use_auth_token=True) ``` Stack trace: ``` (torch-multimodal) bash-4.2$ python test.py Downloading and preparing dataset None/None to /cluster/work/cotterell/tamariucai/HuggingfaceDatasets/theodor1289___parquet/theodor1289--wit-7a3e984414a86a0f/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec... Downloading data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 491.68it/s] Extracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 16.93it/s] Traceback (most recent call last): File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_single for _, table in generator: File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables for batch_idx, record_batch in enumerate( File "pyarrow/_parquet.pyx", line 1323, in iter_batches File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/cluster/work/cotterell/tamariucai/multimodal-mirror/examples/test.py", line 2, in <module> dataset = load_dataset("theodor1289/wit", "train", use_auth_token=True) File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1893, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior The dataset is loaded in variable `dataset`. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.4 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5695/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5695/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5694
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5694/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5694/comments
https://api.github.com/repos/huggingface/datasets/issues/5694/events
https://github.com/huggingface/datasets/issues/5694
1,650,467,793
I_kwDODunzps5iYCPR
5,694
Dataset configuration
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
open
false
null
[]
null
3
"2023-04-01T13:08:05Z"
"2023-04-04T14:54:37Z"
null
MEMBER
null
null
null
Following discussions from https://github.com/huggingface/datasets/pull/5331 We could have something like `config.json` to define the configuration of a dataset. ```json { "data_dir": "data" "data_files": { "train": "train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*" } } ``` we could also support a list for several configs with a 'config_name' field. The alternative was to use YAML in the README.md. I think it could also support a `dataset_type` field to specify which dataset builder class to use, and the other parameters would be the builder's parameters. Some parameters exist for all builders like `data_files` and `data_dir`, but some parameters are builder specific like `sep` for csv. This format would be used in `push_to_hub` to be able to push multiple configs. cc @huggingface/datasets EDIT: actually we're going for the YAML approach in README.md
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/5694/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5694/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5693
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5693/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5693/comments
https://api.github.com/repos/huggingface/datasets/issues/5693/events
https://github.com/huggingface/datasets/pull/5693
1,649,934,749
PR_kwDODunzps5NYdPS
5,693
[docs] Split pattern search order
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[]
closed
false
null
[]
null
2
"2023-03-31T19:51:38Z"
"2023-04-03T18:43:30Z"
"2023-04-03T18:29:58Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5693.diff", "html_url": "https://github.com/huggingface/datasets/pull/5693", "merged_at": "2023-04-03T18:29:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/5693.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5693" }
This PR addresses #5681 about the order of split patterns 🤗 Datasets searches for when generating dataset splits.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5693/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5693/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5692
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5692/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5692/comments
https://api.github.com/repos/huggingface/datasets/issues/5692/events
https://github.com/huggingface/datasets/issues/5692
1,649,818,644
I_kwDODunzps5iVjwU
5,692
pyarrow.lib.ArrowInvalid: Unable to merge: Field <field> has incompatible types
{ "avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4", "events_url": "https://api.github.com/users/cyanic-selkie/events{/privacy}", "followers_url": "https://api.github.com/users/cyanic-selkie/followers", "following_url": "https://api.github.com/users/cyanic-selkie/following{/other_user}", "gists_url": "https://api.github.com/users/cyanic-selkie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cyanic-selkie", "id": 32219669, "login": "cyanic-selkie", "node_id": "MDQ6VXNlcjMyMjE5NjY5", "organizations_url": "https://api.github.com/users/cyanic-selkie/orgs", "received_events_url": "https://api.github.com/users/cyanic-selkie/received_events", "repos_url": "https://api.github.com/users/cyanic-selkie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cyanic-selkie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyanic-selkie/subscriptions", "type": "User", "url": "https://api.github.com/users/cyanic-selkie" }
[]
open
false
null
[]
null
6
"2023-03-31T18:19:40Z"
"2024-01-14T07:24:21Z"
null
NONE
null
null
null
### Describe the bug When loading the dataset [wikianc-en](https://huggingface.co/datasets/cyanic-selkie/wikianc-en) which I created using [this](https://github.com/cyanic-selkie/wikianc) code, I get the following error: ``` Traceback (most recent call last): File "/home/sven/code/rector/answer-detection/train.py", line 106, in <module> (dataset, weights) = get_dataset(args.dataset, tokenizer, labels, args.padding) File "/home/sven/code/rector/answer-detection/dataset.py", line 106, in get_dataset dataset = load_dataset("cyanic-selkie/wikianc-en") File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/load.py", line 1794, in load_dataset ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1106, in as_dataset datasets = map_nested( File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 443, in map_nested mapped = [ File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 444, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested return function(data_struct) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1136, in _build_single_dataset ds = self._as_dataset( File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1207, in _as_dataset dataset_kwargs = ArrowReader(cache_dir, self.info).read( File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 239, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 260, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 203, in _read_files pa_table = concat_tables(pa_tables) if len(pa_tables) != 1 else pa_tables[0] File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1808, in concat_tables return ConcatenationTable.from_tables(tables, axis=axis) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1514, in from_tables return cls.from_blocks(blocks) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1427, in from_blocks table = cls._concat_blocks(blocks, axis=0) File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1373, in _concat_blocks return pa.concat_tables(pa_tables, promote=True) File "pyarrow/table.pxi", line 5224, in pyarrow.lib.concat_tables File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Unable to merge: Field paragraph_anchors has incompatible types: list<: struct<start: uint32 not null, end: uint32 not null, qid: uint32, pageid: uint32, title: string not null> not null> vs list<item: struct<start: uint32, end: uint32, qid: uint32, pageid: uint32, title: string>> ``` This only happens when I load the `train` split, indicating that the size of the dataset is the deciding factor. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("cyanic-selkie/wikianc-en", split="train") ``` ### Expected behavior The dataset should load normally without any errors. ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-6.2.8-arch1-1-x86_64-with-glibc2.37 - Python version: 3.10.10 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5692/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5692/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5691
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5691/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5691/comments
https://api.github.com/repos/huggingface/datasets/issues/5691/events
https://github.com/huggingface/datasets/pull/5691
1,649,737,526
PR_kwDODunzps5NX08d
5,691
[docs] Compress data files
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[]
closed
false
null
[]
null
3
"2023-03-31T17:17:26Z"
"2023-04-19T13:37:32Z"
"2023-04-19T07:25:58Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5691.diff", "html_url": "https://github.com/huggingface/datasets/pull/5691", "merged_at": "2023-04-19T07:25:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/5691.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5691" }
This PR addresses the comments in #5687 about compressing text file extensions before uploading to the Hub. Also clarified what "too large" means based on the GitLFS [docs](https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-git-large-file-storage).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5691/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5691/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5689
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5689/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5689/comments
https://api.github.com/repos/huggingface/datasets/issues/5689/events
https://github.com/huggingface/datasets/pull/5689
1,648,956,349
PR_kwDODunzps5NVMuI
5,689
Support streaming Beam datasets from HF GCS preprocessed data
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
4
"2023-03-31T08:44:24Z"
"2023-04-12T05:57:55Z"
"2023-04-12T05:50:31Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5689.diff", "html_url": "https://github.com/huggingface/datasets/pull/5689", "merged_at": "2023-04-12T05:50:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/5689.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5689" }
This PR implements streaming Apache Beam datasets that are already preprocessed by us and stored in the HF Google Cloud Storage: - natural_questions - wiki40b - wikipedia This is done by streaming from the prepared Arrow files in HF Google Cloud Storage. This will fix their corresponding dataset viewers. Related to: - https://github.com/huggingface/datasets-server/pull/988#discussion_r1150767138 Related to: - https://huggingface.co/datasets/natural_questions/discussions/4 - https://huggingface.co/datasets/wiki40b/discussions/2 - https://huggingface.co/datasets/wikipedia/discussions/9 CC: @severo
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5689/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5689/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5690/comments
https://api.github.com/repos/huggingface/datasets/issues/5690/events
https://github.com/huggingface/datasets/issues/5690
1,649,289,883
I_kwDODunzps5iTiqb
5,690
raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api
{ "avatar_url": "https://avatars.githubusercontent.com/u/55964850?v=4", "events_url": "https://api.github.com/users/wccccp/events{/privacy}", "followers_url": "https://api.github.com/users/wccccp/followers", "following_url": "https://api.github.com/users/wccccp/following{/other_user}", "gists_url": "https://api.github.com/users/wccccp/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wccccp", "id": 55964850, "login": "wccccp", "node_id": "MDQ6VXNlcjU1OTY0ODUw", "organizations_url": "https://api.github.com/users/wccccp/orgs", "received_events_url": "https://api.github.com/users/wccccp/received_events", "repos_url": "https://api.github.com/users/wccccp/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wccccp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wccccp/subscriptions", "type": "User", "url": "https://api.github.com/users/wccccp" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
5
"2023-03-31T08:22:22Z"
"2023-07-21T14:21:57Z"
"2023-07-21T14:21:57Z"
NONE
null
null
null
### Describe the bug rta.sh Traceback (most recent call last): File "run.py", line 7, in <module> import datasets File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module> from .data_files import DataFilesDict, _sanitize_patterns File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module> dataset_info: huggingface_hub.hf_api.DatasetInfo, File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/huggingface_hub/__init__.py", line 290, in __getattr__ raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api ### Reproduction _No response_ ### Logs ```shell Traceback (most recent call last): File "run.py", line 7, in <module> import datasets File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module> from .data_files import DataFilesDict, _sanitize_patterns File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module> dataset_info: huggingface_hub.hf_api.DatasetInfo, File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/huggingface_hub/__init__.py", line 290, in __getattr__ raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api ``` ### System info ```shell - huggingface_hub version: 0.13.2 - Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: /home/appuser/.cache/huggingface/token - Has saved token ?: False - Configured git credential helpers: - FastAI: N/A - Tensorflow: N/A - Torch: 1.7.1 - Jinja2: N/A - Graphviz: N/A - Pydot: N/A - Pillow: 9.3.0 - hf_transfer: N/A - ENDPOINT: https://huggingface.co - HUGGINGFACE_HUB_CACHE: /home/appuser/.cache/huggingface/hub - HUGGINGFACE_ASSETS_CACHE: /home/appuser/.cache/huggingface/assets - HF_TOKEN_PATH: /home/appuser/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5690/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5690/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5688/comments
https://api.github.com/repos/huggingface/datasets/issues/5688/events
https://github.com/huggingface/datasets/issues/5688
1,648,463,504
I_kwDODunzps5iQY6Q
5,688
Wikipedia download_and_prepare for GCS
{ "avatar_url": "https://avatars.githubusercontent.com/u/25522531?v=4", "events_url": "https://api.github.com/users/adrianfagerland/events{/privacy}", "followers_url": "https://api.github.com/users/adrianfagerland/followers", "following_url": "https://api.github.com/users/adrianfagerland/following{/other_user}", "gists_url": "https://api.github.com/users/adrianfagerland/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/adrianfagerland", "id": 25522531, "login": "adrianfagerland", "node_id": "MDQ6VXNlcjI1NTIyNTMx", "organizations_url": "https://api.github.com/users/adrianfagerland/orgs", "received_events_url": "https://api.github.com/users/adrianfagerland/received_events", "repos_url": "https://api.github.com/users/adrianfagerland/repos", "site_admin": false, "starred_url": "https://api.github.com/users/adrianfagerland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adrianfagerland/subscriptions", "type": "User", "url": "https://api.github.com/users/adrianfagerland" }
[]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
2
"2023-03-30T23:43:22Z"
"2023-03-31T13:31:32Z"
null
NONE
null
null
null
### Describe the bug I am unable to download the wikipedia dataset onto GCS. When I run the script provided the memory firstly gets eaten up, then it crashes. I tried running this on a VM with 128GB RAM and all I got was a two empty files: _data_builder.lock_, _data.incomplete/beam-temp-wikipedia-train-1ab2039acf3611ed87a9893475de0093_ I have troubleshot this for two straight days now, but I am just unable to get the dataset into storage. ### Steps to reproduce the bug Run this and insert a path: ``` import datasets builder = datasets.load_dataset_builder( "wikipedia", language="en", date="20230320", beam_runner="DirectRunner") builder.download_and_prepare({path}, file_format="parquet") ``` This is where the problem of it eating RAM occurs. I have also tried several versions of this, based on the docs: ``` import gcsfs import datasets storage_options = {"project": "tdt4310", "token": "cloud"} fs = gcsfs.GCSFileSystem(**storage_options) output_dir = "gcs://wikipediadata/" builder = datasets.load_dataset_builder( "wikipedia", date="20230320", language="en", beam_runner="DirectRunner") builder.download_and_prepare( output_dir, storage_options=storage_options, file_format="parquet") ``` The error message that is received here is: > ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path specified: gcs://wikipediadata/wikipedia-train [while running 'train/Save to parquet/Write/WriteImpl/InitializeWrite'] I have ran `pip install apache-beam[gcp]` ### Expected behavior The wikipedia data loaded into GCS Everything worked when testing with a smaller demo dataset found somewhere in the docs ### Environment info Newest published version of datasets. Python 3.9. Also tested with Python 3.7. 128GB RAM Google Cloud VM instance.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5688/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5688/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5687/comments
https://api.github.com/repos/huggingface/datasets/issues/5687/events
https://github.com/huggingface/datasets/issues/5687
1,647,009,018
I_kwDODunzps5iK1z6
5,687
Document to compress data files before uploading
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
3
"2023-03-30T06:41:07Z"
"2023-04-19T07:25:59Z"
"2023-04-19T07:25:59Z"
MEMBER
null
null
null
In our docs to [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset), we tell users to upload directly their data files, like CSV, JSON, JSON-Lines, text,... However, these extensions are not tracked by Git LFS by default, as they are not in the `.giattributes` file. Therefore, if they are too large, Git will fail to commit/upload them. I think for those file extensions (.csv, .json, .jsonl, .txt), we should better recommend to **compress** their data files (using ZIP for example) before uploading them to the Hub. - Compressed files are tracked by Git LFS in our default `.gitattributes` file What do you think? CC: @stevhliu See related issue: - https://huggingface.co/datasets/tcor0005/langchain-docs-400-chunksize/discussions/1
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5687/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5687/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5686/comments
https://api.github.com/repos/huggingface/datasets/issues/5686/events
https://github.com/huggingface/datasets/pull/5686
1,646,308,228
PR_kwDODunzps5NMXdu
5,686
set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
3
"2023-03-29T18:24:13Z"
"2023-03-29T18:33:49Z"
"2023-03-29T18:24:22Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5686.diff", "html_url": "https://github.com/huggingface/datasets/pull/5686", "merged_at": "2023-03-29T18:24:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/5686.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5686" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5686/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5686/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5685
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5685/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5685/comments
https://api.github.com/repos/huggingface/datasets/issues/5685/events
https://github.com/huggingface/datasets/issues/5685
1,646,048,667
I_kwDODunzps5iHLWb
5,685
Broken Image render on the hub website
{ "avatar_url": "https://avatars.githubusercontent.com/u/15908060?v=4", "events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/events{/privacy}", "followers_url": "https://api.github.com/users/FrancescoSaverioZuppichini/followers", "following_url": "https://api.github.com/users/FrancescoSaverioZuppichini/following{/other_user}", "gists_url": "https://api.github.com/users/FrancescoSaverioZuppichini/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FrancescoSaverioZuppichini", "id": 15908060, "login": "FrancescoSaverioZuppichini", "node_id": "MDQ6VXNlcjE1OTA4MDYw", "organizations_url": "https://api.github.com/users/FrancescoSaverioZuppichini/orgs", "received_events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/received_events", "repos_url": "https://api.github.com/users/FrancescoSaverioZuppichini/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FrancescoSaverioZuppichini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FrancescoSaverioZuppichini/subscriptions", "type": "User", "url": "https://api.github.com/users/FrancescoSaverioZuppichini" }
[]
closed
false
null
[]
null
3
"2023-03-29T15:25:30Z"
"2023-03-30T07:54:25Z"
"2023-03-30T07:54:25Z"
NONE
null
null
null
### Describe the bug Hi :wave: Not sure if this is the right place to ask, but I am trying to load a huge amount of datasets on the hub (:partying_face: ) but I am facing a little issue with the `image` type ![image](https://user-images.githubusercontent.com/15908060/228587875-427a37f1-3a31-4e17-8bbe-0f759003910d.png) See this [dataset](https://huggingface.co/datasets/Francesco/cell-towers), basically for some reason the first image has numerical bytes inside, not sure if that is okay, but the image render feature **doesn't work** So the dataset is stored in the following way ```python builder.download_and_prepare(output_dir=str(output_dir)) ds = builder.as_dataset(split="train") # [NOTE] no idea how to push it from the builder folder ds.push_to_hub(repo_id=repo_id) builder.as_dataset(split="validation").push_to_hub(repo_id=repo_id) ds = builder.as_dataset(split="test") ds.push_to_hub(repo_id=repo_id) ``` The build is this class ```python class COCOLikeDatasetBuilder(datasets.GeneratorBasedBuilder): VERSION = datasets.Version("1.0.0") def _info(self): features = datasets.Features( { "image_id": datasets.Value("int64"), "image": datasets.Image(), "width": datasets.Value("int32"), "height": datasets.Value("int32"), "objects": datasets.Sequence( { "id": datasets.Value("int64"), "area": datasets.Value("int64"), "bbox": datasets.Sequence( datasets.Value("float32"), length=4 ), "category": datasets.ClassLabel(names=categories), } ), } ) return datasets.DatasetInfo( description=description, features=features, homepage=homepage, license=license, citation=citation, ) def _split_generators(self, dl_manager): archive = dl_manager.download(url) return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={ "annotation_file_path": "train/_annotations.coco.json", "files": dl_manager.iter_archive(archive), }, ), datasets.SplitGenerator( name=datasets.Split.VALIDATION, gen_kwargs={ "annotation_file_path": "test/_annotations.coco.json", "files": dl_manager.iter_archive(archive), }, ), datasets.SplitGenerator( name=datasets.Split.TEST, gen_kwargs={ "annotation_file_path": "valid/_annotations.coco.json", "files": dl_manager.iter_archive(archive), }, ), ] def _generate_examples(self, annotation_file_path, files): def process_annot(annot, category_id_to_category): return { "id": annot["id"], "area": annot["area"], "bbox": annot["bbox"], "category": category_id_to_category[annot["category_id"]], } image_id_to_image = {} idx = 0 # This loop relies on the ordering of the files in the archive: # Annotation files come first, then the images. for path, f in files: file_name = os.path.basename(path) if annotation_file_path in path: annotations = json.load(f) category_id_to_category = { category["id"]: category["name"] for category in annotations["categories"] } print(category_id_to_category) image_id_to_annotations = collections.defaultdict(list) for annot in annotations["annotations"]: image_id_to_annotations[annot["image_id"]].append(annot) image_id_to_image = { annot["file_name"]: annot for annot in annotations["images"] } elif file_name in image_id_to_image: image = image_id_to_image[file_name] objects = [ process_annot(annot, category_id_to_category) for annot in image_id_to_annotations[image["id"]] ] print(file_name) yield idx, { "image_id": image["id"], "image": {"path": path, "bytes": f.read()}, "width": image["width"], "height": image["height"], "objects": objects, } idx += 1 ``` Basically, I want to add to the hub every dataset I come across on coco format Thanks Fra ### Steps to reproduce the bug In this case, you can just navigate on the [dataset](https://huggingface.co/datasets/Francesco/cell-towers) ### Expected behavior I was expecting the image rendering feature to work ### Environment info Not a lot to share, I am using `datasets` from a fresh venv
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5685/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5685/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5684
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5684/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5684/comments
https://api.github.com/repos/huggingface/datasets/issues/5684/events
https://github.com/huggingface/datasets/pull/5684
1,646,013,226
PR_kwDODunzps5NLXWm
5,684
Release: 2.11.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
5
"2023-03-29T15:06:07Z"
"2023-03-29T18:30:34Z"
"2023-03-29T18:15:54Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5684.diff", "html_url": "https://github.com/huggingface/datasets/pull/5684", "merged_at": "2023-03-29T18:15:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/5684.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5684" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5684/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5684/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5683/comments
https://api.github.com/repos/huggingface/datasets/issues/5683/events
https://github.com/huggingface/datasets/pull/5683
1,646,001,197
PR_kwDODunzps5NLUq1
5,683
Fix verification_mode when ignore_verifications is passed
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
2
"2023-03-29T15:00:50Z"
"2023-03-29T17:36:06Z"
"2023-03-29T17:28:57Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5683.diff", "html_url": "https://github.com/huggingface/datasets/pull/5683", "merged_at": "2023-03-29T17:28:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/5683.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5683" }
This PR fixes the values assigned to `verification_mode` when passing `ignore_verifications` to `load_dataset`. Related to: - #5303 Fix #5682.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5683/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5683/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5682
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5682/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5682/comments
https://api.github.com/repos/huggingface/datasets/issues/5682/events
https://github.com/huggingface/datasets/issues/5682
1,646,000,571
I_kwDODunzps5iG_m7
5,682
ValueError when passing ignore_verifications
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
0
"2023-03-29T15:00:30Z"
"2023-03-29T17:28:58Z"
"2023-03-29T17:28:58Z"
MEMBER
null
null
null
When passing `ignore_verifications=True` to `load_dataset`, we get a ValueError: ``` ValueError: 'none' is not a valid VerificationMode ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5682/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5682/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5681/comments
https://api.github.com/repos/huggingface/datasets/issues/5681/events
https://github.com/huggingface/datasets/issues/5681
1,645,630,784
I_kwDODunzps5iFlVA
5,681
Add information about patterns search order to the doc about structuring repo
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" } ]
null
2
"2023-03-29T11:44:49Z"
"2023-04-03T18:31:11Z"
"2023-04-03T18:31:11Z"
CONTRIBUTOR
null
null
null
Following [this](https://github.com/huggingface/datasets/issues/5650) issue I think we should add a note about the order of patterns that is used to find splits, see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). Also we should reference this page in pages about packaged loaders. I have a déjà vu that it had already been discussed as some point but I don't remember....
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5681/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5681/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5680
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5680/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5680/comments
https://api.github.com/repos/huggingface/datasets/issues/5680/events
https://github.com/huggingface/datasets/pull/5680
1,645,430,103
PR_kwDODunzps5NJYNz
5,680
Fix a description error for interleave_datasets.
{ "avatar_url": "https://avatars.githubusercontent.com/u/55624066?v=4", "events_url": "https://api.github.com/users/QizhiPei/events{/privacy}", "followers_url": "https://api.github.com/users/QizhiPei/followers", "following_url": "https://api.github.com/users/QizhiPei/following{/other_user}", "gists_url": "https://api.github.com/users/QizhiPei/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/QizhiPei", "id": 55624066, "login": "QizhiPei", "node_id": "MDQ6VXNlcjU1NjI0MDY2", "organizations_url": "https://api.github.com/users/QizhiPei/orgs", "received_events_url": "https://api.github.com/users/QizhiPei/received_events", "repos_url": "https://api.github.com/users/QizhiPei/repos", "site_admin": false, "starred_url": "https://api.github.com/users/QizhiPei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/QizhiPei/subscriptions", "type": "User", "url": "https://api.github.com/users/QizhiPei" }
[]
closed
false
null
[]
null
3
"2023-03-29T09:50:23Z"
"2023-03-30T13:14:19Z"
"2023-03-30T13:07:18Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5680.diff", "html_url": "https://github.com/huggingface/datasets/pull/5680", "merged_at": "2023-03-30T13:07:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/5680.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5680" }
There is a description mistake in the annotation of interleave_dataset with "all_exhausted" stopping_strategy. ``` python d1 = Dataset.from_dict({"a": [0, 1, 2]}) d2 = Dataset.from_dict({"a": [10, 11, 12, 13]}) d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]}) dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") ``` According to the interleave way, the correct output of `dataset["a"]` is `[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24]`, not `[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5680/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5680/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5679/comments
https://api.github.com/repos/huggingface/datasets/issues/5679/events
https://github.com/huggingface/datasets/issues/5679
1,645,184,622
I_kwDODunzps5iD4Zu
5,679
Allow load_dataset to take a working dir for intermediate data
{ "avatar_url": "https://avatars.githubusercontent.com/u/38018689?v=4", "events_url": "https://api.github.com/users/lu-wang-dl/events{/privacy}", "followers_url": "https://api.github.com/users/lu-wang-dl/followers", "following_url": "https://api.github.com/users/lu-wang-dl/following{/other_user}", "gists_url": "https://api.github.com/users/lu-wang-dl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lu-wang-dl", "id": 38018689, "login": "lu-wang-dl", "node_id": "MDQ6VXNlcjM4MDE4Njg5", "organizations_url": "https://api.github.com/users/lu-wang-dl/orgs", "received_events_url": "https://api.github.com/users/lu-wang-dl/received_events", "repos_url": "https://api.github.com/users/lu-wang-dl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lu-wang-dl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lu-wang-dl/subscriptions", "type": "User", "url": "https://api.github.com/users/lu-wang-dl" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
4
"2023-03-29T07:21:09Z"
"2023-04-12T22:30:25Z"
null
NONE
null
null
null
### Feature request As a user, I can set a working dir for intermediate data creation. The processed files will be moved to the cache dir, like ``` load_dataset(…, working_dir=”/temp/dir”, cache_dir=”/cloud_dir”). ``` ### Motivation This will help the use case for using datasets with cloud storage as cache. It will help boost the performance. ### Your contribution I can provide a PR to fix this if the proposal seems reasonable.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5679/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5679/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5678/comments
https://api.github.com/repos/huggingface/datasets/issues/5678/events
https://github.com/huggingface/datasets/issues/5678
1,645,018,359
I_kwDODunzps5iDPz3
5,678
Add support to create a Dataset from spark dataframe
{ "avatar_url": "https://avatars.githubusercontent.com/u/38018689?v=4", "events_url": "https://api.github.com/users/lu-wang-dl/events{/privacy}", "followers_url": "https://api.github.com/users/lu-wang-dl/followers", "following_url": "https://api.github.com/users/lu-wang-dl/following{/other_user}", "gists_url": "https://api.github.com/users/lu-wang-dl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lu-wang-dl", "id": 38018689, "login": "lu-wang-dl", "node_id": "MDQ6VXNlcjM4MDE4Njg5", "organizations_url": "https://api.github.com/users/lu-wang-dl/orgs", "received_events_url": "https://api.github.com/users/lu-wang-dl/received_events", "repos_url": "https://api.github.com/users/lu-wang-dl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lu-wang-dl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lu-wang-dl/subscriptions", "type": "User", "url": "https://api.github.com/users/lu-wang-dl" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
3
"2023-03-29T04:36:28Z"
"2023-07-21T14:15:38Z"
"2023-07-21T14:15:38Z"
NONE
null
null
null
### Feature request Add a new API `Dataset.from_spark` to create a Dataset from Spark DataFrame. ### Motivation Spark is a distributed computing framework that can handle large datasets. By supporting loading Spark DataFrames directly into Hugging Face Datasets, we enable take the advantages of spark to processing the data in parallel. By providing a seamless integration between these two frameworks, we make it easier for data scientists and developers to work with both Spark and Hugging Face in the same workflow. ### Your contribution We can discuss about the ideas and I can help preparing a PR for this feature.
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/5678/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5678/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5677
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5677/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5677/comments
https://api.github.com/repos/huggingface/datasets/issues/5677/events
https://github.com/huggingface/datasets/issues/5677
1,644,828,606
I_kwDODunzps5iChe-
5,677
Dataset.map() crashes when any column contains more than 1000 empty dictionaries
{ "avatar_url": "https://avatars.githubusercontent.com/u/7139344?v=4", "events_url": "https://api.github.com/users/mtoles/events{/privacy}", "followers_url": "https://api.github.com/users/mtoles/followers", "following_url": "https://api.github.com/users/mtoles/following{/other_user}", "gists_url": "https://api.github.com/users/mtoles/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mtoles", "id": 7139344, "login": "mtoles", "node_id": "MDQ6VXNlcjcxMzkzNDQ=", "organizations_url": "https://api.github.com/users/mtoles/orgs", "received_events_url": "https://api.github.com/users/mtoles/received_events", "repos_url": "https://api.github.com/users/mtoles/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mtoles/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mtoles/subscriptions", "type": "User", "url": "https://api.github.com/users/mtoles" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
0
"2023-03-29T00:01:31Z"
"2023-07-07T14:01:14Z"
"2023-07-07T14:01:14Z"
NONE
null
null
null
### Describe the bug `Dataset.map()` crashes any time any column contains more than `writer_batch_size` (default 1000) empty dictionaries, regardless of whether the column is being operated on. The error does not occur if the dictionaries are non-empty. ### Steps to reproduce the bug Example: ``` import datasets def add_one(example): example["col2"] += 1 return example n = 1001 # crashes # n = 999 # works ds = datasets.Dataset.from_dict({"col1": [{}] * n, "col2": [1] * n}) ds = ds.map(add_one, writer_batch_size=1000) ``` ### Expected behavior Above code should not crash ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.10 - Python version: 3.8.15 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5677/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5677/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5675/comments
https://api.github.com/repos/huggingface/datasets/issues/5675/events
https://github.com/huggingface/datasets/issues/5675
1,641,763,478
I_kwDODunzps5h21KW
5,675
Filter datasets by language code
{ "avatar_url": "https://avatars.githubusercontent.com/u/5658496?v=4", "events_url": "https://api.github.com/users/named-entity/events{/privacy}", "followers_url": "https://api.github.com/users/named-entity/followers", "following_url": "https://api.github.com/users/named-entity/following{/other_user}", "gists_url": "https://api.github.com/users/named-entity/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/named-entity", "id": 5658496, "login": "named-entity", "node_id": "MDQ6VXNlcjU2NTg0OTY=", "organizations_url": "https://api.github.com/users/named-entity/orgs", "received_events_url": "https://api.github.com/users/named-entity/received_events", "repos_url": "https://api.github.com/users/named-entity/repos", "site_admin": false, "starred_url": "https://api.github.com/users/named-entity/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/named-entity/subscriptions", "type": "User", "url": "https://api.github.com/users/named-entity" }
[]
closed
false
null
[]
null
4
"2023-03-27T09:42:28Z"
"2023-03-30T08:08:15Z"
"2023-03-30T08:08:15Z"
NONE
null
null
null
Hi! I use the language search field on https://huggingface.co/datasets However, some of the datasets tagged by ISO language code are not accessible by this search form. For example, [myv_ru_2022](https://huggingface.co/datasets/slone/myv_ru_2022) is has `myv` language tag but it is not included in Languages search form. I've also noticed the same problem with `mhr` (see https://huggingface.co/datasets/AigizK/mari-russian-parallel-corpora)
{ "+1": 6, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/5675/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5675/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5674
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5674/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5674/comments
https://api.github.com/repos/huggingface/datasets/issues/5674/events
https://github.com/huggingface/datasets/issues/5674
1,641,084,105
I_kwDODunzps5h0PTJ
5,674
Stored XSS
{ "avatar_url": "https://avatars.githubusercontent.com/u/21213484?v=4", "events_url": "https://api.github.com/users/Fadavvi/events{/privacy}", "followers_url": "https://api.github.com/users/Fadavvi/followers", "following_url": "https://api.github.com/users/Fadavvi/following{/other_user}", "gists_url": "https://api.github.com/users/Fadavvi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Fadavvi", "id": 21213484, "login": "Fadavvi", "node_id": "MDQ6VXNlcjIxMjEzNDg0", "organizations_url": "https://api.github.com/users/Fadavvi/orgs", "received_events_url": "https://api.github.com/users/Fadavvi/received_events", "repos_url": "https://api.github.com/users/Fadavvi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Fadavvi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Fadavvi/subscriptions", "type": "User", "url": "https://api.github.com/users/Fadavvi" }
[]
closed
false
null
[]
null
1
"2023-03-26T20:55:58Z"
"2023-03-27T21:01:55Z"
"2023-03-27T21:01:55Z"
NONE
null
null
null
### Describe the bug I found a Stored XSS on a page that can be publicly accessible to all visitors. But I didn't find a suitable place to report. Please guide me on this. ### Steps to reproduce the bug Due to security restrictions, I don't want to publish it publicly. ### Expected behavior User inputs must be sanitized before rendering. ### Environment info https://huggingface.co/ Web UI
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5674/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5674/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5673/comments
https://api.github.com/repos/huggingface/datasets/issues/5673/events
https://github.com/huggingface/datasets/pull/5673
1,641,066,352
PR_kwDODunzps5M6wc3
5,673
Pass down storage options
{ "avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4", "events_url": "https://api.github.com/users/dwyatte/events{/privacy}", "followers_url": "https://api.github.com/users/dwyatte/followers", "following_url": "https://api.github.com/users/dwyatte/following{/other_user}", "gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dwyatte", "id": 2512762, "login": "dwyatte", "node_id": "MDQ6VXNlcjI1MTI3NjI=", "organizations_url": "https://api.github.com/users/dwyatte/orgs", "received_events_url": "https://api.github.com/users/dwyatte/received_events", "repos_url": "https://api.github.com/users/dwyatte/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions", "type": "User", "url": "https://api.github.com/users/dwyatte" }
[]
closed
false
null
[]
null
5
"2023-03-26T20:09:37Z"
"2023-03-28T15:03:38Z"
"2023-03-28T14:54:17Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5673.diff", "html_url": "https://github.com/huggingface/datasets/pull/5673", "merged_at": "2023-03-28T14:54:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/5673.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5673" }
Remove implementation-specific kwargs from `file_utils.fsspec_get` and `file_utils.fsspec_head`, instead allowing them to be passed down via `storage_options`. This fixes an issue where s3fs did not recognize a timeout arg as well as fixes an issue mentioned in https://github.com/huggingface/datasets/issues/5281 by allowing users to pass down `storage_options` all the way from `datasets.load_dataset` to support implementation-specific credentials Supports something like the following to provide credentials explicitly instead of relying on boto's methods of locating them ``` load_dataset(..., data_files=["s3://..."], storage_options={"profile": "..."}) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5673/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5673/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5672
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5672/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5672/comments
https://api.github.com/repos/huggingface/datasets/issues/5672/events
https://github.com/huggingface/datasets/issues/5672
1,641,005,322
I_kwDODunzps5hz8EK
5,672
Pushing dataset to hub crash
{ "avatar_url": "https://avatars.githubusercontent.com/u/14275989?v=4", "events_url": "https://api.github.com/users/tzvc/events{/privacy}", "followers_url": "https://api.github.com/users/tzvc/followers", "following_url": "https://api.github.com/users/tzvc/following{/other_user}", "gists_url": "https://api.github.com/users/tzvc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tzvc", "id": 14275989, "login": "tzvc", "node_id": "MDQ6VXNlcjE0Mjc1OTg5", "organizations_url": "https://api.github.com/users/tzvc/orgs", "received_events_url": "https://api.github.com/users/tzvc/received_events", "repos_url": "https://api.github.com/users/tzvc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tzvc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tzvc/subscriptions", "type": "User", "url": "https://api.github.com/users/tzvc" }
[]
closed
false
null
[]
null
3
"2023-03-26T17:42:13Z"
"2023-03-30T08:11:05Z"
"2023-03-30T08:11:05Z"
NONE
null
null
null
### Describe the bug Uploading a dataset with `push_to_hub()` fails without error description. ### Steps to reproduce the bug Hey there, I've built a image dataset of 100k images + text pair as described here https://huggingface.co/docs/datasets/image_dataset#imagefolder Now I'm trying to push it to the hub but I'm running into issues. First, I tried doing it via git directly, I added all the files in git lfs and pushed but I got hit with an error saying huggingface only accept up to 10k files in a folder. So I'm now trying with the `push_to_hub()` func as follow: ```python from datasets import load_dataset import os dataset = load_dataset("imagefolder", data_dir="./data", split="train") dataset.push_to_hub("tzvc/organization-logos", token=os.environ.get('HF_TOKEN')) ``` But again, this produces an error: ``` Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 100212/100212 [00:00<00:00, 439108.61it/s] Downloading and preparing dataset imagefolder/default to /home/contact_theochampion/.cache/huggingface/datasets/imagefolder/default-20567ffc703aa314/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f... Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 100211/100211 [00:00<00:00, 149323.73it/s] Downloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 15947.92it/s] Extracting data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2245.34it/s] Dataset imagefolder downloaded and prepared to /home/contact_theochampion/.cache/huggingface/datasets/imagefolder/default-20567ffc703aa314/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f. Subsequent calls will reuse this data. Resuming upload of the dataset shards. Pushing dataset shards to the dataset hub: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 14/14 [00:31<00:00, 2.24s/it] Downloading metadata: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 118/118 [00:00<00:00, 225kB/s] Traceback (most recent call last): File "/home/contact_theochampion/organization-logos/push_to_hub.py", line 5, in <module> dataset.push_to_hub("tzvc/organization-logos", token=os.environ.get('HF_TOKEN')) File "/home/contact_theochampion/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 5245, in push_to_hub repo_info = dataset_infos[next(iter(dataset_infos))] StopIteration ``` What could be happening here ? ### Expected behavior The dataset is pushed to the hub ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-5.10.0-21-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.9.2 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5672/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5672/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5671
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5671/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5671/comments
https://api.github.com/repos/huggingface/datasets/issues/5671/events
https://github.com/huggingface/datasets/issues/5671
1,640,840,012
I_kwDODunzps5hzTtM
5,671
How to use `load_dataset('glue', 'cola')`
{ "avatar_url": "https://avatars.githubusercontent.com/u/40193664?v=4", "events_url": "https://api.github.com/users/makinzm/events{/privacy}", "followers_url": "https://api.github.com/users/makinzm/followers", "following_url": "https://api.github.com/users/makinzm/following{/other_user}", "gists_url": "https://api.github.com/users/makinzm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/makinzm", "id": 40193664, "login": "makinzm", "node_id": "MDQ6VXNlcjQwMTkzNjY0", "organizations_url": "https://api.github.com/users/makinzm/orgs", "received_events_url": "https://api.github.com/users/makinzm/received_events", "repos_url": "https://api.github.com/users/makinzm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/makinzm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/makinzm/subscriptions", "type": "User", "url": "https://api.github.com/users/makinzm" }
[]
closed
false
null
[]
null
2
"2023-03-26T09:40:34Z"
"2023-03-28T07:43:44Z"
"2023-03-28T07:43:43Z"
NONE
null
null
null
### Describe the bug I'm new to use HuggingFace datasets but I cannot use `load_dataset('glue', 'cola')`. - I was stacked by the following problem: ```python from datasets import load_dataset cola_dataset = load_dataset('glue', 'cola') --------------------------------------------------------------------------- InvalidVersion Traceback (most recent call last) File <timed exec>:1 (Omit because of long error message) File /usr/local/lib/python3.8/site-packages/packaging/version.py:197, in Version.__init__(self, version) 195 match = self._regex.search(version) 196 if not match: --> 197 raise InvalidVersion(f"Invalid version: '{version}'") 199 # Store the parsed out pieces of the version 200 self._version = _Version( 201 epoch=int(match.group("epoch")) if match.group("epoch") else 0, 202 release=tuple(int(i) for i in match.group("release").split(".")), (...) 208 local=_parse_local_version(match.group("local")), 209 ) InvalidVersion: Invalid version: '0.10.1,<0.11' ``` - You can check this full error message in my repository: [MLOps-Basics/week_0_project_setup/experimental_notebooks/data_exploration.ipynb](https://github.com/makinzm/MLOps-Basics/blob/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup/experimental_notebooks/data_exploration.ipynb) ### Steps to reproduce the bug - This is my repository to reproduce: [MLOps-Basics/week_0_project_setup](https://github.com/makinzm/MLOps-Basics/tree/eabab4b837880607d9968d3fa687c70177b2affd/week_0_project_setup) 1. cd `/DockerImage` and command `docker build . -t week0` 2. cd `/` and command `docker-compose up` 3. Run `experimental_notebooks/data_exploration.ipynb` ---- Just to be sure, I wrote down Dockerfile and requirements.txt - Dockerfile ```Dockerfile FROM python:3.8 WORKDIR /root/working RUN apt-get update && \ apt-get install -y python3-dev python3-pip python3-venv && \ apt-get clean && \ rm -rf /var/lib/apt/lists/* COPY requirements.txt . RUN pip3 install --no-cache-dir jupyter notebook && pip install --no-cache-dir -r requirements.txt CMD ["bash"] ``` - requirements.txt ```txt pytorch-lightning==1.2.10 datasets==1.6.2 transformers==4.5.1 scikit-learn==0.24.2 ``` ### Expected behavior There is no bug to implement `load_dataset('glue', 'cola')` ### Environment info I already wrote it.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5671/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5671/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5670
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5670/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5670/comments
https://api.github.com/repos/huggingface/datasets/issues/5670/events
https://github.com/huggingface/datasets/issues/5670
1,640,607,045
I_kwDODunzps5hya1F
5,670
Unable to load multi class classification datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/19690506?v=4", "events_url": "https://api.github.com/users/ysahil97/events{/privacy}", "followers_url": "https://api.github.com/users/ysahil97/followers", "following_url": "https://api.github.com/users/ysahil97/following{/other_user}", "gists_url": "https://api.github.com/users/ysahil97/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ysahil97", "id": 19690506, "login": "ysahil97", "node_id": "MDQ6VXNlcjE5NjkwNTA2", "organizations_url": "https://api.github.com/users/ysahil97/orgs", "received_events_url": "https://api.github.com/users/ysahil97/received_events", "repos_url": "https://api.github.com/users/ysahil97/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ysahil97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ysahil97/subscriptions", "type": "User", "url": "https://api.github.com/users/ysahil97" }
[]
closed
false
null
[]
null
2
"2023-03-25T18:06:15Z"
"2023-03-27T22:54:56Z"
"2023-03-27T22:54:56Z"
NONE
null
null
null
### Describe the bug I've been playing around with huggingface library, mostly with `datasets` and wanted to download the multi class classification datasets to fine tune BERT on this task. ([link](https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer)). While loading the dataset, I'm getting the following error snippet. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[44], line 3 1 from datasets import load_dataset ----> 3 imdb_dataset = load_dataset("yelp_review_full") 4 imdb_dataset File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1719, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1716 ignore_verifications = ignore_verifications or save_infos 1718 # Create a dataset builder -> 1719 builder_instance = load_dataset_builder( 1720 path=path, 1721 name=name, 1722 data_dir=data_dir, 1723 data_files=data_files, 1724 cache_dir=cache_dir, 1725 features=features, 1726 download_config=download_config, 1727 download_mode=download_mode, 1728 revision=revision, 1729 use_auth_token=use_auth_token, 1730 **config_kwargs, 1731 ) 1733 # Return iterable dataset in case of streaming 1734 if streaming: File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1523, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1520 raise ValueError(error_msg) 1522 # Instantiate the dataset builder -> 1523 builder_instance: DatasetBuilder = builder_cls( 1524 cache_dir=cache_dir, 1525 config_name=config_name, 1526 data_dir=data_dir, 1527 data_files=data_files, 1528 hash=hash, 1529 features=features, 1530 use_auth_token=use_auth_token, 1531 **builder_kwargs, 1532 **config_kwargs, 1533 ) 1535 return builder_instance File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:1292, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs) 1291 def __init__(self, *args, writer_batch_size=None, **kwargs): -> 1292 super().__init__(*args, **kwargs) 1293 # Batch size used by the ArrowWriter 1294 # It defines the number of samples that are kept in memory before writing them 1295 # and also the length of the arrow chunks 1296 # None means that the ArrowWriter will use its default value 1297 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:312, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs) 309 # prepare info: DatasetInfo are a standardized dataclass across all datasets 310 # Prefill datasetinfo 311 if info is None: --> 312 info = self.get_exported_dataset_info() 313 info.update(self._info()) 314 info.builder_name = self.name File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:412, in DatasetBuilder.get_exported_dataset_info(self) 400 def get_exported_dataset_info(self) -> DatasetInfo: 401 """Empty DatasetInfo if doesn't exist 402 403 Example: (...) 410 ``` 411 """ --> 412 return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo()) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:398, in DatasetBuilder.get_all_exported_dataset_infos(cls) 385 @classmethod 386 def get_all_exported_dataset_infos(cls) -> DatasetInfosDict: 387 """Empty dict if doesn't exist 388 389 Example: (...) 396 ``` 397 """ --> 398 return DatasetInfosDict.from_directory(cls.get_imported_module_dir()) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:370, in DatasetInfosDict.from_directory(cls, dataset_infos_dir) 368 dataset_metadata = DatasetMetadata.from_readme(Path(dataset_infos_dir) / "README.md") 369 if "dataset_info" in dataset_metadata: --> 370 return cls.from_metadata(dataset_metadata) 371 if os.path.exists(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME)): 372 # this is just to have backward compatibility with dataset_infos.json files 373 with open(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME), encoding="utf-8") as f: File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:396, in DatasetInfosDict.from_metadata(cls, dataset_metadata) 387 return cls( 388 { 389 dataset_info_yaml_dict.get("config_name", "default"): DatasetInfo._from_yaml_dict( (...) 393 } 394 ) 395 else: --> 396 dataset_info = DatasetInfo._from_yaml_dict(dataset_metadata["dataset_info"]) 397 dataset_info.config_name = dataset_metadata["dataset_info"].get("config_name", "default") 398 return cls({dataset_info.config_name: dataset_info}) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:332, in DatasetInfo._from_yaml_dict(cls, yaml_data) 330 yaml_data = copy.deepcopy(yaml_data) 331 if yaml_data.get("features") is not None: --> 332 yaml_data["features"] = Features._from_yaml_list(yaml_data["features"]) 333 if yaml_data.get("splits") is not None: 334 yaml_data["splits"] = SplitDict._from_yaml_list(yaml_data["splits"]) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1745, in Features._from_yaml_list(cls, yaml_data) 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") -> 1745 return cls.from_dict(from_yaml_inner(yaml_data)) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in <dictcomp>(.0) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1736, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1734 return {"_type": snakecase_to_camelcase(obj["dtype"])} 1735 else: -> 1736 return from_yaml_inner(obj["dtype"]) 1737 else: 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1738, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1736 return from_yaml_inner(obj["dtype"]) 1737 else: -> 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1706, in Features._from_yaml_list.<locals>.unsimplify(feature) 1704 if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), dict): 1705 label_ids = sorted(feature["class_label"]["names"]) -> 1706 if label_ids and label_ids != list(range(label_ids[-1] + 1)): 1707 raise ValueError( 1708 f"ClassLabel expected a value for all label ids [0:{label_ids[-1] + 1}] but some ids are missing." 1709 ) 1710 feature["class_label"]["names"] = [feature["class_label"]["names"][label_id] for label_id in label_ids] TypeError: can only concatenate str (not "int") to str ``` The same issue happens when I try to load `go-emotions` multi class classification dataset. Could somebody guide me on how to fix this issue? ### Steps to reproduce the bug Run the following code snippet in a python script/ notebook cell: ``` from datasets import load_dataset yelp_dataset = load_dataset("yelp_review_full") yelp_dataset ``` ### Expected behavior The dataset should be loaded perfectly, which showing the train, test and unsupervised splits with the basic data statistics ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31 - Python version: 3.10.9 - PyArrow version: 8.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5670/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5670/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5669/comments
https://api.github.com/repos/huggingface/datasets/issues/5669/events
https://github.com/huggingface/datasets/issues/5669
1,638,070,046
I_kwDODunzps5hovce
5,669
Almost identical datasets, huge performance difference
{ "avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4", "events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}", "followers_url": "https://api.github.com/users/eli-osherovich/followers", "following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}", "gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eli-osherovich", "id": 2437102, "login": "eli-osherovich", "node_id": "MDQ6VXNlcjI0MzcxMDI=", "organizations_url": "https://api.github.com/users/eli-osherovich/orgs", "received_events_url": "https://api.github.com/users/eli-osherovich/received_events", "repos_url": "https://api.github.com/users/eli-osherovich/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions", "type": "User", "url": "https://api.github.com/users/eli-osherovich" }
[]
open
false
null
[]
null
7
"2023-03-23T18:20:20Z"
"2023-04-09T18:56:23Z"
null
CONTRIBUTOR
null
null
null
### Describe the bug I am struggling to understand (huge) performance difference between two datasets that are almost identical. ### Steps to reproduce the bug # Fast (normal) dataset speed: ```python import cv2 from datasets import load_dataset from torch.utils.data import DataLoader dataset = load_dataset("beans", split="train") for x in DataLoader(dataset.with_format("torch"), batch_size=16, shuffle=True, num_workers=8): pass ``` The above pass over the dataset takes about 1.5 seconds on my computer. However, if I re-create (almost) the same dataset, the sweep takes HUGE amount of time: 15 minutes. Steps to reproduce: ```python def transform(example): example["image2"] = cv2.imread(example["image_file_path"]) return example dataset2 = dataset.map(transform, remove_columns=["image"]) for x in DataLoader(dataset2.with_format("torch"), batch_size=16, shuffle=True, num_workers=8): pass ``` ### Expected behavior Same timings ### Environment info python==3.10.9 datasets==2.10.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5669/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5669/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5668/comments
https://api.github.com/repos/huggingface/datasets/issues/5668/events
https://github.com/huggingface/datasets/pull/5668
1,638,018,598
PR_kwDODunzps5MwuIp
5,668
Support for downloading only provided split
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
open
false
null
[]
null
2
"2023-03-23T17:53:39Z"
"2023-03-24T06:43:14Z"
null
CONTRIBUTOR
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/5668.diff", "html_url": "https://github.com/huggingface/datasets/pull/5668", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5668.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5668" }
We can pass split to `_split_generators()`. But I'm not sure if it's possible to solve cache issues, mostly with `dataset_info.json`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5668/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5668/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5667
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5667/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5667/comments
https://api.github.com/repos/huggingface/datasets/issues/5667/events
https://github.com/huggingface/datasets/pull/5667
1,637,789,361
PR_kwDODunzps5Mv8Im
5,667
Jax requires jaxlib
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
6
"2023-03-23T15:41:09Z"
"2023-03-23T16:23:11Z"
"2023-03-23T16:14:52Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5667.diff", "html_url": "https://github.com/huggingface/datasets/pull/5667", "merged_at": "2023-03-23T16:14:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/5667.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5667" }
close https://github.com/huggingface/datasets/issues/5666
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5667/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5667/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5666
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5666/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5666/comments
https://api.github.com/repos/huggingface/datasets/issues/5666/events
https://github.com/huggingface/datasets/issues/5666
1,637,675,062
I_kwDODunzps5hnPA2
5,666
Support tensorflow 2.12.0 in CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
0
"2023-03-23T14:37:51Z"
"2023-03-23T16:14:54Z"
"2023-03-23T16:14:54Z"
MEMBER
null
null
null
Once we find out the root cause of: - #5663 we should revert the temporary pin on tensorflow introduced by: - #5664
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5666/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5666/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5665
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5665/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5665/comments
https://api.github.com/repos/huggingface/datasets/issues/5665/events
https://github.com/huggingface/datasets/issues/5665
1,637,193,648
I_kwDODunzps5hlZew
5,665
Feature request: IterableDataset.push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
0
"2023-03-23T09:53:04Z"
"2023-03-23T09:53:16Z"
null
CONTRIBUTOR
null
null
null
### Feature request It'd be great to have a lazy push to hub, similar to the lazy loading we have with `IterableDataset`. Suppose you'd like to filter [LAION](https://huggingface.co/datasets/laion/laion400m) based on certain conditions, but as LAION doesn't fit into your disk, you'd like to leverage streaming: ``` from datasets import load_dataset dataset = load_dataset("laion/laion400m", streaming=True, split="train") ``` Then you could filter the dataset based on certain conditions: ``` filtered_dataset = dataset.filter(lambda example: example['HEIGHT'] > 400) ``` In order to persist this dataset and push it back to the hub, one currently needs to first load the entire filtered dataset on disk and then push: ``` from datasets import Dataset Dataset.from_generator(filtered_dataset.__iter__).push_to_hub(...) ``` It would be great if we can instead lazy push to the data to the hub (basically stream the data to the hub), not being limited by our disk size: ``` filtered_dataset.push_to_hub("my-filtered-dataset") ``` ### Motivation This feature would be very useful for people that want to filter huge datasets without having to load the entire dataset or a filtered version thereof on their local disk. ### Your contribution Happy to test out a PR :)
{ "+1": 10, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 10, "url": "https://api.github.com/repos/huggingface/datasets/issues/5665/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5665/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5664
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5664/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5664/comments
https://api.github.com/repos/huggingface/datasets/issues/5664/events
https://github.com/huggingface/datasets/pull/5664
1,637,192,684
PR_kwDODunzps5Mt6vp
5,664
Fix CI by temporarily pinning tensorflow < 2.12.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
2
"2023-03-23T09:52:26Z"
"2023-03-23T10:17:11Z"
"2023-03-23T10:09:54Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5664.diff", "html_url": "https://github.com/huggingface/datasets/pull/5664", "merged_at": "2023-03-23T10:09:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/5664.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5664" }
As a hotfix for our CI, temporarily pin `tensorflow` upper version: - In Python 3.10, tensorflow-2.12.0 also installs `jax` Fix #5663 Until root cause is fixed.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5664/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5664/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5663/comments
https://api.github.com/repos/huggingface/datasets/issues/5663/events
https://github.com/huggingface/datasets/issues/5663
1,637,173,248
I_kwDODunzps5hlUgA
5,663
CI is broken: ModuleNotFoundError: jax requires jaxlib to be installed
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
0
"2023-03-23T09:39:43Z"
"2023-03-23T10:09:55Z"
"2023-03-23T10:09:55Z"
MEMBER
null
null
null
CI test_py310 is broken: see https://github.com/huggingface/datasets/actions/runs/4498945505/jobs/7916194236?pr=5662 ``` FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_in_memory - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_on_disk - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_audio - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_device - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_image - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_jnp_array_kwargs - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/features/test_features.py::CastToPythonObjectsTest::test_cast_to_python_objects_jax - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. ===== 8 failed, 2147 passed, 10 skipped, 37 warnings in 228.69s (0:03:48) ====== ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5663/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5663/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5662/comments
https://api.github.com/repos/huggingface/datasets/issues/5662/events
https://github.com/huggingface/datasets/pull/5662
1,637,140,813
PR_kwDODunzps5MtvsM
5,662
Fix unnecessary dict comprehension
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
3
"2023-03-23T09:18:58Z"
"2023-03-23T09:46:59Z"
"2023-03-23T09:37:49Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5662.diff", "html_url": "https://github.com/huggingface/datasets/pull/5662", "merged_at": "2023-03-23T09:37:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/5662.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5662" }
After ruff-0.0.258 release, the C416 rule was updated with unnecessary dict comprehensions. See: - https://github.com/charliermarsh/ruff/releases/tag/v0.0.258 - https://github.com/charliermarsh/ruff/pull/3605 This PR fixes one unnecessary dict comprehension in our code: no need to unpack and re-pack the tuple values. Fix #5661
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5662/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5662/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5661
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5661/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5661/comments
https://api.github.com/repos/huggingface/datasets/issues/5661/events
https://github.com/huggingface/datasets/issues/5661
1,637,129,445
I_kwDODunzps5hlJzl
5,661
CI is broken: Unnecessary `dict` comprehension
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
0
"2023-03-23T09:13:01Z"
"2023-03-23T09:37:51Z"
"2023-03-23T09:37:51Z"
MEMBER
null
null
null
CI check_code_quality is broken: ``` src/datasets/arrow_dataset.py:3267:35: C416 [*] Unnecessary `dict` comprehension (rewrite using `dict()`) Found 1 error. ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5661/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5661/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5660/comments
https://api.github.com/repos/huggingface/datasets/issues/5660/events
https://github.com/huggingface/datasets/issues/5660
1,635,543,646
I_kwDODunzps5hfGpe
5,660
integration with imbalanced-learn
{ "avatar_url": "https://avatars.githubusercontent.com/u/30216?v=4", "events_url": "https://api.github.com/users/tansaku/events{/privacy}", "followers_url": "https://api.github.com/users/tansaku/followers", "following_url": "https://api.github.com/users/tansaku/following{/other_user}", "gists_url": "https://api.github.com/users/tansaku/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tansaku", "id": 30216, "login": "tansaku", "node_id": "MDQ6VXNlcjMwMjE2", "organizations_url": "https://api.github.com/users/tansaku/orgs", "received_events_url": "https://api.github.com/users/tansaku/received_events", "repos_url": "https://api.github.com/users/tansaku/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tansaku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tansaku/subscriptions", "type": "User", "url": "https://api.github.com/users/tansaku" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "ffffff", "default": true, "description": "This will not be worked on", "id": 1935892913, "name": "wontfix", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix" } ]
closed
false
null
[]
null
1
"2023-03-22T11:05:17Z"
"2023-07-06T18:10:15Z"
"2023-07-06T18:10:15Z"
NONE
null
null
null
### Feature request Wouldn't it be great if the various class balancing operations from imbalanced-learn were available as part of datasets? ### Motivation I'm trying to use imbalanced-learn to balance a dataset, but it's not clear how to get the two to interoperate - what would be great would be some examples. I've looked online, asked gpt-4, but so far not making much progress. ### Your contribution If I can get this working myself I can submit a PR with example code to go in the docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5660/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5660/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5659/comments
https://api.github.com/repos/huggingface/datasets/issues/5659/events
https://github.com/huggingface/datasets/issues/5659
1,635,447,540
I_kwDODunzps5hevL0
5,659
[Audio] Soundfile/libsndfile requirements too stringent for decoding mp3 files
{ "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanchit-gandhi", "id": 93869735, "login": "sanchit-gandhi", "node_id": "U_kgDOBZhWpw", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "type": "User", "url": "https://api.github.com/users/sanchit-gandhi" }
[]
closed
false
null
[]
null
12
"2023-03-22T10:07:33Z"
"2024-01-17T13:59:22Z"
"2023-04-07T08:51:28Z"
CONTRIBUTOR
null
null
null
### Describe the bug I'm encountering several issues trying to load mp3 audio files using `datasets` on a TPU v4. The PR https://github.com/huggingface/datasets/pull/5573 updated the audio loading logic to rely solely on the `soundfile`/`libsndfile` libraries for loading audio samples, regardless of their file type. The installation guide suggests that `libsndfile` is bundled in when `soundfile` is pip installed: https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/docs/source/installation.md?plain=1#L70-L71 However, just pip installing `soundfile==0.12.1` throws an error that `libsndfile` is missing: ``` pip install soundfile==0.12.1 ``` Then: ```python >>> soundfile >>> soundfile.__libsndfile_version__ ``` <details> <summary> Traceback (most recent call last): </summary> ``` File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 161, in <module> import _soundfile_data # ImportError if this doesn't exist ModuleNotFoundError: No module named '_soundfile_data' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 170, in <module> raise OSError('sndfile library not found using ctypes.util.find_library') OSError: sndfile library not found using ctypes.util.find_library During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 192, in <module> _snd = _ffi.dlopen(_explicit_libname) OSError: cannot load library 'libsndfile.so': libsndfile.so: cannot open shared object file: No such file or directory ``` </details> Thus, I've followed the official instructions for installing the `soundfile` package from https://github.com/bastibe/python-soundfile#installation, which states that `libsndfile` needs to be installed separately as: ``` pip install --upgrade soundfile sudo apt install libsndfile1 ``` We can now import `soundfile`: ```python >>> import soundfile >>> soundfile.__version__ '0.12.1' >>> soundfile.__libsndfile_version__ '1.0.28' ``` We see that we have `soundfile==0.12.1`, which matches the `datasets[audio]` package constraints: https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/setup.py#L144-L147 But we have `libsndfile==1.0.28`, which is too low for decoding mp3 files: https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/src/datasets/config.py#L136-L138 Updating/upgrading the `libsndfile` doesn't change this: ``` sudo apt-get update sudo apt-get upgrade ``` Is there any other suggestion for how to get a compatible `libsndfile` version? Currently, the version bundled with Ubuntu `apt-get` is too low for decoding mp3 files. Maybe we could add this under `setup.py` such that we install the correct `libsndfile` version when we do `pip install datasets[audio]`? IMO this would help circumvent such version issues. ### Steps to reproduce the bug Environment described above. Loading mp3 files: ```python from datasets import load_dataset common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True) print(next(iter(common_voice_es))) ``` ```python --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[4], line 2 1 common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True) ----> 2 print(next(iter(common_voice_es))) File ~/datasets/src/datasets/iterable_dataset.py:941, in IterableDataset.__iter__(self) 937 for key, example in ex_iterable: 938 if self.features: 939 # `IterableDataset` automatically fills missing columns with None. 940 # This is done with `_apply_feature_types_on_example`. --> 941 yield _apply_feature_types_on_example( 942 example, self.features, token_per_repo_id=self._token_per_repo_id 943 ) 944 else: 945 yield example File ~/datasets/src/datasets/iterable_dataset.py:700, in _apply_feature_types_on_example(example, features, token_per_repo_id) 698 encoded_example = features.encode_example(example) 699 # Decode example for Audio feature, e.g. --> 700 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id) 701 return decoded_example File ~/datasets/src/datasets/features/features.py:1864, in Features.decode_example(self, example, token_per_repo_id) 1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1851 """Decode example with custom feature decoding. 1852 1853 Args: (...) 1861 `dict[str, Any]` 1862 """ -> 1864 return { 1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1866 if self._column_requires_decoding[column_name] 1867 else value 1868 for column_name, (feature, value) in zip_dict( 1869 {key: value for key, value in self.items() if key in example}, example 1870 ) 1871 } File ~/datasets/src/datasets/features/features.py:1865, in <dictcomp>(.0) 1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1851 """Decode example with custom feature decoding. 1852 1853 Args: (...) 1861 `dict[str, Any]` 1862 """ 1864 return { -> 1865 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1866 if self._column_requires_decoding[column_name] 1867 else value 1868 for column_name, (feature, value) in zip_dict( 1869 {key: value for key, value in self.items() if key in example}, example 1870 ) 1871 } File ~/datasets/src/datasets/features/features.py:1308, in decode_nested_example(schema, obj, token_per_repo_id) 1305 elif isinstance(schema, (Audio, Image)): 1306 # we pass the token to read and decode files from private repositories in streaming mode 1307 if obj is not None and schema.decode: -> 1308 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) 1309 return obj File ~/datasets/src/datasets/features/audio.py:167, in Audio.decode_example(self, value, token_per_repo_id) 162 raise RuntimeError( 163 "Decoding 'opus' files requires system library 'libsndfile'>=1.0.31, " 164 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. ' 165 ) 166 elif not config.IS_MP3_SUPPORTED and audio_format == "mp3": --> 167 raise RuntimeError( 168 "Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, " 169 'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. ' 170 ) 172 if file is None: 173 token_per_repo_id = token_per_repo_id or {} RuntimeError: Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. ``` ### Expected behavior Load mp3 files! ### Environment info - `datasets` version: 2.10.2.dev0 - Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.13.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3 - Soundfile version: 0.12.1 - Libsndfile version: 1.0.28
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5659/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5659/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5658/comments
https://api.github.com/repos/huggingface/datasets/issues/5658/events
https://github.com/huggingface/datasets/pull/5658
1,634,867,204
PR_kwDODunzps5MmJe0
5,658
docs: Update num_shards docs to mention num_proc on Dataset and DatasetDict
{ "avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4", "events_url": "https://api.github.com/users/connor-henderson/events{/privacy}", "followers_url": "https://api.github.com/users/connor-henderson/followers", "following_url": "https://api.github.com/users/connor-henderson/following{/other_user}", "gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/connor-henderson", "id": 78612354, "login": "connor-henderson", "node_id": "MDQ6VXNlcjc4NjEyMzU0", "organizations_url": "https://api.github.com/users/connor-henderson/orgs", "received_events_url": "https://api.github.com/users/connor-henderson/received_events", "repos_url": "https://api.github.com/users/connor-henderson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions", "type": "User", "url": "https://api.github.com/users/connor-henderson" }
[]
closed
false
null
[]
null
2
"2023-03-22T00:12:18Z"
"2023-03-24T16:43:34Z"
"2023-03-24T16:36:21Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5658.diff", "html_url": "https://github.com/huggingface/datasets/pull/5658", "merged_at": "2023-03-24T16:36:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/5658.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5658" }
Closes #5653 @mariosasko
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5658/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5658/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5656
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5656/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5656/comments
https://api.github.com/repos/huggingface/datasets/issues/5656/events
https://github.com/huggingface/datasets/pull/5656
1,634,156,563
PR_kwDODunzps5Mjxoo
5,656
Fix `fsspec.open` when using an HTTP proxy
{ "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bryant1410", "id": 3905501, "login": "bryant1410", "node_id": "MDQ6VXNlcjM5MDU1MDE=", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "repos_url": "https://api.github.com/users/bryant1410/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "type": "User", "url": "https://api.github.com/users/bryant1410" }
[]
closed
false
null
[]
null
2
"2023-03-21T15:23:29Z"
"2023-03-23T14:14:50Z"
"2023-03-23T13:15:46Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5656.diff", "html_url": "https://github.com/huggingface/datasets/pull/5656", "merged_at": "2023-03-23T13:15:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/5656.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5656" }
Most HTTP(S) downloads from this library support proxy automatically by reading the `HTTP_PROXY` environment variable (et al.) because `requests` is widely used. However, in some parts of the code, `fsspec` is used, which in turn uses `aiohttp` for HTTP(S) requests (as opposed to `requests`), which in turn doesn't support reading proxy env variables by default. This PR enables reading them automatically. Read [aiohttp docs on using proxies](https://docs.aiohttp.org/en/stable/client_advanced.html?highlight=trust_env#proxy-support). For context, [the Python library requests](https://requests.readthedocs.io/en/latest/user/advanced/?highlight=http_proxy#proxies) and [the official Python library via `urllib.urlopen` support this automatically by default](https://docs.python.org/3/library/urllib.request.html#urllib.request.urlopen). Many (most common ones?) programs also do the same, including cURL, APT, Wget, and many others.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5656/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5656/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5655/comments
https://api.github.com/repos/huggingface/datasets/issues/5655/events
https://github.com/huggingface/datasets/pull/5655
1,634,030,017
PR_kwDODunzps5MjWYy
5,655
Improve features decoding in to_iterable_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
4
"2023-03-21T14:18:09Z"
"2023-03-23T13:19:27Z"
"2023-03-23T13:12:25Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5655.diff", "html_url": "https://github.com/huggingface/datasets/pull/5655", "merged_at": "2023-03-23T13:12:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/5655.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5655" }
Following discussion at https://github.com/huggingface/datasets/pull/5589 Right now `to_iterable_dataset` on images/audio hurts iterable dataset performance a lot (e.g. x4 slower because it encodes+decodes images/audios unnecessarily). I fixed it by providing a generator that yields undecoded examples
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5655/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5655/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5654/comments
https://api.github.com/repos/huggingface/datasets/issues/5654/events
https://github.com/huggingface/datasets/issues/5654
1,633,523,705
I_kwDODunzps5hXZf5
5,654
Offset overflow when executing Dataset.map
{ "avatar_url": "https://avatars.githubusercontent.com/u/118280608?v=4", "events_url": "https://api.github.com/users/jan-pair/events{/privacy}", "followers_url": "https://api.github.com/users/jan-pair/followers", "following_url": "https://api.github.com/users/jan-pair/following{/other_user}", "gists_url": "https://api.github.com/users/jan-pair/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jan-pair", "id": 118280608, "login": "jan-pair", "node_id": "U_kgDOBwzRoA", "organizations_url": "https://api.github.com/users/jan-pair/orgs", "received_events_url": "https://api.github.com/users/jan-pair/received_events", "repos_url": "https://api.github.com/users/jan-pair/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jan-pair/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jan-pair/subscriptions", "type": "User", "url": "https://api.github.com/users/jan-pair" }
[]
open
false
null
[]
null
2
"2023-03-21T09:33:27Z"
"2023-03-21T10:32:07Z"
null
NONE
null
null
null
### Describe the bug Hi, I'm trying to use `.map` method to cache multiple random crops from the image to speed up data processing during training, as the image size is too big. The map function executes all iterations, and then returns the following error: ```bash Traceback (most recent call last): File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3353, in _map_single writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 582, in finalize self.write_examples_on_file() File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 446, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 555, in write_batch self.write_table(pa_table, writer_batch_size) File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 567, in write_table pa_table = pa_table.combine_chunks() File "pyarrow/table.pxi", line 3315, in pyarrow.lib.Table.combine_chunks File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays ``` Here is the minimal code (`/home/datasets/DIV2K_train_HR` is just a folder of images that can be replaced by any appropriate): ### Steps to reproduce the bug ```python from glob import glob import torch from datasets import Dataset, Image from torchvision.transforms import PILToTensor, RandomCrop file_paths = glob("/home/datasets/DIV2K_train_HR/*") to_tensor = PILToTensor() crop_transf = RandomCrop(size=256) def prepare_data(example): tensor = to_tensor(example["image"].convert("RGB")) return {"hr": torch.stack([crop_transf(tensor) for _ in range(25)])} train_data = Dataset.from_dict({"image": file_paths}).cast_column("image", Image()) train_data = train_data.map( prepare_data, cache_file_name="/home/datasets/DIV2K_train_HR_crops.tmp", desc="Caching multiple random crops of image", remove_columns="image", ) print(train_data[0].keys(), train_data[0]["hr"].shape) ``` ### Expected behavior Cached file is stored at `"/home/datasets/DIV2K_train_HR_crops.tmp"`, output is `dict_keys(['hr']) torch.Size([25, 3, 256, 256])` ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.10 - Python version: 3.8.16 - PyArrow version: 11.0.0 - Pandas version: 1.5.3 - Pytorch version: 2.0.0+cu117 - torchvision version: 0.15.1+cu117
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5654/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5654/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5653/comments
https://api.github.com/repos/huggingface/datasets/issues/5653/events
https://github.com/huggingface/datasets/issues/5653
1,633,254,159
I_kwDODunzps5hWXsP
5,653
Doc: save_to_disk, `num_proc` will affect `num_shards`, but it's not documented
{ "avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4", "events_url": "https://api.github.com/users/RmZeta2718/events{/privacy}", "followers_url": "https://api.github.com/users/RmZeta2718/followers", "following_url": "https://api.github.com/users/RmZeta2718/following{/other_user}", "gists_url": "https://api.github.com/users/RmZeta2718/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RmZeta2718", "id": 42400165, "login": "RmZeta2718", "node_id": "MDQ6VXNlcjQyNDAwMTY1", "organizations_url": "https://api.github.com/users/RmZeta2718/orgs", "received_events_url": "https://api.github.com/users/RmZeta2718/received_events", "repos_url": "https://api.github.com/users/RmZeta2718/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RmZeta2718/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RmZeta2718/subscriptions", "type": "User", "url": "https://api.github.com/users/RmZeta2718" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
null
[]
null
1
"2023-03-21T05:25:35Z"
"2023-03-24T16:36:23Z"
"2023-03-24T16:36:23Z"
NONE
null
null
null
### Describe the bug [`num_proc`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_proc) will affect `num_shards`, but it's not documented ### Steps to reproduce the bug Nothing to reproduce ### Expected behavior [document of `num_shards`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_shards) explicitly says that it depends on `max_shard_size`, it should also mention `num_proc`. ### Environment info datasets main document
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5653/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5653/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5652
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5652/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5652/comments
https://api.github.com/repos/huggingface/datasets/issues/5652/events
https://github.com/huggingface/datasets/pull/5652
1,632,546,073
PR_kwDODunzps5MeVUR
5,652
Copy features
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
7
"2023-03-20T17:17:23Z"
"2023-03-23T13:19:19Z"
"2023-03-23T13:12:08Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5652.diff", "html_url": "https://github.com/huggingface/datasets/pull/5652", "merged_at": "2023-03-23T13:12:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/5652.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5652" }
Some users (even internally at HF) are doing ```python dset_features = dset.features dset_features.pop(col_to_remove) dset = dset.map(..., features=dset_features) ``` Right now this causes issues because it modifies the features dict in place before the map. In this PR I modified `dset.features` to return a copy of the features, so that users can modify it if they want.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5652/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5652/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5651
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5651/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5651/comments
https://api.github.com/repos/huggingface/datasets/issues/5651/events
https://github.com/huggingface/datasets/issues/5651
1,631,967,509
I_kwDODunzps5hRdkV
5,651
expanduser in save_to_disk
{ "avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4", "events_url": "https://api.github.com/users/RmZeta2718/events{/privacy}", "followers_url": "https://api.github.com/users/RmZeta2718/followers", "following_url": "https://api.github.com/users/RmZeta2718/following{/other_user}", "gists_url": "https://api.github.com/users/RmZeta2718/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RmZeta2718", "id": 42400165, "login": "RmZeta2718", "node_id": "MDQ6VXNlcjQyNDAwMTY1", "organizations_url": "https://api.github.com/users/RmZeta2718/orgs", "received_events_url": "https://api.github.com/users/RmZeta2718/received_events", "repos_url": "https://api.github.com/users/RmZeta2718/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RmZeta2718/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RmZeta2718/subscriptions", "type": "User", "url": "https://api.github.com/users/RmZeta2718" }
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4", "events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}", "followers_url": "https://api.github.com/users/benjaminbrown038/followers", "following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/benjaminbrown038", "id": 35114142, "login": "benjaminbrown038", "node_id": "MDQ6VXNlcjM1MTE0MTQy", "organizations_url": "https://api.github.com/users/benjaminbrown038/orgs", "received_events_url": "https://api.github.com/users/benjaminbrown038/received_events", "repos_url": "https://api.github.com/users/benjaminbrown038/repos", "site_admin": false, "starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions", "type": "User", "url": "https://api.github.com/users/benjaminbrown038" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4", "events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}", "followers_url": "https://api.github.com/users/benjaminbrown038/followers", "following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/benjaminbrown038", "id": 35114142, "login": "benjaminbrown038", "node_id": "MDQ6VXNlcjM1MTE0MTQy", "organizations_url": "https://api.github.com/users/benjaminbrown038/orgs", "received_events_url": "https://api.github.com/users/benjaminbrown038/received_events", "repos_url": "https://api.github.com/users/benjaminbrown038/repos", "site_admin": false, "starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions", "type": "User", "url": "https://api.github.com/users/benjaminbrown038" } ]
null
5
"2023-03-20T12:02:18Z"
"2023-10-27T14:04:37Z"
"2023-10-27T14:04:37Z"
NONE
null
null
null
### Describe the bug save_to_disk() does not expand `~` 1. `dataset = load_datasets("any dataset")` 2. `dataset.save_to_disk("~/data")` 3. a folder named "~" created in current folder 4. FileNotFoundError is raised, because the expanded path does not exist (`/home/<user>/data`) related issue https://github.com/huggingface/transformers/issues/10628 ### Steps to reproduce the bug As described above. ### Expected behavior expanduser correctly ### Environment info - datasets 2.10.1 - python 3.10
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5651/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5651/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5650/comments
https://api.github.com/repos/huggingface/datasets/issues/5650/events
https://github.com/huggingface/datasets/issues/5650
1,630,336,919
I_kwDODunzps5hLPeX
5,650
load_dataset can't work correct with my image data
{ "avatar_url": "https://avatars.githubusercontent.com/u/41611046?v=4", "events_url": "https://api.github.com/users/WiNE-iNEFF/events{/privacy}", "followers_url": "https://api.github.com/users/WiNE-iNEFF/followers", "following_url": "https://api.github.com/users/WiNE-iNEFF/following{/other_user}", "gists_url": "https://api.github.com/users/WiNE-iNEFF/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/WiNE-iNEFF", "id": 41611046, "login": "WiNE-iNEFF", "node_id": "MDQ6VXNlcjQxNjExMDQ2", "organizations_url": "https://api.github.com/users/WiNE-iNEFF/orgs", "received_events_url": "https://api.github.com/users/WiNE-iNEFF/received_events", "repos_url": "https://api.github.com/users/WiNE-iNEFF/repos", "site_admin": false, "starred_url": "https://api.github.com/users/WiNE-iNEFF/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WiNE-iNEFF/subscriptions", "type": "User", "url": "https://api.github.com/users/WiNE-iNEFF" }
[]
closed
false
null
[]
null
21
"2023-03-18T13:59:13Z"
"2023-07-24T14:13:02Z"
"2023-07-24T14:13:01Z"
NONE
null
null
null
I have about 20000 images in my folder which divided into 4 folders with class names. When i use load_dataset("my_folder_name", split="train") this function create dataset in which there are only 4 images, the remaining 19000 images were not added there. What is the problem and did not understand. Tried converting images and the like but absolutely nothing worked
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5650/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5650/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5649/comments
https://api.github.com/repos/huggingface/datasets/issues/5649/events
https://github.com/huggingface/datasets/issues/5649
1,630,173,460
I_kwDODunzps5hKnkU
5,649
The index column created with .to_sql() is dependent on the batch_size when writing
{ "avatar_url": "https://avatars.githubusercontent.com/u/45281?v=4", "events_url": "https://api.github.com/users/lsb/events{/privacy}", "followers_url": "https://api.github.com/users/lsb/followers", "following_url": "https://api.github.com/users/lsb/following{/other_user}", "gists_url": "https://api.github.com/users/lsb/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lsb", "id": 45281, "login": "lsb", "node_id": "MDQ6VXNlcjQ1Mjgx", "organizations_url": "https://api.github.com/users/lsb/orgs", "received_events_url": "https://api.github.com/users/lsb/received_events", "repos_url": "https://api.github.com/users/lsb/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lsb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lsb/subscriptions", "type": "User", "url": "https://api.github.com/users/lsb" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
2
"2023-03-18T05:25:17Z"
"2023-06-17T07:01:57Z"
"2023-06-17T07:01:57Z"
NONE
null
null
null
### Describe the bug It seems like the "index" column is designed to be unique? The values are only unique per batch. The SQL index is not a unique index. This can be a problem, for instance, when building a faiss index on a dataset and then trying to match up ids with a sql export. ### Steps to reproduce the bug ``` from datasets import Dataset import sqlite3 db = sqlite3.connect(":memory:") nice_numbers = Dataset.from_dict({"nice_number": range(101,106)}) nice_numbers.to_sql("nice1", db, batch_size=1) nice_numbers.to_sql("nice2", db, batch_size=2) print(db.execute("select * from nice1").fetchall()) # [(0, 101), (0, 102), (0, 103), (0, 104), (0, 105)] print(db.execute("select * from nice2").fetchall()) # [(0, 101), (1, 102), (0, 103), (1, 104), (0, 105)] ``` ### Expected behavior I expected the "index" column to be unique ### Environment info ``` % datasets-cli env Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.10.1 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.9.6 - PyArrow version: 7.0.0 - Pandas version: 1.5.2 zsh: segmentation fault datasets-cli env ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5649/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5649/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/5648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5648/comments
https://api.github.com/repos/huggingface/datasets/issues/5648/events
https://github.com/huggingface/datasets/issues/5648
1,629,253,719
I_kwDODunzps5hHHBX
5,648
flatten_indices doesn't work with pandas format
{ "avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4", "events_url": "https://api.github.com/users/alialamiidrissi/events{/privacy}", "followers_url": "https://api.github.com/users/alialamiidrissi/followers", "following_url": "https://api.github.com/users/alialamiidrissi/following{/other_user}", "gists_url": "https://api.github.com/users/alialamiidrissi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alialamiidrissi", "id": 14365168, "login": "alialamiidrissi", "node_id": "MDQ6VXNlcjE0MzY1MTY4", "organizations_url": "https://api.github.com/users/alialamiidrissi/orgs", "received_events_url": "https://api.github.com/users/alialamiidrissi/received_events", "repos_url": "https://api.github.com/users/alialamiidrissi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alialamiidrissi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alialamiidrissi/subscriptions", "type": "User", "url": "https://api.github.com/users/alialamiidrissi" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
1
"2023-03-17T12:44:25Z"
"2023-03-21T13:12:03Z"
null
NONE
null
null
null
### Describe the bug Hi, I noticed that `flatten_indices` throws an error when the batch format is `pandas`. This is probably due to the fact that flatten_indices uses map internally which doesn't accept dataframes as the transformation function output ### Steps to reproduce the bug tabular_data = pd.DataFrame(np.random.randn(10,10)) tabular_data = datasets.arrow_dataset.Dataset.from_pandas(tabular_data) tabular_data.with_format("pandas").select([0,1,2,3]).flatten_indices() ### Expected behavior No error thrown ### Environment info - `datasets` version: 2.10.1 - Python version: 3.9.5 - PyArrow version: 11.0.0 - Pandas version: 1.4.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5648/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5648/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5647/comments
https://api.github.com/repos/huggingface/datasets/issues/5647/events
https://github.com/huggingface/datasets/issues/5647
1,628,225,544
I_kwDODunzps5hDMAI
5,647
Make all print statements optional
{ "avatar_url": "https://avatars.githubusercontent.com/u/49101362?v=4", "events_url": "https://api.github.com/users/gagan3012/events{/privacy}", "followers_url": "https://api.github.com/users/gagan3012/followers", "following_url": "https://api.github.com/users/gagan3012/following{/other_user}", "gists_url": "https://api.github.com/users/gagan3012/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gagan3012", "id": 49101362, "login": "gagan3012", "node_id": "MDQ6VXNlcjQ5MTAxMzYy", "organizations_url": "https://api.github.com/users/gagan3012/orgs", "received_events_url": "https://api.github.com/users/gagan3012/received_events", "repos_url": "https://api.github.com/users/gagan3012/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gagan3012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gagan3012/subscriptions", "type": "User", "url": "https://api.github.com/users/gagan3012" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
2
"2023-03-16T20:30:07Z"
"2023-07-21T14:20:25Z"
"2023-07-21T14:20:24Z"
NONE
null
null
null
### Feature request Make all print statements optional to speed up the development ### Motivation Im loading multiple tiny datasets and all the print statements make the loading slower ### Your contribution I can help contribute
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5647/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5647/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5646/comments
https://api.github.com/repos/huggingface/datasets/issues/5646/events
https://github.com/huggingface/datasets/pull/5646
1,627,838,762
PR_kwDODunzps5MOqjj
5,646
Allow self as key in `Features`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
3
"2023-03-16T16:17:03Z"
"2023-03-16T17:21:58Z"
"2023-03-16T17:14:50Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5646.diff", "html_url": "https://github.com/huggingface/datasets/pull/5646", "merged_at": "2023-03-16T17:14:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/5646.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5646" }
Fix #5641
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5646/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5646/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5645/comments
https://api.github.com/repos/huggingface/datasets/issues/5645/events
https://github.com/huggingface/datasets/issues/5645
1,627,108,278
I_kwDODunzps5g-7O2
5,645
Datasets map and select(range()) is giving dill error
{ "avatar_url": "https://avatars.githubusercontent.com/u/90728105?v=4", "events_url": "https://api.github.com/users/Tanya-11/events{/privacy}", "followers_url": "https://api.github.com/users/Tanya-11/followers", "following_url": "https://api.github.com/users/Tanya-11/following{/other_user}", "gists_url": "https://api.github.com/users/Tanya-11/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Tanya-11", "id": 90728105, "login": "Tanya-11", "node_id": "MDQ6VXNlcjkwNzI4MTA1", "organizations_url": "https://api.github.com/users/Tanya-11/orgs", "received_events_url": "https://api.github.com/users/Tanya-11/received_events", "repos_url": "https://api.github.com/users/Tanya-11/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Tanya-11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tanya-11/subscriptions", "type": "User", "url": "https://api.github.com/users/Tanya-11" }
[]
closed
false
null
[]
null
2
"2023-03-16T10:01:28Z"
"2023-03-17T04:24:51Z"
"2023-03-17T04:24:51Z"
NONE
null
null
null
### Describe the bug I'm using Huggingface Datasets library to load the dataset in google colab When I do, > data = train_dataset.select(range(10)) or > train_datasets = train_dataset.map( > process_data_to_model_inputs, > batched=True, > batch_size=batch_size, > remove_columns=["article", "abstract"], > ) I get following error: `module 'dill._dill' has no attribute 'log'` I've tried downgrading the dill version from latest to 0.2.8, but no luck. Stack trace: > --------------------------------------------------------------------------- > ModuleNotFoundError Traceback (most recent call last) > /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in _no_cache_fields(obj) > 367 try: > --> 368 import transformers as tr > 369 > > ModuleNotFoundError: No module named 'transformers' > > During handling of the above exception, another exception occurred: > > AttributeError Traceback (most recent call last) > 17 frames > <ipython-input-13-dd14813880a6> in <module> > ----> 1 test = train_dataset.select(range(10)) > > /usr/local/lib/python3.9/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) > 155 } > 156 # apply actual function > --> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) > 158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] > 159 # re-apply format to the output > > /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) > 155 if kwargs.get(fingerprint_name) is None: > 156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name > --> 157 kwargs[fingerprint_name] = update_fingerprint( > 158 self._fingerprint, transform, kwargs_for_fingerprint > 159 ) > > /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args) > 103 for key in sorted(transform_args): > 104 hasher.update(key) > --> 105 hasher.update(transform_args[key]) > 106 return hasher.hexdigest() > 107 > > /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update(self, value) > 55 def update(self, value): > 56 self.m.update(f"=={type(value)}==".encode("utf8")) > ---> 57 self.m.update(self.hash(value).encode("utf-8")) > 58 > 59 def hexdigest(self): > > /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash(cls, value) > 51 return cls.dispatch[type(value)](cls, value) > 52 else: > ---> 53 return cls.hash_default(value) > 54 > 55 def update(self, value): > > /usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash_default(cls, value) > 44 @classmethod > 45 def hash_default(cls, value): > ---> 46 return cls.hash_bytes(dumps(value)) > 47 > 48 @classmethod > > /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dumps(obj) > 387 file = StringIO() > 388 with _no_cache_fields(obj): > --> 389 dump(obj, file) > 390 return file.getvalue() > 391 > > /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dump(obj, file) > 359 def dump(obj, file): > 360 """pickle an object to a file""" > --> 361 Pickler(file, recurse=True).dump(obj) > 362 return > 363 > > /usr/local/lib/python3.9/dist-packages/dill/_dill.py in dump(self, obj) > 392 return > 393 > --> 394 def load_session(filename='/tmp/session.pkl', main=None): > 395 """update the __main__ module with the state from the session file""" > 396 if main is None: main = _main_module > > /usr/lib/python3.9/pickle.py in dump(self, obj) > 485 if self.proto >= 4: > 486 self.framer.start_framing() > --> 487 self.save(obj) > 488 self.write(STOP) > 489 self.framer.end_framing() > > /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id) > 386 pickler._byref = False # disable pickling by name reference > 387 pickler._recurse = False # disable pickling recursion for globals > --> 388 pickler._session = True # is best indicator of when pickling a session > 389 pickler.dump(main) > 390 finally: > > /usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id) > 558 f = self.dispatch.get(t) > 559 if f is not None: > --> 560 f(self, obj) # Call unbound method with explicit self > 561 return > 562 > > /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save_singleton(pickler, obj) > > /usr/lib/python3.9/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj) > 689 write(NEWOBJ) > 690 else: > --> 691 save(func) > 692 save(args) > 693 write(REDUCE) > > /usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id) > 386 pickler._byref = False # disable pickling by name reference > 387 pickler._recurse = False # disable pickling recursion for globals > --> 388 pickler._session = True # is best indicator of when pickling a session > 389 pickler.dump(main) > 390 finally: > > /usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id) > 558 f = self.dispatch.get(t) > 559 if f is not None: > --> 560 f(self, obj) # Call unbound method with explicit self > 561 return > 562 > > /usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in save_function(pickler, obj) > 583 dill._dill.log.info("# F1") > 584 else: > --> 585 dill._dill.log.info("F2: %s" % obj) > 586 name = getattr(obj, "__qualname__", getattr(obj, "__name__", None)) > 587 dill._dill.StockPickler.save_global(pickler, obj, name=name) > > AttributeError: module 'dill._dill' has no attribute 'log' ### Steps to reproduce the bug After loading the dataset(eg: https://huggingface.co/datasets/scientific_papers) in google colab do either > data = train_dataset.select(range(10)) or > train_datasets = train_dataset.map( > process_data_to_model_inputs, > batched=True, > batch_size=batch_size, > remove_columns=["article", "abstract"], > ) ### Expected behavior The map and select function should work ### Environment info dataset: https://huggingface.co/datasets/scientific_papers dill = 0.3.6 python= 3.9.16 transformer = 4.2.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5645/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5645/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5644/comments
https://api.github.com/repos/huggingface/datasets/issues/5644/events
https://github.com/huggingface/datasets/pull/5644
1,626,204,046
PR_kwDODunzps5MJHUi
5,644
Allow direct cast from binary to Audio/Image
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
3
"2023-03-15T20:02:54Z"
"2023-03-16T14:20:44Z"
"2023-03-16T14:12:55Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5644.diff", "html_url": "https://github.com/huggingface/datasets/pull/5644", "merged_at": "2023-03-16T14:12:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/5644.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5644" }
To address https://github.com/huggingface/datasets/discussions/5593.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5644/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5644/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5643/comments
https://api.github.com/repos/huggingface/datasets/issues/5643/events
https://github.com/huggingface/datasets/pull/5643
1,626,160,220
PR_kwDODunzps5MI9zO
5,643
Support PyArrow arrays as column values in `from_dict`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
3
"2023-03-15T19:32:40Z"
"2023-03-16T17:23:06Z"
"2023-03-16T17:15:40Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5643.diff", "html_url": "https://github.com/huggingface/datasets/pull/5643", "merged_at": "2023-03-16T17:15:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/5643.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5643" }
For consistency with `pa.Table.from_pydict`, which supports both Python lists and PyArrow arrays as column values. "Fixes" https://discuss.huggingface.co/t/pyarrow-lib-floatarray-did-not-recognize-python-value-type-when-inferring-an-arrow-data-type/33417
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5643/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5643/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5642/comments
https://api.github.com/repos/huggingface/datasets/issues/5642/events
https://github.com/huggingface/datasets/pull/5642
1,626,043,177
PR_kwDODunzps5MIjw9
5,642
Bump hfh to 0.11.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
6
"2023-03-15T18:26:07Z"
"2023-03-20T12:34:09Z"
"2023-03-20T12:26:58Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5642.diff", "html_url": "https://github.com/huggingface/datasets/pull/5642", "merged_at": "2023-03-20T12:26:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/5642.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5642" }
to fix errors like ``` requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/... ``` (e.g. from this [failing CI](https://github.com/huggingface/datasets/actions/runs/4428956210/jobs/7769160997)) 0.11.0 is the current minimum version in `transformers` around 5% of users are currently using versions `<0.11.0`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5642/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5642/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5641/comments
https://api.github.com/repos/huggingface/datasets/issues/5641/events
https://github.com/huggingface/datasets/issues/5641
1,625,942,730
I_kwDODunzps5g6erK
5,641
Features cannot be named "self"
{ "avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4", "events_url": "https://api.github.com/users/alialamiidrissi/events{/privacy}", "followers_url": "https://api.github.com/users/alialamiidrissi/followers", "following_url": "https://api.github.com/users/alialamiidrissi/following{/other_user}", "gists_url": "https://api.github.com/users/alialamiidrissi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alialamiidrissi", "id": 14365168, "login": "alialamiidrissi", "node_id": "MDQ6VXNlcjE0MzY1MTY4", "organizations_url": "https://api.github.com/users/alialamiidrissi/orgs", "received_events_url": "https://api.github.com/users/alialamiidrissi/received_events", "repos_url": "https://api.github.com/users/alialamiidrissi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alialamiidrissi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alialamiidrissi/subscriptions", "type": "User", "url": "https://api.github.com/users/alialamiidrissi" }
[]
closed
false
null
[]
null
0
"2023-03-15T17:16:40Z"
"2023-03-16T17:14:51Z"
"2023-03-16T17:14:51Z"
NONE
null
null
null
### Describe the bug Hi, I noticed that we cannot create a HuggingFace dataset from Pandas DataFrame with a column named `self`. The error seems to be coming from arguments validation in the `Features.from_dict` function. ### Steps to reproduce the bug ```python import datasets dummy_pandas = pd.DataFrame([0,1,2,3], columns = ["self"]) datasets.arrow_dataset.Dataset.from_pandas(dummy_pandas) ``` ### Expected behavior No error thrown ### Environment info - `datasets` version: 2.8.0 - Python version: 3.9.5 - PyArrow version: 6.0.1 - Pandas version: 1.4.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5641/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5641/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5640
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5640/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5640/comments
https://api.github.com/repos/huggingface/datasets/issues/5640/events
https://github.com/huggingface/datasets/pull/5640
1,625,896,057
PR_kwDODunzps5MID3I
5,640
Less zip false positives
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
6
"2023-03-15T16:48:59Z"
"2023-03-16T13:47:37Z"
"2023-03-16T13:40:12Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5640.diff", "html_url": "https://github.com/huggingface/datasets/pull/5640", "merged_at": "2023-03-16T13:40:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/5640.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5640" }
`zipfile.is_zipfile` return false positives for some Parquet files. It causes errors when loading certain parquet datasets, where some files are considered ZIP files by `zipfile.is_zipfile` This is a known issue: https://github.com/python/cpython/issues/72680 At first I wanted to rely only on magic numbers, but then I found that someone contributed a [fix to is_zipfile](https://github.com/python/cpython/pull/5053) - do you think we should use it @albertvillanova or not ? IMO it's ok to rely on magic numbers only for now, since in streaming mode we've had no issue checking only the magic number so far. Close https://github.com/huggingface/datasets/issues/5639
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5640/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5640/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5639/comments
https://api.github.com/repos/huggingface/datasets/issues/5639/events
https://github.com/huggingface/datasets/issues/5639
1,625,737,098
I_kwDODunzps5g5seK
5,639
Parquet file wrongly recognized as zip prevents loading a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/clefourrier", "id": 22726840, "login": "clefourrier", "node_id": "MDQ6VXNlcjIyNzI2ODQw", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "repos_url": "https://api.github.com/users/clefourrier/repos", "site_admin": false, "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "type": "User", "url": "https://api.github.com/users/clefourrier" }
[]
closed
false
null
[]
null
0
"2023-03-15T15:20:45Z"
"2023-03-16T13:40:14Z"
"2023-03-16T13:40:14Z"
MEMBER
null
null
null
### Describe the bug When trying to `load_dataset_builder` for `HuggingFaceGECLM/StackExchange_Mar2023`, extraction fails, because parquet file [devops-00000-of-00001-22fe902fd8702892.parquet](https://huggingface.co/datasets/HuggingFaceGECLM/StackExchange_Mar2023/resolve/1f8c9a2ab6f7d0f9ae904b8b922e4384592ae1a5/data/devops-00000-of-00001-22fe902fd8702892.parquet) is wrongly identified by python as being a zip not a parquet. (Full thread on [Slack](https://huggingface.slack.com/archives/C02V51Q3800/p1678890880803599)) ### Steps to reproduce the bug ```python from datasets import load_dataset_builder ds = load_dataset_builder("HuggingFaceGECLM/StackExchange_Mar2023") ``` ### Expected behavior Loading the file normally. ### Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.14.0-1058-oem-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5639/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5639/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5638/comments
https://api.github.com/repos/huggingface/datasets/issues/5638/events
https://github.com/huggingface/datasets/issues/5638
1,625,564,471
I_kwDODunzps5g5CU3
5,638
xPath to implement all operations for Path
{ "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomasw21", "id": 24695242, "login": "thomasw21", "node_id": "MDQ6VXNlcjI0Njk1MjQy", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "repos_url": "https://api.github.com/users/thomasw21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "type": "User", "url": "https://api.github.com/users/thomasw21" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
5
"2023-03-15T13:47:11Z"
"2023-03-17T13:21:12Z"
"2023-03-17T13:21:12Z"
CONTRIBUTOR
null
null
null
### Feature request Current xPath implementation is a great extension of Path in order to work with remote objects. However some methods such as `mkdir` are not implemented correctly. It should instead rely on `fsspec` methods, instead of defaulting do `Path` methods which only work locally. ### Motivation I'm using xPath to interact with remote objects. ### Your contribution I could try to make a PR. I'm a bit unfamiliar with chaining right now.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5638/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5638/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5637
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5637/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5637/comments
https://api.github.com/repos/huggingface/datasets/issues/5637/events
https://github.com/huggingface/datasets/issues/5637
1,625,295,691
I_kwDODunzps5g4AtL
5,637
IterableDataset with_format does not support 'device' keyword for jax
{ "avatar_url": "https://avatars.githubusercontent.com/u/91322985?v=4", "events_url": "https://api.github.com/users/Lime-Cakes/events{/privacy}", "followers_url": "https://api.github.com/users/Lime-Cakes/followers", "following_url": "https://api.github.com/users/Lime-Cakes/following{/other_user}", "gists_url": "https://api.github.com/users/Lime-Cakes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lime-Cakes", "id": 91322985, "login": "Lime-Cakes", "node_id": "MDQ6VXNlcjkxMzIyOTg1", "organizations_url": "https://api.github.com/users/Lime-Cakes/orgs", "received_events_url": "https://api.github.com/users/Lime-Cakes/received_events", "repos_url": "https://api.github.com/users/Lime-Cakes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lime-Cakes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lime-Cakes/subscriptions", "type": "User", "url": "https://api.github.com/users/Lime-Cakes" }
[]
open
false
null
[]
null
2
"2023-03-15T11:04:12Z"
"2023-03-16T18:30:59Z"
null
NONE
null
null
null
### Describe the bug As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device', to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the error `TypeError: with_format() got an unexpected keyword argument 'device'` Looking over the code, it seems IterableDataset support only pytorch and no support for jax device keyword? https://github.com/huggingface/datasets/blob/fc5c84f36684343bff3e424cb0fd1ac5ecdd66da/src/datasets/iterable_dataset.py#L1029 ### Steps to reproduce the bug 1. Load an IterableDataset (tested in streaming mode) 2. Call with_format('jax',device=device) ### Expected behavior I expect to call `with_format('jax', device=device)` as per [documentation](https://huggingface.co/docs/datasets/use_with_jax) without error ### Environment info Tested with installing newest (dev) and also pip release (2.10.1). - `datasets` version: 2.10.2.dev0 - Platform: Linux-5.15.89+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - Huggingface_hub version: 0.12.1 - PyArrow version: 11.0.0 - Pandas version: 1.3.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5637/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5637/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5636
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5636/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5636/comments
https://api.github.com/repos/huggingface/datasets/issues/5636/events
https://github.com/huggingface/datasets/pull/5636
1,623,721,577
PR_kwDODunzps5MAunR
5,636
Fix CI: ignore C901 ("some_func" is to complex) in `ruff`
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
closed
false
null
[]
null
2
"2023-03-14T15:29:11Z"
"2023-03-14T16:37:06Z"
"2023-03-14T16:29:52Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5636.diff", "html_url": "https://github.com/huggingface/datasets/pull/5636", "merged_at": "2023-03-14T16:29:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/5636.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5636" }
idk if I should have added this ignore to `ruff` too, but I added :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5636/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5636/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5635
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5635/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5635/comments
https://api.github.com/repos/huggingface/datasets/issues/5635/events
https://github.com/huggingface/datasets/pull/5635
1,623,682,558
PR_kwDODunzps5MAmLU
5,635
Pass custom metadata filename to Image/Audio folders
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
open
false
null
[]
null
4
"2023-03-14T15:08:16Z"
"2023-03-22T17:50:31Z"
null
CONTRIBUTOR
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/5635.diff", "html_url": "https://github.com/huggingface/datasets/pull/5635", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5635.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5635" }
This is a quick fix. Now it requires to pass data via `data_files` parameters and include a required metadata file there and pass its filename as `metadata_filename` parameter. For example, with the structure like: ``` data images_dir/ im1.jpg im2.jpg ... metadata_dir/ meta_file1.jsonl meta_file2.jsonl ... ``` to load data with `metadata_file1.jsonl` do: ```python ds = load_dataset("imagefolder", data_files=["data/images_dir/**", "data/metadata_dir/meta_file1.jsonl"], metadata_filename="meta_file1.jsonl") ``` Note that if you have multiple splits, metadata file should be specified in each of them in `data_files`, smth like: ```python data_files={ "train": ["data/train/**", "data/metadata_dir/meta_file1.jsonl"], "test": ["data/train/**", "data/metadata_dir/meta_file1.jsonl"] } ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5635/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5635/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5634/comments
https://api.github.com/repos/huggingface/datasets/issues/5634/events
https://github.com/huggingface/datasets/issues/5634
1,622,424,174
I_kwDODunzps5gtDpu
5,634
Not all progress bars are showing up when they should for downloading dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/110427462?v=4", "events_url": "https://api.github.com/users/garlandz-db/events{/privacy}", "followers_url": "https://api.github.com/users/garlandz-db/followers", "following_url": "https://api.github.com/users/garlandz-db/following{/other_user}", "gists_url": "https://api.github.com/users/garlandz-db/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/garlandz-db", "id": 110427462, "login": "garlandz-db", "node_id": "U_kgDOBpT9Rg", "organizations_url": "https://api.github.com/users/garlandz-db/orgs", "received_events_url": "https://api.github.com/users/garlandz-db/received_events", "repos_url": "https://api.github.com/users/garlandz-db/repos", "site_admin": false, "starred_url": "https://api.github.com/users/garlandz-db/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/garlandz-db/subscriptions", "type": "User", "url": "https://api.github.com/users/garlandz-db" }
[]
closed
false
null
[]
null
2
"2023-03-13T23:04:18Z"
"2023-10-11T16:30:16Z"
"2023-10-11T16:30:16Z"
NONE
null
null
null
### Describe the bug During downloading the rotten tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117) as it raised the same concern but its not clear if the fix solves this issue too. ipywidgets <img width="1243" alt="image" src="https://user-images.githubusercontent.com/110427462/224851138-13fee5b7-ab51-4883-b96f-1b9808782e3b.png"> tqdm <img width="1251" alt="Screen Shot 2023-03-13 at 3 58 59 PM" src="https://user-images.githubusercontent.com/110427462/224851180-5feb7825-9250-4b1e-ad0c-f3172ac1eb78.png"> ### Steps to reproduce the bug 1. Run this line ``` from datasets import load_dataset rotten_tomatoes = load_dataset("rotten_tomatoes", split="train") ``` ### Expected behavior all progress bars for builder script, metadata, readme, training, validation, and test set ### Environment info requirements.txt ``` aiofiles==22.1.0 aiohttp==3.8.4 aiosignal==1.3.1 aiosqlite==0.18.0 anyio==3.6.2 appnope==0.1.3 argon2-cffi==21.3.0 argon2-cffi-bindings==21.2.0 arrow==1.2.3 asttokens==2.2.1 async-generator==1.10 async-timeout==4.0.2 attrs==22.2.0 Babel==2.12.1 backcall==0.2.0 beautifulsoup4==4.11.2 bleach==6.0.0 brotlipy @ file:///Users/runner/miniforge3/conda-bld/brotlipy_1666764961872/work certifi==2022.12.7 cffi @ file:///Users/runner/miniforge3/conda-bld/cffi_1671179414629/work cfgv==3.3.1 charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1661170624537/work comm==0.1.2 conda==22.9.0 conda-package-handling @ file:///home/conda/feedstock_root/build_artifacts/conda-package-handling_1669907009957/work conda_package_streaming @ file:///home/conda/feedstock_root/build_artifacts/conda-package-streaming_1669733752472/work coverage==7.2.1 cryptography @ file:///Users/runner/miniforge3/conda-bld/cryptography_1669592251328/work datasets==2.1.0 debugpy==1.6.6 decorator==5.1.1 defusedxml==0.7.1 dill==0.3.6 distlib==0.3.6 distro==1.4.0 entrypoints==0.4 exceptiongroup==1.1.0 executing==1.2.0 fastjsonschema==2.16.3 filelock==3.9.0 flaky==3.7.0 fqdn==1.5.1 frozenlist==1.3.3 fsspec==2023.3.0 huggingface-hub==0.10.1 identify==2.5.18 idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1663625384323/work iniconfig==2.0.0 ipykernel==6.12.1 ipyparallel==8.4.1 ipython==7.32.0 ipython-genutils==0.2.0 ipywidgets==8.0.4 isoduration==20.11.0 jedi==0.18.2 Jinja2==3.1.2 json5==0.9.11 jsonpointer==2.3 jsonschema==4.17.3 jupyter-events==0.6.3 jupyter-ydoc==0.2.2 jupyter_client==8.0.3 jupyter_core==5.2.0 jupyter_server==2.4.0 jupyter_server_fileid==0.8.0 jupyter_server_terminals==0.4.4 jupyter_server_ydoc==0.6.1 jupyterlab==3.6.1 jupyterlab-pygments==0.2.2 jupyterlab-widgets==3.0.5 jupyterlab_server==2.20.0 libmambapy @ file:///Users/runner/miniforge3/conda-bld/mamba-split_1671598370072/work/libmambapy mamba @ file:///Users/runner/miniforge3/conda-bld/mamba-split_1671598370072/work/mamba MarkupSafe==2.1.2 matplotlib-inline==0.1.6 mistune==2.0.5 multidict==6.0.4 multiprocess==0.70.14 nbclassic==0.5.3 nbclient==0.7.2 nbconvert==7.2.9 nbformat==5.7.3 nest-asyncio==1.5.6 nodeenv==1.7.0 notebook==6.5.3 notebook_shim==0.2.2 numpy==1.24.2 outcome==1.2.0 packaging==23.0 pandas==1.5.3 pandocfilters==1.5.0 parso==0.8.3 pexpect==4.8.0 pickleshare==0.7.5 platformdirs==3.0.0 plotly==5.13.1 pluggy==1.0.0 pre-commit==3.1.0 prometheus-client==0.16.0 prompt-toolkit==3.0.38 psutil==5.9.4 ptyprocess==0.7.0 pure-eval==0.2.2 pyarrow==11.0.0 pycosat @ file:///Users/runner/miniforge3/conda-bld/pycosat_1666836580084/work pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work Pygments==2.14.0 pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1665350324128/work pyrsistent==0.19.3 PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1661604839144/work pytest==7.2.1 pytest-asyncio==0.20.3 pytest-cov==4.0.0 pytest-timeout==2.1.0 python-dateutil==2.8.2 python-json-logger==2.0.7 pytz==2022.7.1 PyYAML==6.0 pyzmq==25.0.0 requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1661872987712/work responses==0.18.0 rfc3339-validator==0.1.4 rfc3986-validator==0.1.1 ruamel-yaml-conda @ file:///Users/runner/miniforge3/conda-bld/ruamel_yaml_1666819760545/work Send2Trash==1.8.0 simplegeneric==0.8.1 six==1.16.0 sniffio==1.3.0 sortedcontainers==2.4.0 soupsieve==2.4 stack-data==0.6.2 tenacity==8.2.2 terminado==0.17.1 tinycss2==1.2.1 tomli==2.0.1 toolz @ file:///home/conda/feedstock_root/build_artifacts/toolz_1657485559105/work tornado==6.2 tqdm==4.64.1 traitlets==5.8.1 trio==0.22.0 typing_extensions==4.5.0 uri-template==1.2.0 urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1669259737463/work virtualenv==20.19.0 wcwidth==0.2.6 webcolors==1.12 webencodings==0.5.1 websocket-client==1.5.1 widgetsnbextension==4.0.5 xxhash==3.2.0 y-py==0.5.9 yarl==1.8.2 ypy-websocket==0.8.2 zstandard==0.19.0 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5634/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5634/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5633/comments
https://api.github.com/repos/huggingface/datasets/issues/5633/events
https://github.com/huggingface/datasets/issues/5633
1,621,469,970
I_kwDODunzps5gpasS
5,633
Cannot import datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/11250555?v=4", "events_url": "https://api.github.com/users/eerio/events{/privacy}", "followers_url": "https://api.github.com/users/eerio/followers", "following_url": "https://api.github.com/users/eerio/following{/other_user}", "gists_url": "https://api.github.com/users/eerio/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eerio", "id": 11250555, "login": "eerio", "node_id": "MDQ6VXNlcjExMjUwNTU1", "organizations_url": "https://api.github.com/users/eerio/orgs", "received_events_url": "https://api.github.com/users/eerio/received_events", "repos_url": "https://api.github.com/users/eerio/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eerio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eerio/subscriptions", "type": "User", "url": "https://api.github.com/users/eerio" }
[]
closed
false
null
[]
null
1
"2023-03-13T13:14:44Z"
"2023-03-13T17:54:19Z"
"2023-03-13T17:54:19Z"
NONE
null
null
null
### Describe the bug Hi, I cannot even import the library :( I installed it by running: ``` $ conda install datasets ``` Then I realized I should maybe use the huggingface channel, because I encountered the error below, so I ran: ``` $ conda remove datasets $ conda install -c huggingface datasets ``` Please see 'steps to reproduce the bug' for the specific error, as steps to reproduce is just importing the library ### Steps to reproduce the bug ``` $ python3 Python 3.8.15 (default, Nov 24 2022, 15:19:38) [GCC 11.2.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import datasets Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/__init__.py", line 33, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 59, in <module> from .arrow_reader import ArrowReader File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/arrow_reader.py", line 27, in <module> import pyarrow.parquet as pq File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/parquet/__init__.py", line 20, in <module> from .core import * File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/parquet/core.py", line 37, in <module> from pyarrow._parquet import (ParquetReader, Statistics, # noqa ImportError: cannot import name 'FileEncryptionProperties' from 'pyarrow._parquet' (/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/_parquet.cpython-38-x86_64-linux-gnu.so) ``` ### Expected behavior I would expect for the statement `import datasets` to cause no error ### Environment info Output of `conda list`: ``` # packages in environment at /home/jack/.conda/envs/pbalawender_zpp: # # Name Version Build Channel _libgcc_mutex 0.1 main _openmp_mutex 5.1 1_gnu abseil-cpp 20210324.2 h2531618_0 advertools 0.13.2 pypi_0 pypi aiofiles 0.8.0 pypi_0 pypi aiohttp 3.8.3 py38h5eee18b_0 aiosignal 1.2.0 pyhd3eb1b0_0 aiosqlite 0.17.0 pypi_0 pypi anyio 3.6.2 pypi_0 pypi aquirdturtle-collapsible-headings 3.1.0 pypi_0 pypi argon2-cffi 21.3.0 pypi_0 pypi argon2-cffi-bindings 21.2.0 pypi_0 pypi arrow 1.2.3 pypi_0 pypi arrow-cpp 3.0.0 py38h6b21186_4 asttokens 2.2.0 pypi_0 pypi async-timeout 4.0.2 py38h06a4308_0 attrs 22.1.0 py38h06a4308_0 automat 22.10.0 pypi_0 pypi aws-c-common 0.4.57 he6710b0_1 aws-c-event-stream 0.1.6 h2531618_5 aws-checksums 0.1.9 he6710b0_0 aws-sdk-cpp 1.8.185 hce553d0_0 babel 2.11.0 pypi_0 pypi backcall 0.2.0 pyhd3eb1b0_0 beautifulsoup4 4.11.1 pypi_0 pypi blas 1.0 mkl bleach 5.0.1 pypi_0 pypi boost-cpp 1.73.0 h27cfd23_11 bottleneck 1.3.5 py38h7deecbd_0 brotli 1.0.9 h5eee18b_7 brotli-bin 1.0.9 h5eee18b_7 brotlipy 0.7.0 py38h27cfd23_1003 bzip2 1.0.8 h7b6447c_0 c-ares 1.18.1 h7f8727e_0 ca-certificates 2023.01.10 h06a4308_0 certifi 2022.9.24 pypi_0 pypi cffi 1.15.1 py38h5eee18b_3 charset-normalizer 2.1.1 pypi_0 pypi click 8.1.3 pypi_0 pypi constantly 15.1.0 pypi_0 pypi contourpy 1.0.6 pypi_0 pypi cryptography 38.0.4 pypi_0 pypi cssselect 1.2.0 pypi_0 pypi cudatoolkit 10.1.243 h8cb64d8_10 conda-forge cycler 0.11.0 pypi_0 pypi dacite 1.6.0 pypi_0 pypi dataclasses 0.8 pyh6d0b6a4_7 datasets 1.18.4 py_0 huggingface datetime 4.7 pypi_0 pypi debugpy 1.6.4 pypi_0 pypi decorator 5.1.1 pyhd3eb1b0_0 defusedxml 0.7.1 pypi_0 pypi dill 0.3.6 py38h06a4308_0 docker-pycreds 0.4.0 pypi_0 pypi double-conversion 3.1.5 he6710b0_1 entrypoints 0.4 py38h06a4308_0 executing 0.8.3 pyhd3eb1b0_0 filelock 3.8.0 pypi_0 pypi flake8 6.0.0 pypi_0 pypi flask 2.1.3 py38h06a4308_0 flit-core 3.6.0 pyhd3eb1b0_0 fonttools 4.38.0 pypi_0 pypi fqdn 1.5.1 pypi_0 pypi freetype 2.12.1 h4a9f257_0 frozenlist 1.3.3 py38h5eee18b_0 fsspec 2022.11.0 py38h06a4308_0 gensim 4.2.0 pypi_0 pypi gflags 2.2.2 he6710b0_0 giflib 5.2.1 h5eee18b_3 gitdb 4.0.10 pypi_0 pypi gitpython 3.1.30 pypi_0 pypi glog 0.5.0 h2531618_0 grpc-cpp 1.39.0 hae934f6_5 huggingface-hub 0.11.1 pypi_0 pypi huggingface_hub 0.13.1 py_0 huggingface hyperlink 21.0.0 pypi_0 pypi icu 58.2 he6710b0_3 idna 3.4 py38h06a4308_0 importlib-metadata 5.1.0 pypi_0 pypi importlib_metadata 4.11.3 hd3eb1b0_0 importlib_resources 5.2.0 pyhd3eb1b0_1 incremental 22.10.0 pypi_0 pypi intel-openmp 2021.4.0 h06a4308_3561 ipykernel 6.17.1 pyh210e3f2_0 conda-forge ipython 8.7.0 pypi_0 pypi ipython-genutils 0.2.0 pypi_0 pypi ipywidgets 8.0.2 pyhd8ed1ab_1 conda-forge isoduration 20.11.0 pypi_0 pypi itemadapter 0.7.0 pypi_0 pypi itemloaders 1.0.6 pypi_0 pypi itsdangerous 2.0.1 pyhd3eb1b0_0 jedi 0.18.2 pypi_0 pypi jinja2 3.1.2 py38h06a4308_0 jmespath 1.0.1 pypi_0 pypi joblib 1.2.0 pypi_0 pypi jpeg 9b h024ee3a_2 json5 0.9.10 pypi_0 pypi jsonpickle 3.0.0 pypi_0 pypi jsonpointer 2.3 pypi_0 pypi jsonschema 4.17.3 py38h06a4308_0 jupyter-core 5.1.0 pypi_0 pypi jupyter-events 0.5.0 pypi_0 pypi jupyter-server 1.23.3 pypi_0 pypi jupyter-server-fileid 0.6.0 pypi_0 pypi jupyter-server-ydoc 0.4.0 pypi_0 pypi jupyter-ydoc 0.2.2 pypi_0 pypi jupyter_client 7.4.9 py38h06a4308_0 jupyter_core 5.2.0 py38h06a4308_0 jupyterlab 3.6.0a4 pypi_0 pypi jupyterlab-pygments 0.2.2 pypi_0 pypi jupyterlab-server 2.16.3 pypi_0 pypi jupyterlab_widgets 3.0.3 pyhd8ed1ab_0 conda-forge kiwisolver 1.4.4 pypi_0 pypi krb5 1.19.4 h568e23c_0 lcms2 2.12 h3be6417_0 ld_impl_linux-64 2.38 h1181459_1 libboost 1.73.0 h3ff78a5_11 libbrotlicommon 1.0.9 h5eee18b_7 libbrotlidec 1.0.9 h5eee18b_7 libbrotlienc 1.0.9 h5eee18b_7 libcurl 7.88.1 h91b91d3_0 libedit 3.1.20221030 h5eee18b_0 libev 4.33 h7f8727e_1 libevent 2.1.12 h8f2d780_0 libffi 3.4.2 h6a678d5_6 libgcc-ng 11.2.0 h1234567_1 libgomp 11.2.0 h1234567_1 libnghttp2 1.46.0 hce63b2e_0 libpng 1.6.39 h5eee18b_0 libprotobuf 3.17.2 h4ff587b_1 libsodium 1.0.18 h7b6447c_0 libssh2 1.10.0 h8f2d780_0 libstdcxx-ng 11.2.0 h1234567_1 libthrift 0.14.2 hcc01f38_0 libtiff 4.1.0 h2733197_1 libuv 1.44.2 h5eee18b_0 libwebp 1.2.0 h89dd481_0 lz4-c 1.9.4 h6a678d5_0 markupsafe 2.1.1 py38h7f8727e_0 matplotlib 3.6.2 pypi_0 pypi matplotlib-inline 0.1.6 py38h06a4308_0 mccabe 0.7.0 pypi_0 pypi mistune 2.0.4 pypi_0 pypi mkl 2021.4.0 h06a4308_640 mkl-service 2.4.0 py38h7f8727e_0 mkl_fft 1.3.1 py38hd3c417c_0 mkl_random 1.2.2 py38h51133e4_0 morfeusz2 1.99.6 pypi_0 pypi multidict 6.0.2 py38h5eee18b_0 multiprocess 0.70.14 py38h06a4308_0 nbclassic 0.4.8 pypi_0 pypi nbclient 0.7.2 pypi_0 pypi nbconvert 7.2.5 pypi_0 pypi nbformat 5.7.0 py38h06a4308_0 ncurses 6.4 h6a678d5_0 nest-asyncio 1.5.6 py38h06a4308_0 ninja 1.10.2 h06a4308_5 ninja-base 1.10.2 hd09550d_5 notebook 6.5.2 pypi_0 pypi notebook-shim 0.2.2 pypi_0 pypi numexpr 2.8.4 py38he184ba9_0 numpy 1.23.5 py38h14f4228_0 numpy-base 1.23.5 py38h31eccc5_0 oauthlib 3.2.2 pypi_0 pypi opencv-python 4.6.0.66 pypi_0 pypi openssl 1.1.1t h7f8727e_0 orc 1.6.9 ha97a36c_3 packaging 22.0 py38h06a4308_0 pandas 1.5.2 pypi_0 pypi pandocfilters 1.5.0 pypi_0 pypi parsel 1.7.0 pypi_0 pypi parso 0.8.3 pyhd3eb1b0_0 pathlib 1.0.1 pypi_0 pypi pathtools 0.1.2 pypi_0 pypi pexpect 4.8.0 pyhd3eb1b0_3 pickleshare 0.7.5 pyhd3eb1b0_1003 pillow 9.3.0 pypi_0 pypi pip 22.2.2 py38h06a4308_0 pkgutil-resolve-name 1.3.10 py38h06a4308_0 platformdirs 2.5.4 pypi_0 pypi prometheus-client 0.15.0 pypi_0 pypi promise 2.3 pypi_0 pypi prompt-toolkit 3.0.33 pypi_0 pypi protego 0.2.1 pypi_0 pypi protobuf 4.21.12 pypi_0 pypi psutil 5.9.0 py38h5eee18b_0 ptyprocess 0.7.0 pyhd3eb1b0_2 pure_eval 0.2.2 pyhd3eb1b0_0 pyarrow 10.0.1 pypi_0 pypi pyasn1 0.4.8 pypi_0 pypi pyasn1-modules 0.2.8 pypi_0 pypi pycodestyle 2.10.0 pypi_0 pypi pycparser 2.21 pyhd3eb1b0_0 pydispatcher 2.0.6 pypi_0 pypi pyflakes 3.0.1 pypi_0 pypi pygments 2.11.2 pyhd3eb1b0_0 pyopenssl 22.1.0 pypi_0 pypi pyrsistent 0.18.0 py38heee7806_0 pysocks 1.7.1 py38h06a4308_0 python 3.8.15 h7a1cb2a_2 python-dateutil 2.8.2 pyhd3eb1b0_0 python-dotenv 0.21.0 pypi_0 pypi python-fastjsonschema 2.16.2 py38h06a4308_0 python-json-logger 2.0.4 pypi_0 pypi python-xxhash 2.0.2 py38h5eee18b_1 pytorch 1.7.1 py3.8_cuda10.1.243_cudnn7.6.3_0 pytorch pytz 2022.6 pypi_0 pypi pyyaml 6.0 py38h5eee18b_1 pyzmq 23.2.0 py38h6a678d5_0 queuelib 1.6.2 pypi_0 pypi re2 2022.04.01 h295c915_0 readline 8.2 h5eee18b_0 regex 2022.10.31 pypi_0 pypi requests 2.28.1 py38h06a4308_0 requests-file 1.5.1 pypi_0 pypi requests-oauthlib 1.3.1 pypi_0 pypi rfc3339-validator 0.1.4 pypi_0 pypi rfc3986-validator 0.1.1 pypi_0 pypi scikit-learn 1.1.3 pypi_0 pypi scipy 1.9.3 pypi_0 pypi scrapy 2.7.1 pypi_0 pypi seaborn 0.12.1 pypi_0 pypi send2trash 1.8.0 pypi_0 pypi sentry-sdk 1.12.1 pypi_0 pypi service-identity 21.1.0 pypi_0 pypi setproctitle 1.3.2 pypi_0 pypi setuptools 65.6.3 pypi_0 pypi shortuuid 1.0.11 pypi_0 pypi six 1.16.0 pyhd3eb1b0_1 smart-open 6.2.0 pypi_0 pypi smmap 5.0.0 pypi_0 pypi snappy 1.1.9 h295c915_0 sniffio 1.3.0 pypi_0 pypi soupsieve 2.3.2.post1 pypi_0 pypi sqlite 3.40.1 h5082296_0 stack-data 0.6.2 pypi_0 pypi stack_data 0.2.0 pyhd3eb1b0_0 terminado 0.17.0 pypi_0 pypi threadpoolctl 3.1.0 pypi_0 pypi tinycss2 1.2.1 pypi_0 pypi tk 8.6.12 h1ccaba5_0 tldextract 3.4.0 pypi_0 pypi tokenizers 0.13.2 pypi_0 pypi tomli 2.0.1 pypi_0 pypi torchvision 0.8.2 py38_cu101 pytorch tornado 6.2 py38h5eee18b_0 tqdm 4.64.1 py38h06a4308_0 traitlets 5.6.0 pypi_0 pypi transformers 4.25.1 pypi_0 pypi tweepy 4.12.1 pypi_0 pypi twisted 22.10.0 pypi_0 pypi twython 3.9.1 pypi_0 pypi typing-extensions 4.4.0 py38h06a4308_0 typing_extensions 4.4.0 py38h06a4308_0 uri-template 1.2.0 pypi_0 pypi uriparser 0.9.3 he6710b0_1 urllib3 1.26.13 pypi_0 pypi utf8proc 2.6.1 h27cfd23_0 w3lib 2.1.0 pypi_0 pypi wandb 0.13.7 pypi_0 pypi wcwidth 0.2.5 pyhd3eb1b0_0 webcolors 1.12 pypi_0 pypi webencodings 0.5.1 pypi_0 pypi websocket-client 1.4.2 pypi_0 pypi werkzeug 2.2.2 py38h06a4308_0 wheel 0.38.4 py38h06a4308_0 widgetsnbextension 4.0.3 py38h06a4308_0 xxhash 0.8.0 h7f8727e_3 xz 5.2.10 h5eee18b_1 y-py 0.5.4 pypi_0 pypi yaml 0.2.5 h7b6447c_0 yarl 1.8.1 py38h5eee18b_0 ypy-websocket 0.5.0 pypi_0 pypi zeromq 4.3.4 h2531618_0 zipp 3.11.0 py38h06a4308_0 zlib 1.2.13 h5eee18b_0 zope-interface 5.5.2 pypi_0 pypi zstd 1.4.9 haebb681_0 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5633/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5633/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5632/comments
https://api.github.com/repos/huggingface/datasets/issues/5632/events
https://github.com/huggingface/datasets/issues/5632
1,621,177,391
I_kwDODunzps5goTQv
5,632
Dataset cannot convert too large dictionnary
{ "avatar_url": "https://avatars.githubusercontent.com/u/108518627?v=4", "events_url": "https://api.github.com/users/MaraLac/events{/privacy}", "followers_url": "https://api.github.com/users/MaraLac/followers", "following_url": "https://api.github.com/users/MaraLac/following{/other_user}", "gists_url": "https://api.github.com/users/MaraLac/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MaraLac", "id": 108518627, "login": "MaraLac", "node_id": "U_kgDOBnfc4w", "organizations_url": "https://api.github.com/users/MaraLac/orgs", "received_events_url": "https://api.github.com/users/MaraLac/received_events", "repos_url": "https://api.github.com/users/MaraLac/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MaraLac/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MaraLac/subscriptions", "type": "User", "url": "https://api.github.com/users/MaraLac" }
[]
open
false
null
[]
null
1
"2023-03-13T10:14:40Z"
"2023-03-16T15:28:57Z"
null
NONE
null
null
null
### Describe the bug Hello everyone! I tried to build a new dataset with the command "dict_valid = datasets.Dataset.from_dict({'input_values': values_array})". However, I have a very large dataset (~400Go) and it seems that dataset cannot handle this. Indeed, I can create the dataset until a certain size of my dictionnary, and then I have the error "OverflowError: Python int too large to convert to C long". Do you know how to solve this problem? Unfortunately I cannot give a reproductible code because I cannot share a so large file, but you can find the code below (it's a test on only a part of the validation data ~10Go, but it's already the case). Thank you! ### Steps to reproduce the bug SAVE_DIR = './data/' features = h5py.File(SAVE_DIR+'features.hdf5','r') valid_data = features["validation"]["data/features"] v_array_values = [np.float32(item[()]) for item in valid_data.values()] for i in range(len(v_array_values)): v_array_values[i] = v_array_values[i].round(decimals=5) dict_valid = datasets.Dataset.from_dict({'input_values': v_array_values}) ### Expected behavior The code is expected to give me a Huggingface dataset. ### Environment info python: 3.8.15 numpy: 1.22.3 datasets: 2.3.2 pyarrow: 8.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5632/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5632/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5631/comments
https://api.github.com/repos/huggingface/datasets/issues/5631/events
https://github.com/huggingface/datasets/issues/5631
1,620,442,854
I_kwDODunzps5glf7m
5,631
Custom split names
{ "avatar_url": "https://avatars.githubusercontent.com/u/79091831?v=4", "events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/events{/privacy}", "followers_url": "https://api.github.com/users/ErfanMoosaviMonazzah/followers", "following_url": "https://api.github.com/users/ErfanMoosaviMonazzah/following{/other_user}", "gists_url": "https://api.github.com/users/ErfanMoosaviMonazzah/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ErfanMoosaviMonazzah", "id": 79091831, "login": "ErfanMoosaviMonazzah", "node_id": "MDQ6VXNlcjc5MDkxODMx", "organizations_url": "https://api.github.com/users/ErfanMoosaviMonazzah/orgs", "received_events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/received_events", "repos_url": "https://api.github.com/users/ErfanMoosaviMonazzah/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ErfanMoosaviMonazzah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ErfanMoosaviMonazzah/subscriptions", "type": "User", "url": "https://api.github.com/users/ErfanMoosaviMonazzah" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
1
"2023-03-12T17:21:43Z"
"2023-03-24T14:13:00Z"
"2023-03-24T14:13:00Z"
NONE
null
null
null
### Feature request Hi, I participated in multiple NLP tasks where there are more than just train, test, validation splits, there could be multiple validation sets or test sets. But it seems currently only those mentioned three splits supported. It would be nice to have the support for more splits on the hub. (currently i can have more splits when I am loading datasets from urls, but not hub) ### Motivation Easier access to more splits ### Your contribution No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5631/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5631/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5630/comments
https://api.github.com/repos/huggingface/datasets/issues/5630/events
https://github.com/huggingface/datasets/pull/5630
1,620,327,510
PR_kwDODunzps5L1ahF
5,630
adds early exit if url is `PathLike`
{ "avatar_url": "https://avatars.githubusercontent.com/u/44398246?v=4", "events_url": "https://api.github.com/users/vvvm23/events{/privacy}", "followers_url": "https://api.github.com/users/vvvm23/followers", "following_url": "https://api.github.com/users/vvvm23/following{/other_user}", "gists_url": "https://api.github.com/users/vvvm23/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vvvm23", "id": 44398246, "login": "vvvm23", "node_id": "MDQ6VXNlcjQ0Mzk4MjQ2", "organizations_url": "https://api.github.com/users/vvvm23/orgs", "received_events_url": "https://api.github.com/users/vvvm23/received_events", "repos_url": "https://api.github.com/users/vvvm23/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vvvm23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vvvm23/subscriptions", "type": "User", "url": "https://api.github.com/users/vvvm23" }
[]
open
false
null
[]
null
1
"2023-03-12T11:23:28Z"
"2023-03-15T11:58:38Z"
null
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5630.diff", "html_url": "https://github.com/huggingface/datasets/pull/5630", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5630.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5630" }
Closes #4864 Should fix errors thrown when attempting to load `json` dataset using `pathlib.Path` in `data_files` argument.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5630/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5630/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5629/comments
https://api.github.com/repos/huggingface/datasets/issues/5629/events
https://github.com/huggingface/datasets/issues/5629
1,619,921,247
I_kwDODunzps5gjglf
5,629
load_dataset gives "403" error when using Financial phrasebank
{ "avatar_url": "https://avatars.githubusercontent.com/u/67709789?v=4", "events_url": "https://api.github.com/users/Jimchoo91/events{/privacy}", "followers_url": "https://api.github.com/users/Jimchoo91/followers", "following_url": "https://api.github.com/users/Jimchoo91/following{/other_user}", "gists_url": "https://api.github.com/users/Jimchoo91/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Jimchoo91", "id": 67709789, "login": "Jimchoo91", "node_id": "MDQ6VXNlcjY3NzA5Nzg5", "organizations_url": "https://api.github.com/users/Jimchoo91/orgs", "received_events_url": "https://api.github.com/users/Jimchoo91/received_events", "repos_url": "https://api.github.com/users/Jimchoo91/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Jimchoo91/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jimchoo91/subscriptions", "type": "User", "url": "https://api.github.com/users/Jimchoo91" }
[]
open
false
null
[]
null
1
"2023-03-11T07:46:39Z"
"2023-03-13T18:27:26Z"
null
NONE
null
null
null
When I try to load this dataset, I receive the following error: ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403) Has this been seen before? Thanks. The website loads when I try to access it manually.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5629/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5629/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5628/comments
https://api.github.com/repos/huggingface/datasets/issues/5628/events
https://github.com/huggingface/datasets/pull/5628
1,619,641,810
PR_kwDODunzps5LzVKi
5,628
add kwargs to index search
{ "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SaulLu", "id": 55560583, "login": "SaulLu", "node_id": "MDQ6VXNlcjU1NTYwNTgz", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "repos_url": "https://api.github.com/users/SaulLu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "type": "User", "url": "https://api.github.com/users/SaulLu" }
[]
closed
false
null
[]
null
1
"2023-03-10T21:24:58Z"
"2023-03-15T14:48:47Z"
"2023-03-15T14:46:04Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5628.diff", "html_url": "https://github.com/huggingface/datasets/pull/5628", "merged_at": "2023-03-15T14:46:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/5628.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5628" }
This PR proposes to add kwargs to index search methods. This is particularly useful for setting the timeout of a query on elasticsearch. A typical use case would be: ```python dset.add_elasticsearch_index("filename", es_client=es_client) scores, examples = dset.get_nearest_examples("filename", "my_name-train_29", request_timeout=60) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5628/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5628/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5627
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5627/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5627/comments
https://api.github.com/repos/huggingface/datasets/issues/5627/events
https://github.com/huggingface/datasets/issues/5627
1,619,336,609
I_kwDODunzps5ghR2h
5,627
Unable to load AutoTrain-generated dataset from the hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/8560151?v=4", "events_url": "https://api.github.com/users/ijmiller2/events{/privacy}", "followers_url": "https://api.github.com/users/ijmiller2/followers", "following_url": "https://api.github.com/users/ijmiller2/following{/other_user}", "gists_url": "https://api.github.com/users/ijmiller2/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ijmiller2", "id": 8560151, "login": "ijmiller2", "node_id": "MDQ6VXNlcjg1NjAxNTE=", "organizations_url": "https://api.github.com/users/ijmiller2/orgs", "received_events_url": "https://api.github.com/users/ijmiller2/received_events", "repos_url": "https://api.github.com/users/ijmiller2/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ijmiller2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ijmiller2/subscriptions", "type": "User", "url": "https://api.github.com/users/ijmiller2" }
[]
open
false
null
[]
null
2
"2023-03-10T17:25:58Z"
"2023-03-11T15:44:42Z"
null
NONE
null
null
null
### Describe the bug DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match ``` ValueError: Couldn't cast _data_files: list<item: struct<filename: string>> child 0, item: struct<filename: string> child 0, filename: string _fingerprint: string _format_columns: list<item: string> child 0, item: string _format_kwargs: struct<> _format_type: null _indexes: struct<> _output_all_columns: bool _split: null to {'citation': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'features': {'image': {'_type': Value(dtype='string', id=None)}, 'target': {'names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='null', id=None)}}} because column names don't match ``` ### Steps to reproduce the bug Steps to reproduce: 1. `pip install datasets==2.10.1` 2. Attempt to load (private dataset). Note that I'm authenticated via ` huggingface-cli login` ``` from datasets import load_dataset # load dataset dataset = "ijmiller2/autotrain-data-betterbin-vision-10000" dataset = load_dataset(dataset) ``` Here's the full traceback: ```Downloading and preparing dataset json/ijmiller2--autotrain-data-betterbin-vision-10000 to /Users/ian/.cache/huggingface/datasets/ijmiller2___json/ijmiller2--autotrain-data-betterbin-vision-10000-2eae034a9ff8a1a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51... Downloading data files: 100%|███████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2383.80it/s] Extracting data files: 100%|█████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 505.95it/s] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1874, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1868 writer = writer_class( 1869 features=writer._features, 1870 path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"), 1871 storage_options=self._fs.storage_options, 1872 embed_local_files=embed_local_files, 1873 ) -> 1874 writer.write_table(table) 1875 num_examples_progress_update += len(table) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/arrow_writer.py:568, in ArrowWriter.write_table(self, pa_table, writer_batch_size) 567 pa_table = pa_table.combine_chunks() --> 568 pa_table = table_cast(pa_table, self._schema) 569 if self.embed_local_files: File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/table.py:2312, in table_cast(table, schema) 2311 if table.schema != schema: -> 2312 return cast_table_to_schema(table, schema) 2313 elif table.schema.metadata != schema.metadata: File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/table.py:2270, in cast_table_to_schema(table, schema) 2269 if sorted(table.column_names) != sorted(features): -> 2270 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match") 2271 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] ValueError: Couldn't cast _data_files: list<item: struct<filename: string>> child 0, item: struct<filename: string> child 0, filename: string _fingerprint: string _format_columns: list<item: string> child 0, item: string _format_kwargs: struct<> _format_type: null _indexes: struct<> _output_all_columns: bool _split: null to {'citation': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'features': {'image': {'_type': Value(dtype='string', id=None)}, 'target': {'names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='null', id=None)}}} because column names don't match The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) Input In [8], in <cell line: 6>() 4 # load dataset 5 dataset = "ijmiller2/autotrain-data-betterbin-vision-10000" ----> 6 dataset = load_dataset(dataset) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/load.py:1782, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs) 1779 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1781 # Download and prepare data -> 1782 builder_instance.download_and_prepare( 1783 download_config=download_config, 1784 download_mode=download_mode, 1785 verification_mode=verification_mode, 1786 try_from_hf_gcs=try_from_hf_gcs, 1787 num_proc=num_proc, 1788 ) 1790 # Build dataset for splits 1791 keep_in_memory = ( 1792 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1793 ) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:872, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 870 if num_proc is not None: 871 prepare_split_kwargs["num_proc"] = num_proc --> 872 self._download_and_prepare( 873 dl_manager=dl_manager, 874 verification_mode=verification_mode, 875 **prepare_split_kwargs, 876 **download_and_prepare_kwargs, 877 ) 878 # Sync info 879 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:967, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 963 split_dict.add(split_generator.split_info) 965 try: 966 # Prepare split will record examples associated to the split --> 967 self._prepare_split(split_generator, **prepare_split_kwargs) 968 except OSError as e: 969 raise OSError( 970 "Cannot find data file. " 971 + (self.manual_download_instructions or "") 972 + "\nOriginal error:\n" 973 + str(e) 974 ) from None File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1749, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size) 1747 job_id = 0 1748 with pbar: -> 1749 for job_id, done, content in self._prepare_split_single( 1750 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1751 ): 1752 if done: 1753 result = content File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1892, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1890 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1891 e = e.__context__ -> 1892 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1894 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior I'm ultimately trying to generate my own performance metrics on validation data (before putting an endpoint into production) and so was hoping to load all or at least the validation subset from the hub. I'm expecting the `load_dataset()` function to work as shown in the documentation [here](https://huggingface.co/docs/datasets/loading#hugging-face-hub): ```python dataset = load_dataset( "lhoestq/custom_squad", revision="main" # tag name, or branch name, or commit hash ) ``` ### Environment info - `datasets` version: 2.10.1 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5627/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5627/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5626/comments
https://api.github.com/repos/huggingface/datasets/issues/5626/events
https://github.com/huggingface/datasets/pull/5626
1,619,252,984
PR_kwDODunzps5LyBT4
5,626
Support streaming datasets with numpy.load
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
2
"2023-03-10T16:33:39Z"
"2023-03-21T06:36:05Z"
"2023-03-21T06:28:54Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5626.diff", "html_url": "https://github.com/huggingface/datasets/pull/5626", "merged_at": "2023-03-21T06:28:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/5626.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5626" }
Support streaming datasets with `numpy.load`. See: https://huggingface.co/datasets/qgallouedec/gia_dataset/discussions/1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5626/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5626/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5625
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5625/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5625/comments
https://api.github.com/repos/huggingface/datasets/issues/5625/events
https://github.com/huggingface/datasets/issues/5625
1,618,971,855
I_kwDODunzps5gf4zP
5,625
Allow "jsonl" data type signifier
{ "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BramVanroy", "id": 2779410, "login": "BramVanroy", "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "repos_url": "https://api.github.com/users/BramVanroy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "type": "User", "url": "https://api.github.com/users/BramVanroy" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
2
"2023-03-10T13:21:48Z"
"2023-03-11T10:35:39Z"
null
CONTRIBUTOR
null
null
null
### Feature request `load_dataset` currently does not accept `jsonl` as type but only `json`. ### Motivation I was working with one of the `run_translation` scripts and used my own datasets (`.jsonl`) as train_dataset. But the default code did not work because ``` FileNotFoundError: Couldn't find a dataset script at jsonl\jsonl.py or any data file in the same directory. Couldn't find 'jsonl' on the Hugging Face Hub either: FileNotFoundError: Dataset 'jsonl' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`. ``` The reason is because the script has these lines to extract the data type by its extension. Therefore, the derived type is `jsonl` which is not recognized by datasets as the error above shows. https://github.com/huggingface/transformers/blob/ade26bf9912f69e2110137443e4406d7dbe253e7/examples/pytorch/translation/run_translation.py#L342-L356 I suppose you could argue that this is the script's fault (in which case I'll do a PR over at `transformers`) but it makes sense to me to add `jsonl` as an alias to `json` in `datasets`. ### Your contribution At the moment I cannot work on this. I think it can be as "easy" as having an alias for json, namely jsonl.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5625/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5625/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5624/comments
https://api.github.com/repos/huggingface/datasets/issues/5624/events
https://github.com/huggingface/datasets/issues/5624
1,617,400,192
I_kwDODunzps5gZ5GA
5,624
glue datasets returning -1 for test split
{ "avatar_url": "https://avatars.githubusercontent.com/u/8939967?v=4", "events_url": "https://api.github.com/users/lithafnium/events{/privacy}", "followers_url": "https://api.github.com/users/lithafnium/followers", "following_url": "https://api.github.com/users/lithafnium/following{/other_user}", "gists_url": "https://api.github.com/users/lithafnium/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lithafnium", "id": 8939967, "login": "lithafnium", "node_id": "MDQ6VXNlcjg5Mzk5Njc=", "organizations_url": "https://api.github.com/users/lithafnium/orgs", "received_events_url": "https://api.github.com/users/lithafnium/received_events", "repos_url": "https://api.github.com/users/lithafnium/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lithafnium/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lithafnium/subscriptions", "type": "User", "url": "https://api.github.com/users/lithafnium" }
[]
closed
false
null
[]
null
1
"2023-03-09T14:47:18Z"
"2023-03-09T16:49:29Z"
"2023-03-09T16:49:29Z"
NONE
null
null
null
### Describe the bug Downloading any dataset from GLUE has -1 as class labels for test split. Train and validation have regular 0/1 class labels. This is also present in the dataset card online. ### Steps to reproduce the bug ``` dataset = load_dataset("glue", "sst2") for d in dataset: # prints out -1 print(d["label"] ``` ### Expected behavior Expected behavior should be 0/1 instead of -1. ### Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - PyArrow version: 8.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5624/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5624/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5623/comments
https://api.github.com/repos/huggingface/datasets/issues/5623/events
https://github.com/huggingface/datasets/pull/5623
1,616,712,665
PR_kwDODunzps5Lpb4q
5,623
Remove set_access_token usage + fail tests if FutureWarning
{ "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Wauplin", "id": 11801849, "login": "Wauplin", "node_id": "MDQ6VXNlcjExODAxODQ5", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "repos_url": "https://api.github.com/users/Wauplin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "type": "User", "url": "https://api.github.com/users/Wauplin" }
[]
closed
false
null
[]
null
6
"2023-03-09T08:46:01Z"
"2023-03-09T15:39:00Z"
"2023-03-09T15:31:59Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5623.diff", "html_url": "https://github.com/huggingface/datasets/pull/5623", "merged_at": "2023-03-09T15:31:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/5623.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5623" }
`set_access_token` is deprecated and will be removed in `huggingface_hub>=0.14`. This PR removes it from the tests (it was not used in `datasets` source code itself). FYI, it was not needed since `set_access_token` was just setting git credentials and `datasets` doesn't seem to use git anywhere. In the future, use `set_git_credential` if needed. It is a git-credential-agnostic helper, i.e. you can store your git token in `git-credential-cache`, `git-credential-store`, `osxkeychain`, etc. The legacy `set_access_token` could only set in `git-credential-store` no matter the user preference. (for context, I found out about this while working on https://github.com/huggingface/huggingface_hub/pull/1381) --- In addition to this, I have added ``` filterwarnings = error::FutureWarning:huggingface_hub* ``` to the `setup.cfg` config file to fail on future warnings from `huggingface_hub`. In `hfh`'s CI we trigger on FutureWarning from any package but it's less robust (any package update leads can lead to a failure). No obligation to keep it like that (I can remove it if you prefer) but I think it's a good idea in order to track future FutureWarnings. FYI, in `huggingface_hub` tests we use `-Werror::FutureWarning --log-cli-level=INFO -sv --durations=0` - FutureWarning are processed as error - verbose mode / INFO logs (and above) are captured for easier debugging in github report - track each test duration, just to see where we can improve. We have a quite long CI (~10min) so it helped improve that.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5623/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5623/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5622/comments
https://api.github.com/repos/huggingface/datasets/issues/5622/events
https://github.com/huggingface/datasets/pull/5622
1,615,190,942
PR_kwDODunzps5LkSj8
5,622
Update README template to better template
{ "avatar_url": "https://avatars.githubusercontent.com/u/54767532?v=4", "events_url": "https://api.github.com/users/emiltj/events{/privacy}", "followers_url": "https://api.github.com/users/emiltj/followers", "following_url": "https://api.github.com/users/emiltj/following{/other_user}", "gists_url": "https://api.github.com/users/emiltj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/emiltj", "id": 54767532, "login": "emiltj", "node_id": "MDQ6VXNlcjU0NzY3NTMy", "organizations_url": "https://api.github.com/users/emiltj/orgs", "received_events_url": "https://api.github.com/users/emiltj/received_events", "repos_url": "https://api.github.com/users/emiltj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/emiltj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emiltj/subscriptions", "type": "User", "url": "https://api.github.com/users/emiltj" }
[]
closed
false
null
[]
null
3
"2023-03-08T12:30:23Z"
"2023-03-11T05:07:38Z"
"2023-03-11T05:07:38Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5622.diff", "html_url": "https://github.com/huggingface/datasets/pull/5622", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5622.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5622" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5622/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5622/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5621/comments
https://api.github.com/repos/huggingface/datasets/issues/5621/events
https://github.com/huggingface/datasets/pull/5621
1,615,029,615
PR_kwDODunzps5LjwD8
5,621
Adding Oracle Cloud to docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/29129502?v=4", "events_url": "https://api.github.com/users/ahosler/events{/privacy}", "followers_url": "https://api.github.com/users/ahosler/followers", "following_url": "https://api.github.com/users/ahosler/following{/other_user}", "gists_url": "https://api.github.com/users/ahosler/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ahosler", "id": 29129502, "login": "ahosler", "node_id": "MDQ6VXNlcjI5MTI5NTAy", "organizations_url": "https://api.github.com/users/ahosler/orgs", "received_events_url": "https://api.github.com/users/ahosler/received_events", "repos_url": "https://api.github.com/users/ahosler/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ahosler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ahosler/subscriptions", "type": "User", "url": "https://api.github.com/users/ahosler" }
[]
closed
false
null
[]
null
2
"2023-03-08T10:22:50Z"
"2023-03-11T00:57:18Z"
"2023-03-11T00:49:56Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5621.diff", "html_url": "https://github.com/huggingface/datasets/pull/5621", "merged_at": "2023-03-11T00:49:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/5621.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5621" }
Adding Oracle Cloud's fsspec implementation to the list of supported cloud storage providers.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5621/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5621/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5620
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5620/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5620/comments
https://api.github.com/repos/huggingface/datasets/issues/5620/events
https://github.com/huggingface/datasets/pull/5620
1,613,460,520
PR_kwDODunzps5LefAf
5,620
Bump pyarrow to 8.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
12
"2023-03-07T13:31:53Z"
"2023-03-08T14:01:27Z"
"2023-03-08T13:54:22Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5620.diff", "html_url": "https://github.com/huggingface/datasets/pull/5620", "merged_at": "2023-03-08T13:54:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/5620.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5620" }
Fix those for Pandas 2.0 (tested [here](https://github.com/huggingface/datasets/actions/runs/4346221280/jobs/7592010397) with pandas==2.0.0.rc0): ```python =========================== short test summary info ============================ FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_parquet_in_memory - ImportError: Unable to find a usable engine; tried using: 'pyarrow', 'fastparquet'. A suitable version of pyarrow or fastparquet is required for parquet support. Trying to import the above resulted in these errors: - Pandas requires version '7.0.0' or newer of 'pyarrow' (version '6.0.1' currently installed). - Missing optional dependency 'fastparquet'. fastparquet is required for parquet support. Use pip or conda to install fastparquet. FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_parquet_on_disk - ImportError: Unable to find a usable engine; tried using: 'pyarrow', 'fastparquet'. A suitable version of pyarrow or fastparquet is required for parquet support. Trying to import the above resulted in these errors: - Pandas requires version '7.0.0' or newer of 'pyarrow' (version '6.0.1' currently installed). - Missing optional dependency 'fastparquet'. fastparquet is required for parquet support. Use pip or conda to install fastparquet. ===== 2 failed, 2137 passed, 18 skipped, 32 warnings in 212.76s (0:03:32) ====== ``` EDIT: also for performance - with 8.0 we can use `.to_reader()`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5620/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5620/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5619
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5619/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5619/comments
https://api.github.com/repos/huggingface/datasets/issues/5619/events
https://github.com/huggingface/datasets/pull/5619
1,613,439,709
PR_kwDODunzps5LeaYP
5,619
unpin fsspec
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
3
"2023-03-07T13:22:41Z"
"2023-03-07T13:47:01Z"
"2023-03-07T13:39:02Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5619.diff", "html_url": "https://github.com/huggingface/datasets/pull/5619", "merged_at": "2023-03-07T13:39:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/5619.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5619" }
close https://github.com/huggingface/datasets/issues/5618
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5619/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5619/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5618/comments
https://api.github.com/repos/huggingface/datasets/issues/5618/events
https://github.com/huggingface/datasets/issues/5618
1,612,977,934
I_kwDODunzps5gJBcO
5,618
Unpin fsspec < 2023.3.0 once issue fixed
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2023-03-07T08:41:51Z"
"2023-03-07T13:39:03Z"
"2023-03-07T13:39:03Z"
MEMBER
null
null
null
Unpin `fsspec` upper version once root cause of our CI break is fixed. See: - #5614
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5618/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5618/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5617
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5617/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5617/comments
https://api.github.com/repos/huggingface/datasets/issues/5617/events
https://github.com/huggingface/datasets/pull/5617
1,612,947,422
PR_kwDODunzps5LcvI-
5,617
Fix CI by temporarily pinning fsspec < 2023.3.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
2
"2023-03-07T08:18:20Z"
"2023-03-07T08:44:55Z"
"2023-03-07T08:37:28Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5617.diff", "html_url": "https://github.com/huggingface/datasets/pull/5617", "merged_at": "2023-03-07T08:37:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/5617.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5617" }
As a hotfix for our CI, temporarily pin `fsspec`: Fix #5616. Until root cause is fixed, see: - #5614
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5617/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5617/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5616
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5616/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5616/comments
https://api.github.com/repos/huggingface/datasets/issues/5616/events
https://github.com/huggingface/datasets/issues/5616
1,612,932,508
I_kwDODunzps5gI2Wc
5,616
CI is broken after fsspec-2023.3.0 release
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
0
"2023-03-07T08:06:39Z"
"2023-03-07T08:37:29Z"
"2023-03-07T08:37:29Z"
MEMBER
null
null
null
As reported by @lhoestq, our CI is broken after `fsspec` 2023.3.0 release: ``` FAILED tests/test_filesystem.py::test_compression_filesystems[Bz2FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] At index 0 diff: {'name': 'file.txt', 'size': 70, 'type': 'file', 'created': 1678175677.1887748, 'islink': False, 'mode': 33188, 'uid': 1001, 'gid': 123, 'mtime': 1678175677.1887748, 'ino': 286957, 'nlink': 1} != 'file.txt' Full diff: [ - 'file.txt', + {'created': 1678175677.1887748, + 'gid': 123, + 'ino': 286957, + 'islink': False, + 'mode': 33188, + 'mtime': 1678175677.1887748, + 'name': 'file.txt', + 'nlink': 1, + 'size': 70, + 'type': 'file', + 'uid': 1001}, ] ``` Also: ``` FAILED tests/test_filesystem.py::test_compression_filesystems[GzipFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] FAILED tests/test_filesystem.py::test_compression_filesystems[Lz4FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] FAILED tests/test_filesystem.py::test_compression_filesystems[XzFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] FAILED tests/test_filesystem.py::test_compression_filesystems[ZstdFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] ===== 5 failed, 2134 passed, 18 skipped, 38 warnings in 157.21s (0:02:37) ====== ``` See: - fsspec/filesystem_spec#1205
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5616/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5616/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5615/comments
https://api.github.com/repos/huggingface/datasets/issues/5615/events
https://github.com/huggingface/datasets/issues/5615
1,612,552,653
I_kwDODunzps5gHZnN
5,615
IterableDataset.add_column is unable to accept another IterableDataset as a parameter.
{ "avatar_url": "https://avatars.githubusercontent.com/u/6466389?v=4", "events_url": "https://api.github.com/users/zsaladin/events{/privacy}", "followers_url": "https://api.github.com/users/zsaladin/followers", "following_url": "https://api.github.com/users/zsaladin/following{/other_user}", "gists_url": "https://api.github.com/users/zsaladin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zsaladin", "id": 6466389, "login": "zsaladin", "node_id": "MDQ6VXNlcjY0NjYzODk=", "organizations_url": "https://api.github.com/users/zsaladin/orgs", "received_events_url": "https://api.github.com/users/zsaladin/received_events", "repos_url": "https://api.github.com/users/zsaladin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zsaladin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zsaladin/subscriptions", "type": "User", "url": "https://api.github.com/users/zsaladin" }
[ { "color": "ffffff", "default": true, "description": "This will not be worked on", "id": 1935892913, "name": "wontfix", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix" } ]
closed
false
null
[]
null
1
"2023-03-07T01:52:00Z"
"2023-03-09T15:24:05Z"
"2023-03-09T15:23:54Z"
NONE
null
null
null
### Describe the bug `IterableDataset.add_column` occurs an exception when passing another `IterableDataset` as a parameter. The method seems to accept only eager evaluated values. https://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/iterable_dataset.py#L1388-L1391 I wrote codes below to make it. ```py def add_column(dataset: IterableDataset, name: str, add_dataset: IterableDataset, key: str) -> IterableDataset: iter_add_dataset = iter(add_dataset) def add_column_fn(example): if name in example: raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.") return {name: next(iter_add_dataset)[key]} return dataset.map(add_column_fn) ``` Is there other way to do it? Or is it intended? ### Steps to reproduce the bug Thie codes below occurs `NotImplementedError` ```py from datasets import IterableDataset def gen(num): yield {f"col{num}": 1} yield {f"col{num}": 2} yield {f"col{num}": 3} ids1 = IterableDataset.from_generator(gen, gen_kwargs={"num": 1}) ids2 = IterableDataset.from_generator(gen, gen_kwargs={"num": 2}) new_ids = ids1.add_column("new_col", ids1) for row in new_ids: print(row) ``` ### Expected behavior `IterableDataset.add_column` is able to task `IterableDataset` and lazy evaluated values as a parameter since IterableDataset is lazy evalued. ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.7 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5615/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5615/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5614/comments
https://api.github.com/repos/huggingface/datasets/issues/5614/events
https://github.com/huggingface/datasets/pull/5614
1,611,896,357
PR_kwDODunzps5LZOTd
5,614
Fix archive fs test
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
4
"2023-03-06T17:28:09Z"
"2023-03-07T13:27:50Z"
"2023-03-07T13:20:57Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5614.diff", "html_url": "https://github.com/huggingface/datasets/pull/5614", "merged_at": "2023-03-07T13:20:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/5614.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5614" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5614/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5614/timeline
null
null
true