url
stringlengths 61
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 75
75
| comments_url
stringlengths 70
70
| events_url
stringlengths 68
68
| html_url
stringlengths 49
51
| id
int64 942M
3.2B
| node_id
stringlengths 18
32
| number
int64 2.63k
7.67k
| title
stringlengths 1
290
| user
dict | labels
listlengths 0
4
| state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
listlengths 0
4
| milestone
dict | comments
sequencelengths 0
30
| created_at
stringdate 2021-07-12 19:58:31
2025-07-03 11:24:15
| updated_at
stringdate 2021-07-13 05:45:26
2025-07-03 18:34:32
| closed_at
stringlengths 20
20
⌀ | author_association
stringclasses 4
values | type
null | active_lock_reason
null | sub_issues_summary
dict | body
stringlengths 0
58.6k
⌀ | closed_by
dict | reactions
dict | timeline_url
stringlengths 70
70
| performed_via_github_app
null | state_reason
stringclasses 4
values | draft
bool 2
classes | pull_request
dict | is_pull_request
bool 2
classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7668 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7668/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7668/comments | https://api.github.com/repos/huggingface/datasets/issues/7668/events | https://github.com/huggingface/datasets/issues/7668 | 3,199,039,322 | I_kwDODunzps6-rXda | 7,668 | Broken EXIF crash the whole program | {
"avatar_url": "https://avatars.githubusercontent.com/u/30485844?v=4",
"events_url": "https://api.github.com/users/Seas0/events{/privacy}",
"followers_url": "https://api.github.com/users/Seas0/followers",
"following_url": "https://api.github.com/users/Seas0/following{/other_user}",
"gists_url": "https://api.github.com/users/Seas0/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Seas0",
"id": 30485844,
"login": "Seas0",
"node_id": "MDQ6VXNlcjMwNDg1ODQ0",
"organizations_url": "https://api.github.com/users/Seas0/orgs",
"received_events_url": "https://api.github.com/users/Seas0/received_events",
"repos_url": "https://api.github.com/users/Seas0/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Seas0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Seas0/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Seas0",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"There are other discussions about error handling for images decoding here : https://github.com/huggingface/datasets/issues/7632 https://github.com/huggingface/datasets/issues/7612\n\nand a PR here: https://github.com/huggingface/datasets/pull/7638 (would love your input on the proposed solution !)"
] | 2025-07-03T11:24:15Z | 2025-07-03T12:27:16Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When parsing this image in the ImageNet1K dataset, the `datasets` crashs whole training process just because unable to parse an invalid EXIF tag.

### Steps to reproduce the bug
Use the `datasets.Image.decode_example` method to decode the aforementioned image could reproduce the bug.
The decoding function will throw an unhandled exception at the `image.getexif()` method call due to invalid utf-8 stream in EXIF tags.
```
File lib/python3.12/site-packages/datasets/features/image.py:188, in Image.decode_example(self, value, token_per_repo_id)
186 image = PIL.Image.open(BytesIO(bytes_))
187 image.load() # to avoid "Too many open files" errors
--> 188 if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None:
189 image = PIL.ImageOps.exif_transpose(image)
190 if self.mode and self.mode != image.mode:
File lib/python3.12/site-packages/PIL/Image.py:1542, in Image.getexif(self)
1540 xmp_tags = self.info.get("XML:com.adobe.xmp")
1541 if not xmp_tags and (xmp_tags := self.info.get("xmp")):
-> 1542 xmp_tags = xmp_tags.decode("utf-8")
1543 if xmp_tags:
1544 match = re.search(r'tiff:Orientation(="|>)([0-9])', xmp_tags)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 4312: invalid start byte
```
### Expected behavior
The invalid EXIF tag should simply be ignored or issue a warning message, instead of crash the whole program at once.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.33.0
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2025.3.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7668/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7668/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7667 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7667/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7667/comments | https://api.github.com/repos/huggingface/datasets/issues/7667/events | https://github.com/huggingface/datasets/pull/7667 | 3,196,251,707 | PR_kwDODunzps6dGmm8 | 7,667 | Fix infer list of images | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7667). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-02T15:07:58Z | 2025-07-02T15:10:28Z | 2025-07-02T15:08:03Z | MEMBER | null | null | null | cc @kashif | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7667/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7667/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7667.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7667",
"merged_at": "2025-07-02T15:08:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7667.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7667"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7666/comments | https://api.github.com/repos/huggingface/datasets/issues/7666/events | https://github.com/huggingface/datasets/pull/7666 | 3,196,220,722 | PR_kwDODunzps6dGf7E | 7,666 | Backward compat list feature | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7666). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-02T14:58:00Z | 2025-07-02T15:00:37Z | 2025-07-02T14:59:40Z | MEMBER | null | null | null | cc @kashif | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7666/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7666/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7666.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7666",
"merged_at": "2025-07-02T14:59:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7666.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7666"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7665/comments | https://api.github.com/repos/huggingface/datasets/issues/7665/events | https://github.com/huggingface/datasets/issues/7665 | 3,193,239,955 | I_kwDODunzps6-VPmT | 7,665 | Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files | {
"avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4",
"events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}",
"followers_url": "https://api.github.com/users/zdzichukowalski/followers",
"following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}",
"gists_url": "https://api.github.com/users/zdzichukowalski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zdzichukowalski",
"id": 1151198,
"login": "zdzichukowalski",
"node_id": "MDQ6VXNlcjExNTExOTg=",
"organizations_url": "https://api.github.com/users/zdzichukowalski/orgs",
"received_events_url": "https://api.github.com/users/zdzichukowalski/received_events",
"repos_url": "https://api.github.com/users/zdzichukowalski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zdzichukowalski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zdzichukowalski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zdzichukowalski",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Somehow I created the issue twice🙈 This one is an exact duplicate of #7664."
] | 2025-07-01T17:14:53Z | 2025-07-01T17:17:48Z | 2025-07-01T17:17:48Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema.
In my case there is a field `body:` with a string value
```
"### Describe the bug (...) ,action: string, datetime: timestamp[s], author: string, (...) Pandas version: 1.3.4"
```
As a result, I got an exception
```
"TypeError: Couldn't cast array of type timestamp[s] to null".
```
Full stack-trace in the attached file below.
I also attach a minimized dataset (data.json, a single entry) that reproduces the error.
**Observations**(on the minimal example):
- if I remove _all fields before_ `body`, a different error appears,
- if I remove _all fields after_ `body`, yet another error appears,
- if `body` is _the only field_, the error disappears.
So this might be one complex bug or several edge cases interacting. I haven’t dug deeper.
Also changing the file extension to `.json` or `.txt` avoids the problem. This suggests **a possible workaround** for the general case: convert `.jsonl` to `.json`. Though I haven’t verified correctness of that workaround yet.
Anyway my understanding is that `load_dataset` with first argument set to "json" should properly handle `.jsonl` files. Correct me if I'm wrong.
[stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt)
[data.json](https://github.com/user-attachments/files/21004164/data.json)
P.S.
I discovered this while going through the HuggingFace tutorial. Specifically [this part](https://huggingface.co/learn/llm-course/chapter5/5?fw=pt).I will try to inform the tutorial team about this issue, as it can be a showstopper for young 🤗 adepts.
### Steps to reproduce the bug
1. Download attached [data.json](https://github.com/user-attachments/files/21004164/data.json) file.
2. Run the following code which should work correctly:
```
from datasets import load_dataset
load_dataset("json", data_files="data.json", split="train")
```
3. Change extension of the `data` file to `.jsonl` and run:
```
from datasets import load_dataset
load_dataset("json", data_files="data.jsonl", split="train")
```
This will trigger an error like the one in the attached [stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt).
One can also try removing fields before the `body` field and after it. These actions give different errors.
### Expected behavior
Parsing data in `.jsonl` format should yield the same result as parsing the same data in `.json` format. In any case, the content of a string field should never be interpreted as part of the dataset schema.
### Environment info
datasets version: _3.6.0_
pyarrow version: _20.0.0_
Python version: _3.11.9_
platform version: _macOS-15.5-arm64-arm-64bit_ | {
"avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4",
"events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}",
"followers_url": "https://api.github.com/users/zdzichukowalski/followers",
"following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}",
"gists_url": "https://api.github.com/users/zdzichukowalski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zdzichukowalski",
"id": 1151198,
"login": "zdzichukowalski",
"node_id": "MDQ6VXNlcjExNTExOTg=",
"organizations_url": "https://api.github.com/users/zdzichukowalski/orgs",
"received_events_url": "https://api.github.com/users/zdzichukowalski/received_events",
"repos_url": "https://api.github.com/users/zdzichukowalski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zdzichukowalski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zdzichukowalski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zdzichukowalski",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7665/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7665/timeline | null | duplicate | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7664 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7664/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7664/comments | https://api.github.com/repos/huggingface/datasets/issues/7664/events | https://github.com/huggingface/datasets/issues/7664 | 3,193,239,035 | I_kwDODunzps6-VPX7 | 7,664 | Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files | {
"avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4",
"events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}",
"followers_url": "https://api.github.com/users/zdzichukowalski/followers",
"following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}",
"gists_url": "https://api.github.com/users/zdzichukowalski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zdzichukowalski",
"id": 1151198,
"login": "zdzichukowalski",
"node_id": "MDQ6VXNlcjExNTExOTg=",
"organizations_url": "https://api.github.com/users/zdzichukowalski/orgs",
"received_events_url": "https://api.github.com/users/zdzichukowalski/received_events",
"repos_url": "https://api.github.com/users/zdzichukowalski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zdzichukowalski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zdzichukowalski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zdzichukowalski",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hey @zdzichukowalski, I was not able to reproduce this on python 3.11.9 and datasets 3.6.0. The contents of \"body\" are correctly parsed as a string and no other fields like timestamps are created. Could you try reproducing this in a fresh environment, or posting the complete code where you encountered that stacktrace? (I noticed in the stacktrace you had a bigger program, perhaps there are some side effects)"
] | 2025-07-01T17:14:32Z | 2025-07-03T13:01:59Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema.
In my case there is a field `body:` with a string value
```
"### Describe the bug (...) ,action: string, datetime: timestamp[s], author: string, (...) Pandas version: 1.3.4"
```
As a result, I got an exception
```
"TypeError: Couldn't cast array of type timestamp[s] to null".
```
Full stack-trace in the attached file below.
I also attach a minimized dataset (data.json, a single entry) that reproduces the error.
**Observations**(on the minimal example):
- if I remove _all fields before_ `body`, a different error appears,
- if I remove _all fields after_ `body`, yet another error appears,
- if `body` is _the only field_, the error disappears.
So this might be one complex bug or several edge cases interacting. I haven’t dug deeper.
Also changing the file extension to `.json` or `.txt` avoids the problem. This suggests **a possible workaround** for the general case: convert `.jsonl` to `.json`. Though I haven’t verified correctness of that workaround yet.
Anyway my understanding is that `load_dataset` with first argument set to "json" should properly handle `.jsonl` files. Correct me if I'm wrong.
[stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt)
[data.json](https://github.com/user-attachments/files/21004164/data.json)
P.S.
I discovered this while going through the HuggingFace tutorial. Specifically [this part](https://huggingface.co/learn/llm-course/chapter5/5?fw=pt). I will try to inform the tutorial team about this issue, as it can be a showstopper for young 🤗 adepts.
### Steps to reproduce the bug
1. Download attached [data.json](https://github.com/user-attachments/files/21004164/data.json) file.
2. Run the following code which should work correctly:
```
from datasets import load_dataset
load_dataset("json", data_files="data.json", split="train")
```
3. Change extension of the `data` file to `.jsonl` and run:
```
from datasets import load_dataset
load_dataset("json", data_files="data.jsonl", split="train")
```
This will trigger an error like the one in the attached [stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt).
One can also try removing fields before the `body` field and after it. These actions give different errors.
### Expected behavior
Parsing data in `.jsonl` format should yield the same result as parsing the same data in `.json` format. In any case, the content of a string field should never be interpreted as part of the dataset schema.
### Environment info
datasets version: _3.6.0_
pyarrow version: _20.0.0_
Python version: _3.11.9_
platform version: _macOS-15.5-arm64-arm-64bit_ | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7664/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7664/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7663 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7663/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7663/comments | https://api.github.com/repos/huggingface/datasets/issues/7663/events | https://github.com/huggingface/datasets/pull/7663 | 3,192,582,371 | PR_kwDODunzps6c6aJF | 7,663 | Custom metadata filenames | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7663). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-01T13:50:36Z | 2025-07-01T13:58:41Z | 2025-07-01T13:58:39Z | MEMBER | null | null | null | example: https://huggingface.co/datasets/lhoestq/overlapping-subsets-imagefolder/tree/main
To make multiple subsets for an imagefolder (one metadata file per subset), e.g.
```yaml
configs:
- config_name: default
metadata_filenames:
- metadata.csv
- config_name: other
metadata_filenames:
- metadata2.csv
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7663/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7663/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7663.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7663",
"merged_at": "2025-07-01T13:58:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7663.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7663"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7662/comments | https://api.github.com/repos/huggingface/datasets/issues/7662/events | https://github.com/huggingface/datasets/issues/7662 | 3,190,805,531 | I_kwDODunzps6-L9Qb | 7,662 | Applying map after transform with multiprocessing will cause OOM | {
"avatar_url": "https://avatars.githubusercontent.com/u/26482910?v=4",
"events_url": "https://api.github.com/users/JunjieLl/events{/privacy}",
"followers_url": "https://api.github.com/users/JunjieLl/followers",
"following_url": "https://api.github.com/users/JunjieLl/following{/other_user}",
"gists_url": "https://api.github.com/users/JunjieLl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JunjieLl",
"id": 26482910,
"login": "JunjieLl",
"node_id": "MDQ6VXNlcjI2NDgyOTEw",
"organizations_url": "https://api.github.com/users/JunjieLl/orgs",
"received_events_url": "https://api.github.com/users/JunjieLl/received_events",
"repos_url": "https://api.github.com/users/JunjieLl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JunjieLl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JunjieLl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JunjieLl",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-07-01T05:45:57Z | 2025-07-01T05:45:57Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I have a 30TB dataset. When I perform add_column and cast_column operations on it and then execute a multiprocessing map, it results in an OOM (Out of Memory) error. However, if I skip the add_column and cast_column steps and directly run the map, there is no OOM. After debugging step by step, I found that the OOM is caused at this point, and I suspect it’s because the add_column and cast_column operations are not cached, which causes the entire dataset to be loaded in each subprocess, leading to the OOM. The critical line of code is: https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/utils/py_utils.py#L607
Note num_process=1 would not cause OOM. I'm confused.
### Steps to reproduce the bug
For reproduce, you can load dataset and set cache_dir (for caching): amphion/Emilia-Dataset which is a veru large datasets that RAM can not fits.
And apply the map with multiprocessing after a transform operation (e.g. add_column, cast_column).
As long as num_process>1, it must cause OOM.
### Expected behavior
It should not cause OOM.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-5.10.134-16.101.al8.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.33.1
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2024.6.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7662/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7662/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7661/comments | https://api.github.com/repos/huggingface/datasets/issues/7661/events | https://github.com/huggingface/datasets/pull/7661 | 3,190,408,237 | PR_kwDODunzps6czBDi | 7,661 | fix del tqdm lock error | {
"avatar_url": "https://avatars.githubusercontent.com/u/44766273?v=4",
"events_url": "https://api.github.com/users/Hypothesis-Z/events{/privacy}",
"followers_url": "https://api.github.com/users/Hypothesis-Z/followers",
"following_url": "https://api.github.com/users/Hypothesis-Z/following{/other_user}",
"gists_url": "https://api.github.com/users/Hypothesis-Z/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hypothesis-Z",
"id": 44766273,
"login": "Hypothesis-Z",
"node_id": "MDQ6VXNlcjQ0NzY2Mjcz",
"organizations_url": "https://api.github.com/users/Hypothesis-Z/orgs",
"received_events_url": "https://api.github.com/users/Hypothesis-Z/received_events",
"repos_url": "https://api.github.com/users/Hypothesis-Z/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hypothesis-Z/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hypothesis-Z/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hypothesis-Z",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-07-01T02:04:02Z | 2025-07-01T02:33:04Z | null | NONE | null | null | null | for issue https://github.com/huggingface/datasets/issues/7660 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7661/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7661/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7661.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7661",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7661.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7661"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7660/comments | https://api.github.com/repos/huggingface/datasets/issues/7660/events | https://github.com/huggingface/datasets/issues/7660 | 3,189,028,251 | I_kwDODunzps6-FLWb | 7,660 | AttributeError: type object 'tqdm' has no attribute '_lock' | {
"avatar_url": "https://avatars.githubusercontent.com/u/44766273?v=4",
"events_url": "https://api.github.com/users/Hypothesis-Z/events{/privacy}",
"followers_url": "https://api.github.com/users/Hypothesis-Z/followers",
"following_url": "https://api.github.com/users/Hypothesis-Z/following{/other_user}",
"gists_url": "https://api.github.com/users/Hypothesis-Z/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hypothesis-Z",
"id": 44766273,
"login": "Hypothesis-Z",
"node_id": "MDQ6VXNlcjQ0NzY2Mjcz",
"organizations_url": "https://api.github.com/users/Hypothesis-Z/orgs",
"received_events_url": "https://api.github.com/users/Hypothesis-Z/received_events",
"repos_url": "https://api.github.com/users/Hypothesis-Z/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hypothesis-Z/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hypothesis-Z/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hypothesis-Z",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Deleting a class (**not instance**) attribute might be invalid in this case, which is `tqdm` doing in `ensure_lock`.\n\n```python\nfrom tqdm import tqdm as old_tqdm\n\nclass tqdm1(old_tqdm):\n def __delattr__(self, attr):\n try:\n super().__delattr__(attr)\n except AttributeError:\n if attr != '_lock':\n print(attr)\n raise\n\nclass Meta(type):\n def __delattr__(cls, name):\n if name == \"_lock\":\n return \n return super().__delattr__(name)\n \nclass tqdm2(old_tqdm, metaclass=Meta):\n pass\n\ndel tqdm2._lock\ndel tqdm1._lock # error\n```\n\nhttps://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/utils/tqdm.py#L104-L122",
"A cheaper option (seems to work in my case): \n```python\nfrom datasets import tqdm as hf_tqdm\nhf_tqdm.set_lock(hf_tqdm.get_lock())\n```"
] | 2025-06-30T15:57:16Z | 2025-07-03T15:14:27Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
`AttributeError: type object 'tqdm' has no attribute '_lock'`
It occurs when I'm trying to load datasets in thread pool.
Issue https://github.com/huggingface/datasets/issues/6066 and PR https://github.com/huggingface/datasets/pull/6067 https://github.com/huggingface/datasets/pull/6068 tried to fix this.
### Steps to reproduce the bug
Will have to try several times to reproduce the error because it is concerned with threads.
1. save some datasets for test
```pythonfrom datasets import Dataset, DatasetDict
import os
os.makedirs("test_dataset_shards", exist_ok=True)
for i in range(10):
data = Dataset.from_dict({"text": [f"example {j}" for j in range(1000000)]})
data = DatasetDict({'train': data})
data.save_to_disk(f"test_dataset_shards/shard_{i}")
```
2. load them in a thread pool
```python
from datasets import load_from_disk
from tqdm import tqdm
from concurrent.futures import ThreadPoolExecutor, as_completed
import glob
datas = glob.glob('test_dataset_shards/shard_*')
with ThreadPoolExecutor(max_workers=10) as pool:
futures = [pool.submit(load_from_disk, it) for it in datas]
datas = []
for future in tqdm(as_completed(futures), total=len(futures)):
datas.append(future.result())
```
### Expected behavior
no exception raised
### Environment info
datasets==2.19.0
python==3.10 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7660/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7660/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7659 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7659/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7659/comments | https://api.github.com/repos/huggingface/datasets/issues/7659/events | https://github.com/huggingface/datasets/pull/7659 | 3,187,882,217 | PR_kwDODunzps6cqkou | 7,659 | Update the beans dataset link in Preprocess | {
"avatar_url": "https://avatars.githubusercontent.com/u/5434867?v=4",
"events_url": "https://api.github.com/users/HJassar/events{/privacy}",
"followers_url": "https://api.github.com/users/HJassar/followers",
"following_url": "https://api.github.com/users/HJassar/following{/other_user}",
"gists_url": "https://api.github.com/users/HJassar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HJassar",
"id": 5434867,
"login": "HJassar",
"node_id": "MDQ6VXNlcjU0MzQ4Njc=",
"organizations_url": "https://api.github.com/users/HJassar/orgs",
"received_events_url": "https://api.github.com/users/HJassar/received_events",
"repos_url": "https://api.github.com/users/HJassar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HJassar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HJassar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HJassar",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2025-06-30T09:58:44Z | 2025-07-01T14:01:42Z | 2025-07-01T14:01:42Z | CONTRIBUTOR | null | null | null | In the Preprocess tutorial, the to "the beans dataset" is incorrect. Fixed. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7659/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7659/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7659.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7659",
"merged_at": "2025-07-01T14:01:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7659.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7659"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7658/comments | https://api.github.com/repos/huggingface/datasets/issues/7658/events | https://github.com/huggingface/datasets/pull/7658 | 3,187,800,504 | PR_kwDODunzps6cqTMs | 7,658 | Fix: Prevent loss of info.features and column_names in IterableDatasetDict.map when features is None | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi!\r\nI haven’t included a test for this change, as the fix is quite small and targeted.\r\nPlease let me know if you’d like a test for this case or if you’d prefer to handle it during review.\r\nThanks!",
"we can't know in advance the `features` after map() (it transforms the data !), so you can reuse the `features` from `info.features`",
"I'll the patch as suggested — `info.features = features` or `self.info.features` — to ensure schema preservation while keeping the logic simple and explicit. WDYT?\r\n",
"info.features should be None in the general case, and replaced by the user's `features` if it's passed explicitly with `map(..., features=...)`\r\n\r\nhttps://github.com/huggingface/datasets/issues/7568 is not an issue we can fix",
"> info.features should be None in the general case, and replaced by the user's `features` if it's passed explicitly with `map(..., features=...)`\r\n> \r\n> #7568 is not an issue we can fix\r\n\r\nThanks for the clarification! Totally makes sense now — I understand that features=None is the expected behavior post-map() unless explicitly passed, and that preserving old schema by default could lead to incorrect assumptions.\r\nClosing this one — appreciate the feedback as always"
] | 2025-06-30T09:31:12Z | 2025-07-01T16:26:30Z | 2025-07-01T16:26:12Z | CONTRIBUTOR | null | null | null | This PR fixes a bug where calling `IterableDatasetDict.map()` or `IterableDataset.map()` with the default `features=None` argument would overwrite the existing `info.features` attribute with `None`. This, in turn, caused the resulting dataset to lose its schema, breaking downstream usage of attributes like `column_names`.
Why
Previously, the code would always set `info.features = features`, even if `features` was `None`. When mapping with removal of columns or other transformations, this led to the destruction of the schema and caused failures in code that relied on the dataset schema being present.
How
We now update `info.features` only if `features` is not `None`. This preserves the original schema unless the user explicitly provides a new one.
Reference
Fixes #7568 | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7658/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7658/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7658.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7658",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7658.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7658"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7657 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7657/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7657/comments | https://api.github.com/repos/huggingface/datasets/issues/7657/events | https://github.com/huggingface/datasets/pull/7657 | 3,186,036,016 | PR_kwDODunzps6cks2E | 7,657 | feat: add subset_name as alias for name in load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-29T10:39:00Z | 2025-06-29T10:55:11Z | null | CONTRIBUTOR | null | null | null | fixes #7637
This PR introduces subset_name as a user-facing alias for the name (previously `config_name`) argument in load_dataset. It aligns terminology with the Hugging Face Hub UI (which shows “Subset”), reducing confusion for new users.
Supports `subset_name` in `load_dataset()`
Adds `.subset_name` property to DatasetBuilder
Maintains full backward compatibility
Raises clear error if name and `subset_name` conflict | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7657/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7657/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7657.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7657",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7657.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7657"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7656/comments | https://api.github.com/repos/huggingface/datasets/issues/7656/events | https://github.com/huggingface/datasets/pull/7656 | 3,185,865,686 | PR_kwDODunzps6ckPHc | 7,656 | fix(iterable): ensure MappedExamplesIterable supports state_dict for resume | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-29T07:50:13Z | 2025-06-29T07:50:13Z | null | CONTRIBUTOR | null | null | null | Fixes #7630
### Problem
When calling `.map()` on an `IterableDataset`, resuming from a checkpoint skips a large number of samples. This is because `MappedExamplesIterable` did not implement `state_dict()` or `load_state_dict()`, so checkpointing was not properly delegated to the underlying iterable.
### What This PR Does
This patch adds:
```python
def state_dict(self):
return self.ex_iterable.state_dict()
def load_state_dict(self, state):
self.ex_iterable.load_state_dict(state)
```
to MappedExamplesIterable, so the wrapped base iterable's state can be saved and restored as expected.
Result
Using .map() no longer causes sample skipping after checkpoint resume.
Let me know if a dedicated test case is required — happy to add one! | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7656/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7656/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7656.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7656",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7656.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7656"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7655 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7655/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7655/comments | https://api.github.com/repos/huggingface/datasets/issues/7655/events | https://github.com/huggingface/datasets/pull/7655 | 3,185,382,105 | PR_kwDODunzps6ci9oi | 7,655 | Added specific use cases in Improve Performace | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-28T19:00:32Z | 2025-06-28T19:00:32Z | null | CONTRIBUTOR | null | null | null | Fixes #2494 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7655/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7655/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7655.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7655",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7655.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7655"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7654 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7654/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7654/comments | https://api.github.com/repos/huggingface/datasets/issues/7654/events | https://github.com/huggingface/datasets/pull/7654 | 3,184,770,992 | PR_kwDODunzps6chPmz | 7,654 | fix(load): strip deprecated use_auth_token from config_kwargs | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-28T09:20:21Z | 2025-06-28T09:20:21Z | null | CONTRIBUTOR | null | null | null | Fixes #7504
This PR resolves a compatibility error when loading datasets via `load_dataset()` using outdated arguments like `use_auth_token`.
**What was happening:**
Users passing `use_auth_token` in `load_dataset(..., use_auth_token=...)` encountered a `ValueError`: BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key.
**Why:**
`use_auth_token` has been deprecated and removed from config definitions (replaced by `token`), but the `load_dataset()` function still forwarded it via `**config_kwargs` to BuilderConfigs, leading to unrecognized key errors.
**Fix:**
We now intercept and strip `use_auth_token` from `config_kwargs` inside `load_dataset`, replacing it with a warning:
```python
if "use_auth_token" in config_kwargs:
logger.warning("The 'use_auth_token' argument is deprecated. Please use 'token' instead.")
config_kwargs.pop("use_auth_token")
```
This ensures legacy compatibility while guiding users to switch to the token argument.
Let me know if you'd prefer a deprecation error instead of a warning. Thanks! | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7654/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7654/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7654.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7654",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7654.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7654"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7653/comments | https://api.github.com/repos/huggingface/datasets/issues/7653/events | https://github.com/huggingface/datasets/pull/7653 | 3,184,746,093 | PR_kwDODunzps6chLmb | 7,653 | feat(load): fallback to `load_from_disk()` when loading a saved dataset directory | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-28T08:47:36Z | 2025-06-28T08:47:36Z | null | CONTRIBUTOR | null | null | null | ### Related Issue
Fixes #7503
Partially addresses #5044 by allowing `load_dataset()` to auto-detect and gracefully delegate to `load_from_disk()` for locally saved datasets.
---
### What does this PR do?
This PR introduces a minimal fallback mechanism in `load_dataset()` that detects when the provided `path` points to a dataset saved using `save_to_disk()`, and automatically redirects to `load_from_disk()`.
#### 🐛 Before (unexpected metadata-only rows):
```python
ds = load_dataset("/path/to/saved_dataset")
# → returns rows with only internal metadata (_data_files, _fingerprint, etc.)
````
#### ✅ After (graceful fallback):
```python
ds = load_dataset("/path/to/saved_dataset")
# → logs a warning and internally switches to load_from_disk()
```
---
### Why is this useful?
* Prevents confusion when reloading local datasets saved via `save_to_disk()`.
* Enables smoother compatibility with frameworks (e.g., TRL, `lighteval`) that rely on `load_dataset()` calls.
* Fully backward-compatible — hub-based loading, custom builders, and streaming remain untouched.
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7653/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7653/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7653.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7653",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7653.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7653"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7652 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7652/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7652/comments | https://api.github.com/repos/huggingface/datasets/issues/7652/events | https://github.com/huggingface/datasets/pull/7652 | 3,183,372,055 | PR_kwDODunzps6cdCnv | 7,652 | Add columns support to JSON loader for selective key filtering | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I need this feature right now. It would be great if it could automatically fill in None for non-existent keys instead of reporting an error.",
"> I need this feature right now. It would be great if it could automatically fill in None for non-existent keys instead of reporting an error.\r\n\r\nHi @aihao2000, Just to confirm — I have done the changes you asked for!\r\nIf you pass columns=[\"key1\", \"key2\", \"optional_key\"] to load_dataset(..., columns=...), and any of those keys are missing from the input JSON objects, the loader will automatically fill those columns with None values, instead of raising an error."
] | 2025-06-27T16:18:42Z | 2025-07-03T09:52:48Z | null | CONTRIBUTOR | null | null | null | Fixes #7594
This PR adds support for filtering specific columns when loading datasets from .json or .jsonl files — similar to how the columns=... argument works for Parquet.
As suggested, support for the `columns=...` argument (previously available for Parquet) has now been extended to **JSON and JSONL** loading via `load_dataset(...)`. You can now load only specific keys/columns and skip the rest — which should help in cases where some fields are unclean, inconsistent, or just unnecessary.
### Example:
```python
from datasets import load_dataset
dataset = load_dataset("json", data_files="your_data.jsonl", columns=["id", "title"])
print(dataset["train"].column_names)
# Output: ['id', 'title']
```
### Summary of changes:
* Added `columns: Optional[List[str]]` to `JsonConfig`
* Updated `_generate_tables()` to filter selected columns
* Forwarded `columns` argument from `load_dataset()` to the config
* Added test for validation(should be fine!)
Let me know if you'd like the same to be added for CSV or others as a follow-up — happy to help. | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7652/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7652/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7652.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7652",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7652.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7652"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7651/comments | https://api.github.com/repos/huggingface/datasets/issues/7651/events | https://github.com/huggingface/datasets/pull/7651 | 3,182,792,775 | PR_kwDODunzps6cbMmg | 7,651 | fix: Extended metadata file names for folder_based_builder | {
"avatar_url": "https://avatars.githubusercontent.com/u/6965756?v=4",
"events_url": "https://api.github.com/users/iPieter/events{/privacy}",
"followers_url": "https://api.github.com/users/iPieter/followers",
"following_url": "https://api.github.com/users/iPieter/following{/other_user}",
"gists_url": "https://api.github.com/users/iPieter/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iPieter",
"id": 6965756,
"login": "iPieter",
"node_id": "MDQ6VXNlcjY5NjU3NTY=",
"organizations_url": "https://api.github.com/users/iPieter/orgs",
"received_events_url": "https://api.github.com/users/iPieter/received_events",
"repos_url": "https://api.github.com/users/iPieter/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iPieter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iPieter/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iPieter",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-27T13:12:11Z | 2025-06-30T08:19:37Z | null | NONE | null | null | null | Fixes #7650.
The metadata files generated by the `DatasetDict.save_to_file` function are not included in the folder_based_builder's metadata list, causing issues when only 1 actual data file is present, as described in issue #7650.
This PR adds these filenames to the builder, allowing correct loading. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7651/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7651/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7651.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7651",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7651.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7651"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7650/comments | https://api.github.com/repos/huggingface/datasets/issues/7650/events | https://github.com/huggingface/datasets/issues/7650 | 3,182,745,315 | I_kwDODunzps69tNbj | 7,650 | `load_dataset` defaults to json file format for datasets with 1 shard | {
"avatar_url": "https://avatars.githubusercontent.com/u/6965756?v=4",
"events_url": "https://api.github.com/users/iPieter/events{/privacy}",
"followers_url": "https://api.github.com/users/iPieter/followers",
"following_url": "https://api.github.com/users/iPieter/following{/other_user}",
"gists_url": "https://api.github.com/users/iPieter/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iPieter",
"id": 6965756,
"login": "iPieter",
"node_id": "MDQ6VXNlcjY5NjU3NTY=",
"organizations_url": "https://api.github.com/users/iPieter/orgs",
"received_events_url": "https://api.github.com/users/iPieter/received_events",
"repos_url": "https://api.github.com/users/iPieter/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iPieter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iPieter/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iPieter",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-27T12:54:25Z | 2025-06-27T12:54:25Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I currently have multiple datasets (train+validation) saved as 50MB shards. For one dataset the validation pair is small enough to fit into a single shard and this apparently causes problems when loading the dataset. I created the datasets using a DatasetDict, saved them as 50MB arrow files for streaming and then load each dataset. I have no problem loading any of the other datasets with more than 1 arrow file/shard.
The error indicates the training set got loaded in arrow format (correct) and the validation set in json (incorrect). This seems to be because some of the metadata files are considered as dataset files.
```
Error loading /nfs/dataset_pt-uk: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('validation'): ('json', {})}
```

Concretely, there is a mismatch between the metadata created by the `DatasetDict.save_to_file` and the builder for `datasets.load_dataset`:
https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/data_files.py#L107
The `folder_based_builder` lists all files and with 1 arrow file the json files (that are actually metadata) are in the majority.
https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L58
### Steps to reproduce the bug
Create a dataset with metadata and 1 arrow file in validation and multiple arrow files in the training set, following the above description. In my case, I saved the files via:
```python
dataset = DatasetDict({
'train': train_dataset,
'validation': val_dataset
})
dataset.save_to_disk(output_path, max_shard_size="50MB")
```
### Expected behavior
The dataset would get loaded.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.14.0-22-generic-x86_64-with-glibc2.41
- Python version: 3.12.7
- `huggingface_hub` version: 0.31.1
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7650/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7650/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7649 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7649/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7649/comments | https://api.github.com/repos/huggingface/datasets/issues/7649/events | https://github.com/huggingface/datasets/pull/7649 | 3,181,481,444 | PR_kwDODunzps6cW0sQ | 7,649 | Enable parallel shard upload in push_to_hub() using num_proc | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-27T05:59:03Z | 2025-06-27T06:03:46Z | null | CONTRIBUTOR | null | null | null | Fixes #7591
### Add num_proc support to `push_to_hub()` for parallel shard upload
This PR adds support for parallel upload of dataset shards via the `num_proc` argument in `Dataset.push_to_hub()`.
📌 While the `num_proc` parameter was already present in the `push_to_hub()` signature and correctly passed to `_push_parquet_shards_to_hub()`, it was not being used to parallelize the upload.
🔧 This PR updates the internal `_push_parquet_shards_to_hub()` function to:
- Use `multiprocessing.Pool` and `iflatmap_unordered()` for concurrent shard upload when `num_proc > 1`
- Preserve original serial upload behavior if `num_proc` is `None` or ≤ 1
- Keep tqdm progress and commit behavior unchanged
Let me know if any test coverage or further changes are needed!
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7649/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7649/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7649.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7649",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7649.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7649"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7648/comments | https://api.github.com/repos/huggingface/datasets/issues/7648/events | https://github.com/huggingface/datasets/pull/7648 | 3,181,409,736 | PR_kwDODunzps6cWmSn | 7,648 | Fix misleading add_column() usage example in docstring | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-27T05:27:04Z | 2025-06-27T05:27:54Z | null | CONTRIBUTOR | null | null | null | Fixes #7611
This PR fixes the usage example in the Dataset.add_column() docstring, which previously implied that add_column() modifies the dataset in-place.
Why:
The method returns a new dataset with the additional column, and users must assign the result to a variable to preserve the change.
This should make the behavior clearer for users.
@lhoestq @davanstrien | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7648/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7648/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7648.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7648",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7648.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7648"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7647 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7647/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7647/comments | https://api.github.com/repos/huggingface/datasets/issues/7647/events | https://github.com/huggingface/datasets/issues/7647 | 3,178,952,517 | I_kwDODunzps69evdF | 7,647 | loading mozilla-foundation--common_voice_11_0 fails | {
"avatar_url": "https://avatars.githubusercontent.com/u/5703039?v=4",
"events_url": "https://api.github.com/users/pavel-esir/events{/privacy}",
"followers_url": "https://api.github.com/users/pavel-esir/followers",
"following_url": "https://api.github.com/users/pavel-esir/following{/other_user}",
"gists_url": "https://api.github.com/users/pavel-esir/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pavel-esir",
"id": 5703039,
"login": "pavel-esir",
"node_id": "MDQ6VXNlcjU3MDMwMzk=",
"organizations_url": "https://api.github.com/users/pavel-esir/orgs",
"received_events_url": "https://api.github.com/users/pavel-esir/received_events",
"repos_url": "https://api.github.com/users/pavel-esir/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pavel-esir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pavel-esir/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pavel-esir",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"@claude Could you please address this issue"
] | 2025-06-26T12:23:48Z | 2025-06-27T12:29:03Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Hello everyone,
i am trying to load `mozilla-foundation--common_voice_11_0` and it fails. Reproducer
```
import datasets
datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True)
```
and it fails with
```
File ~/opt/envs/.../lib/python3.10/site-packages/datasets/utils/file_utils.py:827, in _add_retries_to_file_obj_read_method.<locals>.read_with_retries(*args, **kwargs)
825 for retry in range(1, max_retries + 1):
826 try:
--> 827 out = read(*args, **kwargs)
828 break
829 except (
830 _AiohttpClientError,
831 asyncio.TimeoutError,
832 requests.exceptions.ConnectionError,
833 requests.exceptions.Timeout,
834 ) as err:
File /usr/lib/python3.10/codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final)
319 def decode(self, input, final=False):
320 # decode input (taking the buffer into account)
321 data = self.buffer + input
--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)
323 # keep undecoded input until the next call
324 self.buffer = data[consumed:]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
When i remove streaming then everything is good but i need `streaming=True`
### Steps to reproduce the bug
```
import datasets
datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True)
```
### Expected behavior
Expected that it will download dataset
### Environment info
datasets==3.6.0
python3.10
on all platforms linux/win/mac | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7647/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7647/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7646 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7646/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7646/comments | https://api.github.com/repos/huggingface/datasets/issues/7646/events | https://github.com/huggingface/datasets/pull/7646 | 3,178,036,854 | PR_kwDODunzps6cLhrM | 7,646 | Introduces automatic subset-level grouping for folder-based dataset builders #7066 | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"It adds automatic grouping of files into subsets based on their root name (e.g., `train0.jsonl`, `train1.jsonl` → `\"train\"`), as discussed above. The logic is integrated into `FolderBasedBuilder` and is fully tested + documented.\r\n\r\nLet me know if any changes are needed — happy to iterate!",
"Hi ! I believe the subsets need to be instantiated here as `configs` - not `splits` (which are meant for train/validation/test):\r\n\r\nhttps://github.com/huggingface/datasets/blob/ef762e664a2a1675368ed7a203b0ac8cecca6e19/src/datasets/load.py#L647-L662\r\n\r\nAlso the subset names should probably be inferred only from the parquet/csv/json files and not from png/jpeg/wav/mp4 etc. WDYT ?",
"> Hi ! I believe the subsets need to be instantiated here as `configs` - not `splits` (which are meant for train/validation/test):\r\n> \r\n> https://github.com/huggingface/datasets/blob/ef762e664a2a1675368ed7a203b0ac8cecca6e19/src/datasets/load.py#L647-L662\r\n> \r\n> Also the subset names should probably be inferred only from the parquet/csv/json files and not from png/jpeg/wav/mp4 etc. WDYT ?\r\n\r\nThanks a lot for the review!\r\n\r\nYou're absolutely right — treating subsets as separate configs instead of overloaded splits makes much more sense. If that approach sounds good to you, I can move the grouping logic to `load.py`, where configs are instantiated, and revise the PR to emit one `BuilderConfig` per grouped subset.\r\n\r\nAlso totally agree on limiting grouping to structured file types — I’d scope this to `.json`, `.jsonl`, `.csv`, and `.parquet`.\r\n\r\nLet me know if this direction sounds good, and I’ll get started on the changes right away!\r\n"
] | 2025-06-26T07:01:37Z | 2025-06-27T18:04:04Z | null | CONTRIBUTOR | null | null | null | Fixes #7066
This PR introduces automatic **subset-level grouping** for folder-based dataset builders by:
1. Adding a utility function `group_files_by_subset()` that clusters files by root name (ignoring digits and shard suffixes).
2. Integrating this logic into `FolderBasedBuilder._split_generators()` to yield one split per subset.
3. Adding unit tests for the grouping function.
4. Updating the documentation to describe this new behavior under `docs/source/repository_structure.mdx`.
---
### Motivation
Datasets with files like:
```
train0.jsonl
train1.jsonl
animals.jsonl
metadata.jsonl
```
will now be **automatically grouped** as:
- `"train"` subset → `train0.jsonl`, `train1.jsonl`
- `"animals"` subset → `animals.jsonl`
- `"metadata"` subset → `metadata.jsonl`
This enables structured multi-subset loading even when the dataset doesn't follow traditional `train/validation/test` split conventions.
---
### Files Changed
- `src/datasets/data_files.py`: added `group_files_by_subset()` utility
- `src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py`: grouped files before yielding splits
- `tests/test_data_files.py`: added unit test `test_group_files_by_subset`
- `docs/source/repository_structure.mdx`: documented subset grouping for maintainers and users
---
### Benefits
- More flexible and robust dataset split logic
- Enables logical grouping of user-uploaded files without nested folder structure
- Backward-compatible with all existing folder-based configs
---
Ready for review ✅ | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7646/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7646/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7646.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7646",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7646.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7646"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7645 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7645/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7645/comments | https://api.github.com/repos/huggingface/datasets/issues/7645/events | https://github.com/huggingface/datasets/pull/7645 | 3,176,810,164 | PR_kwDODunzps6cHkp- | 7,645 | `ClassLabel` docs: Correct value for unknown labels | {
"avatar_url": "https://avatars.githubusercontent.com/u/56924246?v=4",
"events_url": "https://api.github.com/users/l-uuz/events{/privacy}",
"followers_url": "https://api.github.com/users/l-uuz/followers",
"following_url": "https://api.github.com/users/l-uuz/following{/other_user}",
"gists_url": "https://api.github.com/users/l-uuz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/l-uuz",
"id": 56924246,
"login": "l-uuz",
"node_id": "MDQ6VXNlcjU2OTI0MjQ2",
"organizations_url": "https://api.github.com/users/l-uuz/orgs",
"received_events_url": "https://api.github.com/users/l-uuz/received_events",
"repos_url": "https://api.github.com/users/l-uuz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/l-uuz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/l-uuz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/l-uuz",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-25T20:01:35Z | 2025-06-25T20:01:35Z | null | NONE | null | null | null | This small change fixes the documentation to to be compliant with what happens in `encode_example`.
https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/features/features.py#L1126-L1129 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7645/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7645/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7645.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7645",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7645.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7645"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7644 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7644/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7644/comments | https://api.github.com/repos/huggingface/datasets/issues/7644/events | https://github.com/huggingface/datasets/pull/7644 | 3,176,363,492 | PR_kwDODunzps6cGGfW | 7,644 | fix sequence ci | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7644). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-25T17:07:55Z | 2025-06-25T17:10:30Z | 2025-06-25T17:08:01Z | MEMBER | null | null | null | fix error from https://github.com/huggingface/datasets/pull/7643 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7644/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7644/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7644.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7644",
"merged_at": "2025-06-25T17:08:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7644.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7644"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7643 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7643/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7643/comments | https://api.github.com/repos/huggingface/datasets/issues/7643/events | https://github.com/huggingface/datasets/pull/7643 | 3,176,354,431 | PR_kwDODunzps6cGEeK | 7,643 | Backward compat sequence instance | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7643). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-25T17:05:09Z | 2025-06-25T17:07:40Z | 2025-06-25T17:05:44Z | MEMBER | null | null | null | useful to still get `isinstance(Sequence(Value("int64")), Sequence)`for downstream libs like evaluate | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7643/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7643/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7643.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7643",
"merged_at": "2025-06-25T17:05:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7643.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7643"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7642 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7642/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7642/comments | https://api.github.com/repos/huggingface/datasets/issues/7642/events | https://github.com/huggingface/datasets/pull/7642 | 3,176,025,890 | PR_kwDODunzps6cE_Wr | 7,642 | fix length for ci | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2025-06-25T15:10:38Z | 2025-06-25T15:11:53Z | 2025-06-25T15:11:51Z | MEMBER | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7642/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7642/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7642.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7642",
"merged_at": "2025-06-25T15:11:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7642.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7642"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7641 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7641/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7641/comments | https://api.github.com/repos/huggingface/datasets/issues/7641/events | https://github.com/huggingface/datasets/pull/7641 | 3,175,953,405 | PR_kwDODunzps6cEwUl | 7,641 | update docs and docstrings | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7641). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-25T14:48:58Z | 2025-06-25T14:51:46Z | 2025-06-25T14:49:33Z | MEMBER | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7641/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7641/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7641.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7641",
"merged_at": "2025-06-25T14:49:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7641.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7641"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7640 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7640/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7640/comments | https://api.github.com/repos/huggingface/datasets/issues/7640/events | https://github.com/huggingface/datasets/pull/7640 | 3,175,914,924 | PR_kwDODunzps6cEofU | 7,640 | better features repr | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7640). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-25T14:37:32Z | 2025-06-25T14:46:47Z | 2025-06-25T14:46:45Z | MEMBER | null | null | null | following the addition of List in #7634
before:
```python
In [3]: ds.features
Out[3]:
{'json': {'id': Value(dtype='string', id=None),
'metadata:transcript': [{'end': Value(dtype='float64', id=None),
'start': Value(dtype='float64', id=None),
'transcript': Value(dtype='string', id=None),
'words': [{'end': Value(dtype='float64', id=None),
'score': Value(dtype='float64', id=None),
'start': Value(dtype='float64', id=None),
'word': Value(dtype='string', id=None)}]}],
'metadata:vad': [{'end': Value(dtype='float64', id=None),
'start': Value(dtype='float64', id=None)}]},
'mp4': Value(dtype='binary', id=None),
'npz': {'boxes_and_keypoints:box': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'boxes_and_keypoints:is_valid_box': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None),
'boxes_and_keypoints:keypoints': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'movement:EmotionArousalToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:EmotionValenceToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:FAUToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:FAUValue': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:alignment_head_rotation': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:alignment_translation': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'movement:emotion_arousal': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:emotion_scores': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:emotion_valence': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:expression': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:frame_latent': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:gaze_encodings': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:head_encodings': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:hypernet_features': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'movement:is_valid': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'smplh:body_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'smplh:global_orient': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None),
'smplh:is_valid': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None),
'smplh:left_hand_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'smplh:right_hand_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None),
'smplh:translation': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None)},
'wav': Audio(sampling_rate=None, mono=True, decode=True, id=None),
'__key__': Value(dtype='string', id=None),
'__url__': Value(dtype='string', id=None)}
```
after:
```python
In [3]: ds.features
Out[3]:
{'json': {'id': Value('string'),
'metadata:transcript': List({'end': Value('float64'), 'start': Value('float64'), 'transcript': Value('string'), 'words': List({'end': Value('float64'), 'score': Value('float64'), 'start': Value('float64'), 'word': Value('string')})}),
'metadata:vad': List({'end': Value('float64'), 'start': Value('float64')})},
'mp4': Value('binary'),
'npz': {'boxes_and_keypoints:box': List(List(Value('float32'))),
'boxes_and_keypoints:is_valid_box': List(Value('bool')),
'boxes_and_keypoints:keypoints': List(List(List(Value('float32')))),
'movement:EmotionArousalToken': List(List(Value('float32'))),
'movement:EmotionValenceToken': List(List(Value('float32'))),
'movement:FAUToken': List(List(Value('float32'))),
'movement:FAUValue': List(List(Value('float32'))),
'movement:alignment_head_rotation': List(List(Value('float32'))),
'movement:alignment_translation': List(List(List(Value('float32')))),
'movement:emotion_arousal': List(List(Value('float32'))),
'movement:emotion_scores': List(List(Value('float32'))),
'movement:emotion_valence': List(List(Value('float32'))),
'movement:expression': List(List(Value('float32'))),
'movement:frame_latent': List(List(Value('float32'))),
'movement:gaze_encodings': List(List(Value('float32'))),
'movement:head_encodings': List(List(Value('float32'))),
'movement:hypernet_features': List(List(Value('float32'))),
'movement:is_valid': List(List(Value('float32'))),
'smplh:body_pose': List(List(List(Value('float32')))),
'smplh:global_orient': List(List(Value('float32'))),
'smplh:is_valid': List(Value('bool')),
'smplh:left_hand_pose': List(List(List(Value('float32')))),
'smplh:right_hand_pose': List(List(List(Value('float32')))),
'smplh:translation': List(List(Value('float32')))},
'wav': Audio(sampling_rate=None, decode=True, stream_index=None),
'__key__': Value('string'),
'__url__': Value('string')}
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7640/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7640/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7640.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7640",
"merged_at": "2025-06-25T14:46:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7640.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7640"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7639 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7639/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7639/comments | https://api.github.com/repos/huggingface/datasets/issues/7639/events | https://github.com/huggingface/datasets/pull/7639 | 3,175,616,169 | PR_kwDODunzps6cDoAf | 7,639 | fix save_infos | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7639). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-25T13:16:26Z | 2025-06-25T13:19:33Z | 2025-06-25T13:16:33Z | MEMBER | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7639/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7639/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7639.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7639",
"merged_at": "2025-06-25T13:16:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7639.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7639"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7638/comments | https://api.github.com/repos/huggingface/datasets/issues/7638/events | https://github.com/huggingface/datasets/pull/7638 | 3,172,645,391 | PR_kwDODunzps6b5vpZ | 7,638 | Add ignore_decode_errors option to Image feature for robust decoding #7612 | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"cc @lhoestq",
"I think splitting the error handling for the main image decoding process and the metadata decoding process is possibly a bit nicer, as some images do render correctly, but their metadata might be invalid and cause the pipeline to fail, which I've encountered recently as in #7668.\r\n\r\nThe [`decode_image`](https://docs.pytorch.org/vision/main/generated/torchvision.io.decode_image.html) function in `torchvision` handles similar cases by using the `apply_exif_orientation` flag to turn off the exif metadata processing entirely.",
"> I think splitting the error handling for the main image decoding process and the metadata decoding process is possibly a bit nicer, as some images do render correctly, but their metadata might be invalid and cause the pipeline to fail, which I've encountered recently as in #7668.\r\n> The [`decode_image`](https://docs.pytorch.org/vision/main/generated/torchvision.io.decode_image.html) function in `torchvision` handles similar cases by using the `apply_exif_orientation` flag to turn off the exif metadata processing entirely.\r\n \r\n @lhoestq & @Seas0 — that makes total sense.\r\n \r\nCurrently, if EXIF metadata like `.getexif()` fails (due to malformed tags), the whole image gets dropped even if it renders correctly — not ideal.\r\n \r\nTo address this, I'm planning to split the EXIF handling into a separate `try/except` block, like:\r\n```python\r\ntry:\r\n exif = image.getexif()\r\n if exif.get(PIL.Image.ExifTags.Base.Orientation) is not None:\r\n image = PIL.ImageOps.exif_transpose(image)\r\nexcept Exception as exif_err:\r\n if self.ignore_decode_errors:\r\n warnings.warn(f\"[Image.decode_example] Skipped EXIF metadata: {exif_err}\")\r\n else:\r\n raise\r\n```\r\n\r\nSo that, Valid but EXIF-broken images will still be returned & EXIF failures will be skipped only if ignore_decode_errors=True. \r\n\r\nSounds good??"
] | 2025-06-24T16:47:51Z | 2025-07-03T16:37:38Z | null | CONTRIBUTOR | null | null | null | This PR implements support for robust image decoding in the `Image` feature, as discussed in issue #7612.
## 🔧 What was added
- A new boolean field: `ignore_decode_errors` (default: `False`)
- If set to `True`, any exceptions during decoding will be caught, and `None` will be returned instead of raising an error
```python
features = Features({
"image": Image(decode=True, ignore_decode_errors=True),
})
````
This enables robust iteration over potentially corrupted datasets — especially useful when streaming datasets like WebDataset or image-heavy public sets where sample corruption is common.
## 🧪 Behavior
* If `ignore_decode_errors=False` (default), decoding behaves exactly as before
* If `True`, decoding errors are caught, and a warning is emitted:
```
[Image.decode_example] Skipped corrupted image: ...
```
## 🧵 Linked issue
Closes #7612
Let me know if you'd like a follow-up test PR. Happy to write one! | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7638/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7638/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7638.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7638",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7638.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7638"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7637/comments | https://api.github.com/repos/huggingface/datasets/issues/7637/events | https://github.com/huggingface/datasets/issues/7637 | 3,171,883,522 | I_kwDODunzps69DxoC | 7,637 | Introduce subset_name as an alias of config_name | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"I second this! When you come from the Hub, the intuitive question is \"how do I set the subset name\", and it's not easily answered from the docs: `subset_name` would answer this directly.",
"I've submitted PR [#7657](https://github.com/huggingface/datasets/pull/7657) to introduce subset_name as a user-facing alias for name in load_dataset, keeping terminology consistent with the Hub UI (“Subset”). It’s fully backward-compatible and includes a conflict check.\n\nLet me know if you'd like me to include tests as part of the PR — happy to add them if needed!",
"The main usage is as a positional argument anyway, so I wouldn't necessarily agree that we need an alias (with the risk of confusing users). But happy to have more mentions in the docs of syntaxes like `load_dataset(\"dataset_name\", \"subset_name\")`",
"> The main usage is as a positional argument anyway, so I wouldn't necessarily agree that we need an alias (with the risk of confusing users). But happy to have more mentions in the docs of syntaxes like `load_dataset(\"dataset_name\", \"subset_name\")`\n\nThanks @lhoestq, totally fair point — especially with positional usage being the norm. I’m happy to align with the team’s direction here. If you'd prefer, I can update this PR to shift the focus to documentation/examples (e.g., showing \"subset_name\" as the second arg)."
] | 2025-06-24T12:49:01Z | 2025-07-01T16:08:33Z | null | MEMBER | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
Add support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata).
### Motivation
The Hugging Face Hub dataset viewer displays a column named **"Subset"**, which refers to what is currently technically called config_name in the datasets library. This inconsistency has caused confusion for many users, especially those unfamiliar with the internal terminology.
I have repeatedly received questions from users trying to understand what "config" means, and why it doesn’t match what they see as "subset" on the Hub. Renaming everything to `subset_name` might be too disruptive, but introducing subset_name as a clear alias for config_name could significantly improve user experience without breaking backward compatibility.
This change would:
- Align terminology across the Hub UI and datasets codebase
- Reduce user confusion, especially for newcomers
- Make documentation and examples more intuitive
| null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7637/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7637/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7636 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7636/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7636/comments | https://api.github.com/repos/huggingface/datasets/issues/7636/events | https://github.com/huggingface/datasets/issues/7636 | 3,170,878,167 | I_kwDODunzps68_8LX | 7,636 | "open" in globals()["__builtins__"], an error occurs: "TypeError: argument of type 'module' is not iterable" | {
"avatar_url": "https://avatars.githubusercontent.com/u/51187979?v=4",
"events_url": "https://api.github.com/users/kuanyan9527/events{/privacy}",
"followers_url": "https://api.github.com/users/kuanyan9527/followers",
"following_url": "https://api.github.com/users/kuanyan9527/following{/other_user}",
"gists_url": "https://api.github.com/users/kuanyan9527/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kuanyan9527",
"id": 51187979,
"login": "kuanyan9527",
"node_id": "MDQ6VXNlcjUxMTg3OTc5",
"organizations_url": "https://api.github.com/users/kuanyan9527/orgs",
"received_events_url": "https://api.github.com/users/kuanyan9527/received_events",
"repos_url": "https://api.github.com/users/kuanyan9527/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kuanyan9527/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kuanyan9527/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kuanyan9527",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"@kuanyan9527 Your query is indeed valid. Following could be its reasoning:\n\nQuoting from https://stackoverflow.com/a/11181607:\n\"By default, when in the `__main__` module,` __builtins__` is the built-in module `__builtin__` (note: no 's'); when in any other module, `__builtins__` is an alias for the dictionary of the `__builtin__` module itself.\"\n\nCan you confirm if you are running the snippet `print(\"open\" in globals()[\"__builtins__\"])` in the default? In that case, as expected, `__builtins__` is a module which is causing the error. But in the codebase, the class `patch_submodule`, is primarily used in the second circumstance, where it acts as a dictionary. Hence causing the code to function successfully.\n\nHope this helps.",
"@kuanyan9527 Are there any more queries in this regards, else please feel free to close the issue.\nThank you.",
"Your answer is very important to me,thanks."
] | 2025-06-24T08:09:39Z | 2025-07-01T01:54:08Z | 2025-07-01T01:54:08Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | When I run the following code, an error occurs: "TypeError: argument of type 'module' is not iterable"
```python
print("open" in globals()["__builtins__"])
```
Traceback (most recent call last):
File "./main.py", line 2, in <module>
print("open" in globals()["__builtins__"])
^^^^^^^^^^^^^^^^^^^^^^
TypeError: argument of type 'module' is not iterable
But this code runs fine in datasets, I don't understand why
[src/datasets/utils/patching.py#L96](https://github.com/huggingface/datasets/blob/3.6.0/src/datasets/utils/patching.py#L96) | {
"avatar_url": "https://avatars.githubusercontent.com/u/51187979?v=4",
"events_url": "https://api.github.com/users/kuanyan9527/events{/privacy}",
"followers_url": "https://api.github.com/users/kuanyan9527/followers",
"following_url": "https://api.github.com/users/kuanyan9527/following{/other_user}",
"gists_url": "https://api.github.com/users/kuanyan9527/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kuanyan9527",
"id": 51187979,
"login": "kuanyan9527",
"node_id": "MDQ6VXNlcjUxMTg3OTc5",
"organizations_url": "https://api.github.com/users/kuanyan9527/orgs",
"received_events_url": "https://api.github.com/users/kuanyan9527/received_events",
"repos_url": "https://api.github.com/users/kuanyan9527/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kuanyan9527/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kuanyan9527/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kuanyan9527",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7636/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7636/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7635 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7635/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7635/comments | https://api.github.com/repos/huggingface/datasets/issues/7635/events | https://github.com/huggingface/datasets/pull/7635 | 3,170,486,408 | PR_kwDODunzps6bybOe | 7,635 | Fix: Preserve float columns in JSON loader when values are integer-like (e.g. 0.0, 1.0) | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-24T06:16:48Z | 2025-06-24T06:16:48Z | null | CONTRIBUTOR | null | null | null | This PR fixes a bug in the JSON loader where columns containing float values like `[0.0, 1.0, 2.0]` were being implicitly coerced to `int`, due to pandas or Arrow type inference.
This caused issues downstream in statistics computation (e.g., dataset-viewer) where such columns were incorrectly labeled as `"int"` instead of `"float"`.
### 🔍 What was happening:
When the JSON loader falls back to `pandas_read_json()` (after `pa.read_json()` fails), pandas/Arrow can coerce float values to integers if all values are integer-like (e.g., `0.0 == 0`).
### ✅ What this PR does:
- Adds a check in the fallback path of `_generate_tables()`
- Ensures that columns made entirely of floats are preserved as `"float64"` even if they are integer-like (e.g. `0.0`, `1.0`)
- This prevents loss of float semantics when creating the Arrow table
### 🧪 Reproducible Example:
```json
[{"col": 0.0}, {"col": 1.0}, {"col": 2.0}]
````
Previously loaded as:
* `int`
Now correctly loaded as:
* `float`
Fixes #6937
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7635/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7635/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7635.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7635",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7635.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7635"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7634 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7634/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7634/comments | https://api.github.com/repos/huggingface/datasets/issues/7634/events | https://github.com/huggingface/datasets/pull/7634 | 3,169,389,653 | PR_kwDODunzps6buyij | 7,634 | Replace Sequence by List | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7634). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-23T20:35:48Z | 2025-06-25T13:59:13Z | 2025-06-25T13:59:11Z | MEMBER | null | null | null | Sequence is just a utility that we need to keep for backward compatibility. And `[ ]` was used instead but doesn't allow passing the length of the list.
This PR removes most mentions of Sequence and usage of `[ ]` and defines a proper List type instead.
before: `Sequence(Value("int64"))` or `[Value("int64")]`
now: `List(Value("int64"))`
This PR conserves full backward compatibility. And it's a good occasion with the release of 4.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7634/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7634/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7634.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7634",
"merged_at": "2025-06-25T13:59:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7634.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7634"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7633 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7633/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7633/comments | https://api.github.com/repos/huggingface/datasets/issues/7633/events | https://github.com/huggingface/datasets/issues/7633 | 3,168,399,637 | I_kwDODunzps682fEV | 7,633 | Proposal: Small Tamil Discourse Coherence Dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/66418501?v=4",
"events_url": "https://api.github.com/users/bikkiNitSrinagar/events{/privacy}",
"followers_url": "https://api.github.com/users/bikkiNitSrinagar/followers",
"following_url": "https://api.github.com/users/bikkiNitSrinagar/following{/other_user}",
"gists_url": "https://api.github.com/users/bikkiNitSrinagar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bikkiNitSrinagar",
"id": 66418501,
"login": "bikkiNitSrinagar",
"node_id": "MDQ6VXNlcjY2NDE4NTAx",
"organizations_url": "https://api.github.com/users/bikkiNitSrinagar/orgs",
"received_events_url": "https://api.github.com/users/bikkiNitSrinagar/received_events",
"repos_url": "https://api.github.com/users/bikkiNitSrinagar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bikkiNitSrinagar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bikkiNitSrinagar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bikkiNitSrinagar",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-23T14:24:40Z | 2025-06-23T14:24:40Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | I’m a beginner from NIT Srinagar proposing a dataset of 50 Tamil text pairs for discourse coherence (coherent/incoherent labels) to support NLP research in low-resource languages.
- Size: 50 samples
- Format: CSV with columns (text1, text2, label)
- Use case: Training NLP models for coherence
I’ll use GitHub’s web editor and Google Colab. Please confirm if this fits. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7633/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7633/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7632/comments | https://api.github.com/repos/huggingface/datasets/issues/7632/events | https://github.com/huggingface/datasets/issues/7632 | 3,168,283,589 | I_kwDODunzps682CvF | 7,632 | Graceful Error Handling for cast_column("image", Image(decode=True)) in Hugging Face Datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/37377515?v=4",
"events_url": "https://api.github.com/users/ganiket19/events{/privacy}",
"followers_url": "https://api.github.com/users/ganiket19/followers",
"following_url": "https://api.github.com/users/ganiket19/following{/other_user}",
"gists_url": "https://api.github.com/users/ganiket19/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ganiket19",
"id": 37377515,
"login": "ganiket19",
"node_id": "MDQ6VXNlcjM3Mzc3NTE1",
"organizations_url": "https://api.github.com/users/ganiket19/orgs",
"received_events_url": "https://api.github.com/users/ganiket19/received_events",
"repos_url": "https://api.github.com/users/ganiket19/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ganiket19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ganiket19/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ganiket19",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2025-06-23T13:49:24Z | 2025-06-23T16:26:53Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
Currently, when using dataset.cast_column("image", Image(decode=True)), the pipeline throws an error and halts if any image in the dataset is invalid or corrupted (e.g., truncated files, incorrect formats, unreachable URLs). This behavior disrupts large-scale processing where a few faulty samples are common.
reference : https://discuss.huggingface.co/t/handle-errors-when-loading-images-404-corrupted-etc/50318/5
https://discuss.huggingface.co/t/handling-non-existing-url-in-image-dataset-while-cast-column/69185
Proposed Feature
Introduce a mechanism (e.g., a continue_on_error=True flag or global error handling mode) in Image(decode=True) that:
Skips invalid images and sets them as None, or
Logs the error but allows the rest of the dataset to be processed without interruption.
Example Usage
from datasets import load_dataset, Image
dataset = load_dataset("my_dataset")
dataset = dataset.cast_column("image", Image(decode=True, continue_on_error=True))
Benefits
Ensures robust large-scale image dataset processing.
Improves developer productivity by avoiding custom retry/error-handling code.
Aligns with best practices in dataset preprocessing pipelines that tolerate minor data corruption.
Potential Implementation Options
Internally wrap the decoding in a try/except block.
Return None or a placeholder on failure.
Optionally allow custom error callbacks or logging.
### Motivation
Robustness: Large-scale image datasets often contain a small fraction of corrupt files or unreachable URLs. Halting on the first error forces users to write custom workarounds or preprocess externally.
Simplicity: A built-in flag removes boilerplate try/except logic around every decode step.
Performance: Skipping invalid samples inline is more efficient than a two-pass approach (filter then decode).
### Your contribution
1. API Change
Extend datasets.features.Image(decode=True) to accept continue_on_error: bool = False.
2. Behavior
If continue_on_error=False (default), maintain current behavior: any decode error raises an exception.
If continue_on_error=True, wrap decode logic in try/except:
On success: store the decoded image.
On failure: log a warning (e.g., via logging.warning) and set the field to None (or a sentinel value).
3. Optional Enhancements
Allow a callback hook:
Image(decode=True, continue_on_error=True, on_error=lambda idx, url, exc: ...)
Emit metrics or counts of skipped images. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7632/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7632/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7631 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7631/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7631/comments | https://api.github.com/repos/huggingface/datasets/issues/7631/events | https://github.com/huggingface/datasets/pull/7631 | 3,165,127,657 | PR_kwDODunzps6bgwOB | 7,631 | Pass user-agent from DownloadConfig into fsspec storage_options | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"- This PR assumes that `HfFileSystem` in `huggingface_hub` supports receiving `headers` in `storage_options`. If not, a follow-up PR can be opened to add this support to `HfFileSystem.__init__`.\r\n- No test was added for this since it’s a config passthrough. If needed, I’d be happy to add one."
] | 2025-06-21T14:22:25Z | 2025-06-21T14:25:28Z | null | CONTRIBUTOR | null | null | null | Fixes part of issue #6046
### Problem
The `user-agent` defined in `DownloadConfig` was not passed down to fsspec-based filesystems like `HfFileSystem`, which prevents proper identification/tracking of client requests.
### Solution
Added support for injecting the `user-agent` into `storage_options["headers"]` within `_prepare_single_hop_path_and_storage_options()` based on the `protocol`.
Now, when using `hf://`, `http://`, or `https://`, the custom user-agent is passed automatically.
### Code Location
Modified:
- `src/datasets/utils/file_utils.py`
Used `get_datasets_user_agent(...)` to ensure proper formatting and fallback logic. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7631/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7631/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7631.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7631",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7631.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7631"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7630 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7630/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7630/comments | https://api.github.com/repos/huggingface/datasets/issues/7630/events | https://github.com/huggingface/datasets/issues/7630 | 3,164,650,900 | I_kwDODunzps68oL2U | 7,630 | [bug] resume from ckpt skips samples if .map is applied | {
"avatar_url": "https://avatars.githubusercontent.com/u/23004953?v=4",
"events_url": "https://api.github.com/users/felipemello1/events{/privacy}",
"followers_url": "https://api.github.com/users/felipemello1/followers",
"following_url": "https://api.github.com/users/felipemello1/following{/other_user}",
"gists_url": "https://api.github.com/users/felipemello1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/felipemello1",
"id": 23004953,
"login": "felipemello1",
"node_id": "MDQ6VXNlcjIzMDA0OTUz",
"organizations_url": "https://api.github.com/users/felipemello1/orgs",
"received_events_url": "https://api.github.com/users/felipemello1/received_events",
"repos_url": "https://api.github.com/users/felipemello1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/felipemello1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felipemello1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/felipemello1",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Thanks for reporting this — it looks like a separate but related bug to #7538, which involved sample loss when resuming an `IterableDataset` wrapped in `FormattedExamplesIterable`. That was resolved in #7553 by re-batching the iterable to track offset correctly.\n\nIn this case, the issue seems to arise specifically from applying `.map()` before sharding and checkpointing. That wraps the iterable in `MappedExamplesIterable`, which may not preserve or propagate `shard_example_idx` correctly across `.state_dict()` and `.load_state_dict()` calls.\n\nYou can see that without `.map()`, resume works fine — but with `.map()`, it jumps from sample 9 to 50, skipping the rest of the shard.\n\nI'll dig deeper into how `MappedExamplesIterable` manages offsets and whether it supports proper checkpoint resumption. If not, we might need a fix similar to the one in #7553, or a wrapper to preserve resume metadata.\n\nHappy to help fix it!\n",
"Let me know if a dedicated test case is required — happy to add one!"
] | 2025-06-21T01:50:03Z | 2025-06-29T07:51:32Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
resume from ckpt skips samples if .map is applied
Maybe related: https://github.com/huggingface/datasets/issues/7538
### Steps to reproduce the bug
```python
from datasets import Dataset
from datasets.distributed import split_dataset_by_node
# Create dataset with map transformation
def create_dataset():
ds = Dataset.from_dict({"id": list(range(100))})
ds = ds.to_iterable_dataset(num_shards=4)
ds = ds.map(lambda x: x) #comment it out to get desired behavior
ds = split_dataset_by_node(ds, rank=0, world_size=2)
return ds
ds = create_dataset()
# Iterate and save checkpoint after 10 samples
it = iter(ds)
for idx, sample in enumerate(it):
if idx == 9: # Checkpoint after 10 samples
checkpoint = ds.state_dict()
print(f"Checkpoint saved at sample: {sample['id']}")
break
# Continue with original iterator
original_next_samples = []
for idx, sample in enumerate(it):
original_next_samples.append(sample["id"])
if idx >= 4:
break
# Resume from checkpoint
ds_new = create_dataset()
ds_new.load_state_dict(checkpoint)
# Get samples from resumed iterator
it_new = iter(ds_new)
resumed_next_samples = []
for idx, sample in enumerate(it_new):
resumed_next_samples.append(sample["id"])
if idx >= 4:
break
print(f"\nExpected next samples: {original_next_samples}")
print(f"Actual next samples: {resumed_next_samples}")
print(
f"\n❌ BUG: {resumed_next_samples[0] - original_next_samples[0]} samples were skipped!"
)
```
With map
```
Checkpoint saved at sample: 9
Expected next samples: [10, 11, 12, 13, 14]
Actual next samples: [50, 51, 52, 53, 54]
❌ BUG: 40 samples were skipped!
```
### Expected behavior
without map
```
Expected next samples: [10, 11, 12, 13, 14]
Actual next samples: [10, 11, 12, 13, 14]
❌ BUG: 0 samples were skipped!
```
### Environment info
datasets == 3.6.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7630/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7630/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7629 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7629/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7629/comments | https://api.github.com/repos/huggingface/datasets/issues/7629/events | https://github.com/huggingface/datasets/pull/7629 | 3,161,169,782 | PR_kwDODunzps6bTc7b | 7,629 | Add test for `as_iterable_dataset()` method in DatasetBuilder | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-19T19:23:55Z | 2025-06-19T19:23:55Z | null | CONTRIBUTOR | null | null | null | This PR adds a test for the new `as_iterable_dataset()` method introduced in PR #7628.
The test:
- Loads a builder using `load_dataset_builder("c4", "en")`
- Runs `download_and_prepare()`
- Streams examples using `builder.as_iterable_dataset(split="train[:100]")`
- Verifies streamed examples contain the "text" field
This ensures that the builder correctly streams data from cached Arrow files.
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7629/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7629/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7629.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7629",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7629.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7629"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7628 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7628/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7628/comments | https://api.github.com/repos/huggingface/datasets/issues/7628/events | https://github.com/huggingface/datasets/pull/7628 | 3,161,156,461 | PR_kwDODunzps6bTaGk | 7,628 | Add `as_iterable_dataset()` method to DatasetBuilder for streaming from cached Arrow files | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-19T19:15:41Z | 2025-06-19T19:15:41Z | null | CONTRIBUTOR | null | null | null | This PR implements `builder.as_iterable_dataset(split=...)` as discussed in #5481.
It allows users to load an `IterableDataset` directly from cached Arrow files (using ArrowReader and ArrowExamplesIterable), without loading the full dataset into memory.
This is useful for large-scale training scenarios where memory is constrained. A test has also been added in `test_builder.py`.
Related to: #5481
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7628/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7628/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7628.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7628",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7628.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7628"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7627 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7627/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7627/comments | https://api.github.com/repos/huggingface/datasets/issues/7627/events | https://github.com/huggingface/datasets/issues/7627 | 3,160,544,390 | I_kwDODunzps68YhSG | 7,627 | Creating a HF Dataset from lakeFS with S3 storage takes too much time! | {
"avatar_url": "https://avatars.githubusercontent.com/u/118734142?v=4",
"events_url": "https://api.github.com/users/Thunderhead-exe/events{/privacy}",
"followers_url": "https://api.github.com/users/Thunderhead-exe/followers",
"following_url": "https://api.github.com/users/Thunderhead-exe/following{/other_user}",
"gists_url": "https://api.github.com/users/Thunderhead-exe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Thunderhead-exe",
"id": 118734142,
"login": "Thunderhead-exe",
"node_id": "U_kgDOBxO9Pg",
"organizations_url": "https://api.github.com/users/Thunderhead-exe/orgs",
"received_events_url": "https://api.github.com/users/Thunderhead-exe/received_events",
"repos_url": "https://api.github.com/users/Thunderhead-exe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Thunderhead-exe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Thunderhead-exe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Thunderhead-exe",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"### > Update\n\nThe bottleneck, from what I understand, was making one network request per file\n\nFor 30k images, this meant 30k separate GET requests to the MinIO server through the S3 API, and that was killing the performance\n\nUsing webDataset to transform the large number of files to few .tar files and passing “webdataset” instead of “imagefolder” to the load_dataset function worked perfectly (took only ~11s)"
] | 2025-06-19T14:28:41Z | 2025-06-23T12:39:10Z | 2025-06-23T12:39:10Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hi,
I’m new to HF dataset and I tried to create datasets based on data versioned in **lakeFS** _(**MinIO** S3 bucket as storage backend)_
Here I’m using ±30000 PIL image from MNIST data however it is taking around 12min to execute, which is a lot!
From what I understand, it is loading the images into cache then building the dataset.
– Please find bellow the execution screenshot –
Is there a way to optimize this or am I doing something wrong?
Thanks!
 | {
"avatar_url": "https://avatars.githubusercontent.com/u/118734142?v=4",
"events_url": "https://api.github.com/users/Thunderhead-exe/events{/privacy}",
"followers_url": "https://api.github.com/users/Thunderhead-exe/followers",
"following_url": "https://api.github.com/users/Thunderhead-exe/following{/other_user}",
"gists_url": "https://api.github.com/users/Thunderhead-exe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Thunderhead-exe",
"id": 118734142,
"login": "Thunderhead-exe",
"node_id": "U_kgDOBxO9Pg",
"organizations_url": "https://api.github.com/users/Thunderhead-exe/orgs",
"received_events_url": "https://api.github.com/users/Thunderhead-exe/received_events",
"repos_url": "https://api.github.com/users/Thunderhead-exe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Thunderhead-exe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Thunderhead-exe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Thunderhead-exe",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7627/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7627/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7626 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7626/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7626/comments | https://api.github.com/repos/huggingface/datasets/issues/7626/events | https://github.com/huggingface/datasets/pull/7626 | 3,159,322,138 | PR_kwDODunzps6bNMuF | 7,626 | feat(map): reuse unchanged columns when input_columns specified to reduce disk usage (#6013) | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-19T07:41:45Z | 2025-06-26T06:43:16Z | null | CONTRIBUTOR | null | null | null | ## Summary
This PR addresses [#6013](https://github.com/huggingface/datasets/issues/6013) by reusing unchanged columns from the original dataset in the `map()` method when `input_columns` is specified.
## What’s Implemented
- Injected logic at the end of `Dataset.map()` to:
- Identify untouched columns not in `input_columns` or `remove_columns`
- Select those columns from the original dataset
- Concatenate them with the transformed result using `pyarrow.concat_tables`
## Example Behavior
```python
ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4]})
ds2 = ds.map(lambda x: {"c": x["a"] + 10}, input_columns=["a"], remove_columns=["a"])
print(ds2.column_names) # Output: ['b', 'c']
````
Column `b` is reused from the original dataset.
## Notes
* This keeps disk usage and caching minimal by avoiding full dataset duplication.
* Only triggered when `input_columns` is set.
---
cc @lhoestq @mariosasko for review 🙂
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7626/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7626/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7626.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7626",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7626.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7626"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7625/comments | https://api.github.com/repos/huggingface/datasets/issues/7625/events | https://github.com/huggingface/datasets/pull/7625 | 3,159,016,001 | PR_kwDODunzps6bMKof | 7,625 | feat: Add h5folder dataset loader for HDF5 support | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7625). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I guess test failed cause import os, import h5py, and import datasets lines are not alphabetically sorted, or not grouped properly.\r\n\r\n\r\n",
"This commit was accidental - `[Merge branch 'main' into patch-4]`. The \r\n`[chore: fix import order in h5folder.py to satisfy linter]` should solve the import order issue. \r\n\r\n\r\n"
] | 2025-06-19T05:39:10Z | 2025-06-26T05:44:26Z | null | CONTRIBUTOR | null | null | null | ### Related Issue
Closes #3113
### What does this PR do?
This PR introduces a new dataset loader module called **`h5folder`** to support loading datasets stored in **HDF5 (.h5)** format.
It allows users to do:
```python
from datasets import load_dataset
dataset = load_dataset("h5folder", data_dir="path/to/")
````
### 🧩 Design Overview
* Implemented inside `datasets/packaged_modules/h5folder/h5folder.py`
* Based on the `GeneratorBasedBuilder` API
* Uses `h5py` to read HDF5 files and yield examples
* Expects datasets such as `id`, `data`, and `label` inside `data.h5`
* Converts numpy arrays to Python types before yielding
### 🧪 Example `.h5` Structure (for local testing)
```python
import h5py
import numpy as np
with h5py.File("data.h5", "w") as f:
f.create_dataset("id", data=np.arange(100))
f.create_dataset("data", data=np.random.randn(100, 10))
f.create_dataset("label", data=np.random.randint(0, 2, size=100))
```
### ✅ Testing
- The loader logic follows the structure of existing modules like `imagefolder`
- Will rely on Hugging Face CI to validate integration
- Manually testing planned once merged or during feedback
### 📁 Files Added
* `datasets/src/datasets/packaged_modules/h5folder/h5folder.py`
### 📌 Component(s) Affected
* `area/datasets`
* `area/load`
### 📦 Release Note Classification
* `rn/feature` – Adds support for loading `.h5` datasets via `load_dataset("h5folder", ...)`
---
Let me know if any changes or improvements are needed — happy to iterate. Thanks for reviewing!
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7625/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7625/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7625.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7625",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7625.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7625"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7624/comments | https://api.github.com/repos/huggingface/datasets/issues/7624/events | https://github.com/huggingface/datasets/issues/7624 | 3,156,136,624 | I_kwDODunzps68HtKw | 7,624 | #Dataset Make "image" column appear first in dataset preview UI | {
"avatar_url": "https://avatars.githubusercontent.com/u/98875217?v=4",
"events_url": "https://api.github.com/users/jcerveto/events{/privacy}",
"followers_url": "https://api.github.com/users/jcerveto/followers",
"following_url": "https://api.github.com/users/jcerveto/following{/other_user}",
"gists_url": "https://api.github.com/users/jcerveto/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jcerveto",
"id": 98875217,
"login": "jcerveto",
"node_id": "U_kgDOBeS3UQ",
"organizations_url": "https://api.github.com/users/jcerveto/orgs",
"received_events_url": "https://api.github.com/users/jcerveto/received_events",
"repos_url": "https://api.github.com/users/jcerveto/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jcerveto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcerveto/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jcerveto",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! It should follow the same order as the order of the keys in the metadata file",
"Hi! Thank you for your answer. \n\nAs you said it, I I forced every key in every JSON to have an order using `collections. OrderedDict` in Python. Now, it works!\n\nTY"
] | 2025-06-18T09:25:19Z | 2025-06-20T07:46:43Z | 2025-06-20T07:46:43Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hi!
#Dataset
I’m currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the "image" column appear as the first column in the dataset card preview UI on the :hugs: Hub.
However, at the moment, the `"image"` column is not the first—in fact, it appears last, which is not ideal for the presentation I’d like to achieve.
I have a couple of questions:
Is there a way to force the dataset card to display the `"image"` column first?
Is there currently any way to control or influence the column order in the dataset preview UI?
Does the order of keys in the .jsonl file or the features argument affect the display order?
Thanks again for your time and help! :blush: | {
"avatar_url": "https://avatars.githubusercontent.com/u/98875217?v=4",
"events_url": "https://api.github.com/users/jcerveto/events{/privacy}",
"followers_url": "https://api.github.com/users/jcerveto/followers",
"following_url": "https://api.github.com/users/jcerveto/following{/other_user}",
"gists_url": "https://api.github.com/users/jcerveto/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jcerveto",
"id": 98875217,
"login": "jcerveto",
"node_id": "U_kgDOBeS3UQ",
"organizations_url": "https://api.github.com/users/jcerveto/orgs",
"received_events_url": "https://api.github.com/users/jcerveto/received_events",
"repos_url": "https://api.github.com/users/jcerveto/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jcerveto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcerveto/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jcerveto",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7624/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7624/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7623 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7623/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7623/comments | https://api.github.com/repos/huggingface/datasets/issues/7623/events | https://github.com/huggingface/datasets/pull/7623 | 3,154,519,684 | PR_kwDODunzps6a9Jk5 | 7,623 | fix: raise error in FolderBasedBuilder when data_dir and data_files are missing | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"@lhoestq Moved the logic to FolderBasedBuilder._info() as discussed in previous PR (#7618). Let me know if anything else is needed — happy to update!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7623). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-17T19:16:34Z | 2025-06-18T14:18:41Z | 2025-06-18T14:18:41Z | CONTRIBUTOR | null | null | null | ### Related Issues/PRs
Fixes #6152
---
### What changes are proposed in this pull request?
This PR adds a dedicated validation check in the `_info()` method of the `FolderBasedBuilder` class to ensure that users provide either `data_dir` or `data_files` when loading folder-based datasets (such as `audiofolder`, `imagefolder`, etc.).
---
### Why this change?
Previously, when calling:
```python
load_dataset("audiofolder")
````
without specifying `data_dir` or `data_files`, the loader would silently fallback to the **current working directory**, leading to:
* Long loading times
* Unexpected behavior (e.g., scanning unrelated files)
This behavior was discussed in issue #6152. As suggested by maintainers, the fix has now been implemented directly inside the `FolderBasedBuilder._info()` method — keeping the logic localized to the specific builder instead of a generic loader function.
---
### How is this PR tested?
* ✅ Manually tested by calling `load_dataset("audiofolder")` with no `data_dir` or `data_files` → a `ValueError` is now raised early.
* ✅ Existing functionality (with valid input) remains unaffected.
---
### Does this PR require documentation update?
* [x] No
---
### Release Notes
#### Is this a user-facing change?
* [x] Yes
> Folder-based datasets now raise an explicit error if neither `data_dir` nor `data_files` are specified, preventing unintended fallback to the current working directory.
---
#### What component(s) does this PR affect?
* [x] `area/datasets`
* [x] `area/load`
---
<a name="release-note-category"></a>
#### How should the PR be classified?
* [x] `rn/bug-fix` - A user-facing bug fix
---
#### Should this be included in the next patch release?
* [x] Yes | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7623/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7623/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7623.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7623",
"merged_at": "2025-06-18T14:18:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7623.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7623"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7622 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7622/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7622/comments | https://api.github.com/repos/huggingface/datasets/issues/7622/events | https://github.com/huggingface/datasets/pull/7622 | 3,154,398,557 | PR_kwDODunzps6a8v6J | 7,622 | Guard against duplicate builder_kwargs/config_kwargs in load_dataset_… | {
"avatar_url": "https://avatars.githubusercontent.com/u/149825575?v=4",
"events_url": "https://api.github.com/users/Shohail-Ismail/events{/privacy}",
"followers_url": "https://api.github.com/users/Shohail-Ismail/followers",
"following_url": "https://api.github.com/users/Shohail-Ismail/following{/other_user}",
"gists_url": "https://api.github.com/users/Shohail-Ismail/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Shohail-Ismail",
"id": 149825575,
"login": "Shohail-Ismail",
"node_id": "U_kgDOCO4oJw",
"organizations_url": "https://api.github.com/users/Shohail-Ismail/orgs",
"received_events_url": "https://api.github.com/users/Shohail-Ismail/received_events",
"repos_url": "https://api.github.com/users/Shohail-Ismail/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Shohail-Ismail/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shohail-Ismail/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Shohail-Ismail",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi folks, this PR fixes the duplicate-kwargs edge case and includes a unit test. Would love a review when you have a moment!\r\n\r\n@zach-huggingface\r\n@SunMarc "
] | 2025-06-17T18:28:35Z | 2025-07-02T12:39:20Z | null | NONE | null | null | null | …builder (#4910 )
### What does this PR do?
Fixes edge case in `load_dataset_builder` by raising a `TypeError` if the same key exists in both `builder_kwargs` and `config_kwargs`.
### Implementation details
- Added a guard clause in `load_dataset_builder` to detect duplicate keys between `builder_kwargs` and `config_kwargs`
- Wrote a unit test in `tests/test_load_duplicate_keys.py` to verify the exception is raised correctly
### Fixes
Closes #4910
### Reviewers
@zach-huggingface
@SunMarc
Would appreciate your review if you have time - thanks! | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7622/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7622/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7622.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7622",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7622.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7622"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7621 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7621/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7621/comments | https://api.github.com/repos/huggingface/datasets/issues/7621/events | https://github.com/huggingface/datasets/pull/7621 | 3,153,780,963 | PR_kwDODunzps6a6rAu | 7,621 | minor docs data aug | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7621). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-17T14:46:57Z | 2025-06-17T14:50:28Z | 2025-06-17T14:47:11Z | MEMBER | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7621/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7621/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7621.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7621",
"merged_at": "2025-06-17T14:47:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7621.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7621"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7620 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7620/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7620/comments | https://api.github.com/repos/huggingface/datasets/issues/7620/events | https://github.com/huggingface/datasets/pull/7620 | 3,153,565,183 | PR_kwDODunzps6a58TP | 7,620 | Fixes in docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7620). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-17T13:41:54Z | 2025-06-17T13:58:26Z | 2025-06-17T13:58:24Z | MEMBER | null | null | null | before release 4.0
(I also did minor improvements to `features` to not show their `id=None` in their `__repr__()`) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7620/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7620/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7620.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7620",
"merged_at": "2025-06-17T13:58:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7620.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7620"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7619 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7619/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7619/comments | https://api.github.com/repos/huggingface/datasets/issues/7619/events | https://github.com/huggingface/datasets/issues/7619 | 3,153,058,517 | I_kwDODunzps6779rV | 7,619 | `from_list` fails while `from_generator` works for large datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/4028948?v=4",
"events_url": "https://api.github.com/users/abdulfatir/events{/privacy}",
"followers_url": "https://api.github.com/users/abdulfatir/followers",
"following_url": "https://api.github.com/users/abdulfatir/following{/other_user}",
"gists_url": "https://api.github.com/users/abdulfatir/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abdulfatir",
"id": 4028948,
"login": "abdulfatir",
"node_id": "MDQ6VXNlcjQwMjg5NDg=",
"organizations_url": "https://api.github.com/users/abdulfatir/orgs",
"received_events_url": "https://api.github.com/users/abdulfatir/received_events",
"repos_url": "https://api.github.com/users/abdulfatir/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abdulfatir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abdulfatir/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abdulfatir",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"@lhoestq any thoughts on this? ",
"Thanks for the report! This behavior is expected due to how `from_list()` and `from_generator()` differ internally.\n\n- `from_list()` builds the entire dataset in memory at once, which can easily exceed limits (especially with variable-length arrays or millions of rows). The Arrow error you're seeing (`Value too large to fit in C integer type`) is related to that memory overload.\n- `from_generator()` avoids this issue by batching and streaming data incrementally, which is much more memory-efficient.\n\nSo for large datasets like time series or NLP data with large arrays, `from_generator()` (or `datasets.IterableDataset`) is the recommended approach.\n\nHope this helps clarify the behavior — let me know if you'd like me to point to prior issues/discussions where similar tradeoffs came up!\n",
"@ArjunJagdale Yes, it is related to using large dataset but not in the way that you have described. As I understand, the problem here is that `datasets` does not use `LargeList` with 64-bit offsets from PyArrow when using `from_list`. However, with `from_generator` this seems to work okay, likely due to batching. As such, this is more like a bug than an expected outcome. If this is indeed \"expected\", `datasets` should fail more gracefully in these cases with a recommendation to use `from_generator`. ",
"Thanks for the clarification — you're absolutely right, this seems tied to the use of 32-bit list offsets in from_list() under the hood. That distinction between List and LargeList in PyArrow is a crucial one, and definitely worth highlighting in the docs or error message. Happy to help if a check or fallback to LargeList makes sense here."
] | 2025-06-17T10:58:55Z | 2025-06-29T16:34:44Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I am constructing a large time series dataset and observed that first constructing a list of entries and then using `Dataset.from_list` led to a crash as the number of items became large. However, this is not a problem when using `Dataset.from_generator`.
### Steps to reproduce the bug
#### Snippet A (crashes)
```py
from tqdm.auto import tqdm
import numpy as np
import datasets
def data_generator():
for i in tqdm(range(10_000_000)):
length = np.random.randint(2048)
series = np.random.rand(length)
yield {"target": series, "item_id": str(i), "start": np.datetime64("2000", "ms")}
data_list = list(data_generator())
ds = datasets.Dataset.from_list(data_list)
```
The last line crashes with
```
ArrowInvalid: Value 2147483761 too large to fit in C integer type
```
#### Snippet B (works)
```py
from tqdm.auto import tqdm
import numpy as np
import datasets
def data_generator():
for i in tqdm(range(10_000_000)):
length = np.random.randint(2048)
series = np.random.rand(length)
yield {"target": series, "item_id": str(i), "start": np.datetime64("2000", "ms")}
ds = datasets.Dataset.from_generator(data_generator)
```
### Expected behavior
I expected both the approaches to work or to fail similarly.
### Environment info
```
- `datasets` version: 3.6.0
- Platform: Linux-6.8.0-1029-aws-x86_64-with-glibc2.35
- Python version: 3.11.11
- `huggingface_hub` version: 0.32.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
``` | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7619/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7619/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7618 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7618/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7618/comments | https://api.github.com/repos/huggingface/datasets/issues/7618/events | https://github.com/huggingface/datasets/pull/7618 | 3,148,912,897 | PR_kwDODunzps6aqOnm | 7,618 | fix: raise error when folder-based datasets are loaded without data_dir or data_files | {
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Great ! Since this logic is specific to one builder class maybe this check can be in the class definition ? I think you can put it in FolderBasedBuilder's `_info()` method."
] | 2025-06-16T07:43:59Z | 2025-06-16T12:13:26Z | null | CONTRIBUTOR | null | null | null |
### Related Issues/PRs
<!-- Uncomment 'Resolve' if this PR can close the linked items. -->
<!-- Resolve --> #6152
---
### What changes are proposed in this pull request?
This PR adds an early validation step for folder-based datasets (like `audiofolder`) to prevent silent fallback behavior.
**Before this fix**:
- When `data_dir` or `data_files` were not provided, the loader defaulted to the current working directory.
- This caused unexpected behavior like:
- Long loading times
- Scanning unintended local files
**Now**:
- If both `data_dir` and `data_files` are missing, a `ValueError` is raised early with a helpful message.
---
### How is this PR tested?
- [x] Manual test via `load_dataset("audiofolder")` with missing `data_dir`
- [ ] Existing unit tests (should not break any)
- [ ] New tests (if needed, maintainers can guide)
---
### Does this PR require documentation update?
- [x] No. You can skip the rest of this section.
---
### Release Notes
#### Is this a user-facing change?
- [x] Yes. Give a description of this change to be included in the release notes for users.
> Adds early error handling for folder-based datasets when neither `data_dir` nor `data_files` is specified, avoiding unintended resolution to the current directory.
#### What component(s), interfaces, languages, and integrations does this PR affect?
Components:
- [x] `area/datasets`
- [x] `area/load`
---
<a name="release-note-category"></a>
#### How should the PR be classified in the release notes? Choose one:
- [x] `rn/bug-fix` - A user-facing bug fix worth mentioning in the release notes
---
#### Should this PR be included in the next patch release?
- [x] Yes (this PR will be cherry-picked and included in the next patch release)
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7618/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7618/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7618.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7618",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7618.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7618"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7617 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7617/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7617/comments | https://api.github.com/repos/huggingface/datasets/issues/7617/events | https://github.com/huggingface/datasets/issues/7617 | 3,148,102,085 | I_kwDODunzps67pDnF | 7,617 | Unwanted column padding in nested lists of dicts | {
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Answer from @lhoestq:\n\n> No\n> This is because Arrow and Parquet a columnar format: they require a fixed type for each column. So if you have nested dicts, each item should have the same subfields\n\nThe way around I found is the handle it after sampling with this function:\n\n```python\ndef remove_padding(example):\n if isinstance(example, list):\n return [remove_padding(value) if isinstance(value, (dict, list)) else value for value in example]\n elif isinstance(example, Mapping):\n return {\n key: remove_padding(value) if isinstance(value, (dict, list)) else value\n for key, value in example.items()\n if value is not None\n }\n else:\n raise TypeError(\"Input must be a list or a dictionary.\")\n\n# Example:\nexample = next(iter(dataset))\nexample = remove_padding(example)\n```"
] | 2025-06-15T22:06:17Z | 2025-06-16T13:43:31Z | 2025-06-16T13:43:31Z | MEMBER | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ```python
from datasets import Dataset
dataset = Dataset.from_dict({
"messages": [
[
{"a": "...",},
{"b": "...",},
],
]
})
print(dataset[0])
```
What I get:
```
{'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]}
```
What I want:
```
{'messages': [{'a': '...'}, {'b': '...'}]}
```
Is there an easy way to automatically remove these auto-filled null/none values?
If not, I probably need a recursive none exclusion function, don't I?
Datasets 3.6.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7617/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7617/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7616 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7616/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7616/comments | https://api.github.com/repos/huggingface/datasets/issues/7616/events | https://github.com/huggingface/datasets/pull/7616 | 3,144,506,665 | PR_kwDODunzps6acSW7 | 7,616 | Torchcodec decoding | {
"avatar_url": "https://avatars.githubusercontent.com/u/49127578?v=4",
"events_url": "https://api.github.com/users/TyTodd/events{/privacy}",
"followers_url": "https://api.github.com/users/TyTodd/followers",
"following_url": "https://api.github.com/users/TyTodd/following{/other_user}",
"gists_url": "https://api.github.com/users/TyTodd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TyTodd",
"id": 49127578,
"login": "TyTodd",
"node_id": "MDQ6VXNlcjQ5MTI3NTc4",
"organizations_url": "https://api.github.com/users/TyTodd/orgs",
"received_events_url": "https://api.github.com/users/TyTodd/received_events",
"repos_url": "https://api.github.com/users/TyTodd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TyTodd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TyTodd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TyTodd",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"@lhoestq any updates on when this will be merged? Let me know if theres anything you need from my end.",
"Btw I plan to release `datasets` 4.0 after your PR, this will be a major milestone :)",
"@lhoestq just pushed the new changes.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7616). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Great ! I took the liberty to move the AudioDecoder to its own file and make small edits in the docs and docstrings\r\n\r\nIf it looks good to you I think we can merge :)"
] | 2025-06-13T19:06:07Z | 2025-06-19T18:25:49Z | 2025-06-19T18:25:49Z | CONTRIBUTOR | null | null | null | Closes #7607
## New signatures
### Audio
```python
Audio(sampling_rate: Optional[int] = None, mono: bool = True, decode: bool = True, stream_index: Optional[int] = None)
Audio.encode_example(self, value: Union[str, bytes, bytearray, dict, "AudioDecoder"]) -> dict
Audio.decode_example(self, value: dict, token_per_repo_id: Optional[dict[str, Union[str, bool, None]]] = None) -> "AudioDecoder":
```
### Video
```python
Video(decode: bool = True, stream_index: Optional[int] = None, dimension_order: Literal['NCHW', 'NHWC'] = 'NCHW', num_ffmpeg_threads: int = 1, device: Optional[Union[str, "torch.device"]] = 'cpu', seek_mode: Literal['exact', 'approximate'] = 'exact')
Video.encode_example(self, value: Union[str, bytes, bytearray, Example, np.ndarray, "VideoDecoder"]) -> Example:
Video.decode_example(self, value: Union[str, Example], token_per_repo_id: Optional[dict[str, Union[bool, str]]] = None, ) -> "VideoDecoder":
```
## Notes
Audio features constructor takes in 1 new optional param stream_index which is passed to the AudioDecoder constructor to select the stream index of a file.
Audio feature can now take in torchcodec.decoders.AudioDecoder as input to encode_example()
Audio feature decode_example() returns torchcodec.decoders.AudioDecoder
Video feature constructor takes in 5 new optional params stream_index, dimension_order, num_ffmpeg_threads, device, seek_mode all of which are passed to VideoDecoder constructor
Video feature decode_example() returns torchcodec.decoders.VideoDecoder
Video feature can now take in torchcodec.decoders.VideoDecoder as input to encode_example()
All test cases have been updated to reflect these changes
All documentation has also been updated to reflect these changes.
Both VideoDecoder and AudioDecoder when formatted with (np_formatter, tf_formatter, etc) will ignore the type and return themselves. Formatting test cases were updated accordingly to reflect this. (Pretty simple to make this not the case if we want though)
## Errors
This test case from `tests/packaged_modules/test_audiofolder.py`
```python
@require_librosa
@require_sndfile
@pytest.mark.parametrize("streaming", [False, True])
def test_data_files_with_metadata_and_archives(streaming, cache_dir, data_files_with_zip_archives):
audiofolder = AudioFolder(data_files=data_files_with_zip_archives, cache_dir=cache_dir)
audiofolder.download_and_prepare()
datasets = audiofolder.as_streaming_dataset() if streaming else audiofolder.as_dataset()
for split, data_files in data_files_with_zip_archives.items():
num_of_archives = len(data_files) # the metadata file is inside the archive
expected_num_of_audios = 2 * num_of_archives
assert split in datasets
dataset = list(datasets[split])
assert len(dataset) == expected_num_of_audios
# make sure each sample has its own audio (all arrays are different) and metadata
assert (
sum(np.array_equal(dataset[0]["audio"].get_all_samples().data.numpy(), example["audio"].get_all_samples().data.numpy()) for example in dataset[1:])
== 0
)
assert len({example["text"] for example in dataset}) == expected_num_of_audios
assert all(example["text"] is not None for example in dataset)
```
Fails now because AudioDecoder needs to access the files after the lines below are run, but there seems to be some context issues. The file the decoder is trying to read is closed before the decoder gets the chance to decode it.
```python
audiofolder.download_and_prepare()
datasets = audiofolder.as_streaming_dataset() if streaming else audiofolder.as_dataset()
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7616/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7616/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7616.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7616",
"merged_at": "2025-06-19T18:25:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7616.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7616"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7615 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7615/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7615/comments | https://api.github.com/repos/huggingface/datasets/issues/7615/events | https://github.com/huggingface/datasets/pull/7615 | 3,143,443,498 | PR_kwDODunzps6aYp18 | 7,615 | remove unused code | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7615). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-13T12:37:30Z | 2025-06-13T12:39:59Z | 2025-06-13T12:37:40Z | MEMBER | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7615/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7615/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7615.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7615",
"merged_at": "2025-06-13T12:37:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7615.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7615"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7614 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7614/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7614/comments | https://api.github.com/repos/huggingface/datasets/issues/7614/events | https://github.com/huggingface/datasets/pull/7614 | 3,143,381,638 | PR_kwDODunzps6aYcbH | 7,614 | Lazy column | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7614). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-13T12:12:57Z | 2025-06-17T13:08:51Z | 2025-06-17T13:08:49Z | MEMBER | null | null | null | Same as https://github.com/huggingface/datasets/pull/7564 but for `Dataset`, cc @TopCoder2K FYI
e.g. `ds[col]` now returns a lazy Column instead of a list
This way calling `ds[col][idx]` only loads the required data in memory
(bonus: also supports subfields access with `ds[col][subcol][idx]`)
the breaking change will be for the next major release, which also includes removal of dataset scripts support
close https://github.com/huggingface/datasets/issues/4180 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7614/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7614/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7614.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7614",
"merged_at": "2025-06-17T13:08:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7614.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7614"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7613 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7613/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7613/comments | https://api.github.com/repos/huggingface/datasets/issues/7613/events | https://github.com/huggingface/datasets/pull/7613 | 3,142,819,991 | PR_kwDODunzps6aWgr3 | 7,613 | fix parallel push_to_hub in dataset_dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7613). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-13T09:02:24Z | 2025-06-13T12:30:23Z | 2025-06-13T12:30:22Z | MEMBER | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7613/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7613/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7613.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7613",
"merged_at": "2025-06-13T12:30:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7613.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7613"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7612 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7612/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7612/comments | https://api.github.com/repos/huggingface/datasets/issues/7612/events | https://github.com/huggingface/datasets/issues/7612 | 3,141,905,049 | I_kwDODunzps67RaqZ | 7,612 | Provide an option of robust dataset iterator with error handling | {
"avatar_url": "https://avatars.githubusercontent.com/u/40016222?v=4",
"events_url": "https://api.github.com/users/wwwjn/events{/privacy}",
"followers_url": "https://api.github.com/users/wwwjn/followers",
"following_url": "https://api.github.com/users/wwwjn/following{/other_user}",
"gists_url": "https://api.github.com/users/wwwjn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wwwjn",
"id": 40016222,
"login": "wwwjn",
"node_id": "MDQ6VXNlcjQwMDE2MjIy",
"organizations_url": "https://api.github.com/users/wwwjn/orgs",
"received_events_url": "https://api.github.com/users/wwwjn/received_events",
"repos_url": "https://api.github.com/users/wwwjn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wwwjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwwjn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wwwjn",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! Maybe we can add a parameter to the Image() type to make it to return `None` instead of raising an error in case of corruption ? Would that help ?",
"Hi! 👋🏼 I just opened PR [#7638](https://github.com/huggingface/datasets/pull/7638) to address this issue.\n\n### 🔧 What it does:\nIt adds an `ignore_decode_errors` flag to the `Image` feature. When set to `True`, corrupted image samples will be skipped (with a warning), and `None` will be returned instead of raising an exception.\n\nThis allows users to stream datasets that may contain some invalid images without breaking the iteration loop:\n\n```python\nfeatures = Features({\n \"image\": Image(decode=True, ignore_decode_errors=True)\n})\n````\n\n### 🧩 Why this helps:\n\n* Prevents full iteration breakdown during `.streaming=True` usage\n* Enables downstream tooling like Flux (see [[Flux#1290](https://github.com/pytorch/torchtitan/pull/1290)](https://github.com/pytorch/torchtitan/pull/1290)) to implement robust loaders now that `datasets` supports graceful handling\n* Keeps current behavior unchanged unless explicitly opted-in\n\nLet me know if you'd like me to follow up with test coverage or additional enhancements!\n\ncc @lhoestq "
] | 2025-06-13T00:40:48Z | 2025-06-24T16:52:30Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
Adding an option to skip corrupted data samples. Currently the datasets behavior is throwing errors if the data sample if corrupted and let user aware and handle the data corruption. When I tried to try-catch the error at user level, the iterator will raise StopIteration when I called next() again.
The way I try to do error handling is: (This doesn't work, unfortunately)
```
# Load the dataset with streaming enabled
dataset = load_dataset(
"pixparse/cc12m-wds", split="train", streaming=True
)
# Get an iterator from the dataset
iterator = iter(dataset)
while True:
try:
# Try to get the next example
example = next(iterator)
# Try to access and process the image
image = example["jpg"]
pil_image = Image.fromarray(np.array(image))
pil_image.verify() # Verify it's a valid image file
except StopIteration: # Code path 1
print("\nStopIteration was raised! Reach the end of dataset")
raise StopIteration
except Exception as e: # Code path 2
errors += 1
print("Error! Skip this sample")
cotinue
else:
successful += 1
```
This is because the `IterableDataset` already throws an error (reaches Code path 2). And if I continue call next(), it will hit Code path 1. This is because the inner iterator of `IterableDataset`([code](https://github.com/huggingface/datasets/blob/89bd1f971402acb62805ef110bc1059c38b1c8c6/src/datasets/iterable_dataset.py#L2242)) as been stopped, so calling next() on it will raise StopIteration.
So I can not skip the corrupted data sample in this way. Would also love to hear any suggestions about creating a robust dataloader.
Thanks for your help in advance!
### Motivation
## Public dataset corruption might be common
A lot of users would use public dataset, and the public dataset might contains some corrupted data, especially for dataset with image / video etc. I totally understand it's dataset owner and user's responsibility to ensure the data integrity / run data cleaning or preprocessing, but it would be easier for developers who would use the dataset
## Use cases
For example, a robust dataloader would be easy for users who want to try quick tests on different dataset, and chose one dataset which fits their needs. So user could use IterableDataloader with `stream=True` to use the dataset easily without downloading and removing corrupted data samples from the dataset.
### Your contribution
The error handling might not trivial and might need more careful design. | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7612/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7612/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7611 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7611/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7611/comments | https://api.github.com/repos/huggingface/datasets/issues/7611/events | https://github.com/huggingface/datasets/issues/7611 | 3,141,383,940 | I_kwDODunzps67PbcE | 7,611 | Code example for dataset.add_column() does not reflect correct way to use function | {
"avatar_url": "https://avatars.githubusercontent.com/u/31388649?v=4",
"events_url": "https://api.github.com/users/shaily99/events{/privacy}",
"followers_url": "https://api.github.com/users/shaily99/followers",
"following_url": "https://api.github.com/users/shaily99/following{/other_user}",
"gists_url": "https://api.github.com/users/shaily99/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shaily99",
"id": 31388649,
"login": "shaily99",
"node_id": "MDQ6VXNlcjMxMzg4NjQ5",
"organizations_url": "https://api.github.com/users/shaily99/orgs",
"received_events_url": "https://api.github.com/users/shaily99/received_events",
"repos_url": "https://api.github.com/users/shaily99/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shaily99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shaily99/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shaily99",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi @shaily99 \n\nThanks for pointing this out — you're absolutely right!\n\nThe current example in the docstring for add_column() implies in-place modification, which is misleading since add_column() actually returns a new dataset.",
"#self-assign\n"
] | 2025-06-12T19:42:29Z | 2025-06-27T05:15:32Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | https://github.com/huggingface/datasets/blame/38d4d0e11e22fdbc4acf373d2421d25abeb43439/src/datasets/arrow_dataset.py#L5925C10-L5925C10
The example seems to suggest that dataset.add_column() can add column inplace, however, this is wrong -- it cannot. It returns a new dataset with the column added to it. | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7611/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7611/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7610 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7610/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7610/comments | https://api.github.com/repos/huggingface/datasets/issues/7610/events | https://github.com/huggingface/datasets/issues/7610 | 3,141,281,560 | I_kwDODunzps67PCcY | 7,610 | i cant confirm email | {
"avatar_url": "https://avatars.githubusercontent.com/u/187984415?v=4",
"events_url": "https://api.github.com/users/lykamspam/events{/privacy}",
"followers_url": "https://api.github.com/users/lykamspam/followers",
"following_url": "https://api.github.com/users/lykamspam/following{/other_user}",
"gists_url": "https://api.github.com/users/lykamspam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lykamspam",
"id": 187984415,
"login": "lykamspam",
"node_id": "U_kgDOCzRqHw",
"organizations_url": "https://api.github.com/users/lykamspam/orgs",
"received_events_url": "https://api.github.com/users/lykamspam/received_events",
"repos_url": "https://api.github.com/users/lykamspam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lykamspam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lykamspam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lykamspam",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Will you please clarify the issue by some screenshots or more in-depth explanation?",
"\nThis is clarify answer. I have not received a letter.\n\n**The graphic at the top shows how I don't get any letter. Can you show in a clear way how you don't get a letter from me?**"
] | 2025-06-12T18:58:49Z | 2025-06-27T14:36:47Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
This is dificult, I cant confirm email because I'm not get any email!
I cant post forum because I cant confirm email!
I can send help desk because... no exist on web page.
paragraph 44
### Steps to reproduce the bug
rthjrtrt
### Expected behavior
ewtgfwetgf
### Environment info
sdgfswdegfwe | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7610/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7610/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7609 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7609/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7609/comments | https://api.github.com/repos/huggingface/datasets/issues/7609/events | https://github.com/huggingface/datasets/pull/7609 | 3,140,373,128 | PR_kwDODunzps6aOQ_g | 7,609 | Update `_dill.py` to use `co_linetable` for Python 3.10+ in place of `co_lnotab` | {
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7609). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"not 100% sure either, I tried removing unnecessary checks - let me know if they sound good to you otherwise I'll revert",
"I can't reproduce the warning anymore... 🤦🏻♂️\r\n",
"Ah now I can reproduce!, and I can confirm that the warning is gone when you apply the change in this PR"
] | 2025-06-12T13:47:01Z | 2025-06-16T12:14:10Z | 2025-06-16T12:14:08Z | MEMBER | null | null | null | Not 100% about this one, but it seems to be recommended.
```
/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.
```
Tests pass locally. And the warning is gone with this change.
https://peps.python.org/pep-0626/#backwards-compatibility | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7609/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7609/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7609.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7609",
"merged_at": "2025-06-16T12:14:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7609.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7609"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7608 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7608/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7608/comments | https://api.github.com/repos/huggingface/datasets/issues/7608/events | https://github.com/huggingface/datasets/pull/7608 | 3,137,564,259 | PR_kwDODunzps6aEr6b | 7,608 | Tests typing and fixes for push_to_hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7608). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-11T17:13:52Z | 2025-06-12T21:15:23Z | 2025-06-12T21:15:21Z | MEMBER | null | null | null | todo:
- [x] fix TestPushToHub.test_push_dataset_dict_to_hub_iterable_num_proc | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7608/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7608/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7608.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7608",
"merged_at": "2025-06-12T21:15:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7608.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7608"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7607 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7607/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7607/comments | https://api.github.com/repos/huggingface/datasets/issues/7607/events | https://github.com/huggingface/datasets/issues/7607 | 3,135,722,560 | I_kwDODunzps6651RA | 7,607 | Video and audio decoding with torchcodec | {
"avatar_url": "https://avatars.githubusercontent.com/u/49127578?v=4",
"events_url": "https://api.github.com/users/TyTodd/events{/privacy}",
"followers_url": "https://api.github.com/users/TyTodd/followers",
"following_url": "https://api.github.com/users/TyTodd/following{/other_user}",
"gists_url": "https://api.github.com/users/TyTodd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TyTodd",
"id": 49127578,
"login": "TyTodd",
"node_id": "MDQ6VXNlcjQ5MTI3NTc4",
"organizations_url": "https://api.github.com/users/TyTodd/orgs",
"received_events_url": "https://api.github.com/users/TyTodd/received_events",
"repos_url": "https://api.github.com/users/TyTodd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TyTodd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TyTodd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TyTodd",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Good idea ! let me know if you have any question or if I can help",
"@lhoestq Almost finished, but I'm having trouble understanding this test case.\nThis is how it looks originally. The `map` function is called, and then `with_format` is called. According to the test case example[\"video\"] is supposed to be a VideoReader. However, according to the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.with_format) its supposed to be the type passed into `with_format` (numpy in this case). My implementation with VideoDecoder currently does the latter, is that correct, or should it be a VideoDecoder object instead?\n```\n@require_torchvision\ndef test_dataset_with_video_map_and_formatted(shared_datadir):\n from torchvision.io import VideoReader\n\n video_path = str(shared_datadir / \"test_video_66x50.mov\")\n data = {\"video\": [video_path]}\n features = Features({\"video\": Video()})\n dset = Dataset.from_dict(data, features=features)\n dset = dset.map(lambda x: x).with_format(\"numpy\")\n example = dset[0]\n assert isinstance(example[\"video\"], VideoReader)\n # assert isinstance(example[\"video\"][0], np.ndarray)\n\n # from bytes\n with open(video_path, \"rb\") as f:\n data = {\"video\": [f.read()]}\n dset = Dataset.from_dict(data, features=features)\n dset = dset.map(lambda x: x).with_format(\"numpy\")\n example = dset[0]\n assert isinstance(example[\"video\"], VideoReader)\n # assert isinstance(example[\"video\"][0], np.ndarray)\n\n```",
"Hi ! It's maybe more convenient for users to always have a VideoDecoder, since they might only access a few frames and not the full video. So IMO it's fine to always return a VideoDecoder (maybe later we can extend the VideoDecoder to return other types of tensors than numpy arrays though ? 👀 it's not crucial for now though)",
"@lhoestq ya that makes sense, looks like this functionality lives in `src/datasets/formatting`, where an exception is made for VideoReader objects to remain as themselves when being formatted. I'll make the necessary changes. ",
"@lhoestq I'm assuming this was also the case for torchaudio objects?",
"We're not using torchaudio but soundfile. But anyway we unfortunately decode full audio files instead of returning a Reader and it can be interesting to fix this. Currently it always returns a dict {\"array\": np.array(...), \"sampling_rate\": int(...)}, while it would be cool to return a reader with seek() and read() - like methods as for videos.\n\n(there is a way to make the audio change backward compatible anyway by allowing `reader[\"array\"]` to return the full array)",
"@lhoestq (sorry for the spam btw)\nLooks like there's a # TODO to have these returned as np.arrays instead. I'm curious why the authors didn't do it initially. Maybe a performance thing?\nThis is from `/src/datasets/formatting/np_formatter.py` line 70\n```\nif config.TORCHVISION_AVAILABLE and \"torchvision\" in sys.modules:\n from torchvision.io import VideoReader\n\n if isinstance(value, VideoReader):\n return value # TODO(QL): set output to np arrays ?\n```",
"Oh cool ya this is something that I could implement with torchcodec. I can add that to the PR as well.",
"> Looks like there's a # TODO to have these returned as np.arrays instead. I'm curious why the authors didn't do it initially. Maybe a performance thing?\n\nyea that was me, I focused on a simple logic to start with, since I knew there was torchcodec coming and maybe wasn't worth it at the time ^^\n\nbut anyway it's fine to start with a logic without formatting to start with and then iterate",
"Hey @lhoestq I ran into an error with this test case for the Audio feature\n\n```\n@require_sndfile\n@require_torchcodec\ndef test_dataset_with_audio_feature_map_is_decoded(shared_datadir):\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data = {\"audio\": [audio_path], \"text\": [\"Hello\"]}\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n dset = Dataset.from_dict(data, features=features)\n\n def process_audio_sampling_rate_by_example(example):\n sample_rate = example[\"audio\"].get_all_samples().sample_rate\n example[\"double_sampling_rate\"] = 2 * sample_rate\n return example\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_example)\n for item in decoded_dset.cast_column(\"audio\", Audio(decode=False)):\n assert item.keys() == {\"audio\", \"text\", \"double_sampling_rate\"}\n assert item[\"double_sampling_rate\"] == 88200\n\n def process_audio_sampling_rate_by_batch(batch):\n double_sampling_rates = []\n for audio in batch[\"audio\"]:\n double_sampling_rates.append(2 * audio.get_all_samples().sample_rate)\n batch[\"double_sampling_rate\"] = double_sampling_rates\n return batch\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_batch, batched=True)\n for item in decoded_dset.cast_column(\"audio\", Audio(decode=False)):\n assert item.keys() == {\"audio\", \"text\", \"double_sampling_rate\"}\n assert item[\"double_sampling_rate\"] == 88200\n```\n\nthis is the error below\n```\nsrc/datasets/arrow_writer.py:626: in write_batch\n arrays.append(pa.array(typed_sequence))\n.....\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_decoded - pyarrow.lib.ArrowInvalid: Could not convert <torchcodec.decoders._audio_decoder.AudioDecoder object at 0x138cdd810> with type AudioDecoder: did not recognize Python value type when inferring an Arrow data type\n```\n\nBy the way I copied the test case and ran it on the original implementation of the Video feature, which uses the torchvision backend and I got a similar error.\n```\ndef test_dataset_with_video_feature_map_is_decoded(shared_datadir):\n video_path = str(shared_datadir / \"test_video_66x50.mov\")\n data = {\"video\": [video_path], \"text\": [\"Hello\"]}\n features = Features({\"video\": Video(), \"text\": Value(\"string\")})\n dset = Dataset.from_dict(data, features=features)\n\n def process_audio_sampling_rate_by_example(example):\n metadata = example[\"video\"].get_metadata()\n example[\"double_fps\"] = 2 * metadata[\"video\"][\"fps\"][0]\n return example\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_example)\n for item in decoded_dset.cast_column(\"video\", Video(decode=False)):\n assert item.keys() == {\"video\", \"text\", \"double_fps\"}\n assert item[\"double_fps\"] == 2 * 10 # prollly wont work past 2*10 is made up!! shouldn't pass\n\n def process_audio_sampling_rate_by_batch(batch):\n double_fps = []\n for video in batch[\"video\"]:\n double_fps.append(2 * video.metadata.begin_stream_seconds)\n batch[\"double_fps\"] = double_fps\n return batch\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_batch, batched=True)\n for item in decoded_dset.cast_column(\"video\", Video(decode=False)):\n assert item.keys() == {\"video\", \"text\", \"double_fps\"}\n assert item[\"double_fps\"] == 2 * 10 # prollly wont work past this no reason it should\n```\n\nI was wondering if these error's are expected. They seem to be coming from the fact that the function `_cast_to_python_objects` in `src/datasets/features/features.py` doesn't handle VideoDecoders or AudioDecoders. I was able to fix it and get rid of the error by adding this to the bottom of the function\n```\n elif config.TORCHCODEC_AVAILABLE and \"torchcodec\" in sys.modules and isinstance(obj, VideoDecoder):\n v = Video()\n return v.encode_example(obj), True\n elif config.TORCHCODEC_AVAILABLE and \"torchcodec\" in sys.modules and isinstance(obj, AudioDecoder):\n a = Audio()\n return a.encode_example(obj), True\n```\nThis fixed it, but I just want to make sure I'm not adding things that are messing up the intended functionality.",
"This is the right fix ! :)",
"Btw I just remembered that we were using soundfile because it can support a wide range of audio formats, is it also the case for torchcodec ? including ogg, opus for example",
"Yes from what I understand torchcodec supports everything ffmpeg supports.",
"Okay just finished. However, I wasn't able to pass this test case:\n```python\n@require_torchcodec\n@require_sndfile\[email protected](\"streaming\", [False, True])\ndef test_load_dataset_with_audio_feature(streaming, jsonl_audio_dataset_path, shared_datadir):\n from torchcodec.decoders import AudioDecoder\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data_files = jsonl_audio_dataset_path\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n dset = load_dataset(\"json\", split=\"train\", data_files=data_files, features=features, streaming=streaming)\n item = dset[0] if not streaming else next(iter(dset))\n assert item.keys() == {\"audio\", \"text\"}\n assert isinstance(item[\"audio\"], AudioDecoder)\n samples = item[\"audio\"].get_all_samples()\n assert samples.sample_rate == 44100\n assert samples.data.shape == (1, 202311)\n```\n\nIt returned this error\n```\nstreaming = False, jsonl_audio_dataset_path = '/private/var/folders/47/c7dlgs_n6lx8rtr8f5w5m1m00000gn/T/pytest-of-tytodd/pytest-103/data2/audio_dataset.jsonl'\nshared_datadir = PosixPath('/private/var/folders/47/c7dlgs_n6lx8rtr8f5w5m1m00000gn/T/pytest-of-tytodd/pytest-103/test_load_dataset_with_audio_f0/data')\n\n @require_torchcodec\n @require_sndfile\n @pytest.mark.parametrize(\"streaming\", [False, True])\n def test_load_dataset_with_audio_feature(streaming, jsonl_audio_dataset_path, shared_datadir):\n from torchcodec.decoders import AudioDecoder\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data_files = jsonl_audio_dataset_path\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n> dset = load_dataset(\"json\", split=\"train\", data_files=data_files, features=features, streaming=streaming)\n\ntests/features/test_audio.py:686: \n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\nsrc/datasets/load.py:1418: in load_dataset\n builder_instance.download_and_prepare(\nsrc/datasets/builder.py:925: in download_and_prepare\n self._download_and_prepare(\nsrc/datasets/builder.py:1019: in _download_and_prepare\n verify_splits(self.info.splits, split_dict)\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\n\nexpected_splits = {'train': SplitInfo(name='train', num_bytes=2351563, num_examples=10000, shard_lengths=None, dataset_name=None), 'validation': SplitInfo(name='validation', num_bytes=238418, num_examples=1000, shard_lengths=None, dataset_name=None)}\nrecorded_splits = {'train': SplitInfo(name='train', num_bytes=167, num_examples=1, shard_lengths=None, dataset_name='json')}\n\n def verify_splits(expected_splits: Optional[dict], recorded_splits: dict):\n if expected_splits is None:\n logger.info(\"Unable to verify splits sizes.\")\n return\n if len(set(expected_splits) - set(recorded_splits)) > 0:\n> raise ExpectedMoreSplitsError(str(set(expected_splits) - set(recorded_splits)))\nE datasets.exceptions.ExpectedMoreSplitsError: {'validation'}\n\nsrc/datasets/utils/info_utils.py:68: ExpectedMoreSplitsError\n```\n\nIt looks like this test case wasn't passing when I forked the repo, so I assume I didn't do anything to break it. I also added this case to `test_video.py`, and it fails there as well. If this looks good, I'll go ahead and submit the PR.",
"Awesome ! yes feel free to submit the PR, I can see what I can do for the remaining tests",
"@lhoestq just submitted it #7616 "
] | 2025-06-11T07:02:30Z | 2025-06-19T18:25:49Z | 2025-06-19T18:25:49Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
Pytorch is migrating video processing to torchcodec and it's pretty cool. It would be nice to migrate both the audio and video features to use torchcodec instead of torchaudio/video.
### Motivation
My use case is I'm working on a multimodal AV model, and what's nice about torchcodec is I can extract the audio tensors directly from MP4 files. Also, I can easily resample video data to whatever fps I like on the fly. I haven't found an easy/efficient way to do this with torchvision.
### Your contribution
I’m modifying the Video dataclass to use torchcodec in place of the current backend, starting from a stable commit for a project I’m working on. If it ends up working well, I’m happy to open a PR on main. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7607/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7607/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7606 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7606/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7606/comments | https://api.github.com/repos/huggingface/datasets/issues/7606/events | https://github.com/huggingface/datasets/pull/7606 | 3,133,848,546 | PR_kwDODunzps6Z3_kV | 7,606 | Add `num_proc=` to `.push_to_hub()` (Dataset and IterableDataset) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7606). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-10T14:35:10Z | 2025-06-11T16:47:28Z | 2025-06-11T16:47:25Z | MEMBER | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 6,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7606/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7606/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7606.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7606",
"merged_at": "2025-06-11T16:47:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7606.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7606"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7605 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7605/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7605/comments | https://api.github.com/repos/huggingface/datasets/issues/7605/events | https://github.com/huggingface/datasets/pull/7605 | 3,131,636,882 | PR_kwDODunzps6ZwcPp | 7,605 | Make `push_to_hub` atomic (#7600) | {
"avatar_url": "https://avatars.githubusercontent.com/u/391004?v=4",
"events_url": "https://api.github.com/users/sharvil/events{/privacy}",
"followers_url": "https://api.github.com/users/sharvil/followers",
"following_url": "https://api.github.com/users/sharvil/following{/other_user}",
"gists_url": "https://api.github.com/users/sharvil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sharvil",
"id": 391004,
"login": "sharvil",
"node_id": "MDQ6VXNlcjM5MTAwNA==",
"organizations_url": "https://api.github.com/users/sharvil/orgs",
"received_events_url": "https://api.github.com/users/sharvil/received_events",
"repos_url": "https://api.github.com/users/sharvil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sharvil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sharvil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sharvil",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7605). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi ! unfortunately we can't allow atomic commits for commits with hundreds of files additions (HF would time out)\r\n\r\nMaybe an alternative would be to retry if there was a commit in between ? this could be the default behavior as well",
"Thanks for taking a look – much appreciated!\r\n\r\nI've verified that commits with up to 20,000 files don't time out and the commit time scales linearly with the number of operations enqueued. It took just under 2 minutes to complete (successfully) the 20k file commit.\r\n\r\nThe fundamental issue I'm trying to tackle here is dataset corruption: getting into a state where a dataset on the hub cannot be used when downloaded. Non-atomic commits won't get us there, I think. If, for example, 3 of 5 commits complete and the machine/process calling `push_to_hub` has a network, hardware, or other failure that prevents it from completing the rest of the commits (even with retries) we'll now have some pointer files pointing to the new data and others pointing to the old data => corrupted. While this may seem like an unlikely scenario, it's a regular occurrence at scale.\r\n\r\nIf you still feel strongly that atomic commits are not the right way to go, I can either set it to not be the default or remove it entirely from this PR.\r\n\r\nAs for retries, it's a good idea. In a non-atomic world, the logic gets more complicated:\r\n- keep an explicit queue of pending add/delete operations\r\n- chunkwise pop from queue and commit with `parent_commit` set to previous chunked commit hash\r\n- if `create_commit` fails:\r\n - re-fetch README and set `parent_commit` to latest hash for `revision`\r\n - re-generate dataset card content\r\n - swap old `CommitOperationAdd` with new one for README in the pending queue\r\n- resume chunkwise committing from the queue as above\r\n\r\nEntirely doable, but more involved than I signed up for with this PR.",
"Just to clarify – setting the `parent_commit` can be separated from making the commit atomic (which is what I'm suggesting by either atomic commits not the default or removing it from this PR). It's crucial to set the parent commit to avoid the read-modify-write race condition on the README schema."
] | 2025-06-09T22:29:38Z | 2025-06-23T19:32:08Z | 2025-06-23T19:32:08Z | NONE | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/60325139?v=4",
"events_url": "https://api.github.com/users/lmnt-com/events{/privacy}",
"followers_url": "https://api.github.com/users/lmnt-com/followers",
"following_url": "https://api.github.com/users/lmnt-com/following{/other_user}",
"gists_url": "https://api.github.com/users/lmnt-com/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lmnt-com",
"id": 60325139,
"login": "lmnt-com",
"node_id": "MDEyOk9yZ2FuaXphdGlvbjYwMzI1MTM5",
"organizations_url": "https://api.github.com/users/lmnt-com/orgs",
"received_events_url": "https://api.github.com/users/lmnt-com/received_events",
"repos_url": "https://api.github.com/users/lmnt-com/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lmnt-com/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lmnt-com/subscriptions",
"type": "Organization",
"url": "https://api.github.com/users/lmnt-com",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7605/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7605/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7605.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7605",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7605.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7605"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7604 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7604/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7604/comments | https://api.github.com/repos/huggingface/datasets/issues/7604/events | https://github.com/huggingface/datasets/pull/7604 | 3,130,837,169 | PR_kwDODunzps6Ztrm_ | 7,604 | Docs and more methods for IterableDataset: push_to_hub, to_parquet... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7604). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-09T16:44:40Z | 2025-06-10T13:15:23Z | 2025-06-10T13:15:21Z | MEMBER | null | null | null | to_csv, to_json, to_sql, to_pandas, to_polars, to_dict, to_list | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7604/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7604/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7604.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7604",
"merged_at": "2025-06-10T13:15:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7604.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7604"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7603 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7603/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7603/comments | https://api.github.com/repos/huggingface/datasets/issues/7603/events | https://github.com/huggingface/datasets/pull/7603 | 3,130,394,563 | PR_kwDODunzps6ZsKin | 7,603 | No TF in win tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7603). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-09T13:56:34Z | 2025-06-09T15:33:31Z | 2025-06-09T15:33:30Z | MEMBER | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7603/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7603/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7603.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7603",
"merged_at": "2025-06-09T15:33:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7603.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7603"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7602 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7602/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7602/comments | https://api.github.com/repos/huggingface/datasets/issues/7602/events | https://github.com/huggingface/datasets/pull/7602 | 3,128,758,924 | PR_kwDODunzps6Zmk99 | 7,602 | Enhance error handling and input validation across multiple modules | {
"avatar_url": "https://avatars.githubusercontent.com/u/147746955?v=4",
"events_url": "https://api.github.com/users/mohiuddin-khan-shiam/events{/privacy}",
"followers_url": "https://api.github.com/users/mohiuddin-khan-shiam/followers",
"following_url": "https://api.github.com/users/mohiuddin-khan-shiam/following{/other_user}",
"gists_url": "https://api.github.com/users/mohiuddin-khan-shiam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mohiuddin-khan-shiam",
"id": 147746955,
"login": "mohiuddin-khan-shiam",
"node_id": "U_kgDOCM5wiw",
"organizations_url": "https://api.github.com/users/mohiuddin-khan-shiam/orgs",
"received_events_url": "https://api.github.com/users/mohiuddin-khan-shiam/received_events",
"repos_url": "https://api.github.com/users/mohiuddin-khan-shiam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mohiuddin-khan-shiam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mohiuddin-khan-shiam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mohiuddin-khan-shiam",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-08T23:01:06Z | 2025-06-08T23:01:06Z | null | NONE | null | null | null | This PR improves the robustness and user experience by:
1. **Audio Module**:
- Added clear error messages when required fields ('path' or 'bytes') are missing in audio encoding
2. **DatasetDict**:
- Enhanced key access error messages to show available splits when an invalid key is accessed
3. **NonMutableDict**:
- Added input validation for the update() method to ensure proper mapping types
4. **Arrow Reader**:
- Improved error messages for small dataset percentage splits with suggestions for alternatives
5. **FaissIndex**:
- Strengthened input validation with descriptive error messages
- Added proper type checking and shape validation for search queries
These changes make the code more maintainable and user-friendly by providing actionable feedback when issues arise. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7602/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7602/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7602.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7602",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7602.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7602"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7600 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7600/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7600/comments | https://api.github.com/repos/huggingface/datasets/issues/7600/events | https://github.com/huggingface/datasets/issues/7600 | 3,127,296,182 | I_kwDODunzps66ZsC2 | 7,600 | `push_to_hub` is not concurrency safe (dataset schema corruption) | {
"avatar_url": "https://avatars.githubusercontent.com/u/391004?v=4",
"events_url": "https://api.github.com/users/sharvil/events{/privacy}",
"followers_url": "https://api.github.com/users/sharvil/followers",
"following_url": "https://api.github.com/users/sharvil/following{/other_user}",
"gists_url": "https://api.github.com/users/sharvil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sharvil",
"id": 391004,
"login": "sharvil",
"node_id": "MDQ6VXNlcjM5MTAwNA==",
"organizations_url": "https://api.github.com/users/sharvil/orgs",
"received_events_url": "https://api.github.com/users/sharvil/received_events",
"repos_url": "https://api.github.com/users/sharvil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sharvil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sharvil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sharvil",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"@lhoestq can you please take a look? I've submitted a PR that fixes this issue. Thanks.",
"Thanks for the ping ! As I said in https://github.com/huggingface/datasets/pull/7605 there is maybe a more general approach using retries :)",
"Dropping this due to inactivity; we've implemented push_to_hub outside of HF datasets that's concurrency safe. Feel free to use the code I provided as a starting point if there's still interest in addressing this issue."
] | 2025-06-07T17:28:56Z | 2025-06-23T19:36:37Z | 2025-06-23T19:36:37Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Concurrent processes modifying and pushing a dataset can overwrite each others' dataset card, leaving the dataset unusable.
Consider this scenario:
- we have an Arrow dataset
- there are `N` configs of the dataset
- there are `N` independent processes operating on each of the individual configs (e.g. adding a column, `new_col`)
- each process calls `push_to_hub` on their particular config when they're done processing
- all calls to `push_to_hub` succeed
- the `README.md` now has some configs with `new_col` added and some with `new_col` missing
Any attempt to load a config (using `load_dataset`) where `new_col` is missing will fail because of a schema mismatch between `README.md` and the Arrow files. Fixing the dataset requires updating `README.md` by hand with the correct schema for the affected config. In effect, `push_to_hub` is doing a `git push --force` (I found this behavior quite surprising).
We have hit this issue every time we run processing jobs over our datasets and have to fix corrupted schemas by hand.
Reading through the code, it seems that specifying a [`parent_commit`](https://github.com/huggingface/huggingface_hub/blob/v0.32.4/src/huggingface_hub/hf_api.py#L4587) hash around here https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5794 would get us to a normal, non-forced git push, and avoid schema corruption. I'm not familiar enough with the code to know how to determine the commit hash from which the in-memory dataset card was loaded.
### Steps to reproduce the bug
See above.
### Expected behavior
Concurrent edits to disjoint configs of a dataset should never corrupt the dataset schema.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.2
- `fsspec` version: 2023.9.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/391004?v=4",
"events_url": "https://api.github.com/users/sharvil/events{/privacy}",
"followers_url": "https://api.github.com/users/sharvil/followers",
"following_url": "https://api.github.com/users/sharvil/following{/other_user}",
"gists_url": "https://api.github.com/users/sharvil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sharvil",
"id": 391004,
"login": "sharvil",
"node_id": "MDQ6VXNlcjM5MTAwNA==",
"organizations_url": "https://api.github.com/users/sharvil/orgs",
"received_events_url": "https://api.github.com/users/sharvil/received_events",
"repos_url": "https://api.github.com/users/sharvil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sharvil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sharvil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sharvil",
"user_view_type": "public"
} | {
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7600/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7600/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7599 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7599/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7599/comments | https://api.github.com/repos/huggingface/datasets/issues/7599/events | https://github.com/huggingface/datasets/issues/7599 | 3,125,620,119 | I_kwDODunzps66TS2X | 7,599 | My already working dataset (when uploaded few months ago) now is ignoring metadata.jsonl | {
"avatar_url": "https://avatars.githubusercontent.com/u/97530443?v=4",
"events_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/events{/privacy}",
"followers_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/followers",
"following_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JuanCarlosMartinezSevilla",
"id": 97530443,
"login": "JuanCarlosMartinezSevilla",
"node_id": "U_kgDOBdAySw",
"organizations_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/orgs",
"received_events_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/received_events",
"repos_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JuanCarlosMartinezSevilla",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Maybe its been a recent update, but i can manage to load the metadata.jsonl separately from the images with:\n\n```\nmetadata = load_dataset(\"PRAIG/SMB\", split=\"train\", data_files=[\"*.jsonl\"])\nimages = load_dataset(\"PRAIG/SMB\", split=\"train\")\n```\nDo you know it this is an expected behaviour? This makes my dataset viewer to only load the images without the labeling of metadata.jsonl.\n\nThanks",
"Hi ! this is because we now expect the metadata file to be inside the directory named after the split \"train\" (this way each split can have its own metadata and can be loaded independently)\n\nYou can fix that by configuring it explicitly in the dataset's README.md header:\n\n```yaml\nconfigs:\n- config_name: default\n data_files:\n - split: train\n path:\n - \"train/**/*.png\"\n - \"metadata.jsonl\"\n```\n\n(or by moving the metadata.jsonl in train/ but in this case you also have to modify the content of the JSONL to fix the relative paths to the images)",
"Thank you very much, dataset viewer is already working as expected!!"
] | 2025-06-06T18:59:00Z | 2025-06-16T15:18:00Z | 2025-06-16T15:18:00Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Hi everyone, I uploaded my dataset https://huggingface.co/datasets/PRAIG/SMB a few months ago while I was waiting for a conference acceptance response. Without modifying anything in the dataset repository now the Dataset viewer is not rendering the metadata.jsonl annotations, neither it is being downloaded when using load_dataset. Can you please help? Thank you in advance.
### Steps to reproduce the bug
from datasets import load_dataset
ds = load_dataset("PRAIG/SMB")
ds = ds["train"]
### Expected behavior
It is expected to have all the metadata available in the jsonl file. Fields like: "score_id", "original_width", "original_height", "regions"... among others.
### Environment info
datasets==3.6.0, python 3.13.3 (but he problem is already in the huggingface dataset page) | {
"avatar_url": "https://avatars.githubusercontent.com/u/97530443?v=4",
"events_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/events{/privacy}",
"followers_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/followers",
"following_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JuanCarlosMartinezSevilla",
"id": 97530443,
"login": "JuanCarlosMartinezSevilla",
"node_id": "U_kgDOBdAySw",
"organizations_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/orgs",
"received_events_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/received_events",
"repos_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JuanCarlosMartinezSevilla",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7599/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7599/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7598 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7598/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7598/comments | https://api.github.com/repos/huggingface/datasets/issues/7598/events | https://github.com/huggingface/datasets/pull/7598 | 3,125,184,457 | PR_kwDODunzps6ZaclZ | 7,598 | fix string_to_dict usage for windows | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7598). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-06T15:54:29Z | 2025-06-06T16:12:22Z | 2025-06-06T16:12:21Z | MEMBER | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7598/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7598/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7598.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7598",
"merged_at": "2025-06-06T16:12:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7598.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7598"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7597 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7597/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7597/comments | https://api.github.com/repos/huggingface/datasets/issues/7597/events | https://github.com/huggingface/datasets/issues/7597 | 3,123,962,709 | I_kwDODunzps66M-NV | 7,597 | Download datasets from a private hub in 2025 | {
"avatar_url": "https://avatars.githubusercontent.com/u/178552926?v=4",
"events_url": "https://api.github.com/users/DanielSchuhmacher/events{/privacy}",
"followers_url": "https://api.github.com/users/DanielSchuhmacher/followers",
"following_url": "https://api.github.com/users/DanielSchuhmacher/following{/other_user}",
"gists_url": "https://api.github.com/users/DanielSchuhmacher/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DanielSchuhmacher",
"id": 178552926,
"login": "DanielSchuhmacher",
"node_id": "U_kgDOCqSAXg",
"organizations_url": "https://api.github.com/users/DanielSchuhmacher/orgs",
"received_events_url": "https://api.github.com/users/DanielSchuhmacher/received_events",
"repos_url": "https://api.github.com/users/DanielSchuhmacher/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DanielSchuhmacher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanielSchuhmacher/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DanielSchuhmacher",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi ! First, and in the general case, Hugging Face does offer to host private datasets, and with a subscription you can even choose the region in which the repositories are hosted (US, EU)\n\nThen if you happen to have a private deployment, you can set the HF_ENDPOINT environment variable (same as in https://github.com/huggingface/transformers/issues/38634)",
"Thank you @lhoestq. Works as described!"
] | 2025-06-06T07:55:19Z | 2025-06-13T13:46:00Z | 2025-06-13T13:46:00Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature.
The obvious workaround is to clone the repo first and then load it from local storage, but this adds an extra step. It'd be great to have the same experience regardless of where the hub is hosted.
This issue was raised before here: https://github.com/huggingface/datasets/issues/3679
@juliensimon
### Motivation
none
### Your contribution
none | {
"avatar_url": "https://avatars.githubusercontent.com/u/178552926?v=4",
"events_url": "https://api.github.com/users/DanielSchuhmacher/events{/privacy}",
"followers_url": "https://api.github.com/users/DanielSchuhmacher/followers",
"following_url": "https://api.github.com/users/DanielSchuhmacher/following{/other_user}",
"gists_url": "https://api.github.com/users/DanielSchuhmacher/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DanielSchuhmacher",
"id": 178552926,
"login": "DanielSchuhmacher",
"node_id": "U_kgDOCqSAXg",
"organizations_url": "https://api.github.com/users/DanielSchuhmacher/orgs",
"received_events_url": "https://api.github.com/users/DanielSchuhmacher/received_events",
"repos_url": "https://api.github.com/users/DanielSchuhmacher/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DanielSchuhmacher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanielSchuhmacher/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DanielSchuhmacher",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7597/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7597/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7596 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7596/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7596/comments | https://api.github.com/repos/huggingface/datasets/issues/7596/events | https://github.com/huggingface/datasets/pull/7596 | 3,122,595,042 | PR_kwDODunzps6ZRkEU | 7,596 | Add albumentations to use dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/5481618?v=4",
"events_url": "https://api.github.com/users/ternaus/events{/privacy}",
"followers_url": "https://api.github.com/users/ternaus/followers",
"following_url": "https://api.github.com/users/ternaus/following{/other_user}",
"gists_url": "https://api.github.com/users/ternaus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ternaus",
"id": 5481618,
"login": "ternaus",
"node_id": "MDQ6VXNlcjU0ODE2MTg=",
"organizations_url": "https://api.github.com/users/ternaus/orgs",
"received_events_url": "https://api.github.com/users/ternaus/received_events",
"repos_url": "https://api.github.com/users/ternaus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ternaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ternaus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ternaus",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"@lhoestq ping",
"@lhoestq ping",
"@lhoestq Thanks. Cleaned up torchvision."
] | 2025-06-05T20:39:46Z | 2025-06-17T18:38:08Z | 2025-06-17T14:44:30Z | CONTRIBUTOR | null | null | null | 1. Fixed broken link to the list of transforms in torchvison.
2. Extended section about video image augmentations with an example from Albumentations. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7596/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7596/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7596.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7596",
"merged_at": "2025-06-17T14:44:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7596.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7596"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7595 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7595/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7595/comments | https://api.github.com/repos/huggingface/datasets/issues/7595/events | https://github.com/huggingface/datasets/pull/7595 | 3,121,689,436 | PR_kwDODunzps6ZOaFl | 7,595 | Add `IterableDataset.push_to_hub()` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7595). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-05T15:29:32Z | 2025-06-06T16:12:37Z | 2025-06-06T16:12:36Z | MEMBER | null | null | null | Basic implementation, which writes one shard per input dataset shard.
This is to be improved later.
Close https://github.com/huggingface/datasets/issues/5665
PS: for image/audio datasets structured as actual image/audio files (not parquet), you can sometimes speed it up with `ds.decode(num_threads=...).push_to_hub(...)` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7595/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7595/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7595.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7595",
"merged_at": "2025-06-06T16:12:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7595.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7595"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7594 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7594/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7594/comments | https://api.github.com/repos/huggingface/datasets/issues/7594/events | https://github.com/huggingface/datasets/issues/7594 | 3,120,799,626 | I_kwDODunzps66A5-K | 7,594 | Add option to ignore keys/columns when loading a dataset from jsonl(or any other data format) | {
"avatar_url": "https://avatars.githubusercontent.com/u/36810152?v=4",
"events_url": "https://api.github.com/users/avishaiElmakies/events{/privacy}",
"followers_url": "https://api.github.com/users/avishaiElmakies/followers",
"following_url": "https://api.github.com/users/avishaiElmakies/following{/other_user}",
"gists_url": "https://api.github.com/users/avishaiElmakies/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avishaiElmakies",
"id": 36810152,
"login": "avishaiElmakies",
"node_id": "MDQ6VXNlcjM2ODEwMTUy",
"organizations_url": "https://api.github.com/users/avishaiElmakies/orgs",
"received_events_url": "https://api.github.com/users/avishaiElmakies/received_events",
"repos_url": "https://api.github.com/users/avishaiElmakies/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avishaiElmakies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avishaiElmakies/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avishaiElmakies",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Good point, I'd be in favor of having the `columns` argument in `JsonConfig` (and the others) to align with `ParquetConfig` to let users choose which columns to load and ignore the rest",
"Is it possible to ignore columns when using parquet? ",
"Yes, you can pass `columns=...` to load_dataset to select which columns to load, and it is passed to `ParquetConfig` :)",
"Ok, i didn't know that. \nAnyway, it would be good to add this to others",
"Hi @lhoestq \n\nI'd like to take this up!\n\nAs you suggested, I’ll extend the support for the columns parameter (currently used in ParquetConfig) to JsonConfig as well. This will allow users to selectively load specific keys/columns from .jsonl (or .json) files and ignore the rest — solving the type inconsistency issues in unclean datasets.",
"Hi @avishaiElmakies and @lhoestq \n\nJust wanted to let you know that this is now implemented in #7594\nAs suggested, support for the `columns=...` argument (previously available for Parquet) has now been extended to **JSON and JSONL** loading via `load_dataset(...)`. You can now load only specific keys/columns and skip the rest — which should help in cases where some fields are unclean, inconsistent, or just unnecessary.\n\n### ✅ Example:\n\n```python\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"json\", data_files=\"your_data.jsonl\", columns=[\"id\", \"title\"])\nprint(dataset[\"train\"].column_names)\n# Output: ['id', 'title']\n```\n\n### 🔧 Summary of changes:\n\n* Added `columns: Optional[List[str]]` to `JsonConfig`\n* Updated `_generate_tables()` to filter selected columns\n* Forwarded `columns` argument from `load_dataset()` to the config\n* Added test case to validate behavior\n\nLet me know if you'd like the same to be added for CSV or others as a follow-up — happy to help.",
"@ArjunJagdale this looks great! Thanks!\nI believe that every format that is supported by `datasets` should probably have this feature since it is very useful and will streamline the api (people will know that they can just use `columns` to select the columns they want, and it will not be dependent on the data format) ",
"Thanks @avishaiElmakies — totally agree, making `columns=...` support consistent across all formats would be really helpful for users."
] | 2025-06-05T11:12:45Z | 2025-06-28T09:03:00Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
Hi, I would like the option to ignore keys/columns when loading a dataset from files (e.g. jsonl).
### Motivation
I am working on a dataset which is built on jsonl. It seems the dataset is unclean and a column has different types in each row. I can't clean this or remove the column (It is not my data and it is too big for me to clean and save on my own hardware).
I would like the option to just ignore this column when using `load_dataset`, since i don't need it.
I tried to look if this is already possible but couldn't find a solution. if there is I would love some help. If it is not currently possible, I would love this feature
### Your contribution
I don't think I can help this time, unfortunately. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7594/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7594/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7593 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7593/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7593/comments | https://api.github.com/repos/huggingface/datasets/issues/7593/events | https://github.com/huggingface/datasets/pull/7593 | 3,118,812,368 | PR_kwDODunzps6ZE34G | 7,593 | Fix broken link to albumentations | {
"avatar_url": "https://avatars.githubusercontent.com/u/5481618?v=4",
"events_url": "https://api.github.com/users/ternaus/events{/privacy}",
"followers_url": "https://api.github.com/users/ternaus/followers",
"following_url": "https://api.github.com/users/ternaus/following{/other_user}",
"gists_url": "https://api.github.com/users/ternaus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ternaus",
"id": 5481618,
"login": "ternaus",
"node_id": "MDQ6VXNlcjU0ODE2MTg=",
"organizations_url": "https://api.github.com/users/ternaus/orgs",
"received_events_url": "https://api.github.com/users/ternaus/received_events",
"repos_url": "https://api.github.com/users/ternaus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ternaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ternaus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ternaus",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7593). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq ping"
] | 2025-06-04T19:00:13Z | 2025-06-05T16:37:02Z | 2025-06-05T16:36:32Z | CONTRIBUTOR | null | null | null | A few months back I rewrote all docs at [https://albumentations.ai/docs](https://albumentations.ai/docs), and some pages changed their links.
In this PR fixed link to the most recent doc in Albumentations about bounding boxes and it's format.
Fix a few typos in the doc as well. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7593/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7593/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7593.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7593",
"merged_at": "2025-06-05T16:36:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7593.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7593"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7592 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7592/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7592/comments | https://api.github.com/repos/huggingface/datasets/issues/7592/events | https://github.com/huggingface/datasets/pull/7592 | 3,118,203,880 | PR_kwDODunzps6ZC2so | 7,592 | Remove scripts altogether | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7592). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-04T15:14:11Z | 2025-06-09T16:45:29Z | 2025-06-09T16:45:27Z | MEMBER | null | null | null | TODO:
- [x] remplace fixtures based on script with no-script fixtures
- [x] windaube | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7592/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7592/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7592.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7592",
"merged_at": "2025-06-09T16:45:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7592.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7592"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7591 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7591/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7591/comments | https://api.github.com/repos/huggingface/datasets/issues/7591/events | https://github.com/huggingface/datasets/issues/7591 | 3,117,816,388 | I_kwDODunzps651hpE | 7,591 | Add num_proc parameter to push_to_hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/46050679?v=4",
"events_url": "https://api.github.com/users/SwayStar123/events{/privacy}",
"followers_url": "https://api.github.com/users/SwayStar123/followers",
"following_url": "https://api.github.com/users/SwayStar123/following{/other_user}",
"gists_url": "https://api.github.com/users/SwayStar123/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SwayStar123",
"id": 46050679,
"login": "SwayStar123",
"node_id": "MDQ6VXNlcjQ2MDUwNjc5",
"organizations_url": "https://api.github.com/users/SwayStar123/orgs",
"received_events_url": "https://api.github.com/users/SwayStar123/received_events",
"repos_url": "https://api.github.com/users/SwayStar123/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SwayStar123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SwayStar123/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SwayStar123",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi @SwayStar123 \n\nI'd be interested in taking this up. I plan to add a `num_proc` parameter to `push_to_hub()` and use parallel uploads for shards using `concurrent.futures`. Will explore whether `ThreadPoolExecutor` or `ProcessPoolExecutor` is more suitable based on current implementation. Let me know if that sounds good!\n",
"Just a quick update — `push_to_hub()` already had the `num_proc` argument in its signature and was correctly passing it internally to `_push_parquet_shards_to_hub()`.\n\nThe actual change required was inside `_push_parquet_shards_to_hub()` to enable parallel shard uploads using `multiprocessing` when `num_proc > 1`.\n\n@lhoestq @SwayStar123 ",
"> Hi @SwayStar123 \n> \n> I'd be interested in taking this up. I plan to add a `num_proc` parameter to `push_to_hub()` and use parallel uploads for shards using `concurrent.futures`. Will explore whether `ThreadPoolExecutor` or `ProcessPoolExecutor` is more suitable based on current implementation. Let me know if that sounds good!\n> \n\nHey thanks for working on it. But I'm not a hf dev so I don't know the best way to do it."
] | 2025-06-04T13:19:15Z | 2025-06-27T06:13:54Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
A number of processes parameter to the dataset.push_to_hub method
### Motivation
Shards are currently uploaded serially which makes it slow for many shards, uploading can be done in parallel and much faster
| null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7591/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7591/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7590 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7590/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7590/comments | https://api.github.com/repos/huggingface/datasets/issues/7590/events | https://github.com/huggingface/datasets/issues/7590 | 3,101,654,892 | I_kwDODunzps64339s | 7,590 | `Sequence(Features(...))` causes PyArrow cast error in `load_dataset` despite correct schema. | {
"avatar_url": "https://avatars.githubusercontent.com/u/183279820?v=4",
"events_url": "https://api.github.com/users/AHS-uni/events{/privacy}",
"followers_url": "https://api.github.com/users/AHS-uni/followers",
"following_url": "https://api.github.com/users/AHS-uni/following{/other_user}",
"gists_url": "https://api.github.com/users/AHS-uni/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AHS-uni",
"id": 183279820,
"login": "AHS-uni",
"node_id": "U_kgDOCuygzA",
"organizations_url": "https://api.github.com/users/AHS-uni/orgs",
"received_events_url": "https://api.github.com/users/AHS-uni/received_events",
"repos_url": "https://api.github.com/users/AHS-uni/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AHS-uni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AHS-uni/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AHS-uni",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi @lhoestq \n\nCould you help confirm whether this qualifies as a bug?\n\nIt looks like the issue stems from how `Sequence(Features(...))` is interpreted as a plain struct during schema inference, which leads to a mismatch when casting with PyArrow (especially with nested structs inside lists). From the description, this seems like an inconsistency with expected behavior.\n\nIf confirmed, I’d be happy to take a shot at investigating and potentially submitting a fix.\n\nAlso looping in @AHS-uni — could you kindly share a minimal JSONL example that reproduces this?\n\nThanks!",
"Hello @Flink-ddd \n\nI updated the minimal example and included both JSON and JSONL minimal examples in the Colab notebook. \n\nHere is the minimal JSON file for convenience (can't upload JSONL files).\n\n[mini.json](https://github.com/user-attachments/files/20535145/mini.json)\n\nI've also found a number of issues which describe a similar problem:\n\n[7569](https://github.com/huggingface/datasets/issues/7569) (Open)\n[7137](https://github.com/huggingface/datasets/issues/7137) (Open)\n[7501](https://github.com/huggingface/datasets/issues/7501) (Closed)\n[2434](https://github.com/huggingface/datasets/issues/2434) (Closed)\n\nThe closed issues don't really address the problem (IMO). [7501](https://github.com/huggingface/datasets/issues/7501) provides a workaround (using a Python list instead of `Sequence`), but it seem precarious. ",
"Hi ! `Sequence({...})` corresponds to a struct of lists ([docs](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/main_classes#datasets.Features)). This come from Tensorflow Datasets.\n\nIf you want to use a list of structs, you should use `[{...}]`, e.g.\n\n```python\nitem = {\n \"id\": Value(\"string\"),\n \"data\": Value(\"string\"),\n}\n\nfeatures = Features({\n \"list\": [item],\n})\n```",
"@lhoestq Thanks for your explanation, which helps me understand the logic behind. But I'm confused how to define that in `README.md`?\n\nMy jsonl data is: \n```\n{\"answers\": [{\"text\": \"text1\", \"label\": \"label1\"}, {\"text\": \"text2\", \"label\": \"label2\"},]}\n{\"answers\": [{\"text\": \"text1\", \"label\": \"label1\"}, {\"text\": \"text2\", \"label\": \"label2\"},]}\n...\n```\n\nMy README.md look like\n```\ndataset_info:\n- config_name: default\n features:\n - name: answers\n sequence:\n - name: text\n dtype: string\n - name: label\n dtype: string\n```\nI understand `sequence` here is not correct, but what's the correct format? I tried following (`sequence -> dtype`)and seems not the case:\n```\ndataset_info:\n- config_name: default\n features:\n - name: answers\n dtype:\n - name: text\n sequence: string\n - name: label\n sequence: string\n```"
] | 2025-05-29T22:53:36Z | 2025-07-03T18:34:32Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Description
When loading a dataset with a field declared as a list of structs using `Sequence(Features(...))`, `load_dataset` incorrectly infers the field as a plain `struct<...>` instead of a `list<struct<...>>`. This leads to the following error:
```
ArrowNotImplementedError: Unsupported cast from list<item: struct<id: string, data: string>> to struct using function cast_struct
```
This occurs even when the `features` schema is explicitly provided and the dataset format supports nested structures natively (e.g., JSON, JSONL).
---
### Minimal Reproduction
[Colab Link.](https://colab.research.google.com/drive/1FZPQy6TP3jVd4B3mYKyfQaWNuOAvljUq?usp=sharing)
#### Dataset
```python
data = [
{
"list": [
{"id": "example1", "data": "text"},
]
},
]
```
#### Schema
```python
from datasets import Features, Sequence, Value
item = Features({
"id": Value("string"),
"data": Value("string"),
})
features = Features({
"list": Sequence(item),
})
```
---
### Tested File Formats
The same schema was tested across different formats:
| Format | Method | Result |
| --------- | --------------------------- | ------------------- |
| JSONL | `load_dataset("json", ...)` | Arrow cast error |
| JSON | `load_dataset("json", ...)` | Arrow cast error |
| In-memory | `Dataset.from_list(...)` | Works as expected |
The issue seems not to be in the schema or the data, but in how `load_dataset()` handles the `Sequence(Features(...))` pattern when parsing from files (specifically JSON and JSONL).
---
### Expected Behavior
If `features` is explicitly defined as:
```python
Features({"list": Sequence(Features({...}))})
```
Then the data should load correctly across all backends — including from JSON and JSONL — without any Arrow casting errors. This works correctly when loading from memory via `Dataset.from_list`.
---
### Environment
* `datasets`: 3.6.0
* `pyarrow`: 20.0.0
* Python: 3.12.10
* OS: Ubuntu 24.04.2 LTS
* Notebook: \[Colab test notebook available]
---
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7590/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7590/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7589/comments | https://api.github.com/repos/huggingface/datasets/issues/7589/events | https://github.com/huggingface/datasets/pull/7589 | 3,101,119,704 | PR_kwDODunzps6YKiyL | 7,589 | feat: use content defined chunking | {
"avatar_url": "https://avatars.githubusercontent.com/u/961747?v=4",
"events_url": "https://api.github.com/users/kszucs/events{/privacy}",
"followers_url": "https://api.github.com/users/kszucs/followers",
"following_url": "https://api.github.com/users/kszucs/following{/other_user}",
"gists_url": "https://api.github.com/users/kszucs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kszucs",
"id": 961747,
"login": "kszucs",
"node_id": "MDQ6VXNlcjk2MTc0Nw==",
"organizations_url": "https://api.github.com/users/kszucs/orgs",
"received_events_url": "https://api.github.com/users/kszucs/received_events",
"repos_url": "https://api.github.com/users/kszucs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kszucs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kszucs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kszucs",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7589). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Need to set `DEFAULT_MAX_BATCH_SIZE = 1024 * 1024`"
] | 2025-05-29T18:19:41Z | 2025-06-17T15:04:07Z | null | COLLABORATOR | null | null | null | WIP:
- [x] set the parameters in `io.parquet.ParquetDatasetReader`
- [x] set the parameters in `arrow_writer.ParquetWriter`
It requires a new pyarrow pin ">=21.0.0" which is not yet released. | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7589/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7589/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/7589.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7589",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7589.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7589"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7588/comments | https://api.github.com/repos/huggingface/datasets/issues/7588/events | https://github.com/huggingface/datasets/issues/7588 | 3,094,012,025 | I_kwDODunzps64auB5 | 7,588 | ValueError: Invalid pattern: '**' can only be an entire path component [Colab] | {
"avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4",
"events_url": "https://api.github.com/users/wkambale/events{/privacy}",
"followers_url": "https://api.github.com/users/wkambale/followers",
"following_url": "https://api.github.com/users/wkambale/following{/other_user}",
"gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wkambale",
"id": 43061081,
"login": "wkambale",
"node_id": "MDQ6VXNlcjQzMDYxMDgx",
"organizations_url": "https://api.github.com/users/wkambale/orgs",
"received_events_url": "https://api.github.com/users/wkambale/received_events",
"repos_url": "https://api.github.com/users/wkambale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wkambale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wkambale",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Could you please run the following code snippet in your environment and share the exact output? This will help check for any compatibility issues within the env itself. \n\n```\nimport datasets\nimport huggingface_hub\nimport fsspec\n\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub version:\", huggingface_hub.__version__)\nprint(\"fsspec version:\", fsspec.__version__)\n```",
"```bash\ndatasets version: 2.14.4\nhuggingface_hub version: 0.31.4\nfsspec version: 2025.3.2\n```",
"Version 2.14.4 is not the latest version available, in fact it is from August 08, 2023 (you can check here: https://pypi.org/project/datasets/#history)\n\nUse pip install datasets==3.6.0 to install a more recent version (from May 7, 2025)\n\nI also had the same problem with Colab, after updating to the latest version it was solved.\n\nI hope it helps",
"thank you @CleitonOERocha. it sure did help.\n\nupdating `datasets` to v3.6.0 and keeping `fsspec` on v2025.3.2 eliminates the issue.",
"Very helpful, thank you!"
] | 2025-05-27T13:46:05Z | 2025-05-30T13:22:52Z | 2025-05-30T01:26:30Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that i've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate).
now i changed a few hyperparameters to increase number of tokens for the model, increase Transformer layers, and all
however, when i try to load the dataset, this error keeps coming up.. i have tried everything.. i have re-written the code a hundred times, and this keep coming up
### Steps to reproduce the bug
Imports:
```bash
!pip install datasets huggingface_hub fsspec
```
Python code:
```python
from datasets import load_dataset
HF_DATASET_NAME = "kambale/luganda-english-parallel-corpus"
# Load the dataset
try:
if not HF_DATASET_NAME or HF_DATASET_NAME == "YOUR_HF_DATASET_NAME":
raise ValueError(
"Please provide a valid Hugging Face dataset name."
)
dataset = load_dataset(HF_DATASET_NAME)
# Omitted code as the error happens on the line above
except ValueError as ve:
print(f"Configuration Error: {ve}")
raise
except Exception as e:
print(f"An error occurred while loading the dataset '{HF_DATASET_NAME}': {e}")
raise e
```
now, i have tried going through this [issue](https://github.com/huggingface/datasets/issues/6737) and nothing helps
### Expected behavior
loading the dataset successfully and perform splits (train, test, validation)
### Environment info
from the imports, i do not install specific versions of these libraries, so the latest or available version is installed
* `datasets` version: latest
* `Platform`: Google Colab
* `Hardware`: NVIDIA A100 GPU
* `Python` version: latest
* `huggingface_hub` version: latest
* `fsspec` version: latest | {
"avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4",
"events_url": "https://api.github.com/users/wkambale/events{/privacy}",
"followers_url": "https://api.github.com/users/wkambale/followers",
"following_url": "https://api.github.com/users/wkambale/following{/other_user}",
"gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wkambale",
"id": 43061081,
"login": "wkambale",
"node_id": "MDQ6VXNlcjQzMDYxMDgx",
"organizations_url": "https://api.github.com/users/wkambale/orgs",
"received_events_url": "https://api.github.com/users/wkambale/received_events",
"repos_url": "https://api.github.com/users/wkambale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wkambale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wkambale",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7588/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7588/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7587 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7587/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7587/comments | https://api.github.com/repos/huggingface/datasets/issues/7587/events | https://github.com/huggingface/datasets/pull/7587 | 3,091,834,987 | PR_kwDODunzps6XrB8F | 7,587 | load_dataset splits typing | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7587). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-26T18:28:40Z | 2025-05-26T18:31:10Z | 2025-05-26T18:29:57Z | MEMBER | null | null | null | close https://github.com/huggingface/datasets/issues/7583 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7587/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7587/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7587.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7587",
"merged_at": "2025-05-26T18:29:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7587.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7587"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7586/comments | https://api.github.com/repos/huggingface/datasets/issues/7586/events | https://github.com/huggingface/datasets/issues/7586 | 3,091,320,431 | I_kwDODunzps64Qc5v | 7,586 | help is appreciated | {
"avatar_url": "https://avatars.githubusercontent.com/u/54931785?v=4",
"events_url": "https://api.github.com/users/rajasekarnp1/events{/privacy}",
"followers_url": "https://api.github.com/users/rajasekarnp1/followers",
"following_url": "https://api.github.com/users/rajasekarnp1/following{/other_user}",
"gists_url": "https://api.github.com/users/rajasekarnp1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rajasekarnp1",
"id": 54931785,
"login": "rajasekarnp1",
"node_id": "MDQ6VXNlcjU0OTMxNzg1",
"organizations_url": "https://api.github.com/users/rajasekarnp1/orgs",
"received_events_url": "https://api.github.com/users/rajasekarnp1/received_events",
"repos_url": "https://api.github.com/users/rajasekarnp1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rajasekarnp1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajasekarnp1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rajasekarnp1",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"how is this related to this repository ?"
] | 2025-05-26T14:00:42Z | 2025-05-26T18:21:57Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
https://github.com/rajasekarnp1/neural-audio-upscaler/tree/main
### Motivation
ai model develpment and audio
### Your contribution
ai model develpment and audio | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7586/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7586/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7585/comments | https://api.github.com/repos/huggingface/datasets/issues/7585/events | https://github.com/huggingface/datasets/pull/7585 | 3,091,227,921 | PR_kwDODunzps6Xo-Tw | 7,585 | Avoid multiple default config names | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7585). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-26T13:27:59Z | 2025-06-05T12:41:54Z | 2025-06-05T12:41:52Z | MEMBER | null | null | null | Fix duplicating default config names.
Currently, when calling `push_to_hub(set_default=True` with 2 different config names, both are set as default.
Moreover, this will generate an error next time we try to push another default config name, raised by `MetadataConfigs.get_default_config_name`:
https://github.com/huggingface/datasets/blob/da1db8a5b89fc0badaa0f571b36e122e52ae8c61/src/datasets/arrow_dataset.py#L5757
https://github.com/huggingface/datasets/blob/da1db8a5b89fc0badaa0f571b36e122e52ae8c61/src/datasets/utils/metadata.py#L186-L188 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7585/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7585/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7585.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7585",
"merged_at": "2025-06-05T12:41:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7585.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7585"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7584/comments | https://api.github.com/repos/huggingface/datasets/issues/7584/events | https://github.com/huggingface/datasets/issues/7584 | 3,090,255,023 | I_kwDODunzps64MYyv | 7,584 | Add LMDB format support | {
"avatar_url": "https://avatars.githubusercontent.com/u/30512160?v=4",
"events_url": "https://api.github.com/users/trotsky1997/events{/privacy}",
"followers_url": "https://api.github.com/users/trotsky1997/followers",
"following_url": "https://api.github.com/users/trotsky1997/following{/other_user}",
"gists_url": "https://api.github.com/users/trotsky1997/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/trotsky1997",
"id": 30512160,
"login": "trotsky1997",
"node_id": "MDQ6VXNlcjMwNTEyMTYw",
"organizations_url": "https://api.github.com/users/trotsky1997/orgs",
"received_events_url": "https://api.github.com/users/trotsky1997/received_events",
"repos_url": "https://api.github.com/users/trotsky1997/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/trotsky1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trotsky1997/subscriptions",
"type": "User",
"url": "https://api.github.com/users/trotsky1997",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! Can you explain what's your use case ? Is it about converting LMDB to Dataset objects (i.e. converting to Arrow) ?"
] | 2025-05-26T07:10:13Z | 2025-05-26T18:23:37Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Feature request
Add LMDB format support for large memory-mapping files
### Motivation
Add LMDB format support for large memory-mapping files
### Your contribution
I'm trying to add it | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7584/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7584/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7583 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7583/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7583/comments | https://api.github.com/repos/huggingface/datasets/issues/7583/events | https://github.com/huggingface/datasets/issues/7583 | 3,088,987,757 | I_kwDODunzps64HjZt | 7,583 | load_dataset type stubs reject List[str] for split parameter, but runtime supports it | {
"avatar_url": "https://avatars.githubusercontent.com/u/25069969?v=4",
"events_url": "https://api.github.com/users/hierr/events{/privacy}",
"followers_url": "https://api.github.com/users/hierr/followers",
"following_url": "https://api.github.com/users/hierr/following{/other_user}",
"gists_url": "https://api.github.com/users/hierr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hierr",
"id": 25069969,
"login": "hierr",
"node_id": "MDQ6VXNlcjI1MDY5OTY5",
"organizations_url": "https://api.github.com/users/hierr/orgs",
"received_events_url": "https://api.github.com/users/hierr/received_events",
"repos_url": "https://api.github.com/users/hierr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hierr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hierr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hierr",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2025-05-25T02:33:18Z | 2025-05-26T18:29:58Z | 2025-05-26T18:29:58Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime, however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type checkers like Pylance to raise `reportArgumentType` errors when passing a list of strings, even though it works as intended at runtime.
### Steps to reproduce the bug
1. Use load_dataset with multiple splits e.g.:
```
from datasets import load_dataset
ds_train, ds_val, ds_test = load_dataset(
"Silly-Machine/TuPyE-Dataset",
"binary",
split=["train[:75%]", "train[75%:]", "test"]
)
```
2. Observe that code executes correctly at runtime and Pylance raises `Argument of type "List[str]" cannot be assigned to parameter "split" of type "str | Split | None"`
### Expected behavior
The type stubs for [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) should accept `Union[str, Split, List[str], None]` or more specific overloads for the split parameter to correctly represent runtime behavior.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
- Python version: 3.12.7
- `huggingface_hub` version: 0.32.0
- PyArrow version: 20.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7583/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7583/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7582 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7582/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7582/comments | https://api.github.com/repos/huggingface/datasets/issues/7582/events | https://github.com/huggingface/datasets/pull/7582 | 3,083,515,643 | PR_kwDODunzps6XPIt7 | 7,582 | fix: Add embed_storage in Pdf feature | {
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}",
"followers_url": "https://api.github.com/users/AndreaFrancis/followers",
"following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AndreaFrancis",
"id": 5564745,
"login": "AndreaFrancis",
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"organizations_url": "https://api.github.com/users/AndreaFrancis/orgs",
"received_events_url": "https://api.github.com/users/AndreaFrancis/received_events",
"repos_url": "https://api.github.com/users/AndreaFrancis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AndreaFrancis",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7582). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-22T14:06:29Z | 2025-05-22T14:17:38Z | 2025-05-22T14:17:36Z | CONTRIBUTOR | null | null | null | Add missing `embed_storage` method in Pdf feature (Same as in Audio and Image) | {
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}",
"followers_url": "https://api.github.com/users/AndreaFrancis/followers",
"following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AndreaFrancis",
"id": 5564745,
"login": "AndreaFrancis",
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"organizations_url": "https://api.github.com/users/AndreaFrancis/orgs",
"received_events_url": "https://api.github.com/users/AndreaFrancis/received_events",
"repos_url": "https://api.github.com/users/AndreaFrancis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AndreaFrancis",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7582/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7582/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7582.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7582",
"merged_at": "2025-05-22T14:17:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7582.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7582"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7581 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7581/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7581/comments | https://api.github.com/repos/huggingface/datasets/issues/7581/events | https://github.com/huggingface/datasets/pull/7581 | 3,083,080,413 | PR_kwDODunzps6XNpm0 | 7,581 | Add missing property on `RepeatExamplesIterable` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42788329?v=4",
"events_url": "https://api.github.com/users/SilvanCodes/events{/privacy}",
"followers_url": "https://api.github.com/users/SilvanCodes/followers",
"following_url": "https://api.github.com/users/SilvanCodes/following{/other_user}",
"gists_url": "https://api.github.com/users/SilvanCodes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SilvanCodes",
"id": 42788329,
"login": "SilvanCodes",
"node_id": "MDQ6VXNlcjQyNzg4MzI5",
"organizations_url": "https://api.github.com/users/SilvanCodes/orgs",
"received_events_url": "https://api.github.com/users/SilvanCodes/received_events",
"repos_url": "https://api.github.com/users/SilvanCodes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SilvanCodes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SilvanCodes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SilvanCodes",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2025-05-22T11:41:07Z | 2025-06-05T12:41:30Z | 2025-06-05T12:41:29Z | CONTRIBUTOR | null | null | null | Fixes #7561 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7581/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7581/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7581.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7581",
"merged_at": "2025-06-05T12:41:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7581.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7581"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7580/comments | https://api.github.com/repos/huggingface/datasets/issues/7580/events | https://github.com/huggingface/datasets/issues/7580 | 3,082,993,027 | I_kwDODunzps63wr2D | 7,580 | Requesting a specific split (eg: test) still downloads all (train, test, val) data when streaming=False. | {
"avatar_url": "https://avatars.githubusercontent.com/u/48768216?v=4",
"events_url": "https://api.github.com/users/s3pi/events{/privacy}",
"followers_url": "https://api.github.com/users/s3pi/followers",
"following_url": "https://api.github.com/users/s3pi/following{/other_user}",
"gists_url": "https://api.github.com/users/s3pi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/s3pi",
"id": 48768216,
"login": "s3pi",
"node_id": "MDQ6VXNlcjQ4NzY4MjE2",
"organizations_url": "https://api.github.com/users/s3pi/orgs",
"received_events_url": "https://api.github.com/users/s3pi/received_events",
"repos_url": "https://api.github.com/users/s3pi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/s3pi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/s3pi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/s3pi",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! There was a PR open to improve this: https://github.com/huggingface/datasets/pull/6832 \nbut it hasn't been continued so far.\n\nIt would be a cool improvement though !"
] | 2025-05-22T11:08:16Z | 2025-05-26T18:40:31Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
When using load_dataset() from the datasets library (in load.py), specifying a particular split (e.g., split="train") still results in downloading data for all splits when streaming=False. This happens during the builder_instance.download_and_prepare() call.
This behavior leads to unnecessary bandwidth usage and longer download times, especially for large datasets, even if the user only intends to use a single split.
### Steps to reproduce the bug
dataset_name = "skbose/indian-english-nptel-v0"
dataset = load_dataset(dataset_name, token=hf_token, split="test")
### Expected behavior
Optimize the download logic so that only the required split is downloaded when streaming=False when a specific split is provided.
### Environment info
Dataset: skbose/indian-english-nptel-v0
Platform: M1 Apple Silicon
Python verison: 3.12.9
datasets>=3.5.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7580/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7580/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7579/comments | https://api.github.com/repos/huggingface/datasets/issues/7579/events | https://github.com/huggingface/datasets/pull/7579 | 3,081,849,022 | PR_kwDODunzps6XJerX | 7,579 | Fix typos in PDF and Video documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}",
"followers_url": "https://api.github.com/users/AndreaFrancis/followers",
"following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AndreaFrancis",
"id": 5564745,
"login": "AndreaFrancis",
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"organizations_url": "https://api.github.com/users/AndreaFrancis/orgs",
"received_events_url": "https://api.github.com/users/AndreaFrancis/received_events",
"repos_url": "https://api.github.com/users/AndreaFrancis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AndreaFrancis",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7579). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-22T02:27:40Z | 2025-05-22T12:53:49Z | 2025-05-22T12:53:47Z | CONTRIBUTOR | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7579/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7579/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7579.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7579",
"merged_at": "2025-05-22T12:53:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7579.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7579"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7577 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7577/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7577/comments | https://api.github.com/repos/huggingface/datasets/issues/7577/events | https://github.com/huggingface/datasets/issues/7577 | 3,080,833,740 | I_kwDODunzps63ocrM | 7,577 | arrow_schema is not compatible with list | {
"avatar_url": "https://avatars.githubusercontent.com/u/164412025?v=4",
"events_url": "https://api.github.com/users/jonathanshen-upwork/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanshen-upwork/followers",
"following_url": "https://api.github.com/users/jonathanshen-upwork/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanshen-upwork/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanshen-upwork",
"id": 164412025,
"login": "jonathanshen-upwork",
"node_id": "U_kgDOCcy6eQ",
"organizations_url": "https://api.github.com/users/jonathanshen-upwork/orgs",
"received_events_url": "https://api.github.com/users/jonathanshen-upwork/received_events",
"repos_url": "https://api.github.com/users/jonathanshen-upwork/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanshen-upwork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanshen-upwork/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanshen-upwork",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, I'll look into it",
"Actually it looks like you just forgot parenthesis:\n\n```diff\n- f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})\n+ f = datasets.Features({'x': list([datasets.Value(dtype='int32')])})\n```\n\nor simply using the `[ ]` syntax:\n\n```python\nf = datasets.Features({'x':[datasets.Value(dtype='int32')]})\n```\n\nI'm closing this issue if you don't mind",
"Ah is that what the syntax is? I don't think I was able to find an actual example of it so I assumed it was in the same way that you specify types eg. `list[int]`. This is good to know, thanks."
] | 2025-05-21T16:37:01Z | 2025-05-26T18:49:51Z | 2025-05-26T18:32:55Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
```
import datasets
f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})
f.arrow_schema
Traceback (most recent call last):
File "datasets/features/features.py", line 1826, in arrow_schema
return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)})
^^^^^^^^^
File "datasets/features/features.py", line 1815, in type
return get_nested_type(self)
^^^^^^^^^^^^^^^^^^^^^
File "datasets/features/features.py", line 1252, in get_nested_type
return pa.struct(
^^^^^^^^^^
File "pyarrow/types.pxi", line 5406, in pyarrow.lib.struct
File "pyarrow/types.pxi", line 3890, in pyarrow.lib.field
File "pyarrow/types.pxi", line 5918, in pyarrow.lib.ensure_type
TypeError: DataType expected, got <class 'list'>
```
The following works
```
f = datasets.Features({'x': datasets.LargeList(datasets.Value(dtype='int32'))})
```
### Expected behavior
according to https://github.com/huggingface/datasets/blob/458f45a22c3cc9aea5f442f6f519333dcfeae9b9/src/datasets/features/features.py#L1765 python list should be a valid type specification for features
### Environment info
- `datasets` version: 3.5.1
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.12.9
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7577/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7577/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7576 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7576/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7576/comments | https://api.github.com/repos/huggingface/datasets/issues/7576/events | https://github.com/huggingface/datasets/pull/7576 | 3,080,450,538 | PR_kwDODunzps6XEuMz | 7,576 | Fix regex library warnings | {
"avatar_url": "https://avatars.githubusercontent.com/u/35470921?v=4",
"events_url": "https://api.github.com/users/emmanuel-ferdman/events{/privacy}",
"followers_url": "https://api.github.com/users/emmanuel-ferdman/followers",
"following_url": "https://api.github.com/users/emmanuel-ferdman/following{/other_user}",
"gists_url": "https://api.github.com/users/emmanuel-ferdman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emmanuel-ferdman",
"id": 35470921,
"login": "emmanuel-ferdman",
"node_id": "MDQ6VXNlcjM1NDcwOTIx",
"organizations_url": "https://api.github.com/users/emmanuel-ferdman/orgs",
"received_events_url": "https://api.github.com/users/emmanuel-ferdman/received_events",
"repos_url": "https://api.github.com/users/emmanuel-ferdman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emmanuel-ferdman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emmanuel-ferdman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emmanuel-ferdman",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7576). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-21T14:31:58Z | 2025-06-05T13:35:16Z | 2025-06-05T12:37:55Z | CONTRIBUTOR | null | null | null | # PR Summary
This small PR resolves the regex library warnings showing starting Python3.11:
```python
DeprecationWarning: 'count' is passed as positional argument
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7576/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7576/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7576.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7576",
"merged_at": "2025-06-05T12:37:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7576.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7576"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7575/comments | https://api.github.com/repos/huggingface/datasets/issues/7575/events | https://github.com/huggingface/datasets/pull/7575 | 3,080,228,718 | PR_kwDODunzps6XD9gM | 7,575 | [MINOR:TYPO] Update save_to_disk docstring | {
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2025-05-21T13:22:24Z | 2025-06-05T12:39:13Z | 2025-06-05T12:39:13Z | CONTRIBUTOR | null | null | null | r/hub/filesystem in save_to_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7575/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7575/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7575.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7575",
"merged_at": "2025-06-05T12:39:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7575.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7575"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7574/comments | https://api.github.com/repos/huggingface/datasets/issues/7574/events | https://github.com/huggingface/datasets/issues/7574 | 3,079,641,072 | I_kwDODunzps63j5fw | 7,574 | Missing multilingual directions in IWSLT2017 dataset's processing script | {
"avatar_url": "https://avatars.githubusercontent.com/u/79297451?v=4",
"events_url": "https://api.github.com/users/andy-joy-25/events{/privacy}",
"followers_url": "https://api.github.com/users/andy-joy-25/followers",
"following_url": "https://api.github.com/users/andy-joy-25/following{/other_user}",
"gists_url": "https://api.github.com/users/andy-joy-25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andy-joy-25",
"id": 79297451,
"login": "andy-joy-25",
"node_id": "MDQ6VXNlcjc5Mjk3NDUx",
"organizations_url": "https://api.github.com/users/andy-joy-25/orgs",
"received_events_url": "https://api.github.com/users/andy-joy-25/received_events",
"repos_url": "https://api.github.com/users/andy-joy-25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andy-joy-25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andy-joy-25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andy-joy-25",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I have opened 2 PRs on the Hub: `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/7` and `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/8` to resolve this issue",
"cool ! I pinged the owners of the dataset on HF to merge your PRs :)"
] | 2025-05-21T09:53:17Z | 2025-05-26T18:36:38Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Hi,
Upon using `iwslt2017.py` in `IWSLT/iwslt2017` on the Hub for loading the datasets, I am unable to obtain the datasets for the language pairs `de-it`, `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` using it. These 6 pairs do not show up when using `get_dataset_config_names()` to obtain the list of all the configs present in `IWSLT/iwslt2017`. This should not be the case since as mentioned in their original paper (please see https://aclanthology.org/2017.iwslt-1.1.pdf), the authors specify that "_this year we proposed the multilingual translation between any pair of languages from {Dutch, English, German, Italian, Romanian}..._" and because these datasets are indeed present in `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip`.
Best Regards,
Anand
### Steps to reproduce the bug
Check the output of `get_dataset_config_names("IWSLT/iwslt2017", trust_remote_code=True)`: only 24 language pairs are present and the following 6 config names are absent: `iwslt2017-de-it`, `iwslt2017-de-ro`, `iwslt2017-de-nl`, `iwslt2017-it-de`, `iwslt2017-nl-de`, and `iwslt2017-ro-de`.
### Expected behavior
The aforementioned 6 language pairs should also be present and hence, all these 6 language pairs' IWSLT2017 datasets must also be available for further use.
I would suggest removing `de` from the `BI_LANGUAGES` list and moving it over to the `MULTI_LANGUAGES` list instead in `iwslt2017.py` to account for all the 6 missing language pairs (the same `de-en` dataset is present in both `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip` and `data/2017-01-trnted/texts/de/en/de-en.zip` but the `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` datasets are only present in `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip`: so, its unclear why the following comment: _`# XXX: Artificially removed DE from here, as it also exists within bilingual data`_ has been added as `L71` in `iwslt2017.py`). The `README.md` file in `IWSLT/iwslt2017`must then be re-created using `datasets-cli test path/to/iwslt2017.py --save_info --all_configs` to pass all split size verification checks for the 6 new language pairs which were previously non-existent.
### Environment info
- `datasets` version: 3.5.0
- Platform: Linux-6.8.0-56-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.30.1
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7574/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7574/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7573 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7573/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7573/comments | https://api.github.com/repos/huggingface/datasets/issues/7573/events | https://github.com/huggingface/datasets/issues/7573 | 3,076,415,382 | I_kwDODunzps63Xl-W | 7,573 | No Samsum dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17688220?v=4",
"events_url": "https://api.github.com/users/IgorKasianenko/events{/privacy}",
"followers_url": "https://api.github.com/users/IgorKasianenko/followers",
"following_url": "https://api.github.com/users/IgorKasianenko/following{/other_user}",
"gists_url": "https://api.github.com/users/IgorKasianenko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/IgorKasianenko",
"id": 17688220,
"login": "IgorKasianenko",
"node_id": "MDQ6VXNlcjE3Njg4MjIw",
"organizations_url": "https://api.github.com/users/IgorKasianenko/orgs",
"received_events_url": "https://api.github.com/users/IgorKasianenko/received_events",
"repos_url": "https://api.github.com/users/IgorKasianenko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/IgorKasianenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IgorKasianenko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/IgorKasianenko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"According to the following https://huggingface.co/posts/seawolf2357/424129432408590, as of now the dataset seems to be inaccessible.\n\n@IgorKasianenko, would https://huggingface.co/datasets/knkarthick/samsum suffice for your purpose?\n",
"Thanks @SP1029 for the update!\nThat will work for now, using it as replacement. Is there a officially recommended way to maintain the CC licensed dataset under the organization account? \nFeel free to close this issue",
"> Is there an officially recommended way to maintain a CC-licensed dataset under an organizational account?\n\n@IgorKasianenko, apologies, this is not my area of expertise.\n\n> Please feel free to close this issue.\n\nI have limited access and may not be able to do that. Since you opened it, you would be able to close it."
] | 2025-05-20T09:54:35Z | 2025-06-18T12:52:23Z | 2025-06-18T12:52:23Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
https://huggingface.co/datasets/Samsung/samsum dataset not found error 404
Originated from https://github.com/meta-llama/llama-cookbook/issues/948
### Steps to reproduce the bug
go to website https://huggingface.co/datasets/Samsung/samsum
see the error
also downloading it with python throws
```
Couldn't find 'Samsung/samsum' on the Hugging Face Hub either: FileNotFoundError: Samsung/samsum@f00baf5a7d4abfec6820415493bcb52c587788e6/samsum.py (repository not found)
```
### Expected behavior
Dataset exists
### Environment info
```
- `datasets` version: 3.2.0
- Platform: macOS-15.4.1-arm64-arm-64bit
- Python version: 3.12.2
- `huggingface_hub` version: 0.26.5
- PyArrow version: 16.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/17688220?v=4",
"events_url": "https://api.github.com/users/IgorKasianenko/events{/privacy}",
"followers_url": "https://api.github.com/users/IgorKasianenko/followers",
"following_url": "https://api.github.com/users/IgorKasianenko/following{/other_user}",
"gists_url": "https://api.github.com/users/IgorKasianenko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/IgorKasianenko",
"id": 17688220,
"login": "IgorKasianenko",
"node_id": "MDQ6VXNlcjE3Njg4MjIw",
"organizations_url": "https://api.github.com/users/IgorKasianenko/orgs",
"received_events_url": "https://api.github.com/users/IgorKasianenko/received_events",
"repos_url": "https://api.github.com/users/IgorKasianenko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/IgorKasianenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IgorKasianenko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/IgorKasianenko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7573/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7573/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7572/comments | https://api.github.com/repos/huggingface/datasets/issues/7572/events | https://github.com/huggingface/datasets/pull/7572 | 3,074,529,251 | PR_kwDODunzps6WwsZB | 7,572 | Fixed typos | {
"avatar_url": "https://avatars.githubusercontent.com/u/47208659?v=4",
"events_url": "https://api.github.com/users/TopCoder2K/events{/privacy}",
"followers_url": "https://api.github.com/users/TopCoder2K/followers",
"following_url": "https://api.github.com/users/TopCoder2K/following{/other_user}",
"gists_url": "https://api.github.com/users/TopCoder2K/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TopCoder2K",
"id": 47208659,
"login": "TopCoder2K",
"node_id": "MDQ6VXNlcjQ3MjA4NjU5",
"organizations_url": "https://api.github.com/users/TopCoder2K/orgs",
"received_events_url": "https://api.github.com/users/TopCoder2K/received_events",
"repos_url": "https://api.github.com/users/TopCoder2K/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TopCoder2K/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TopCoder2K/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TopCoder2K",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"@lhoestq, mentioning in case you haven't seen this PR. The contribution is very small and easy to check :)"
] | 2025-05-19T17:16:59Z | 2025-06-05T12:25:42Z | 2025-06-05T12:25:41Z | CONTRIBUTOR | null | null | null | More info: [comment](https://github.com/huggingface/datasets/pull/7564#issuecomment-2863391781). | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7572/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7572/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7572.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7572",
"merged_at": "2025-06-05T12:25:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7572.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7572"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7571 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7571/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7571/comments | https://api.github.com/repos/huggingface/datasets/issues/7571/events | https://github.com/huggingface/datasets/pull/7571 | 3,074,116,942 | PR_kwDODunzps6WvRqi | 7,571 | fix string_to_dict test | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7571). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-19T14:49:23Z | 2025-05-19T14:52:24Z | 2025-05-19T14:49:28Z | MEMBER | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7571/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7571/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7571.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7571",
"merged_at": "2025-05-19T14:49:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7571.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7571"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7570/comments | https://api.github.com/repos/huggingface/datasets/issues/7570/events | https://github.com/huggingface/datasets/issues/7570 | 3,065,966,529 | I_kwDODunzps62vu_B | 7,570 | Dataset lib seems to broke after fssec lib update | {
"avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4",
"events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}",
"followers_url": "https://api.github.com/users/sleepingcat4/followers",
"following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}",
"gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sleepingcat4",
"id": 81933585,
"login": "sleepingcat4",
"node_id": "MDQ6VXNlcjgxOTMzNTg1",
"organizations_url": "https://api.github.com/users/sleepingcat4/orgs",
"received_events_url": "https://api.github.com/users/sleepingcat4/received_events",
"repos_url": "https://api.github.com/users/sleepingcat4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sleepingcat4",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi, can you try updating `datasets` ? Colab still installs `datasets` 2.x by default, instead of 3.x\n\nIt would be cool to also report this to google colab, they have a GitHub repo for this IIRC",
"@lhoestq I have updated it to `datasets==3.6.0` and now there's an entirely different issue on colab while locally its fine. \n\n```\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_auth.py:94: UserWarning: \nThe secret `HF_TOKEN` does not exist in your Colab secrets.\nTo authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.\nYou will be able to reuse this secret in all of your notebooks.\nPlease note that authentication is recommended but still optional to access public models or datasets.\n warnings.warn(\nREADME.md: 100%\n 2.88k/2.88k [00:00<00:00, 166kB/s]\nsuno.jsonl.zst: 100%\n 221M/221M [00:05<00:00, 48.6MB/s]\nGenerating train split: \n 18633/0 [00:01<00:00, 13018.92 examples/s]\n---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\n[/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)\n 1870 try:\n-> 1871 writer.write_table(table)\n 1872 except CastError as cast_error:\n\n17 frames\nTypeError: Couldn't cast array of type\nstruct<id: string, type: string, infill: bool, source: string, continue_at: double, infill_dur_s: double, infill_end_s: double, infill_start_s: double, include_future_s: double, include_history_s: double, infill_context_end_s: double, infill_context_start_s: int64>\nto\n{'id': Value(dtype='string', id=None), 'type': Value(dtype='string', id=None), 'infill': Value(dtype='bool', id=None), 'source': Value(dtype='string', id=None), 'continue_at': Value(dtype='float64', id=None), 'include_history_s': Value(dtype='float64', id=None)}\n\nThe above exception was the direct cause of the following exception:\n\nDatasetGenerationError Traceback (most recent call last)\n[/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)\n 1896 if isinstance(e, DatasetGenerationError):\n 1897 raise\n-> 1898 raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\n 1899 \n 1900 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)\n\nDatasetGenerationError: An error occurred while generating the dataset\n```",
"@lhoestq opps sorry the dataset was in .zst which was causing this error rather than being a datasets library fault. After upgrading dataset version Colab is working fine. "
] | 2025-05-15T11:45:06Z | 2025-06-13T00:44:27Z | 2025-06-13T00:44:27Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
I am facing an issue since today where HF's dataset is acting weird and in some instances failure to recognise a valid dataset entirely, I think it is happening due to recent change in `fsspec` lib as using this command fixed it for me in one-time: `!pip install -U datasets huggingface_hub fsspec`
### Steps to reproduce the bug
from datasets import load_dataset
def download_hf():
dataset_name = input("Enter the dataset name: ")
subset_name = input("Enter subset name: ")
ds = load_dataset(dataset_name, name=subset_name)
for split in ds:
ds[split].to_pandas().to_csv(f"{subset_name}.csv", index=False)
download_hf()
### Expected behavior
```
Downloading readme: 100%
1.55k/1.55k [00:00<00:00, 121kB/s]
Downloading data files: 100%
1/1 [00:00<00:00, 2.06it/s]
Downloading data: 0%| | 0.00/54.2k [00:00<?, ?B/s]
Downloading data: 100%|██████████| 54.2k/54.2k [00:00<00:00, 121kB/s]
Extracting data files: 100%
1/1 [00:00<00:00, 35.17it/s]
Generating test split:
140/0 [00:00<00:00, 2628.62 examples/s]
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
[<ipython-input-2-12ab305b0e77>](https://localhost:8080/#) in <cell line: 0>()
8 ds[split].to_pandas().to_csv(f"{subset_name}.csv", index=False)
9
---> 10 download_hf()
2 frames
[/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)
1171 is_local = not is_remote_filesystem(self._fs)
1172 if not is_local:
-> 1173 raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
1174 if not os.path.exists(self._output_dir):
1175 raise FileNotFoundError(
NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
```
OR
```
Traceback (most recent call last):
File "e:\Fuck\download-data\mcq_dataset.py", line 10, in <module>
download_hf()
File "e:\Fuck\download-data\mcq_dataset.py", line 6, in download_hf
ds = load_dataset(dataset_name, name=subset_name)
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 2606, in load_dataset
builder_instance = load_dataset_builder(
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 2277, in load_dataset_builder
dataset_module = dataset_module_factory(
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 1917, in dataset_module_factory
raise e1 from None
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 1867, in dataset_module_factory
raise DatasetNotFoundError(f"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.") from e
datasets.exceptions.DatasetNotFoundError: Dataset 'dataset repo_id' doesn't exist on the Hub or cannot be accessed.
```
### Environment info
colab and 3.10 local system | {
"avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4",
"events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}",
"followers_url": "https://api.github.com/users/sleepingcat4/followers",
"following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}",
"gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sleepingcat4",
"id": 81933585,
"login": "sleepingcat4",
"node_id": "MDQ6VXNlcjgxOTMzNTg1",
"organizations_url": "https://api.github.com/users/sleepingcat4/orgs",
"received_events_url": "https://api.github.com/users/sleepingcat4/received_events",
"repos_url": "https://api.github.com/users/sleepingcat4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sleepingcat4",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7570/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7570/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7569/comments | https://api.github.com/repos/huggingface/datasets/issues/7569/events | https://github.com/huggingface/datasets/issues/7569 | 3,061,234,054 | I_kwDODunzps62drmG | 7,569 | Dataset creation is broken if nesting a dict inside a dict inside a list | {
"avatar_url": "https://avatars.githubusercontent.com/u/25732590?v=4",
"events_url": "https://api.github.com/users/TimSchneider42/events{/privacy}",
"followers_url": "https://api.github.com/users/TimSchneider42/followers",
"following_url": "https://api.github.com/users/TimSchneider42/following{/other_user}",
"gists_url": "https://api.github.com/users/TimSchneider42/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TimSchneider42",
"id": 25732590,
"login": "TimSchneider42",
"node_id": "MDQ6VXNlcjI1NzMyNTkw",
"organizations_url": "https://api.github.com/users/TimSchneider42/orgs",
"received_events_url": "https://api.github.com/users/TimSchneider42/received_events",
"repos_url": "https://api.github.com/users/TimSchneider42/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TimSchneider42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TimSchneider42/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TimSchneider42",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! That's because Séquence is a type that comes from tensorflow datasets and inverts lists and focus when doing Séquence(dict).\n\nInstead you should use a list. In your case\n```python\nfeatures = Features({\n \"a\": [{\"b\": {\"c\": Value(\"string\")}}]\n})\n```",
"Hi,\n\nThanks for the swift reply! Could you quickly clarify a couple of points?\n\n1. Is there any benefit in using Sequence over normal lists? Especially for longer lists (in my case, up to 256 entries)\n2. When exactly can I use Sequence? If there is a maximum of one level of dictionaries inside, then it's always fine?\n3. When creating the data in the generator, do I need to swap lists and dicts manually, or does that happen automatically?\n\nAlso, the documentation does not seem to mention this limitation of the Sequence type anywhere and encourages users to use it [here](https://huggingface.co/docs/datasets/en/about_dataset_features). In fact, I did not even know that just using a Python list was an option. Maybe the documentation can be improved to mention the limitations of Sequence and highlight that lists can be used instead.\n\nThanks a lot in advance!\n\nBest,\nTim"
] | 2025-05-13T21:06:45Z | 2025-05-20T19:25:15Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Hey,
I noticed that the creation of datasets with `Dataset.from_generator` is broken if dicts and lists are nested in a certain way and a schema is being passed. See below for details.
Best,
Tim
### Steps to reproduce the bug
Runing this code:
```python
from datasets import Dataset, Features, Sequence, Value
def generator():
yield {
"a": [{"b": {"c": 0}}],
}
features = Features(
{
"a": Sequence(
feature={
"b": {
"c": Value("int32"),
},
},
length=1,
)
}
)
dataset = Dataset.from_generator(generator, features=features)
```
leads to
```
Generating train split: 1 examples [00:00, 540.85 examples/s]
Traceback (most recent call last):
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1635, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 657, in finalize
self.write_examples_on_file()
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 510, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 629, in write_batch
pa_table = pa.Table.from_arrays(arrays, schema=schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 4851, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1608, in pyarrow.lib._sanitize_arrays
File "pyarrow/array.pxi", line 399, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 1004, in pyarrow.lib.Array.cast
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/pyarrow/compute.py", line 405, in cast
return call_function("cast", [arr], options, memory_pool)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/_compute.pyx", line 598, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 393, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from fixed_size_list<item: struct<c: int32>>[1] to struct using function cast_struct
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/user/test/tools/hf_test2.py", line 23, in <module>
dataset = Dataset.from_generator(generator, features=features)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1114, in from_generator
).read()
^^^^^^
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/io/generator.py", line 49, in read
self.builder.download_and_prepare(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 925, in download_and_prepare
self._download_and_prepare(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1649, in _download_and_prepare
super()._download_and_prepare(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1001, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1487, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1644, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
Process finished with exit code 1
```
### Expected behavior
I expected this code not to lead to an error.
I have done some digging and figured out that the problem seems to be the `get_nested_type` function in `features.py`, which, for whatever reason, flips Sequences and dicts whenever it encounters a dict inside of a sequence. This seems to be necessary, as disabling that flip leads to another error. However, by keeping that flip enabled for the highest level and disabling it for all subsequent levels, I was able to work around this problem. Specifically, by patching `get_nested_type` as follows, it works on the given example (emphasis on the `level` parameter I added):
```python
def get_nested_type(schema: FeatureType, level=0) -> pa.DataType:
"""
get_nested_type() converts a datasets.FeatureType into a pyarrow.DataType, and acts as the inverse of
generate_from_arrow_type().
It performs double-duty as the implementation of Features.type and handles the conversion of
datasets.Feature->pa.struct
"""
# Nested structures: we allow dict, list/tuples, sequences
if isinstance(schema, Features):
return pa.struct(
{key: get_nested_type(schema[key], level = level + 1) for key in schema}
) # Features is subclass of dict, and dict order is deterministic since Python 3.6
elif isinstance(schema, dict):
return pa.struct(
{key: get_nested_type(schema[key], level = level + 1) for key in schema}
) # however don't sort on struct types since the order matters
elif isinstance(schema, (list, tuple)):
if len(schema) != 1:
raise ValueError("When defining list feature, you should just provide one example of the inner type")
value_type = get_nested_type(schema[0], level = level + 1)
return pa.list_(value_type)
elif isinstance(schema, LargeList):
value_type = get_nested_type(schema.feature, level = level + 1)
return pa.large_list(value_type)
elif isinstance(schema, Sequence):
value_type = get_nested_type(schema.feature, level = level + 1)
# We allow to reverse list of dict => dict of list for compatibility with tfds
if isinstance(schema.feature, dict) and level == 1:
data_type = pa.struct({f.name: pa.list_(f.type, schema.length) for f in value_type})
else:
data_type = pa.list_(value_type, schema.length)
return data_type
# Other objects are callable which returns their data type (ClassLabel, Array2D, Translation, Arrow datatype creation methods)
return schema()
```
I have honestly no idea what I am doing here, so this might produce other issues for different inputs.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.35
- Python version: 3.11.11
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
Also tested it with 3.5.0, same result. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7569/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7569/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7568/comments | https://api.github.com/repos/huggingface/datasets/issues/7568/events | https://github.com/huggingface/datasets/issues/7568 | 3,060,515,257 | I_kwDODunzps62a8G5 | 7,568 | `IterableDatasetDict.map()` call removes `column_names` (in fact info.features) | {
"avatar_url": "https://avatars.githubusercontent.com/u/7893763?v=4",
"events_url": "https://api.github.com/users/mombip/events{/privacy}",
"followers_url": "https://api.github.com/users/mombip/followers",
"following_url": "https://api.github.com/users/mombip/following{/other_user}",
"gists_url": "https://api.github.com/users/mombip/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mombip",
"id": 7893763,
"login": "mombip",
"node_id": "MDQ6VXNlcjc4OTM3NjM=",
"organizations_url": "https://api.github.com/users/mombip/orgs",
"received_events_url": "https://api.github.com/users/mombip/received_events",
"repos_url": "https://api.github.com/users/mombip/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mombip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mombip/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mombip",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! IterableDataset doesn't know what's the output of the function you pass to map(), so it's not possible to know in advance the features of the output dataset.\n\nThere is a workaround though: either do `ds = ds.map(..., features=features)`, or you can do `ds = ds._resolve_features()` which iterates on the first rows to infer the dataset features.",
"Thank you. I understand that “IterableDataset doesn't know what's the output of the function”—that’s true, but:\n\nUnfortunately, the workaround you proposed **doesn’t solve** the problem. `ds.map()` is called multiple times by third-party code (i.e. `SFTTrainer`). To apply your approach, I would have to modify external library code. That’s why I decided to patch the _class_ rather than update `dataset` _objects_ (in fact, updating the object after `map()` was my initial approach, but then I realized I’m not the only one mapping an already-mapped dataset.)\n\nAs a user, I expected that after mapping I would get a new dataset with the correct column names. If, for some reason, that can’t be the default behavior, I would expect an argument—i.e. `auto_resolve_features: bool = False` — to control how my dataset is mapped if following mapping operation are called.\n\nIt’s also problematic that `column_names` are tied to `features`, which is even more confusing and forces you to inspect the source code to understand what’s going on.\n\n**New version of workaround:**\n```python\ndef patch_iterable_dataset_map():\n _orig_map = IterableDataset.map\n\n def _patched_map(self, *args, **kwargs):\n ds = _orig_map(self, *args, **kwargs)\n return ds._resolve_features()\n\n IterableDataset.map = _patched_map\n```",
"I see, maybe `.resolve_features()` should be called by default in this case in the SFTTrainer ? (or pass `features=` if the data processing always output the same features)\n\nWe can even support a new parameter `features=\"infer\"` if it would be comfortable to not use internal methods in SFTTrainer",
"I think most straightforward solution would be to reinitialize `features` from data after mapping if `feature` argument is not passed. I hink it is more intuitive behavior than just cleaning features. There is also problem in usage `.resolve_features()` in this context. I observed that it leads to `_head()` method execution and it then causes that 5 batches from dataset are iterated (`_head()` defaults to 5 batches). \nI'm not sure how it influences whole process. Are those 5 batches (in my case it's 5000 rows) used only to find `features`. Does final training/eval process \"see\" this items? How it affects IterableDataset state (current position)?",
"I checked the source code and while it indeed iterates on the first 5 rows. As a normal iteration, it does record the state in case you call `.state_dict()`, but it doesn't change the starting state. The starting state is always the beginning of the dataset, unless it is explicitly set with `.load_state_dict()`. To be clear, if you iterate on the dataset after `._resolve_features()`, it will start from the beginning of the dataset (or from a state you manually pass using `.load_state_dict()`)",
"Hi!\nI’ve opened a PR #7658 to address this issue.\n\nThe fix ensures that info.features is only updated if features is not None, preventing accidental loss of schema and column_names.\nPlease let me know if you see any edge cases or have additional concerns!\nAlso, if a test is needed for this case, happy to discuss—the fix is small, but I can add one if the maintainers prefer.\n\nThanks everyone for the clear diagnosis and suggestions in this thread!"
] | 2025-05-13T15:45:42Z | 2025-06-30T09:33:47Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | When calling `IterableDatasetDict.map()`, each split’s `IterableDataset.map()` is invoked without a `features` argument. While omitting the argument isn’t itself incorrect, the implementation then sets `info.features = features`, which destroys the original `features` content. Since `IterableDataset.column_names` relies on `info.features`, it ends up broken (`None`).
**Reproduction**
1. Define an IterableDatasetDict with a non-None features schema.
2. my_iterable_dataset_dict contains "text" column.
3. Call:
```Python
new_dict = my_iterable_dataset_dict.map(
function=my_fn,
with_indices=False,
batched=True,
batch_size=16,
)
```
4. Observe
```Python
new_dict["train"].info.features # {'text': Value(dtype='string', id=None)}
new_dict["train"].column_names # ['text']
```
5. Call:
```Python
new_dict = my_iterable_dataset_dict.map(
function=my_fn,
with_indices=False,
batched=True,
batch_size=16,
remove_columns=["foo"]
)
```
6. Observe:
```Python
new_dict["train"].info.features # → None
new_dict["train"].column_names # → None
```
5. Internally, in dataset_dict.py this loop omits features ([code](https://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/dataset_dict.py#L2047C5-L2056C14)):
```Python
for split, dataset in self.items():
dataset_dict[split] = dataset.map(
function=function,
with_indices=with_indices,
input_columns=input_columns,
batched=batched,
batch_size=batch_size,
drop_last_batch=drop_last_batch,
remove_columns=remove_columns,
fn_kwargs=fn_kwargs,
# features omitted → defaults to None
)
```
7. Then inside IterableDataset.map() ([code](https://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2619C1-L2622C37)) correct `info.features` is replaced by features which is None:
```Python
info = self.info.copy()
info.features = features # features is None here
return IterableDataset(..., info=info, ...)
```
**Suggestion**
It looks like this replacement was added intentionally but maybe should be done only if `features` is `not None`.
**Workarround:**
`SFTTrainer` calls `dataset.map()` several times and then fails on `NoneType` when iterating `dataset.column_names`.
I decided to write this patch - works form me.
```python
def patch_iterable_dataset_map():
_orig_map = IterableDataset.map
def _patched_map(self, *args, **kwargs):
if "features" not in kwargs or kwargs["features"] is None:
kwargs["features"] = self.info.features
return _orig_map(self, *args, **kwargs)
IterableDataset.map = _patched_map
```
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7568/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7568/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7567/comments | https://api.github.com/repos/huggingface/datasets/issues/7567/events | https://github.com/huggingface/datasets/issues/7567 | 3,058,308,538 | I_kwDODunzps62ShW6 | 7,567 | interleave_datasets seed with multiple workers | {
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanasdf",
"id": 511073,
"login": "jonathanasdf",
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanasdf",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! It's already the case IIRC: the effective seed looks like `seed + worker_id`. Do you have a reproducible example ?",
"here is an example with shuffle\n\n```\nimport itertools\nimport datasets\nimport multiprocessing\nimport torch.utils.data\n\n\ndef gen(shard):\n worker_info = torch.utils.data.get_worker_info()\n for i in range(10):\n yield {'value': i, 'worker_id': worker_info.id}\n\n\ndef main():\n ds = datasets.IterableDataset.from_generator(gen, gen_kwargs={'shard': list(range(8))})\n ds = ds.shuffle(buffer_size=100, seed=1234)\n dataloader = torch.utils.data.DataLoader(ds, batch_size=None, num_workers=8)\n for i, ex in enumerate(itertools.islice(dataloader, 50)):\n print(i, ex)\n\n\nif __name__ == '__main__':\n multiprocessing.set_start_method('spawn')\n main()\n```\n\n```\npython test.py\n0 {'value': 8, 'worker_id': 0}\n1 {'value': 8, 'worker_id': 1}\n2 {'value': 8, 'worker_id': 2}\n3 {'value': 8, 'worker_id': 3}\n4 {'value': 8, 'worker_id': 4}\n5 {'value': 8, 'worker_id': 5}\n6 {'value': 8, 'worker_id': 6}\n7 {'value': 8, 'worker_id': 7}\n8 {'value': 9, 'worker_id': 0}\n9 {'value': 9, 'worker_id': 1}\n10 {'value': 9, 'worker_id': 2}\n11 {'value': 9, 'worker_id': 3}\n12 {'value': 9, 'worker_id': 4}\n13 {'value': 9, 'worker_id': 5}\n14 {'value': 9, 'worker_id': 6}\n15 {'value': 9, 'worker_id': 7}\n16 {'value': 5, 'worker_id': 0}\n17 {'value': 5, 'worker_id': 1}\n18 {'value': 5, 'worker_id': 2}\n19 {'value': 5, 'worker_id': 3}\n```",
"With `interleave_datasets`\n\n```\nimport itertools\nimport datasets\nimport multiprocessing\nimport torch.utils.data\n\n\ndef gen(shard, value):\n while True:\n yield {'value': value}\n\n\ndef main():\n ds = [\n datasets.IterableDataset.from_generator(gen, gen_kwargs={'shard': list(range(8)), 'value': i})\n for i in range(10)\n ]\n ds = datasets.interleave_datasets(ds, probabilities=[1 / len(ds)] * len(ds), seed=1234)\n dataloader = torch.utils.data.DataLoader(ds, batch_size=None, num_workers=8)\n for i, ex in enumerate(itertools.islice(dataloader, 50)):\n print(i, ex)\n\n\nif __name__ == '__main__':\n multiprocessing.set_start_method('spawn')\n main()\n```\n\n```\npython test.py\n0 {'value': 9}\n1 {'value': 9}\n2 {'value': 9}\n3 {'value': 9}\n4 {'value': 9}\n5 {'value': 9}\n6 {'value': 9}\n7 {'value': 9}\n8 {'value': 3}\n9 {'value': 3}\n10 {'value': 3}\n11 {'value': 3}\n12 {'value': 3}\n13 {'value': 3}\n14 {'value': 3}\n15 {'value': 3}\n16 {'value': 9}\n17 {'value': 9}\n18 {'value': 9}\n19 {'value': 9}\n20 {'value': 9}\n21 {'value': 9}\n22 {'value': 9}\n23 {'value': 9}\n```",
"Same results after updating to datasets 3.6.0.",
"Ah my bad, `shuffle()` uses a global effective seed which is something like `seed + epoch`, which is used to do the same shards shuffle in each worker so that each worker have a non-overlapping set of shards:\n\nhttps://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2102-L2111\n\nI think we should take into account the `worker_id` in a local seed for the buffer right after this line:\n\nhttps://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2151-L2153\n\nlike adding a new step that would propagate in the examples iterables or something like that:\n\n```python\nex_iterable = ex_iterable.shift_rngs(value=worker_id)\n```\n\nis this something you'd like to explore ? contributions on this subject are very welcome",
"Potentially, but busy. If anyone wants to take this up please feel free to, otherwise I may or may not revisit when I have free time.\n\nFor what it's worth I got around this with\n\n```\n\nclass SeedGeneratorWithWorkerIterable(iterable_dataset._BaseExamplesIterable):\n \"\"\"ExamplesIterable that seeds the rng with worker id.\"\"\"\n\n def __init__(\n self,\n ex_iterable: iterable_dataset._BaseExamplesIterable,\n generator: np.random.Generator,\n rank: int = 0,\n ):\n \"\"\"Constructor.\"\"\"\n super().__init__()\n self.ex_iterable = ex_iterable\n self.generator = generator\n self.rank = rank\n\n def _init_state_dict(self) -> dict:\n self._state_dict = self.ex_iterable._init_state_dict()\n return self._state_dict\n\n def __iter__(self):\n \"\"\"Data iterator.\"\"\"\n effective_seed = copy.deepcopy(self.generator).integers(0, 1 << 63) - self.rank\n effective_seed = (1 << 63) + effective_seed if effective_seed < 0 else effective_seed\n generator = np.random.default_rng(effective_seed)\n self.ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\n if self._state_dict:\n self._state_dict = self.ex_iterable._init_state_dict()\n yield from iter(self.ex_iterable)\n\n def shuffle_data_sources(self, generator):\n \"\"\"Shuffle data sources.\"\"\"\n ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\n return SeedGeneratorWithWorkerIterable(ex_iterable, generator=generator, rank=self.rank)\n\n def shard_data_sources(self, num_shards: int, index: int, contiguous=True): # noqa: FBT002\n \"\"\"Shard data sources.\"\"\"\n ex_iterable = self.ex_iterable.shard_data_sources(num_shards, index, contiguous=contiguous)\n return SeedGeneratorWithWorkerIterable(ex_iterable, generator=self.generator, rank=index)\n\n @property\n def is_typed(self):\n return self.ex_iterable.is_typed\n\n @property\n def features(self):\n return self.ex_iterable.features\n\n @property\n def num_shards(self) -> int:\n \"\"\"Number of shards.\"\"\"\n return self.ex_iterable.num_shards\n```",
"Thanks for the detailed insights!\n\nAfter reviewing the issue and the current implementation in `iterable_dataset.py`, I can confirm the cause:\n\nWhen using `interleave_datasets(..., seed=...)` with `num_workers > 1` (e.g. via `DataLoader`), the same RNG state is shared across workers — which leads to each worker producing identical sample sequences. This is because the seed is not modulated by `worker_id`, unlike the usual approach in `shuffle()` where seed is adjusted using the `epoch`.\n\nAs @lhoestq suggested, a proper fix would involve introducing something like:\n\n```python\nex_iterable = ex_iterable.shift_rngs(worker_id)\n```\n\n@jonathanasdf Also really appreciate the workaround implementation shared above — that was helpful to validate the behavior and will help shape the general solution."
] | 2025-05-12T22:38:27Z | 2025-06-29T06:53:59Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ### Describe the bug
Using interleave_datasets with multiple dataloader workers and a seed set causes the same dataset sampling order across all workers.
Should the seed be modulated with the worker id?
### Steps to reproduce the bug
See above
### Expected behavior
See above
### Environment info
- `datasets` version: 3.5.1
- Platform: macOS-15.4.1-arm64-arm-64bit
- Python version: 3.12.9
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7567/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7567/timeline | null | null | null | null | false |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.