url stringlengths 58 61 | repository_url stringclasses 1 value | labels_url stringlengths 72 75 | comments_url stringlengths 67 70 | events_url stringlengths 65 68 | html_url stringlengths 46 51 | id int64 599M 1.47B | node_id stringlengths 18 32 | number int64 1 5.33k | title stringlengths 1 276 | user dict | labels list | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees list | milestone dict | comments sequence | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | closed_at stringlengths 20 20 ⌀ | author_association stringclasses 3 values | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 0 228k ⌀ | reactions dict | timeline_url stringlengths 67 70 | performed_via_github_app null | state_reason stringclasses 3 values | is_pull_request bool 2 classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5331/comments | https://api.github.com/repos/huggingface/datasets/issues/5331/events | https://github.com/huggingface/datasets/pull/5331 | 1,473,146,738 | PR_kwDODunzps5EKDpr | 5,331 | Support for multiple configs in packaged modules via metadata yaml info | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5331). All of your documentation changes will be reflected on that endpoint."
] | 2022-12-02T16:43:44Z | 2022-12-02T18:01:31Z | null | CONTRIBUTOR | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5331.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5331",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5331.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5331"
} | will solve https://github.com/huggingface/datasets/issues/5209 and https://github.com/huggingface/datasets/issues/5151 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5331/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5331/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5329/comments | https://api.github.com/repos/huggingface/datasets/issues/5329/events | https://github.com/huggingface/datasets/pull/5329 | 1,471,999,125 | PR_kwDODunzps5EGK3y | 5,329 | Clarify imagefolder is for small datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5329). All of your documentation changes will be reflected on that endpoint.",
"I think it's also reasonable to add the same note to the AudioFolder decription",
"Thank you ! I think \"regular\" is more appropriate than \"small\". It can easily scale to a few thousands of images - just not millions x)",
"Replaced \"small\" with \"several thousand\" since what is considered \"regular\" and even \"small\" can be kind of vague!"
] | 2022-12-01T21:47:29Z | 2022-12-02T18:36:54Z | null | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5329.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5329",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5329.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5329"
} | Based on feedback from [here](https://github.com/huggingface/datasets/issues/5317#issuecomment-1334108824), this PR adds a note to the `imagefolder` loading and creating docs that `imagefolder` is designed for small scale image datasets. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5329/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5329/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5328/comments | https://api.github.com/repos/huggingface/datasets/issues/5328/events | https://github.com/huggingface/datasets/pull/5328 | 1,471,661,437 | PR_kwDODunzps5EFAyT | 5,328 | Fix docs building for main | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"EDIT\r\nAt least the docs for ~~main~~ PR branch are now built:\r\n- https://github.com/huggingface/datasets/actions/runs/3594847760/jobs/6053620813",
"Build documentation for main branch was triggered after this PR being merged: https://github.com/huggingface/datasets/actions/runs/3603370082/jobs/6071482470"
] | 2022-12-01T17:07:45Z | 2022-12-02T16:29:00Z | 2022-12-02T16:26:00Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5328.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5328",
"merged_at": "2022-12-02T16:26:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5328.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5328"
} | This PR reverts the triggering event for building documentation introduced by:
- #5250
Fix #5326. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5328/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5328/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5327/comments | https://api.github.com/repos/huggingface/datasets/issues/5327/events | https://github.com/huggingface/datasets/pull/5327 | 1,471,657,247 | PR_kwDODunzps5EE_3Q | 5,327 | Avoid unwanted behaviour when splits from script and metadata are not matching because of outdated metadata | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5327). All of your documentation changes will be reflected on that endpoint."
] | 2022-12-01T17:05:23Z | 2022-12-01T17:41:02Z | null | CONTRIBUTOR | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5327.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5327",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5327.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5327"
} | will fix #5315 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5327/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5327/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5326/comments | https://api.github.com/repos/huggingface/datasets/issues/5326/events | https://github.com/huggingface/datasets/issues/5326 | 1,471,634,168 | I_kwDODunzps5Xt1r4 | 5,326 | No documentation for main branch is built | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-12-01T16:50:58Z | 2022-12-02T16:26:01Z | 2022-12-02T16:26:01Z | MEMBER | null | null | null | Since:
- #5250
- Commit: 703b84311f4ead83c7f79639f2dfa739295f0be6
the docs for the main branch are no longer built.
The change introduced triggers the docs build only for releases. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5326/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5326/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5325/comments | https://api.github.com/repos/huggingface/datasets/issues/5325/events | https://github.com/huggingface/datasets/issues/5325 | 1,471,536,822 | I_kwDODunzps5Xtd62 | 5,325 | map(...batch_size=None) for IterableDataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"events_url": "https://api.github.com/users/frankier/events{/privacy}",
"followers_url": "https://api.github.com/users/frankier/followers",
"following_url": "https://api.github.com/users/frankier/following{/other_user}",
"gists_url": "https://api.github.com/users/frankier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/frankier",
"id": 299380,
"login": "frankier",
"node_id": "MDQ6VXNlcjI5OTM4MA==",
"organizations_url": "https://api.github.com/users/frankier/orgs",
"received_events_url": "https://api.github.com/users/frankier/received_events",
"repos_url": "https://api.github.com/users/frankier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/frankier"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | open | false | null | [] | null | [
"Hi! I agree it makes sense for `IterableDataset.map` to support the `batch_size=None` case. This should be super easy to fix."
] | 2022-12-01T15:43:42Z | 2022-12-01T17:37:03Z | null | CONTRIBUTOR | null | null | null | ### Feature request
`Dataset.map(...)` allows `batch_size` to be `None`. It would be nice if `IterableDataset.map` did too.
### Motivation
Although it may seem a bit of a spurious request given that `IterableDataset` is meant for larger-than-memory datasets, there are a couple of reasons why this might be nice.
One is that `load_dataset(...)` can return either `IterableDataset` or `Dataset`, so mypy will complain about `batch_size=None` even if we know it is a `Dataset`. Of course we can do `assert isinstance(d, datasets.DatasetDict)`, but it is a mild inconvenience. What's more annoying is that whenever we use something like e.g. `combine_datasets(...)`, we end up with the union again, and so have to do the assert again.
Another is that we could actually end up with an IterableDataset small enough for memory in normal/correct usage, e.g. by filtering a massive IterableDataset.
For practical usages, an alternative to this would be to convert from an iterable dataset to a map-style dataset, but it is not obvious how to do this.
### Your contribution
Not this time. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5325/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5325/timeline | null | null | false |
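For context, a minimal sketch (not from the issue itself) of the two points above: the `isinstance` narrowing that the union return type forces, and one assumed way to materialize a small filtered stream with `Dataset.from_generator` so that `map(..., batch_size=None)` becomes available:

```python
from datasets import Dataset, IterableDataset, load_dataset

ds = load_dataset("imdb", split="train", streaming=True)  # typed as a union

if isinstance(ds, IterableDataset):
    # Materialize a small filtered stream as a map-style Dataset; after this,
    # batch_size=None (process everything as a single batch) is accepted.
    small = ds.filter(lambda ex: ex["label"] == 1).take(1000)
    ds = Dataset.from_generator(lambda: iter(small))

processed = ds.map(lambda batch: batch, batched=True, batch_size=None)
```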
https://api.github.com/repos/huggingface/datasets/issues/5324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5324/comments | https://api.github.com/repos/huggingface/datasets/issues/5324/events | https://github.com/huggingface/datasets/issues/5324 | 1,471,524,512 | I_kwDODunzps5Xta6g | 5,324 | Fix docstrings and types in documentation that appears on the website | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | open | false | null | [] | null | [
"I agree we have a mess with docstrings..."
] | 2022-12-01T15:34:53Z | 2022-12-01T16:35:36Z | null | CONTRIBUTOR | null | null | null | While I was working on https://github.com/huggingface/datasets/pull/5313 I've noticed that we have a mess in how we annotate types and format args and return values in the code. And some of it is displayed in the [Reference section](https://huggingface.co/docs/datasets/package_reference/builder_classes) of the documentation on the website.
Would be nice someday, maybe before releasing datasets 3.0.0, to unify it...... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5324/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5324/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5323/comments | https://api.github.com/repos/huggingface/datasets/issues/5323/events | https://github.com/huggingface/datasets/issues/5323 | 1,471,518,803 | I_kwDODunzps5XtZhT | 5,323 | Duplicated Keys in Taskmaster-2 Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/52380283?v=4",
"events_url": "https://api.github.com/users/liaeh/events{/privacy}",
"followers_url": "https://api.github.com/users/liaeh/followers",
"following_url": "https://api.github.com/users/liaeh/following{/other_user}",
"gists_url": "https://api.github.com/users/liaeh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/liaeh",
"id": 52380283,
"login": "liaeh",
"node_id": "MDQ6VXNlcjUyMzgwMjgz",
"organizations_url": "https://api.github.com/users/liaeh/orgs",
"received_events_url": "https://api.github.com/users/liaeh/received_events",
"repos_url": "https://api.github.com/users/liaeh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/liaeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liaeh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/liaeh"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @liaeh.\r\n\r\nWe are having a look at it. ",
"I have transferred the discussion to the Community tab of the dataset: https://huggingface.co/datasets/taskmaster2/discussions/1"
] | 2022-12-01T15:31:06Z | 2022-12-01T16:26:06Z | 2022-12-01T16:26:06Z | NONE | null | null | null | ### Describe the bug
Loading certain domain configurations of the taskmaster-2 dataset fails with a `DuplicatedKeysError`. This occurs for the following domains: `'hotels', 'movies', 'music', 'sports'`. The domains `'flights', 'food-ordering', 'restaurant-search'` load fine.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("taskmaster2", "music")
```
Output:
```
---------------------------------------------------------------------------
DuplicatedKeysError Traceback (most recent call last)
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1532, in GeneratorBasedBuilder._prepare_split_single(self, arg)
   1531 example = self.info.features.encode_example(record) if self.info.features is not None else record
-> 1532 writer.write(example, key)
   1533 num_examples_progress_update += 1

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:475, in ArrowWriter.write(self, example, key, writer_batch_size)
    474 if self._check_duplicates:
--> 475     self.check_duplicate_keys()
    476     # Re-intializing to empty list for next batch

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:492, in ArrowWriter.check_duplicate_keys(self)
    486 duplicate_key_indices = [
    487     str(self._num_examples + index)
    488     for index, (duplicate_hash, _) in enumerate(self.hkey_record)
    489     if duplicate_hash == hash
    490 ]
--> 492 raise DuplicatedKeysError(key, duplicate_key_indices)
    493 else:

DuplicatedKeysError: Found multiple examples generated with the same key
The examples at index 858, 859 have the key dlg-89174425-d57a-4db7-a92b-165c3bff6735

During handling of the above exception, another exception occurred:

DuplicatedKeysError Traceback (most recent call last)
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1541, in GeneratorBasedBuilder._prepare_split_single(self, arg)
   1540 num_shards = shard_id + 1
-> 1541 num_examples, num_bytes = writer.finalize()
   1542 writer.close()

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:563, in ArrowWriter.finalize(self, close_stream)
    562 if self._check_duplicates:
--> 563     self.check_duplicate_keys()
    564     # Re-intializing to empty list for next batch

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:492, in ArrowWriter.check_duplicate_keys(self)
    486 duplicate_key_indices = [
    487     str(self._num_examples + index)
    488     for index, (duplicate_hash, _) in enumerate(self.hkey_record)
    489     if duplicate_hash == hash
    490 ]
--> 492 raise DuplicatedKeysError(key, duplicate_key_indices)
    493 else:

DuplicatedKeysError: Found multiple examples generated with the same key
The examples at index 858, 859 have the key dlg-89174425-d57a-4db7-a92b-165c3bff6735

The above exception was the direct cause of the following exception:

DatasetGenerationError Traceback (most recent call last)
Cell In[23], line 1
----> 1 dataset = load_dataset("taskmaster2", "music")

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py:1741, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs)
   1738 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
   1740 # Download and prepare data
-> 1741 builder_instance.download_and_prepare(
   1742     download_config=download_config,
   1743     download_mode=download_mode,
   1744     ignore_verifications=ignore_verifications,
   1745     try_from_hf_gcs=try_from_hf_gcs,
   1746     use_auth_token=use_auth_token,
   1747     num_proc=num_proc,
   1748 )
   1750 # Build dataset for splits
   1751 keep_in_memory = (
   1752     keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
   1753 )

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:822, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
    820 if num_proc is not None:
    821     prepare_split_kwargs["num_proc"] = num_proc
--> 822 self._download_and_prepare(
    823     dl_manager=dl_manager,
    824     verify_infos=verify_infos,
    825     **prepare_split_kwargs,
    826     **download_and_prepare_kwargs,
    827 )
    828 # Sync info
    829 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1555, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs)
   1554 def _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs):
-> 1555     super()._download_and_prepare(
   1556         dl_manager, verify_infos, check_duplicate_keys=verify_infos, **prepare_splits_kwargs
   1557     )

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:913, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    909 split_dict.add(split_generator.split_info)
    911 try:
    912     # Prepare split will record examples associated to the split
--> 913     self._prepare_split(split_generator, **prepare_split_kwargs)
    914 except OSError as e:
    915     raise OSError(
    916         "Cannot find data file. "
    917         + (self.manual_download_instructions or "")
    918         + "\nOriginal error:\n"
    919         + str(e)
    920     ) from None

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1396, in GeneratorBasedBuilder._prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)
   1394 gen_kwargs = split_generator.gen_kwargs
   1395 job_id = 0
-> 1396 for job_id, done, content in self._prepare_split_single(
   1397     {"gen_kwargs": gen_kwargs, "job_id": job_id, **_prepare_split_args}
   1398 ):
   1399     if done:
   1400         result = content

File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1550, in GeneratorBasedBuilder._prepare_split_single(self, arg)
   1548 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
   1549     e = e.__context__
-> 1550 raise DatasetGenerationError("An error occurred while generating the dataset") from e
   1552 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
Loads the dataset
### Environment info
- `datasets` version: 2.7.1
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5323/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5323/timeline | null | completed | false |
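For background, the check that fires in this traceback is datasets' keyed-example protocol: a builder's `_generate_examples` yields `(key, example)` pairs, and `ArrowWriter.check_duplicate_keys` raises when a key repeats. A toy illustration of detecting such duplicate keys up front (not the taskmaster2 script itself):

```python
from collections import Counter

# Keys as a builder would yield them; the repeated dialog id mirrors the report
keys = [
    "dlg-a",
    "dlg-b",
    "dlg-89174425-d57a-4db7-a92b-165c3bff6735",
    "dlg-89174425-d57a-4db7-a92b-165c3bff6735",
]

duplicates = [k for k, n in Counter(keys).items() if n > 1]
print(duplicates)  # ['dlg-89174425-d57a-4db7-a92b-165c3bff6735']
```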
https://api.github.com/repos/huggingface/datasets/issues/5322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5322/comments | https://api.github.com/repos/huggingface/datasets/issues/5322/events | https://github.com/huggingface/datasets/pull/5322 | 1,471,502,162 | PR_kwDODunzps5EEeQP | 5,322 | Raise error for simple `.tar` archives in the same way as for `.tar.gz` and `.gz` | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5322). All of your documentation changes will be reflected on that endpoint."
] | 2022-12-01T15:19:28Z | 2022-12-01T15:24:40Z | null | CONTRIBUTOR | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5322.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5322",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5322.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5322"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5322/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5322/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5321/comments | https://api.github.com/repos/huggingface/datasets/issues/5321/events | https://github.com/huggingface/datasets/pull/5321 | 1,471,430,667 | PR_kwDODunzps5EEOhE | 5,321 | Fix loading from HF GCP cache | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Do you know why this stopped working?\r\n\r\nIt comes from the changes in https://github.com/huggingface/datasets/pull/5107/files#diff-355ae5c229f95f86895404b72378ecd6e966c41cbeebb674af6fe6e9611bc126"
] | 2022-12-01T14:39:06Z | 2022-12-01T16:10:09Z | 2022-12-01T16:07:02Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5321.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5321",
"merged_at": "2022-12-01T16:07:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5321.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5321"
} | As reported in https://discuss.huggingface.co/t/error-loading-wikipedia-dataset/26599/4 it's not possible to download a cached version of Wikipedia from the HF GCP cache
I fixed it and added an integration test (runs in 10sec) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5321/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5321/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5320/comments | https://api.github.com/repos/huggingface/datasets/issues/5320/events | https://github.com/huggingface/datasets/pull/5320 | 1,471,360,910 | PR_kwDODunzps5ED_UQ | 5,320 | [Extract] Place the lock file next to the destination directory | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-01T13:55:49Z | 2022-12-01T15:36:44Z | 2022-12-01T15:33:58Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5320.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5320",
"merged_at": "2022-12-01T15:33:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5320.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5320"
} | Previously it was placed next to the archive to extract, but the archive can be in a read-only directory as noticed in https://github.com/huggingface/datasets/issues/5295
Therefore I moved the lock file next to the destination directory, which must be writable anyway | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5320/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5320/timeline | null | null | true |
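As a rough sketch of the placement change described in the PR body (illustrative only, using the `filelock` package; not the actual extract code):

```python
from filelock import FileLock

def extract(archive_path: str, output_path: str) -> None:
    # Place the lock next to the (writable) destination directory rather than
    # next to the archive, which may sit in a read-only directory.
    lock_path = output_path + ".lock"
    with FileLock(lock_path):
        ...  # perform the actual extraction into output_path
```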
https://api.github.com/repos/huggingface/datasets/issues/5319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5319/comments | https://api.github.com/repos/huggingface/datasets/issues/5319/events | https://github.com/huggingface/datasets/pull/5319 | 1,470,945,515 | PR_kwDODunzps5ECkfc | 5,319 | Fix Text sample_by paragraph | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-01T09:08:09Z | 2022-12-01T15:21:44Z | 2022-12-01T15:19:00Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5319.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5319",
"merged_at": "2022-12-01T15:19:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5319.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5319"
} | Fix #5316. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5319/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5319/timeline | null | null | true |
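For reference, the loader option this PR fixes is used like this (a minimal usage sketch; the file path is a placeholder):

```python
from datasets import load_dataset

# One example per paragraph (blank-line separated) instead of one per line
ds = load_dataset("text", data_files={"train": "my_file.txt"}, sample_by="paragraph")
```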
https://api.github.com/repos/huggingface/datasets/issues/5318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5318/comments | https://api.github.com/repos/huggingface/datasets/issues/5318/events | https://github.com/huggingface/datasets/pull/5318 | 1,470,749,750 | PR_kwDODunzps5EB6RM | 5,318 | Origin/fix missing features error | {
"avatar_url": "https://avatars.githubusercontent.com/u/12104720?v=4",
"events_url": "https://api.github.com/users/eunseojo/events{/privacy}",
"followers_url": "https://api.github.com/users/eunseojo/followers",
"following_url": "https://api.github.com/users/eunseojo/following{/other_user}",
"gists_url": "https://api.github.com/users/eunseojo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eunseojo",
"id": 12104720,
"login": "eunseojo",
"node_id": "MDQ6VXNlcjEyMTA0NzIw",
"organizations_url": "https://api.github.com/users/eunseojo/orgs",
"received_events_url": "https://api.github.com/users/eunseojo/received_events",
"repos_url": "https://api.github.com/users/eunseojo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eunseojo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eunseojo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eunseojo"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"please review :) @lhoestq @ola13 thankoo",
"Thanks :) I just updated the test to make sure it works even when there's a column missing, and did a minor change to json.py to add the missing columns for the other kinds of JSON files as well (I moved the code to`self._cast_table`)",
"Thanks Unso! If @lhoestq is happy then I'm also happy :D"
] | 2022-12-01T06:18:39Z | 2022-12-04T05:52:07Z | 2022-12-04T05:49:39Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5318.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5318",
"merged_at": "2022-12-04T05:49:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5318.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5318"
This fixes the problem where the `load_dataset` function reads a file with `features` provided but some read batches don't contain columns that only show up later. For instance, the provided features require columns A, B, C but a batch only contains columns B and C. This is fixed by adding column A filled with nulls. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5318/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5318/timeline | null | null | true |
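The idea described in the PR body, padding absent columns with nulls before casting to the declared schema, can be illustrated with a small pyarrow sketch (the helper below is hypothetical, not the actual code in `json.py`):

```python
import pyarrow as pa

def pad_missing_columns(table: pa.Table, schema: pa.Schema) -> pa.Table:
    # Hypothetical helper: append an all-null column for every declared field
    # the batch lacks, reorder to the schema, then cast.
    for field in schema:
        if field.name not in table.column_names:
            table = table.append_column(field.name, pa.nulls(len(table), type=field.type))
    return table.select(schema.names).cast(schema)

batch = pa.table({"B": [1, 2], "C": ["x", "y"]})
schema = pa.schema([("A", pa.string()), ("B", pa.int64()), ("C", pa.string())])
print(pad_missing_columns(batch, schema).to_pydict())
# {'A': [None, None], 'B': [1, 2], 'C': ['x', 'y']}
```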
https://api.github.com/repos/huggingface/datasets/issues/5317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5317/comments | https://api.github.com/repos/huggingface/datasets/issues/5317/events | https://github.com/huggingface/datasets/issues/5317 | 1,470,390,164 | I_kwDODunzps5XpF-U | 5,317 | `ImageFolder` performs poorly with large datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/1086393?v=4",
"events_url": "https://api.github.com/users/salieri/events{/privacy}",
"followers_url": "https://api.github.com/users/salieri/followers",
"following_url": "https://api.github.com/users/salieri/following{/other_user}",
"gists_url": "https://api.github.com/users/salieri/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/salieri",
"id": 1086393,
"login": "salieri",
"node_id": "MDQ6VXNlcjEwODYzOTM=",
"organizations_url": "https://api.github.com/users/salieri/orgs",
"received_events_url": "https://api.github.com/users/salieri/received_events",
"repos_url": "https://api.github.com/users/salieri/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/salieri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/salieri/subscriptions",
"type": "User",
"url": "https://api.github.com/users/salieri"
} | [] | open | false | null | [] | null | [
"Hi ! ImageFolder is made for small scale datasets indeed. For large scale image datasets you better group your images in TAR archives or Arrow/Parquet files. This is true not just for ImageFolder loading performance, but also because having millions of files is not ideal for your filesystem or when moving the data around.\r\n\r\nOption 1. use TAR archives\r\n\r\nI'd suggest you to take a look at how we load [Imagenet](https://huggingface.co/datasets/imagenet-1k/tree/main) for example. The dataset is sharded in multiple TAR archives and there is a [script](https://huggingface.co/datasets/imagenet-1k/blob/main/imagenet-1k.py) that iterates over the archives to load the images.\r\n\r\nOption 2. use Arrow/Parquet\r\n\r\nYou can load your images as an Arrow Dataset with\r\n```python\r\nfrom datasets import Dataset, Image, load_from_disk, load_dataset\r\n\r\nds = Dataset.from_dict({\"image\": list(glob.glob(\"path/to/dir/**/*.jpg\"))})\r\n\r\ndef add_metadata(example):\r\n ...\r\n\r\nds = ds.map(add_metadata, num_proc=16) # num_proc for multiprocessing\r\nds = ds.cast_column(\"image\", Image())\r\n\r\n# save as Arrow locally\r\nds.save_to_disk(\"output_dir\")\r\nreloaded = load_from_disk(\"output_dir\")\r\n\r\n# OR save as Parquet on the HF Hub\r\nds.push_to_hub(\"username/dataset_name\")\r\nreloaded = load_dataset(\"username/dataset_name\")\r\n# reloaded = load_dataset(\"username/dataset_name\", num_proc=16) # to use multiprocessing\r\n```\r\n\r\nPS: maybe we can actually have something similar to ImageFolder but for image archives at one point ?",
"@lhoestq Thanks!\r\n\r\nPerhaps it'd be worth adding a note on the documentation that `ImageFolder` is not intended for large datasets? This limitation is not intuitively obvious to someone who has not used it before, I think.",
"Thanks for the feedback @salieri! I opened #5329 to make it clear `ImageFolder` is not intended for large datasets. Please feel free to comment if you have any other feedback! 🙂 "
] | 2022-12-01T00:04:21Z | 2022-12-01T21:49:26Z | null | NONE | null | null | null | ### Describe the bug
While testing image dataset creation, I'm seeing significant performance bottlenecks with the `imagefolder` loader when scanning a directory structure with a large number of images.
## Setup
* Nested directories (5 levels deep)
* 3M+ images
* 1 `metadata.jsonl` file
## Performance Degradation Point 1
Degradation occurs because [`get_data_files_patterns`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L231-L243) runs the exact same scan for many different types of patterns, and there doesn't seem to be a way to easily limit this. It's controlled by the definition of [`ALL_DEFAULT_PATTERNS`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L82-L85).
One scan with 3M+ files takes about 10-15 minutes to complete on my setup, so having those extra scans really slows things down – from 10 minutes to 60+. Most of the scans return no matches, but they still take a significant amount of time to complete – hence the poor performance.
As a side effect, when this scan is run on 3M+ image files, Python also consumes up to 12 GB of RAM, which is not ideal.
## Performance Degradation Point 2
The second performance bottleneck is in [`PackagedDatasetModuleFactory.get_module`](https://github.com/huggingface/datasets/blob/d7dfbc83d68e87ba002c5eb2555f7a932e59038a/src/datasets/load.py#L707-L711), which calls `DataFilesDict.from_local_or_remote`.
It runs for a long time (60+ minutes), consuming significant amounts of RAM – even more than point 1 above. Based on `iostat -d 2`, it performs **zero** disk operations, which to me suggests that there is a code-based bottleneck there that could be sorted out.
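A minimal workaround sketch (an assumption on my part, not an official recipe) that avoids the repeated pattern scans by globbing once and building the dataset explicitly; `metadata.jsonl` handling is omitted here:

```python
import glob

from datasets import Dataset, Image

# one recursive filesystem scan instead of one scan per default pattern
paths = glob.glob("/some/path/**/*.jpg", recursive=True)
ds = Dataset.from_dict({"image": paths}).cast_column("image", Image())
```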
### Steps to reproduce the bug
```python
from datasets import load_dataset
import os
import huggingface_hub
dataset = load_dataset(
'imagefolder',
data_dir='/some/path',
# just to spell it out:
split=None,
drop_labels=True,
keep_in_memory=False
)
dataset.push_to_hub('account/dataset', private=True)
```
### Expected behavior
While it's certainly possible to write a custom loader to replace `ImageFolder` with, it'd be great if the off-the-shelf `ImageFolder` would by default have a setup that can scale to large datasets.
Or perhaps there could be a dedicated loader just for large datasets that trades off flexibility for performance? As in, maybe you have to define explicitly how you want it to work rather than it trying to guess your data structure like `_get_data_files_patterns()` does?
### Environment info
- `datasets` version: 2.7.1
- Platform: Linux-4.14.296-222.539.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.7.10
- PyArrow version: 10.0.1
- Pandas version: 1.3.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5317/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5317/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5316/comments | https://api.github.com/repos/huggingface/datasets/issues/5316/events | https://github.com/huggingface/datasets/issues/5316 | 1,470,115,681 | I_kwDODunzps5XoC9h | 5,316 | Bug in sample_by="paragraph" | {
"avatar_url": "https://avatars.githubusercontent.com/u/1243668?v=4",
"events_url": "https://api.github.com/users/adampauls/events{/privacy}",
"followers_url": "https://api.github.com/users/adampauls/followers",
"following_url": "https://api.github.com/users/adampauls/following{/other_user}",
"gists_url": "https://api.github.com/users/adampauls/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adampauls",
"id": 1243668,
"login": "adampauls",
"node_id": "MDQ6VXNlcjEyNDM2Njg=",
"organizations_url": "https://api.github.com/users/adampauls/orgs",
"received_events_url": "https://api.github.com/users/adampauls/received_events",
"repos_url": "https://api.github.com/users/adampauls/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adampauls/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adampauls/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adampauls"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @adampauls.\r\n\r\nWe are having a look at it. "
] | 2022-11-30T19:24:13Z | 2022-12-01T15:19:02Z | 2022-12-01T15:19:02Z | NONE | null | null | null | ### Describe the bug
I think [this line](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/text/text.py#L96) is wrong and should be `batch = f.read(self.config.chunksize)`. Otherwise it will never terminate because even when `f` is finished reading, `batch` will still be truthy from the last iteration.
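Here is a minimal standalone sketch of the loop shape (paraphrased, not the exact `text.py` source) showing why the proposed fix terminates:

```python
import io

def iter_chunks_fixed(f, chunksize):
    batch = f.read(chunksize)
    while batch:
        yield batch
        batch = f.read(chunksize)  # returns "" at EOF, so the loop ends

print(list(iter_chunks_fixed(io.StringIO("a b c\nd e f\n"), chunksize=4)))
# the buggy variant effectively does `batch += f.read(chunksize)` instead:
# at EOF that appends "", `batch` stays truthy, and the loop never exits
```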
### Steps to reproduce the bug
```
> cat test.txt
a b c
d e f
```
```python
>>> import datasets
>>> datasets.load_dataset("text", data_files={"train":"test.txt"}, sample_by="paragraph")
```
This will go on forever.
### Expected behavior
Terminates very quickly.
### Environment info
`version = "2.6.1"` but I think the bug is still there on main. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5316/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5316/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5315/comments | https://api.github.com/repos/huggingface/datasets/issues/5315/events | https://github.com/huggingface/datasets/issues/5315 | 1,470,026,797 | I_kwDODunzps5XntQt | 5,315 | Adding new splits to a dataset script with existing old splits info in metadata's `dataset_info` fails | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
] | null | [
"EDIT:\r\nI think in this case, the metadata files (either README or JSON) should not be read (i.e. `self.info.splits` should be None).\r\n\r\nOne idea: \r\n- I think ideally we should set this behavior when we pass `--save_info` to the CLI `test`\r\n- However, currently, the builder is unaware of this: `save_info` arg is not passed to it",
"> I think in this case\r\n\r\n@albertvillanova You mean in cases when the script was changed? \r\n\r\nI suggest that we:\r\n* add a check on the slice (like 'split_name[n%]) kind of format here: https://github.com/huggingface/datasets/blob/main/src/datasets/splits.py#L523 to catch things like this. \r\n* Error here happens before splits verification, but in `_prepare_split`, and `_prepare_split` doesn't perform any verification and don't know about it. so we can pass this parameter and take splits from `split_generator`, not from `split.info` in case when `verify_infos` is False\r\n* we can check if split **names** from split_generators and self.info.splits are the same **before** preparing splits (if `verify_info=True`) so that we don't spend time on generating unwanted data. \r\n* provide some user-friendly warnings about `ignore_verifications` parameter so that users know that if something is not matching they can ignore it\r\n\r\nI started it here: https://github.com/huggingface/datasets/pull/5327/files\r\n\r\nWhat do you think @albertvillanova ?",
"I edited my previous comment:\r\n- First I proposed setting `self.info.splits` to None when `ignore_verifications=True`\r\n - I thought it was the easiest implementation because `ignore_verifications` is passed to `DatasetBuilder.download_and_prepare`\r\n - However, afterwards, I realized this might not be a good idea for this use case:\r\n - A user wants to optimize the loading of the dataset, and passes `ignore_verifications=False` to avoid all the verifications\r\n - In this case, we want `self.info.splits` to be read from metadata file\r\n- Then, I thought that it might be better to set `self.info.splits` to None when we pass `--save_info` to the CLI test: if we are going to save the info to the metadata file, it makes no sense to read the info from the metadata file\r\n - This implementation is not so easy because the Builder knows nothing about `--save_info`\r\n\r\nI agree with you there are 2 things to be addressed here:\r\n- One is what I have just commented: `self.info.splits` should be None in this case\r\n- The other, a validation should be implemented when calling `make_file_instructions` and/or `SplitDict.__getitem__`, so that when passing \"training\" to it, we get a more descriptive error other than `TypeError: expected str, bytes or os.PathLike object, not NoneType` "
] | 2022-11-30T18:02:15Z | 2022-12-02T07:02:53Z | null | CONTRIBUTOR | null | null | null | ### Describe the bug
If you first create a custom dataset with a specific set of splits, generate metadata with `datasets-cli test ... --save_info`, then change your script to include more splits, it fails.
That's what happened in https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/discussions/2#6385fd1269634850f8ddff48.
### Steps to reproduce the bug
1. create a dataset with a custom split that returns, for example, only a `"train"` split in `_split_generators`. Specifically, if you really want to reproduce, copy https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/blob/main/food_vision_199_classes.py
2. run `datasets-cli test dataset_script.py --save_info --all_configs` - this would generate metadata yaml in `README.md` that would contain info about splits, for example, like this:
```
splits:
- name: train
num_bytes: 2973286
num_examples: 19747
```
3. make changes to your script so that it returns another set of splits, for example, `"train"` and `"test"` (uncomment [these lines](https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/blob/main/food_vision_199_classes.py#L271))
4. run `load_dataset` and get the following error:
```python
Traceback (most recent call last):
File "/home/daniel/code/pytorch/env/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/commands/test.py", line 141, in run
builder.download_and_prepare(
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare
self._download_and_prepare(
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare
super()._download_and_prepare(
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 913, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 1356, in _prepare_split
split_info = self.info.splits[split_generator.name]
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/splits.py", line 525, in __getitem__
instructions = make_file_instructions(
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/arrow_reader.py", line 111, in make_file_instructions
name2filenames = {
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/arrow_reader.py", line 112, in <dictcomp>
info.name: filenames_for_dataset_split(
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/naming.py", line 78, in filenames_for_dataset_split
prefix = filename_prefix_for_split(dataset_name, split)
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/naming.py", line 57, in filename_prefix_for_split
if os.path.basename(name) != name:
File "/home/daniel/code/pytorch/env/lib/python3.8/posixpath.py", line 143, in basename
p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
5. bonus: try to regenerate metadata in `README.md` with `datasets-cli` as in step 2 and get the same error.
This is because `dataset.info.splits` contains only the `"train"` split, so when we do `self.info.splits[split_generator.name]` it tries to interpret the new split name as a slice, something like `info.splits['train[50%]']`; since that's not the case here, it fails.
### Expected behavior
to be discussed?
This can be solved by removing the splits information from the metadata file first. But I wonder if there is a better way.
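For reference, a sketch of that manual workaround, based on the `self.info.splits = None` idea from the discussion above (not an official API, and the script path is hypothetical):

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("dataset_script.py")  # hypothetical path
builder.info.splits = None  # discard the stale split info read from the metadata
builder.download_and_prepare()
```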
### Environment info
- Datasets version: 2.7.1
- Python version: 3.8.13 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5315/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5315/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5314/comments | https://api.github.com/repos/huggingface/datasets/issues/5314/events | https://github.com/huggingface/datasets/issues/5314 | 1,469,685,118 | I_kwDODunzps5XmZ1- | 5,314 | Datasets: classification_report() got an unexpected keyword argument 'suffix' | {
"avatar_url": "https://avatars.githubusercontent.com/u/42126634?v=4",
"events_url": "https://api.github.com/users/JonathanAlis/events{/privacy}",
"followers_url": "https://api.github.com/users/JonathanAlis/followers",
"following_url": "https://api.github.com/users/JonathanAlis/following{/other_user}",
"gists_url": "https://api.github.com/users/JonathanAlis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JonathanAlis",
"id": 42126634,
"login": "JonathanAlis",
"node_id": "MDQ6VXNlcjQyMTI2NjM0",
"organizations_url": "https://api.github.com/users/JonathanAlis/orgs",
"received_events_url": "https://api.github.com/users/JonathanAlis/received_events",
"repos_url": "https://api.github.com/users/JonathanAlis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JonathanAlis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JonathanAlis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JonathanAlis"
} | [] | open | false | null | [] | null | [
"This seems similar to https://github.com/huggingface/datasets/issues/2512 Can you try to update seqeval ? ",
"@JonathanAlis also note that the metrics are deprecated in our `datasets` library.\r\n\r\nPlease, use the new library 🤗 Evaluate instead: https://huggingface.co/docs/evaluate"
] | 2022-11-30T14:01:03Z | 2022-12-01T15:00:46Z | null | NONE | null | null | null | https://github.com/huggingface/datasets/blob/main/metrics/seqeval/seqeval.py
```python
import datasets

predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]

seqeval = datasets.load_metric("seqeval")
results = seqeval.compute(predictions=predictions, references=references)
print(list(results.keys()))
print(results["overall_f1"])
print(results["PER"]["f1"])
```
It raises the error:
> TypeError: classification_report() got an unexpected keyword argument 'suffix'
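The comments above suggest two remedies; here is a sketch of the second one, reusing `predictions` and `references` from the snippet above (the 🤗 Evaluate call follows its documented API, but I have not verified it against these exact package versions):

```python
# remedy 1: upgrade seqeval, e.g. `pip install -U seqeval`
# remedy 2: switch to the Evaluate library, since datasets metrics are deprecated
import evaluate

seqeval = evaluate.load("seqeval")
results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_f1"])
```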
For context, versions from my `pip list -v`:
> datasets 1.12.1
> seqeval 1.2.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5314/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5314/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5313/comments | https://api.github.com/repos/huggingface/datasets/issues/5313/events | https://github.com/huggingface/datasets/pull/5313 | 1,468,484,136 | PR_kwDODunzps5D6Qfb | 5,313 | Fix description of streaming in the docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-29T18:00:28Z | 2022-12-01T14:55:30Z | 2022-12-01T14:00:34Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5313.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5313",
"merged_at": "2022-12-01T14:00:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5313.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5313"
} | We say that "the data is being downloaded progressively", which is not true: the data is just streamed. So I fixed it. I have probably missed some other places where this is written?
Also changed the docstrings for `StreamingDownloadManager`'s `download` and `extract` to reflect the same, as these docstrings are displayed in the documentation. cc @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5313/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5313/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5312/comments | https://api.github.com/repos/huggingface/datasets/issues/5312/events | https://github.com/huggingface/datasets/pull/5312 | 1,468,352,562 | PR_kwDODunzps5D5zxI | 5,312 | Add DatasetDict.to_pandas | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | [
"The current implementation is what I had in mind, i.e. concatenate all splits by default.\r\n\r\nHowever, I think most tabular datasets would come as a single split. So for that usecase, it wouldn't change UX if we raise when there are more than one splits.\r\n\r\nAnd for multiple splits, the user either passes a list, or they can pass `splits=\"all\"` to have all splits concatenated.",
"I think it's better to raise an error in cases when there are multiple splits but no split is specified so that users know for sure with which data they are working. I imagine a case when a user loads a dataset that they don't know much about (like what splits it has), and if they get a concatenation of everything, it might lead to incorrect processing or interpretations and it would be hard to notice it.\r\n(\"explicit is better than implicit\")",
"I just changed to raise an error if there are multiple splits. The error shows an example of how to choose a split to convert.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5312). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-29T16:30:02Z | 2022-12-01T16:09:44Z | null | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5312.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5312",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5312.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5312"
} | From discussions in https://github.com/huggingface/datasets/issues/5189, for tabular data it doesn't really make sense to have to do
```python
df = load_dataset(...)["train"].to_pandas()
```
because many datasets are not split.
In this PR I added `to_pandas` to `DatasetDict`, which returns a DataFrame:
If there's only one split, you don't need to specify the split name:
```python
df = load_dataset(...).to_pandas()
```
EDIT: and if a dataset has multiple splits:
```python
df = load_dataset(...).to_pandas(splits=["train", "test"])
# or
df = load_dataset(...).to_pandas(splits="all")
# raises an error because you need to select the split(s) to convert
load_dataset(...).to_pandas()
```
I do have one question though @merveenoyan @adrinjalali @mariosasko:
Should we raise an error if there are multiple splits and ask the user to choose one explicitly?
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5312/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5312/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5311/comments | https://api.github.com/repos/huggingface/datasets/issues/5311/events | https://github.com/huggingface/datasets/pull/5311 | 1,467,875,153 | PR_kwDODunzps5D4Mm3 | 5,311 | Add `features` param to `IterableDataset.map` | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5311). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-29T11:08:34Z | 2022-12-02T19:22:17Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5311.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5311",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5311.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5311"
} | ## Description
As suggested by @lhoestq in #3888, we should add the param `features` to `IterableDataset.map` so that the features can be preserved (instead of being turned into `None`, the default behavior) whenever the user passes them. This makes it consistent with `Dataset.map`, which provides the `features` param so that features are specified by the user rather than inferred by default, and later validated by `ArrowWriter`.
This is already handled internally by the functions relying on `IterableDataset.map`, such as `rename_column`, `rename_columns`, and `remove_columns`, as described in #5287.
## Usage Example
```python
from datasets import load_dataset, Features
ds = load_dataset("rotten_tomatoes", split="validation", streaming=True)
print(ds.info.features)
ds = ds.map(
lambda x: {"target": x["label"]},
features=Features(
{"target": ds.info.features["label"], "label": ds.info.features["label"], "text": ds.info.features["text"]}
),
)
print(ds.info.features)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5311/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5311/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5310/comments | https://api.github.com/repos/huggingface/datasets/issues/5310/events | https://github.com/huggingface/datasets/pull/5310 | 1,467,719,635 | PR_kwDODunzps5D3rGw | 5,310 | Support xPath for Windows pathnames | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-29T09:20:47Z | 2022-11-30T12:00:09Z | 2022-11-30T11:57:16Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5310.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5310",
"merged_at": "2022-11-30T11:57:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5310.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5310"
} | This PR implements a string representation of `xPath`, which is valid for local paths (including Windows) and remote URLs.
Additionally, some `os.path` methods are fixed for remote URLs on Windows machines.
Now, on Windows machines:
```python
In [2]: str(xPath("C:\\dir\\file.txt"))
Out[2]: 'C:\\dir\\file.txt'
In [3]: str(xPath("http://domain.com/file.txt"))
Out[3]: 'http://domain.com/file.txt'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5310/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5310/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5309/comments | https://api.github.com/repos/huggingface/datasets/issues/5309/events | https://github.com/huggingface/datasets/pull/5309 | 1,466,758,987 | PR_kwDODunzps5D0g1y | 5,309 | Close stream in `ArrowWriter.finalize` before inference error | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5309). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-28T16:59:39Z | 2022-11-28T17:05:59Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5309.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5309",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5309.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5309"
} | Ensure the file stream is closed in `ArrowWriter.finalize` before raising the `SchemaInferenceError` to avoid the `PermissionError` on Windows in `incomplete_dir`'s `shutil.rmtree`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5309/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5309/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5308 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5308/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5308/comments | https://api.github.com/repos/huggingface/datasets/issues/5308/events | https://github.com/huggingface/datasets/pull/5308 | 1,466,552,281 | PR_kwDODunzps5Dz0Tv | 5,308 | Support `topdown` parameter in `xwalk` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5308). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-28T14:42:41Z | 2022-11-30T12:44:35Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5308.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5308",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5308.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5308"
} | Add support for the `topdown` parameter in `xwalk` when `fsspec>=2022.11.0` is installed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5308/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5308/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5307 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5307/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5307/comments | https://api.github.com/repos/huggingface/datasets/issues/5307/events | https://github.com/huggingface/datasets/pull/5307 | 1,466,477,427 | PR_kwDODunzps5Dzj8r | 5,307 | Use correct dataset type in `from_generator` docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-28T13:59:10Z | 2022-11-28T15:30:37Z | 2022-11-28T15:27:26Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5307.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5307",
"merged_at": "2022-11-28T15:27:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5307.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5307"
} | Use the correct dataset type in the `from_generator` docs (example with sharding). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5307/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5307/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5306 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5306/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5306/comments | https://api.github.com/repos/huggingface/datasets/issues/5306/events | https://github.com/huggingface/datasets/issues/5306 | 1,465,968,639 | I_kwDODunzps5XYOf_ | 5,306 | Can't use custom feature description when loading a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clefourrier",
"id": 22726840,
"login": "clefourrier",
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clefourrier"
} | [] | closed | false | null | [] | null | [
"Forgot to actually convert the feature dict to a Feature object. Closing."
] | 2022-11-28T07:55:44Z | 2022-11-28T08:11:45Z | 2022-11-28T08:11:44Z | CONTRIBUTOR | null | null | null | ### Describe the bug
I have created a feature dictionary to describe my datasets' column types, to use when loading the dataset, following [the doc](https://huggingface.co/docs/datasets/main/en/about_dataset_features). It crashes at dataset load.
### Steps to reproduce the bug
```python
# Creating features
task_list = [f"motif_G{i}" for i in range(19, 53)]
features = {t: Sequence(feature=Value(dtype="float64")) for t in task_list}
for col_name in ["class_label"]:
features[col_name] = Sequence(feature=Value(dtype="int64"))
for col_name in ["num_nodes"]:
features[col_name] = Value(dtype="int64")
for col_name in ["num_bridges", "num_cycles", "avg_shortest_path_len"]:
features[col_name] = Sequence(feature=Value(dtype="float64"))
for col_name in ["edge_attr", "node_feat", "edge_index"]:
features[col_name] = Sequence(feature=Sequence(feature=Value(dtype="int64")))
print(features)
dataset = load_dataset(path=f"graphs-datasets/unbalanced-motifs-500K", split="train", features=features)
```
The last line will crash with `TypeError: argument of type 'Sequence' is not iterable`.
Full stack:
```
Traceback (most recent call last):
File "pretrain_tokengt.py", line 131, in <module>
main(output_folder = "../workspace/pretraining",
File "pretrain_tokengt.py", line 52, in main
dataset = load_dataset(path=f"graphs-datasets/{dataset_name}", split="train", features=features)
File "huggingface_env/lib/python3.8/site-packages/datasets/load.py", line 1718, in load_dataset
builder_instance = load_dataset_builder(
File "huggingface_env/lib/python3.8/site-packages/datasets/load.py", line 1514, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "huggingface_env/lib/python3.8/site-packages/datasets/builder.py", line 321, in __init__
info.update(self._info())
File "huggingface_env/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 62, in _info
return datasets.DatasetInfo(features=self.config.features)
File "<string>", line 20, in __init__
File "huggingface_env/lib/python3.8/site-packages/datasets/info.py", line 155, in __post_init__
self.features = Features.from_dict(self.features)
File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1599, in from_dict
obj = generate_from_dict(dic)
File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1282, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1282, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1281, in generate_from_dict
if "_type" not in obj or isinstance(obj["_type"], dict):
TypeError: argument of type 'Sequence' is not iterable
```
### Expected behavior
For it not to crash.
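Per the author's closing comment above, the fix is to wrap the plain dict in a `Features` object before loading; a sketch reusing the `features` dict built in the reproduction snippet:

```python
from datasets import Features

features = Features(features)  # convert the plain dict into a Features object
dataset = load_dataset("graphs-datasets/unbalanced-motifs-500K", split="train", features=features)
```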
### Environment info
- `datasets` version: 2.7.1
- Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5306/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5306/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5305 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5305/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5305/comments | https://api.github.com/repos/huggingface/datasets/issues/5305/events | https://github.com/huggingface/datasets/issues/5305 | 1,465,627,826 | I_kwDODunzps5XW7Sy | 5,305 | Dataset joelito/mc4_legal does not work with multiple files | {
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JoelNiklaus",
"id": 3775944,
"login": "JoelNiklaus",
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JoelNiklaus"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting @JoelNiklaus.\r\n\r\nPlease note that since we moved all dataset loading scripts to the Hub, the issues and pull requests relative to specific datasets are directly handled on the Hub, in their Community tab. I'm transferring this issue there: https://huggingface.co/datasets/joelito/mc4_legal/discussions\r\n\r\nI am also having a look at the bug in your script.",
"Issue transferred to: https://huggingface.co/datasets/joelito/mc4_legal/discussions/1"
] | 2022-11-28T00:16:16Z | 2022-11-28T07:22:42Z | 2022-11-28T07:22:42Z | CONTRIBUTOR | null | null | null | ### Describe the bug
The dataset https://huggingface.co/datasets/joelito/mc4_legal works for languages like bg with a single data file, but not for languages with multiple files like de. It shows zero rows for the de dataset.
```
joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main) [1]> python test_mc4_legal.py (debug)
Found cached dataset mc4_legal (/Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/de/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f)
Dataset({
    features: ['index', 'url', 'timestamp', 'matches', 'text'],
    num_rows: 0
})
joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main)> python test_mc4_legal.py (debug)
Downloading and preparing dataset mc4_legal/bg to /Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/bg/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f...
Downloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1240.55it/s]
Dataset mc4_legal downloaded and prepared to /Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/bg/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f. Subsequent calls will reuse this data.
Dataset({
    features: ['index', 'url', 'timestamp', 'matches', 'text'],
    num_rows: 204
})
```
### Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset, get_dataset_config_names

language = "de"  # the failing config; "bg" (a single data file) loads fine
test = load_dataset("joelito/mc4_legal", language, split="train")
```
### Expected behavior
It should display the correct number of rows for the de dataset, which should be a large number (thousands or more).
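A quick sketch to compare row counts across the two configs mentioned above (`de` is the one expected to be large):

```python
for lang in ["bg", "de"]:
    ds = load_dataset("joelito/mc4_legal", lang, split="train")
    print(lang, ds.num_rows)  # de should be in the thousands, not 0
```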
### Environment info
Package Version
------------------------ --------------
absl-py 1.3.0
aiohttp 3.8.1
aiosignal 1.2.0
astunparse 1.6.3
async-timeout 4.0.2
attrs 22.1.0
beautifulsoup4 4.11.1
blinker 1.4
blis 0.7.8
Bottleneck 1.3.4
brotlipy 0.7.0
cachetools 5.2.0
catalogue 2.0.7
certifi 2022.5.18.1
cffi 1.15.1
chardet 4.0.0
charset-normalizer 2.1.0
click 8.0.4
conllu 4.5.2
cryptography 38.0.1
cymem 2.0.6
datasets 2.6.1
dill 0.3.5.1
docker-pycreds 0.4.0
fasttext 0.9.2
fasttext-langdetect 1.0.3
filelock 3.0.12
flatbuffers 20210226132247
frozenlist 1.3.0
fsspec 2022.5.0
gast 0.4.0
gcloud 0.18.3
gitdb 4.0.9
GitPython 3.1.27
google-auth 2.9.0
google-auth-oauthlib 0.4.6
google-pasta 0.2.0
googleapis-common-protos 1.57.0
grpcio 1.47.0
h5py 3.7.0
httplib2 0.21.0
huggingface-hub 0.8.1
idna 3.4
importlib-metadata 4.12.0
Jinja2 3.1.2
joblib 1.0.1
keras 2.9.0
Keras-Preprocessing 1.1.2
langcodes 3.3.0
lxml 4.9.1
Markdown 3.3.7
MarkupSafe 2.1.1
mkl-fft 1.3.1
mkl-random 1.2.2
mkl-service 2.4.0
multidict 6.0.2
multiprocess 0.70.13
murmurhash 1.0.7
numexpr 2.8.1
numpy 1.22.3
oauth2client 4.1.3
oauthlib 3.2.1
opt-einsum 3.3.0
packaging 21.3
pandas 1.4.2
pathtools 0.1.2
pathy 0.6.1
pip 21.1.2
preshed 3.0.6
promise 2.3
protobuf 4.21.9
psutil 5.9.1
pyarrow 8.0.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pybind11 2.9.2
pycountry 22.3.5
pycparser 2.21
pydantic 1.8.2
PyJWT 2.4.0
pylzma 0.5.0
pyOpenSSL 22.0.0
pyparsing 3.0.4
PySocks 1.7.1
python-dateutil 2.8.2
pytz 2021.3
PyYAML 6.0
regex 2021.4.4
requests 2.28.1
requests-oauthlib 1.3.1
responses 0.18.0
rsa 4.8
sacremoses 0.0.45
scikit-learn 1.1.1
scipy 1.8.1
sentencepiece 0.1.96
sentry-sdk 1.6.0
setproctitle 1.2.3
setuptools 65.5.0
shortuuid 1.0.9
six 1.16.0
smart-open 5.2.1
smmap 5.0.0
soupsieve 2.3.2.post1
spacy 3.3.1
spacy-legacy 3.0.9
spacy-loggers 1.0.2
srsly 2.4.3
tabulate 0.8.9
tensorboard 2.9.1
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow 2.9.1
tensorflow-estimator 2.9.0
termcolor 2.1.0
thinc 8.0.17
threadpoolctl 3.1.0
tokenizers 0.12.1
torch 1.13.0
tqdm 4.64.0
transformers 4.20.1
typer 0.4.1
typing-extensions 4.3.0
Unidecode 1.3.6
urllib3 1.26.12
wandb 0.12.20
wasabi 0.9.1
web-anno-tsv 0.0.1
Werkzeug 2.1.2
wget 3.2
wheel 0.35.1
wrapt 1.14.1
xxhash 3.0.0
yarl 1.8.1
zipp 3.8.0
Python 3.8.10
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5305/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5305/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5304 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5304/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5304/comments | https://api.github.com/repos/huggingface/datasets/issues/5304/events | https://github.com/huggingface/datasets/issues/5304 | 1,465,110,367 | I_kwDODunzps5XU89f | 5,304 | timit_asr doesn't load the test split. | {
"avatar_url": "https://avatars.githubusercontent.com/u/17842800?v=4",
"events_url": "https://api.github.com/users/seyong92/events{/privacy}",
"followers_url": "https://api.github.com/users/seyong92/followers",
"following_url": "https://api.github.com/users/seyong92/following{/other_user}",
"gists_url": "https://api.github.com/users/seyong92/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/seyong92",
"id": 17842800,
"login": "seyong92",
"node_id": "MDQ6VXNlcjE3ODQyODAw",
"organizations_url": "https://api.github.com/users/seyong92/orgs",
"received_events_url": "https://api.github.com/users/seyong92/received_events",
"repos_url": "https://api.github.com/users/seyong92/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/seyong92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seyong92/subscriptions",
"type": "User",
"url": "https://api.github.com/users/seyong92"
} | [] | open | false | null | [] | null | [
"The [timit_asr.py](https://huggingface.co/datasets/timit_asr/blob/main/timit_asr.py) script iterates over the WAV files per split directory using this:\r\n```python\r\nwav_paths = sorted(Path(data_dir).glob(f\"**/{split}/**/*.wav\"))\r\nwav_paths = wav_paths if wav_paths else sorted(Path(data_dir).glob(f\"**/{split.upper()}/**/*.WAV\"))\r\n```\r\n\r\nCan you check that there is a directory named \"test\" somewhere in your timit data directory ?"
] | 2022-11-26T10:18:22Z | 2022-12-01T13:28:59Z | null | NONE | null | null | null | ### Describe the bug
When I use the function ```timit = load_dataset('timit_asr', data_dir=data_dir)```, it only loads the train split, not the test split.
I tried changing the test split's directory and file names between lower case and upper case, but it does not work at all.
```python
DatasetDict({
train: Dataset({
features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
num_rows: 4620
})
test: Dataset({
features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
num_rows: 0
})
})
```
The directory structure of both splits is the same (DIALECT_REGION / SPEAKER_CODE / DATA_FILES).
### Steps to reproduce the bug
1. just use ```timit = load_dataset('timit_asr', data_dir=data_dir)```
### Expected behavior
```python
DatasetDict({
train: Dataset({
features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
num_rows: 4620
})
test: Dataset({
features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
num_rows: 1680
})
})
```
### Environment info
- ubuntu 20.04
- python 3.9.13
- datasets 2.7.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5304/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5304/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5303 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5303/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5303/comments | https://api.github.com/repos/huggingface/datasets/issues/5303/events | https://github.com/huggingface/datasets/pull/5303 | 1,464,837,251 | PR_kwDODunzps5DuVTa | 5,303 | Skip dataset verifications by default | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5303). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-25T18:39:09Z | 2022-11-25T18:44:23Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5303.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5303",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5303.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5303"
} | Skip the dataset verifications (split and checksum verifications, duplicate keys check) by default unless a dataset is being tested (`datasets-cli test/run_beam`). The main goal is to avoid running the checksum check in the default case due to how expensive it can be for large datasets.
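For example, a usage sketch of the new default (the exact flag surface may still change, see the note below):
```python
from datasets import load_dataset

# after this change, checksum/split verifications are skipped by default
ds = load_dataset("squad")

# opt back in to the (potentially expensive) verifications explicitly
ds = load_dataset("squad", ignore_verifications=False)
```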
PS: Maybe we should deprecate `ignore_verifications`, which is `True` now by default, and give it a different name? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5303/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5303/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5302/comments | https://api.github.com/repos/huggingface/datasets/issues/5302/events | https://github.com/huggingface/datasets/pull/5302 | 1,464,778,901 | PR_kwDODunzps5DuJJp | 5,302 | Improve `use_auth_token` docstring and deprecate `use_auth_token` in `download_and_prepare` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5302). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-25T17:09:21Z | 2022-11-28T12:40:12Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5302.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5302",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5302.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5302"
} | Clarify in the docstrings what happens when `use_auth_token` is `None` and deprecate the `use_auth_token` param in `download_and_prepare`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5302/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5302/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5301/comments | https://api.github.com/repos/huggingface/datasets/issues/5301/events | https://github.com/huggingface/datasets/pull/5301 | 1,464,749,156 | PR_kwDODunzps5DuCzR | 5,301 | Return a split Dataset in load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5301). All of your documentation changes will be reflected on that endpoint.",
"Just noticed that now we have to deal with indexed & split datasets. The remaining tests are failing because one should be able to get an indexed dataset when accessing the split of a dataset made of indexed splits (right now the index is just trashed)"
] | 2022-11-25T16:35:54Z | 2022-11-30T16:53:34Z | null | MEMBER | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5301.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5301",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5301.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5301"
} | ...instead of a DatasetDict.
```python
# now supported
ds = load_dataset("squad")
ds[0]
for example in ds:
pass
# still works
ds["train"]
ds["validation"]
# new
ds.splits # Dict[str, Dataset] | None
# soon to be supported (not in this PR)
ds = load_dataset("dataset_with_no_splits")
ds[0]
for example in ds:
pass
```
I implemented `Dataset.__getitem__` and `IterableDataset.__getitem__` to be able to get a split from a dataset.
The splits are defined by the `ds.info.splits` dictionary.
Therefore a dataset is a table that optionally has some splits defined in the dataset info, and a dataset with splits is the concatenation of all its splits.
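A rough sketch of the dispatch this implies (illustrative only, not the actual code from this PR; `_select_split` and `_getitem` stand in for internal helpers):
```python
def __getitem__(self, key):
    # ds["train"] returns the split as its own Dataset
    if isinstance(key, str) and self.info.splits and key in self.info.splits:
        return self._select_split(key)
    # ds[0], ds[10:20], ds["column_name"] keep the existing row/column behavior
    return self._getitem(key)
```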
I made as few breaking changes as possible. Notable breaking changes:
- `load_dataset("potato").keys() / .items() / .values()` don't work anymore, since we don't return a dict
- same for `for split_name in load_dataset("potato")`, since we now iterate over the examples
- ..
TODO:
- [x] Update push_to_hub
- [x] Update save_to_disk/load_from_disk
- [ ] check for other breaking changes
- [ ] fix existing tests
- [ ] add new tests
- [ ] docs
This is related to https://github.com/huggingface/datasets/issues/5189, to extend `load_dataset` to return datasets without splits | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5301/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5301/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5300 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5300/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5300/comments | https://api.github.com/repos/huggingface/datasets/issues/5300/events | https://github.com/huggingface/datasets/pull/5300 | 1,464,697,136 | PR_kwDODunzps5Dt3uK | 5,300 | Use same `num_proc` for dataset download and generation | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5300). All of your documentation changes will be reflected on that endpoint.",
"I noticed this bug the other day and was going to look into it! \"Where are these processes coming from?\" ;-)"
] | 2022-11-25T15:37:42Z | 2022-11-25T15:52:04Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5300.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5300",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5300.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5300"
} | Use the same `num_proc` value for data download and generation. Additionally, do not set `num_proc` to 16 in `DownloadManager` by default (`num_proc` now has to be specified explicitly). | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5300/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5300/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5299/comments | https://api.github.com/repos/huggingface/datasets/issues/5299/events | https://github.com/huggingface/datasets/pull/5299 | 1,464,695,091 | PR_kwDODunzps5Dt3Sk | 5,299 | Fix xopen for Windows pathnames | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-25T15:35:28Z | 2022-11-29T08:23:58Z | 2022-11-29T08:21:24Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5299.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5299",
"merged_at": "2022-11-29T08:21:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5299.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5299"
This PR fixes a bug in the `xopen` function for Windows pathnames.
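A sketch of the direction such a fix can take (illustrative, not the actual diff; `is_local_path` is simplified here): keep the OS-native pathname for local files and only apply the POSIX conversion to non-local hops:
```python
from pathlib import PurePath


def is_local_path(path: str) -> bool:
    return "://" not in path  # simplified stand-in for datasets' helper


def xopen(file: str, mode="r", *args, **kwargs):
    main_hop, *rest_hops = str(file).split("::")
    if is_local_path(main_hop):
        # open the untouched OS-native path, e.g. "C:\\Users\\USERNAME\\filename.txt"
        return open(main_hop, mode, *args, **kwargs)
    # only remote / chained URLs get normalized to POSIX form
    file = PurePath(file).as_posix()
    ...  # hand off to fsspec as before
```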
Fix #5298. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5299/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5299/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5298/comments | https://api.github.com/repos/huggingface/datasets/issues/5298/events | https://github.com/huggingface/datasets/issues/5298 | 1,464,681,871 | I_kwDODunzps5XTUWP | 5,298 | Bug in xopen with Windows pathnames | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-11-25T15:21:32Z | 2022-11-29T08:21:25Z | 2022-11-29T08:21:25Z | MEMBER | null | null | null | Currently, `xopen` function has a bug with local Windows pathnames:
From its implementation:
```python
def xopen(file: str, mode="r", *args, **kwargs):
file = _as_posix(PurePath(file))
main_hop, *rest_hops = file.split("::")
if is_local_path(main_hop):
return open(file, mode, *args, **kwargs)
```
On a Windows machine, if we pass the argument:
```python
xopen("C:\\Users\\USERNAME\\filename.txt")
```
it returns
```python
open("C:/Users/USERNAME/filename.txt")
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5298/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5298/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5297/comments | https://api.github.com/repos/huggingface/datasets/issues/5297/events | https://github.com/huggingface/datasets/pull/5297 | 1,464,554,491 | PR_kwDODunzps5DtZjg | 5,297 | Fix xjoin for Windows pathnames | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-25T13:30:17Z | 2022-11-29T08:07:39Z | 2022-11-29T08:05:12Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5297.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5297",
"merged_at": "2022-11-29T08:05:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5297.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5297"
This PR fixes a bug in the `xjoin` function with Windows pathnames.
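A sketch of the behavior the fix should produce (illustrative, not the actual diff): local paths join with the OS separator, while URLs always join with "/":
```python
import os
import posixpath


def is_local_path(path: str) -> bool:
    return "://" not in path  # simplified stand-in for datasets' helper


def xjoin(a: str, *p: str) -> str:
    if is_local_path(a):
        return os.path.join(a, *p)  # OS-dependent separator for local paths
    return posixpath.join(a, *p)    # URLs always use "/"
```
With this, `xjoin("C:\\Users\\USERNAME", "filename.txt")` on Windows yields `"C:\\Users\\USERNAME\\filename.txt"` as expected.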
Fix #5296. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5297/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5297/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5296 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5296/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5296/comments | https://api.github.com/repos/huggingface/datasets/issues/5296/events | https://github.com/huggingface/datasets/issues/5296 | 1,464,553,580 | I_kwDODunzps5XS1Bs | 5,296 | Bug in xjoin with Windows pathnames | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-11-25T13:29:33Z | 2022-11-29T08:05:13Z | 2022-11-29T08:05:13Z | MEMBER | null | null | null | Currently, `xjoin` function has a bug with local Windows pathnames: instead of returning the OS-dependent join pathname, it always returns it in POSIX format.
```python
from datasets.download.streaming_download_manager import xjoin
path = xjoin("C:\\Users\\USERNAME", "filename.txt")
```
The joined path should be:
```python
"C:\\Users\\USERNAME\\filename.txt"
```
However it is:
```python
"C:/Users/USERNAME/filename.txt"
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5296/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5296/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5295/comments | https://api.github.com/repos/huggingface/datasets/issues/5295/events | https://github.com/huggingface/datasets/issues/5295 | 1,464,006,743 | I_kwDODunzps5XQvhX | 5,295 | Extractions failed when .zip file located on read-only path (e.g., SageMaker FastFile mode) | {
"avatar_url": "https://avatars.githubusercontent.com/u/2340781?v=4",
"events_url": "https://api.github.com/users/verdimrc/events{/privacy}",
"followers_url": "https://api.github.com/users/verdimrc/followers",
"following_url": "https://api.github.com/users/verdimrc/following{/other_user}",
"gists_url": "https://api.github.com/users/verdimrc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/verdimrc",
"id": 2340781,
"login": "verdimrc",
"node_id": "MDQ6VXNlcjIzNDA3ODE=",
"organizations_url": "https://api.github.com/users/verdimrc/orgs",
"received_events_url": "https://api.github.com/users/verdimrc/received_events",
"repos_url": "https://api.github.com/users/verdimrc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/verdimrc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/verdimrc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/verdimrc"
} | [] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Hi ! Thanks for reporting. Indeed the lock file should be placed in a directory with write permission (e.g. in the directory where the archive is extracted).",
"I opened https://github.com/huggingface/datasets/pull/5320 to fix this - it places the lock file in the cache directory instead of trying to put in next to the ZIP where it's read-only"
] | 2022-11-25T03:59:43Z | 2022-12-01T13:56:40Z | null | NONE | null | null | null | ### Describe the bug
Hi,
`load_dataset()` does not work .zip files located on a read-only directory. Looks like it's because Dataset creates a lock file in the [same directory](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/utils/extract.py) as the .zip file.
Encountered this when attempting `load_dataset()` on a datadir with SageMaker FastFile mode.
### Steps to reproduce the bug
```python
# Showing relevant lines only.
hyperparameters = {
"dataset_name": "ydshieh/coco_dataset_script",
"dataset_config_name": 2017,
"data_dir": "/opt/ml/input/data/coco",
"cache_dir": "/tmp/huggingface-cache", # Fix dataset complains out-of-space.
...
}
estimator = PyTorch(
base_job_name="clip",
source_dir="../src/sm-entrypoint",
entry_point="run_clip.py", # Transformers/src/examples/pytorch/contrastive-image-text/run_clip.py
framework_version="1.12",
py_version="py38",
hyperparameters=hyperparameters,
instance_count=1,
instance_type="ml.p3.16xlarge",
volume_size=100,
distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)
fast_file = lambda x: TrainingInput(x, input_mode='FastFile')
estimator.fit(
{
"pre-trained": fast_file("s3://vm-sagemakerr-us-east-1/clip/pre-trained-checkpoint/"),
"coco": fast_file("s3://vm-sagemakerr-us-east-1/clip/coco-zip-files/"),
}
)
```
Error message:
```text
ErrorMessage "OSError: [Errno 30] Read-only file system: '/opt/ml/input/data/coco/image_info_test2017.zip.lock'
"""
The above exception was the direct cause of the following exception
Traceback (most recent call last)
File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.8/site-packages/mpi4py/__main__.py", line 7, in <module>
main()
File "/opt/conda/lib/python3.8/site-packages/mpi4py/run.py", line 198, in main
run_command_line(args)
File "/opt/conda/lib/python3.8/site-packages/mpi4py/run.py", line 47, in run_command_line
run_path(sys.argv[0], run_name='__main__')
File "/opt/conda/lib/python3.8/runpy.py", line 265, in run_path
return _run_module_code(code, init_globals, run_name,
File "/opt/conda/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "run_clip_smddp.py", line 594, in <module>
File "run_clip_smddp.py", line 327, in main
dataset = load_dataset(
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1741, in load_dataset
builder_instance.download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare
self._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare
super()._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 891, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/ydshieh--coco_dataset_script/e033205c0266a54c10be132f9264f2a39dcf893e798f6756d224b1ff5078998f/coco_dataset_script.py", line 123, in _split_generators
archive_path = dl_manager.download_and_extract(_DL_URLS)
File "/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py", line 447, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py", line 419, in extract
extracted_paths = map_nested(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 472, in map_nested
mapped = pool.map(_single_map_nested, split_kwds)
File "/opt/conda/lib/python3.8/multiprocessing/pool.py", line 364, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/opt/conda/lib/python3.8/multiprocessing/pool.py", line 771, in get
raise self._value
OSError: [Errno 30] Read-only file system: '/opt/ml/input/data/coco/image_info_test2017.zip.lock'"
```
### Expected behavior
`load_dataset()` to succeed, just like when the .zip file is passed in SageMaker File mode.
### Environment info
* datasets-2.7.1
* transformers-4.24.0
* python-3.8
* torch-1.12
* SageMaker PyTorch DLC | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5295/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5295/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5294/comments | https://api.github.com/repos/huggingface/datasets/issues/5294/events | https://github.com/huggingface/datasets/pull/5294 | 1,463,679,582 | PR_kwDODunzps5DqgLW | 5,294 | Support streaming datasets with pathlib.Path.with_suffix | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-24T18:04:38Z | 2022-11-29T07:09:08Z | 2022-11-29T07:06:32Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5294.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5294",
"merged_at": "2022-11-29T07:06:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5294.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5294"
} | This PR extends the support in streaming mode for datasets that use `pathlib.Path.with_suffix`.
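A standalone sketch of the intended behavior (the function name is hypothetical; the real support lives on datasets' patched streaming path class): only the innermost hop's extension is rewritten, so chained URLs stay intact:
```python
import posixpath


def xpath_with_suffix(path: str, suffix: str) -> str:
    main_hop, *rest_hops = path.split("::")
    root, _ = posixpath.splitext(main_hop)
    return "::".join([root + suffix] + rest_hops)


# e.g. pair a text file with its annotation file of the same name
xpath_with_suffix("zip://texts/a.txt::https://host/data.zip", ".ann")
# -> "zip://texts/a.ann::https://host/data.zip"
```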
Fix #5293. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5294/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5294/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5293/comments | https://api.github.com/repos/huggingface/datasets/issues/5293/events | https://github.com/huggingface/datasets/issues/5293 | 1,463,669,201 | I_kwDODunzps5XPdHR | 5,293 | Support streaming datasets with pathlib.Path.with_suffix | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-11-24T17:52:08Z | 2022-11-29T07:06:33Z | 2022-11-29T07:06:33Z | MEMBER | null | null | null | Extend support for streaming datasets that use `pathlib.Path.with_suffix`.
This feature will be useful e.g. for datasets containing text files and annotation files that share the same name but have a different extension. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5293/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5293/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5292/comments | https://api.github.com/repos/huggingface/datasets/issues/5292/events | https://github.com/huggingface/datasets/issues/5292 | 1,463,053,832 | I_kwDODunzps5XNG4I | 5,292 | Missing documentation build for versions 2.7.1 and 2.6.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"- Build docs for 2.6.2:\r\n - Commit: a6a5a1cf4cdf1e0be65168aed5a327f543001fe8\r\n - Build docs GH Action: https://github.com/huggingface/datasets/actions/runs/3539470622/jobs/5941404044\r\n- Build docs for 2.7.1:\r\n - Commit: 5ef1ab1cc06c2b7a574bf2df454cd9fcb071ccb2\r\n - Build docs GH Action: https://github.com/huggingface/datasets/actions/runs/3539574442/jobs/5941636792"
] | 2022-11-24T09:42:10Z | 2022-11-24T10:10:02Z | 2022-11-24T10:10:02Z | MEMBER | null | null | null | After the patch releases [2.7.1](https://github.com/huggingface/datasets/releases/tag/2.7.1) and [2.6.2](https://github.com/huggingface/datasets/releases/tag/2.6.2), the online docs were not properly built (the build_documentation workflow was not triggered).
There was a fix by:
- #5291
However, both docs were built from the main branch instead of their corresponding version branches.
We are rebuilding them. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5292/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5292/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5291/comments | https://api.github.com/repos/huggingface/datasets/issues/5291/events | https://github.com/huggingface/datasets/pull/5291 | 1,462,983,472 | PR_kwDODunzps5DoKNC | 5,291 | [build doc] for v2.7.1 & v2.6.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"doc versions are built https://huggingface.co/docs/datasets/index"
] | 2022-11-24T08:54:47Z | 2022-11-24T09:14:10Z | 2022-11-24T09:11:15Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5291.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5291",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5291.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5291"
} | Do NOT merge. Using this PR to build docs for [v2.7.1](https://github.com/huggingface/datasets/pull/5291/commits/f4914af20700f611b9331a9e3ba34743bbeff934) & [v2.6.2](https://github.com/huggingface/datasets/pull/5291/commits/025f85300a0874eeb90a20393c62f25ac0accaa0) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5291/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5291/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5290/comments | https://api.github.com/repos/huggingface/datasets/issues/5290/events | https://github.com/huggingface/datasets/pull/5290 | 1,462,716,766 | PR_kwDODunzps5DnQsS | 5,290 | fix error where reading breaks when batch missing an assigned column feature | {
"avatar_url": "https://avatars.githubusercontent.com/u/12104720?v=4",
"events_url": "https://api.github.com/users/eunseojo/events{/privacy}",
"followers_url": "https://api.github.com/users/eunseojo/followers",
"following_url": "https://api.github.com/users/eunseojo/following{/other_user}",
"gists_url": "https://api.github.com/users/eunseojo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eunseojo",
"id": 12104720,
"login": "eunseojo",
"node_id": "MDQ6VXNlcjEyMTA0NzIw",
"organizations_url": "https://api.github.com/users/eunseojo/orgs",
"received_events_url": "https://api.github.com/users/eunseojo/received_events",
"repos_url": "https://api.github.com/users/eunseojo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eunseojo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eunseojo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eunseojo"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5290). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-24T03:53:46Z | 2022-11-25T03:21:54Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5290.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5290",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5290.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5290"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5290/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5290/timeline | null | null | true |