Dataset schema (column, type, and observed range or number of classes):

| Column | Type | Range / Classes |
|---|---|---|
| url | string | length 58-61 |
| repository_url | string | 1 class |
| labels_url | string | length 72-75 |
| comments_url | string | length 67-70 |
| events_url | string | length 65-68 |
| html_url | string | length 46-51 |
| id | int64 | 599M-2.12B |
| node_id | string | length 18-32 |
| number | int64 | 1-6.65k |
| title | string | length 1-290 |
| user | dict | |
| labels | list | length 0-4 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | length 0-4 |
| milestone | dict | |
| comments | int64 | 0-70 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | string | 3 classes |
| active_lock_reason | float64 | |
| draft | float64 | 0-1 |
| pull_request | dict | |
| body | string | length 0-228k |
| reactions | dict | |
| timeline_url | string | length 67-70 |
| performed_via_github_app | float64 | |
| state_reason | string | 3 classes |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/3872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3872/comments
https://api.github.com/repos/huggingface/datasets/issues/3872/events
https://github.com/huggingface/datasets/issues/3872
1,163,853,026
I_kwDODunzps5FXvzi
3,872
HTTP error 504 Server Error: Gateway Time-out
{ "avatar_url": "https://avatars.githubusercontent.com/u/83509215?v=4", "events_url": "https://api.github.com/users/illiyas-sha/events{/privacy}", "followers_url": "https://api.github.com/users/illiyas-sha/followers", "following_url": "https://api.github.com/users/illiyas-sha/following{/other_user}", "gists_url": "https://api.github.com/users/illiyas-sha/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/illiyas-sha", "id": 83509215, "login": "illiyas-sha", "node_id": "MDQ6VXNlcjgzNTA5MjE1", "organizations_url": "https://api.github.com/users/illiyas-sha/orgs", "received_events_url": "https://api.github.com/users/illiyas-sha/received_events", "repos_url": "https://api.github.com/users/illiyas-sha/repos", "site_admin": false, "starred_url": "https://api.github.com/users/illiyas-sha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/illiyas-sha/subscriptions", "type": "User", "url": "https://api.github.com/users/illiyas-sha" }
[]
closed
false
null
[]
null
6
"2022-03-09T12:03:37Z"
"2022-03-15T16:19:50Z"
"2022-03-15T16:19:50Z"
NONE
null
null
null
I am trying to push a large dataset (450,000+ records) with `push_to_hub()`. While pushing, it fails with the following error:

```
Traceback (most recent call last):
  File "data_split_speech.py", line 159, in <module>
    data_new_2.push_to_hub("user-name/dataset-name",private=True)
  File "/opt/conda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 951, in push_to_hub
    repo_id, split, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
  File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3556, in _push_parquet_shards_to_hub
    api.upload_file(
  File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1017, in upload_file
    raise err
  File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1008, in upload_file
    r.raise_for_status()
  File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/user-name/dataset-name/upload/main/data/train2-00041-of-00064.parquet
```

Can anyone help me resolve this issue?
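The traceback points at a single large Parquet shard timing out during upload. A minimal sketch of one common workaround, assuming a `datasets` release whose `push_to_hub()` accepts a shard-size argument (older releases call it `shard_size`, newer ones `max_shard_size`) and using placeholder paths and repo ids:

```python
from datasets import load_from_disk

# Placeholder local path; the repo id below is the one from the traceback.
ds = load_from_disk("path/to/local/dataset")

# Smaller shards mean smaller individual HTTP uploads, so each request is
# more likely to finish before the gateway timeout. The size is a tuning
# knob, not a guaranteed fix.
ds.push_to_hub(
    "user-name/dataset-name",
    private=True,
    max_shard_size="200MB",  # may be `shard_size` on older datasets releases
)
```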
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3872/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3872/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3871/comments
https://api.github.com/repos/huggingface/datasets/issues/3871/events
https://github.com/huggingface/datasets/pull/3871
1,163,714,113
PR_kwDODunzps40KRcM
3,871
add pandas to env command
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
2
"2022-03-09T09:48:51Z"
"2022-03-09T11:21:38Z"
"2022-03-09T11:21:37Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3871.diff", "html_url": "https://github.com/huggingface/datasets/pull/3871", "merged_at": "2022-03-09T11:21:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/3871.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3871" }
Pandas is a required package and is used quite a bit. I don't see any downside to adding its version to the `datasets-cli env` command.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3871/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3871/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3870/comments
https://api.github.com/repos/huggingface/datasets/issues/3870/events
https://github.com/huggingface/datasets/pull/3870
1,163,633,239
PR_kwDODunzps40KAYy
3,870
Add wikitablequestions dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10275209?v=4", "events_url": "https://api.github.com/users/SivilTaram/events{/privacy}", "followers_url": "https://api.github.com/users/SivilTaram/followers", "following_url": "https://api.github.com/users/SivilTaram/following{/other_user}", "gists_url": "https://api.github.com/users/SivilTaram/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SivilTaram", "id": 10275209, "login": "SivilTaram", "node_id": "MDQ6VXNlcjEwMjc1MjA5", "organizations_url": "https://api.github.com/users/SivilTaram/orgs", "received_events_url": "https://api.github.com/users/SivilTaram/received_events", "repos_url": "https://api.github.com/users/SivilTaram/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SivilTaram/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SivilTaram/subscriptions", "type": "User", "url": "https://api.github.com/users/SivilTaram" }
[]
closed
false
null
[]
null
4
"2022-03-09T08:27:43Z"
"2022-03-14T11:19:24Z"
"2022-03-14T11:16:19Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3870.diff", "html_url": "https://github.com/huggingface/datasets/pull/3870", "merged_at": "2022-03-14T11:16:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/3870.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3870" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3870/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3870/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3869/comments
https://api.github.com/repos/huggingface/datasets/issues/3869/events
https://github.com/huggingface/datasets/issues/3869
1,163,434,800
I_kwDODunzps5FWJsw
3,869
Making the Hub the place for datasets in Portuguese
{ "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/omarespejel", "id": 4755430, "login": "omarespejel", "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "repos_url": "https://api.github.com/users/omarespejel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "type": "User", "url": "https://api.github.com/users/omarespejel" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
1
"2022-03-09T03:06:18Z"
"2022-03-09T09:04:09Z"
null
NONE
null
null
null
Let's make Hugging Face Datasets the central hub for datasets in Portuguese :)

**Motivation.** Datasets are currently quite scattered, and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the Portuguese-speaking community.

What are some datasets in Portuguese worth integrating into the Hugging Face Hub? Special thanks to @augusnunes for his collaboration on identifying the first ones:

- [NILC - USP](http://www.nilc.icmc.usp.br/nilc/index.php/tools-and-resources)

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).

cc @osanseviero
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3869/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3869/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3868/comments
https://api.github.com/repos/huggingface/datasets/issues/3868/events
https://github.com/huggingface/datasets/pull/3868
1,162,914,114
PR_kwDODunzps40HnWA
3,868
Ignore duplicate keys if `ignore_verifications=True`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
2
"2022-03-08T17:14:56Z"
"2022-03-09T13:50:45Z"
"2022-03-09T13:50:44Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3868.diff", "html_url": "https://github.com/huggingface/datasets/pull/3868", "merged_at": "2022-03-09T13:50:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/3868.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3868" }
Currently, it's impossible to generate a dataset if some keys from `_generate_examples` are duplicated. This PR allows skipping the check for duplicate keys if `ignore_verifications` is set to `True`.
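For context, a minimal sketch of how a user would opt out of the duplicate-key check once this lands, assuming the standard `load_dataset` entry point; the script name is a placeholder:

```python
from datasets import load_dataset

# "my_dataset_script.py" stands in for a loading script whose
# _generate_examples may yield duplicate keys.
ds = load_dataset(
    "my_dataset_script.py",
    ignore_verifications=True,  # per this PR, also skips the duplicate-key check
)
```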
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3868/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3868/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3867/comments
https://api.github.com/repos/huggingface/datasets/issues/3867/events
https://github.com/huggingface/datasets/pull/3867
1,162,896,605
PR_kwDODunzps40Hjrk
3,867
Update for the rename doc-builder -> hf-doc-utils
{ "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgugger", "id": 35901082, "login": "sgugger", "node_id": "MDQ6VXNlcjM1OTAxMDgy", "organizations_url": "https://api.github.com/users/sgugger/orgs", "received_events_url": "https://api.github.com/users/sgugger/received_events", "repos_url": "https://api.github.com/users/sgugger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "type": "User", "url": "https://api.github.com/users/sgugger" }
[]
closed
false
null
[]
null
4
"2022-03-08T16:58:25Z"
"2023-09-24T09:54:44Z"
"2022-03-08T17:30:45Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3867.diff", "html_url": "https://github.com/huggingface/datasets/pull/3867", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3867.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3867" }
This PR adapts the job to the upcoming change of name of `doc-builder`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3867/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3867/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3866/comments
https://api.github.com/repos/huggingface/datasets/issues/3866/events
https://github.com/huggingface/datasets/pull/3866
1,162,833,848
PR_kwDODunzps40HWcu
3,866
Bring back imgs so that forks don't get broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
closed
false
null
[]
null
3
"2022-03-08T16:01:31Z"
"2022-03-08T17:37:02Z"
"2022-03-08T17:37:01Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3866.diff", "html_url": "https://github.com/huggingface/datasets/pull/3866", "merged_at": "2022-03-08T17:37:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/3866.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3866" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3866/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3866/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3865/comments
https://api.github.com/repos/huggingface/datasets/issues/3865/events
https://github.com/huggingface/datasets/pull/3865
1,162,821,908
PR_kwDODunzps40HT9K
3,865
Add logo img
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
closed
false
null
[]
null
2
"2022-03-08T15:50:59Z"
"2023-09-24T09:54:31Z"
"2022-03-08T16:01:59Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3865.diff", "html_url": "https://github.com/huggingface/datasets/pull/3865", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3865.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3865" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3865/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3865/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3864/comments
https://api.github.com/repos/huggingface/datasets/issues/3864/events
https://github.com/huggingface/datasets/pull/3864
1,162,804,942
PR_kwDODunzps40HQZ_
3,864
Update image dataset tags
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
1
"2022-03-08T15:36:32Z"
"2022-03-08T17:04:47Z"
"2022-03-08T17:04:46Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3864.diff", "html_url": "https://github.com/huggingface/datasets/pull/3864", "merged_at": "2022-03-08T17:04:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/3864.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3864" }
Align the existing image datasets' tags with new tags introduced in #3800.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3864/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3864/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3863
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3863/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3863/comments
https://api.github.com/repos/huggingface/datasets/issues/3863/events
https://github.com/huggingface/datasets/pull/3863
1,162,802,857
PR_kwDODunzps40HP-A
3,863
Update code blocks
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
1
"2022-03-08T15:34:43Z"
"2022-03-09T16:45:30Z"
"2022-03-09T16:45:29Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3863.diff", "html_url": "https://github.com/huggingface/datasets/pull/3863", "merged_at": "2022-03-09T16:45:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/3863.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3863" }
Following https://github.com/huggingface/datasets/pull/3860#issuecomment-1061756712 and https://github.com/huggingface/datasets/pull/3690, we need to update the code blocks to use Markdown instead of Sphinx.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3863/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3863/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3862
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3862/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3862/comments
https://api.github.com/repos/huggingface/datasets/issues/3862/events
https://github.com/huggingface/datasets/pull/3862
1,162,753,733
PR_kwDODunzps40HFht
3,862
Manipulate columns on IterableDataset (rename columns, cast, etc.)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
"2022-03-08T14:53:57Z"
"2022-03-10T16:40:22Z"
"2022-03-10T16:40:21Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3862.diff", "html_url": "https://github.com/huggingface/datasets/pull/3862", "merged_at": "2022-03-10T16:40:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/3862.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3862" }
I added:
- add_column
- cast
- rename_column
- rename_columns

related to https://github.com/huggingface/datasets/issues/3444

TODO:
- [x] docs
- [x] tests
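A minimal usage sketch of the new `IterableDataset` column methods listed above, assuming a streaming load of an arbitrary Hub dataset ("imdb" and its "text"/"label" columns are used only as a familiar example):

```python
from datasets import load_dataset

# Streaming mode returns an IterableDataset rather than a materialized Dataset.
ds = load_dataset("imdb", split="train", streaming=True)

ds = ds.rename_column("label", "labels")    # rename a single column
ds = ds.rename_columns({"text": "review"})  # rename several at once via a mapping

sample = next(iter(ds))
print(sample.keys())  # now contains 'review' and 'labels'
```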
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3862/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3862/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3861/comments
https://api.github.com/repos/huggingface/datasets/issues/3861/events
https://github.com/huggingface/datasets/issues/3861
1,162,702,044
I_kwDODunzps5FTWzc
3,861
big_patent cased version
{ "avatar_url": "https://avatars.githubusercontent.com/u/25265140?v=4", "events_url": "https://api.github.com/users/slvcsl/events{/privacy}", "followers_url": "https://api.github.com/users/slvcsl/followers", "following_url": "https://api.github.com/users/slvcsl/following{/other_user}", "gists_url": "https://api.github.com/users/slvcsl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/slvcsl", "id": 25265140, "login": "slvcsl", "node_id": "MDQ6VXNlcjI1MjY1MTQw", "organizations_url": "https://api.github.com/users/slvcsl/orgs", "received_events_url": "https://api.github.com/users/slvcsl/received_events", "repos_url": "https://api.github.com/users/slvcsl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/slvcsl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slvcsl/subscriptions", "type": "User", "url": "https://api.github.com/users/slvcsl" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
2
"2022-03-08T14:08:55Z"
"2023-04-21T14:32:03Z"
"2023-04-21T14:32:03Z"
NONE
null
null
null
Hi! I am interested in working with the big_patent dataset. In TensorFlow, there are a number of versions of the dataset:
- 1.0.0: lower-cased tokenized words
- 2.0.0: update to use cased raw strings
- 2.1.2 (default): fix update to cased raw strings

The version in the Hugging Face `datasets` library is 1.0.0. I would be very interested in using the 2.1.2 cased version (used more recently, for example in the Pegasus paper), but it does not seem to be supported (I tried using the `revision` parameter in `load_dataset`). Is there a way to load it already, or would it be possible to add that version?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3861/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3861/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3860/comments
https://api.github.com/repos/huggingface/datasets/issues/3860/events
https://github.com/huggingface/datasets/pull/3860
1,162,623,329
PR_kwDODunzps40GpzZ
3,860
Small doc fixes
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
closed
false
null
[]
null
2
"2022-03-08T12:55:39Z"
"2022-03-08T17:37:13Z"
"2022-03-08T17:37:13Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3860.diff", "html_url": "https://github.com/huggingface/datasets/pull/3860", "merged_at": "2022-03-08T17:37:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/3860.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3860" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3860/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3860/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3859
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3859/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3859/comments
https://api.github.com/repos/huggingface/datasets/issues/3859/events
https://github.com/huggingface/datasets/issues/3859
1,162,559,333
I_kwDODunzps5FSz9l
3,859
Unable to dowload big_patent (FileNotFoundError)
{ "avatar_url": "https://avatars.githubusercontent.com/u/25265140?v=4", "events_url": "https://api.github.com/users/slvcsl/events{/privacy}", "followers_url": "https://api.github.com/users/slvcsl/followers", "following_url": "https://api.github.com/users/slvcsl/following{/other_user}", "gists_url": "https://api.github.com/users/slvcsl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/slvcsl", "id": 25265140, "login": "slvcsl", "node_id": "MDQ6VXNlcjI1MjY1MTQw", "organizations_url": "https://api.github.com/users/slvcsl/orgs", "received_events_url": "https://api.github.com/users/slvcsl/received_events", "repos_url": "https://api.github.com/users/slvcsl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/slvcsl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slvcsl/subscriptions", "type": "User", "url": "https://api.github.com/users/slvcsl" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
1
"2022-03-08T11:47:12Z"
"2022-03-08T13:04:09Z"
"2022-03-08T13:04:04Z"
NONE
null
null
null
## Describe the bug

I am trying to download some splits of the big_patent dataset, using the following code:

`ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")`

However, this leads to a FileNotFoundError.

```
FileNotFoundError                         Traceback (most recent call last)
[<ipython-input-3-8d8a745706a9>](https://localhost:8080/#) in <module>()
      1 from datasets import load_dataset
----> 2 ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")

8 frames

[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
   1705         ignore_verifications=ignore_verifications,
   1706         try_from_hf_gcs=try_from_hf_gcs,
-> 1707         use_auth_token=use_auth_token,
   1708     )
   1709

[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
    593             if not downloaded_from_gcs:
    594                 self._download_and_prepare(
--> 595                     dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    596                 )
    597             # Sync info

[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    659         split_dict = SplitDict(dataset_name=self.name)
    660         split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 661         split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
    662
    663         # Checksums verification

[/root/.cache/huggingface/modules/datasets_modules/datasets/big_patent/bdefa7c0b39fba8bba1c6331b70b738e30d63c8ad4567f983ce315a5fef6131c/big_patent.py](https://localhost:8080/#) in _split_generators(self, dl_manager)
    123         split_types = ["train", "val", "test"]
    124         extract_paths = dl_manager.extract(
--> 125             {k: os.path.join(dl_path, "bigPatentData", k + ".tar.gz") for k in split_types}
    126         )
    127         extract_paths = {k: os.path.join(extract_paths[k], k) for k in split_types}

[/usr/local/lib/python3.7/dist-packages/datasets/utils/download_manager.py](https://localhost:8080/#) in extract(self, path_or_paths, num_proc)
    282         download_config.extract_compressed_file = True
    283         extracted_paths = map_nested(
--> 284             partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False
    285         )
    286         path_or_paths = NestedDataStructure(path_or_paths)

[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm)
    260         mapped = [
    261             _single_map_nested((function, obj, types, None, True))
--> 262             for obj in utils.tqdm(iterable, disable=disable_tqdm)
    263         ]
    264     else:

[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <listcomp>(.0)
    260         mapped = [
    261             _single_map_nested((function, obj, types, None, True))
--> 262             for obj in utils.tqdm(iterable, disable=disable_tqdm)
    263         ]
    264     else:

[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in _single_map_nested(args)
    194     # Singleton first to spare some computation
    195     if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 196         return function(data_struct)
    197
    198     # Reduce logging to keep things readable in multiprocessing with tqdm

[/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py](https://localhost:8080/#) in cached_path(url_or_filename, download_config, **download_kwargs)
    314     elif is_local_path(url_or_filename):
    315         # File, but it doesn't exist.
--> 316         raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist")
    317     else:
    318         # Something unknown

FileNotFoundError: Local file /root/.cache/huggingface/datasets/downloads/extracted/ad068abb3e11f9f2f5440b62e37eb2b03ee515df9de1637c55cd1793b68668b2/bigPatentData/train.tar.gz doesn't exist
```

I have tried this in a number of machines, including on Colab, so I think this is not environment dependent. How do I load the bigPatent dataset?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3859/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3859/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3858
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3858/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3858/comments
https://api.github.com/repos/huggingface/datasets/issues/3858/events
https://github.com/huggingface/datasets/pull/3858
1,162,526,688
PR_kwDODunzps40GVSq
3,858
Update index.mdx margins
{ "avatar_url": "https://avatars.githubusercontent.com/u/3841370?v=4", "events_url": "https://api.github.com/users/gary149/events{/privacy}", "followers_url": "https://api.github.com/users/gary149/followers", "following_url": "https://api.github.com/users/gary149/following{/other_user}", "gists_url": "https://api.github.com/users/gary149/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gary149", "id": 3841370, "login": "gary149", "node_id": "MDQ6VXNlcjM4NDEzNzA=", "organizations_url": "https://api.github.com/users/gary149/orgs", "received_events_url": "https://api.github.com/users/gary149/received_events", "repos_url": "https://api.github.com/users/gary149/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gary149/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gary149/subscriptions", "type": "User", "url": "https://api.github.com/users/gary149" }
[]
closed
false
null
[]
null
1
"2022-03-08T11:11:52Z"
"2022-03-08T12:57:57Z"
"2022-03-08T12:57:56Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3858.diff", "html_url": "https://github.com/huggingface/datasets/pull/3858", "merged_at": "2022-03-08T12:57:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/3858.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3858" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3858/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3858/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3857
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3857/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3857/comments
https://api.github.com/repos/huggingface/datasets/issues/3857/events
https://github.com/huggingface/datasets/issues/3857
1,162,525,353
I_kwDODunzps5FSrqp
3,857
Order of dataset changes due to glob.glob.
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[ { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
open
false
null
[]
null
1
"2022-03-08T11:10:30Z"
"2022-03-14T11:08:22Z"
null
MEMBER
null
null
null
## Describe the bug

After discussion with @lhoestq, I just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the OS. There are currently multiple datasets that use `glob.glob()` without making use of `sorted(...)`, even the streaming download manager (if I'm not mistaken): https://github.com/huggingface/datasets/blob/c14bfeb4af89da14f870de5ddaa584b08aa08eeb/src/datasets/utils/streaming_download_manager.py#L483
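A small illustration of the point above; the directory path is a placeholder, and the only claim is that `glob.glob` makes no ordering guarantee while `sorted` pins the order down:

```python
import glob

# glob.glob returns matches in an arbitrary, OS/filesystem-dependent order...
files = glob.glob("data/shards/*.jsonl")

# ...so wrapping it in sorted() makes the resulting file (and dataset) order
# deterministic across machines.
files = sorted(glob.glob("data/shards/*.jsonl"))
```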
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3857/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3857/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3856
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3856/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3856/comments
https://api.github.com/repos/huggingface/datasets/issues/3856/events
https://github.com/huggingface/datasets/pull/3856
1,162,522,034
PR_kwDODunzps40GUSf
3,856
Fix push_to_hub with null images
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
1
"2022-03-08T11:07:09Z"
"2022-03-08T15:22:17Z"
"2022-03-08T15:22:16Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3856.diff", "html_url": "https://github.com/huggingface/datasets/pull/3856", "merged_at": "2022-03-08T15:22:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/3856.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3856" }
This code currently raises an error because of the null image:

```python
import datasets

dataset_dict = {
    'name': ['image001.jpg', 'image002.jpg'],
    'image': ['cat.jpg', None]
}
features = datasets.Features({
    'name': datasets.Value('string'),
    'image': datasets.Image(),
})
dataset = datasets.Dataset.from_dict(dataset_dict, features)
dataset.push_to_hub("username/dataset")  # this line produces an error: 'NoneType' object is not subscriptable
```

I fixed this in this PR.

TODO:
- [x] add a test
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3856/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3856/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3855
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3855/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3855/comments
https://api.github.com/repos/huggingface/datasets/issues/3855/events
https://github.com/huggingface/datasets/issues/3855
1,162,448,589
I_kwDODunzps5FSY7N
3,855
Bad error message when loading private dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }, { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
2
"2022-03-08T09:55:17Z"
"2022-07-11T15:06:40Z"
"2022-07-11T15:06:40Z"
MEMBER
null
null
null
## Describe the bug

A pretty common interaction between the Hub and `datasets` is the following: an organization adds a dataset in private mode and wants to load it afterward.

```python
from datasets import load_dataset

ds = load_dataset("NewT5/dummy_data", "dummy")
```

This command then fails with:

```bash
FileNotFoundError: Couldn't find a dataset script at /home/patrick/NewT5/dummy_data/dummy_data.py or any data file in the same directory. Couldn't find 'NewT5/dummy_data' on the Hugging Face Hub either: FileNotFoundError: Dataset 'NewT5/dummy_data' doesn't exist on the Hub
```

**even though** the user has access to the website `NewT5/dummy_data`, since she/he is part of the org.

We need to improve the error message here, similar to how @sgugger, @LysandreJik and @julien-c have done it for transformers IMO.

## Steps to reproduce the bug

E.g. execute the following code to see the different error messages between `transformers` and `datasets`.

1. Transformers

```python
from transformers import BertModel

BertModel.from_pretrained("NewT5/dummy_model")
```

The error message is clearer here - it gives:

```
OSError: patrickvonplaten/gpt2-xl is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```

Let's maybe do the same for datasets? The PR was introduced to `transformers` here: https://github.com/huggingface/transformers/pull/15261

## Expected results

Better error message

## Actual results

Specify the actual results or traceback.

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4.dev0
- Platform: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.1
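For reference, a minimal sketch of the workaround the improved error message should point users to, assuming an authenticated environment; the repo id is the placeholder from the report above:

```python
from datasets import load_dataset

# Either log in first with `huggingface-cli login`, or pass a token explicitly.
# On datasets releases from this era the argument is `use_auth_token`;
# newer releases accept `token` instead.
ds = load_dataset("NewT5/dummy_data", "dummy", use_auth_token=True)
```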
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3855/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3855/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3854
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3854/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3854/comments
https://api.github.com/repos/huggingface/datasets/issues/3854/events
https://github.com/huggingface/datasets/issues/3854
1,162,434,199
I_kwDODunzps5FSVaX
3,854
load only England English dataset from common voice english dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/36677001?v=4", "events_url": "https://api.github.com/users/amanjaiswal777/events{/privacy}", "followers_url": "https://api.github.com/users/amanjaiswal777/followers", "following_url": "https://api.github.com/users/amanjaiswal777/following{/other_user}", "gists_url": "https://api.github.com/users/amanjaiswal777/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amanjaiswal777", "id": 36677001, "login": "amanjaiswal777", "node_id": "MDQ6VXNlcjM2Njc3MDAx", "organizations_url": "https://api.github.com/users/amanjaiswal777/orgs", "received_events_url": "https://api.github.com/users/amanjaiswal777/received_events", "repos_url": "https://api.github.com/users/amanjaiswal777/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amanjaiswal777/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amanjaiswal777/subscriptions", "type": "User", "url": "https://api.github.com/users/amanjaiswal777" }
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
1
"2022-03-08T09:40:52Z"
"2022-03-09T08:13:33Z"
"2022-03-09T08:13:33Z"
NONE
null
null
null
training_data = load_dataset("common_voice", "en",split='train[:250]+validation[:250]') testing_data = load_dataset("common_voice", "en", split="test[:200]") I'm trying to load only 8% of the English common voice data with accent == "England English." Can somebody assist me with this? **Typical Voice Accent Proportions:** - 24% United States English - 8% England English - 5% India and South Asia (India, Pakistan, Sri Lanka) - 3% Australian English - 3% Canadian English - 2% Scottish English - 1% Irish English - 1% Southern African (South Africa, Zimbabwe, Namibia) - 1% New Zealand English Can we replicate this for Age as well? **Age proportions of the common voice:-** - 24% 19 - 29 - 14% 30 - 39 - 10% 40 - 49 - 6% < 19 - 4% 50 - 59 - 4% 60 - 69 - 1% 70 – 79
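One possible way to approximate this with `Dataset.filter`, sketched below; the `accent` and `age` column names and the string values (`"england"`, `"twenties"`) are assumptions based on the Common Voice dataset card, not verified against the loaded features:

```python
from datasets import load_dataset

# Load a slice of English Common Voice, then keep only "England English" speakers.
train = load_dataset("common_voice", "en", split="train[:250]+validation[:250]")
england_train = train.filter(lambda ex: ex["accent"] == "england")

# The same idea works for age buckets, e.g. keeping only speakers in their twenties.
twenties_train = train.filter(lambda ex: ex["age"] == "twenties")

print(len(england_train), len(twenties_train))
```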
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3854/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3854/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3853
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3853/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3853/comments
https://api.github.com/repos/huggingface/datasets/issues/3853/events
https://github.com/huggingface/datasets/pull/3853
1,162,386,592
PR_kwDODunzps40F3uN
3,853
add ontonotes_conll dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[]
closed
false
null
[]
null
2
"2022-03-08T08:53:42Z"
"2022-03-15T10:48:02Z"
"2022-03-15T10:48:02Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3853.diff", "html_url": "https://github.com/huggingface/datasets/pull/3853", "merged_at": "2022-03-15T10:48:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/3853.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3853" }
# Introduction of the dataset OntoNotes v5.0 is the final version of the OntoNotes corpus: a large-scale, multi-genre, multilingual corpus manually annotated with syntactic, semantic and discourse information. This dataset is the version of OntoNotes v5.0 extended and used in the CoNLL-2012 shared task; it includes v4 train/dev and v9 test data for English/Chinese/Arabic, and the corrected v12 train/dev/test data (English only). This dataset is widely used for named entity recognition, coreference resolution, and semantic role labeling. In the dataset loading script, I adapt and reuse the code of [AllenNLP/Ontonotes](https://docs.allennlp.org/models/main/models/common/ontonotes/#ontonotes) to read the special CoNLL files without adding an extra package dependency. # Some workarounds I did 1. Task ids: I added the tasks `semantic-role-labeling`, `lemmatization`, and `word-sense-disambiguation`, which I couldn't find anywhere, to the task category `structure-prediction`, because they are related to "syntax". I feel there may be a better name for this task category, since some of these tasks aren't really about structure, but I have no good idea. 2. `dl_manager.extract`: Since unzipping the downloaded zip yields another zip, I have to call `dl_manager.extract` directly inside `_generate_examples`. But when testing dummy data, `dl_manager.extract` does nothing, so I added a conditional that manually extracts the data when testing dummy data. # Help I don't know how to fix the doc-building error.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3853/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3853/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3852
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3852/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3852/comments
https://api.github.com/repos/huggingface/datasets/issues/3852/events
https://github.com/huggingface/datasets/pull/3852
1,162,252,337
PR_kwDODunzps40Fb26
3,852
Redundant add dataset information and dead link.
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
[]
closed
false
null
[]
null
1
"2022-03-08T05:57:05Z"
"2022-03-08T16:54:36Z"
"2022-03-08T16:54:36Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3852.diff", "html_url": "https://github.com/huggingface/datasets/pull/3852", "merged_at": "2022-03-08T16:54:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/3852.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3852" }
> Alternatively, you can follow the steps to [add a dataset](https://huggingface.co/docs/datasets/add_dataset.html) and [share a dataset](https://huggingface.co/docs/datasets/share_dataset.html) in the documentation. The "add a dataset" link gives a 404 error, and the "share a dataset" link has changed. I feel this information is redundant/deprecated now, since we have a more detailed guide for "How to add a dataset?".
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3852/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3852/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3851
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3851/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3851/comments
https://api.github.com/repos/huggingface/datasets/issues/3851/events
https://github.com/huggingface/datasets/issues/3851
1,162,137,998
I_kwDODunzps5FRNGO
3,851
Load audio dataset error
{ "avatar_url": "https://avatars.githubusercontent.com/u/31890987?v=4", "events_url": "https://api.github.com/users/lemoner20/events{/privacy}", "followers_url": "https://api.github.com/users/lemoner20/followers", "following_url": "https://api.github.com/users/lemoner20/following{/other_user}", "gists_url": "https://api.github.com/users/lemoner20/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lemoner20", "id": 31890987, "login": "lemoner20", "node_id": "MDQ6VXNlcjMxODkwOTg3", "organizations_url": "https://api.github.com/users/lemoner20/orgs", "received_events_url": "https://api.github.com/users/lemoner20/received_events", "repos_url": "https://api.github.com/users/lemoner20/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lemoner20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lemoner20/subscriptions", "type": "User", "url": "https://api.github.com/users/lemoner20" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
8
"2022-03-08T02:16:04Z"
"2022-09-27T12:13:55Z"
"2022-03-08T11:20:06Z"
NONE
null
null
null
## Load audio dataset error Hi, when I load an audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb, ``` from datasets import load_dataset, load_metric, Audio raw_datasets = load_dataset("superb", "ks", split="train") print(raw_datasets[0]["audio"]) ``` the following error occurs ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-169-3f8253239fa0> in <module> ----> 1 raw_datasets[0]["audio"] /usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key) 1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" 1925 return self._getitem( -> 1926 key, 1927 ) 1928 /usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs) 1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) 1910 formatted_output = format_table( -> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 1912 ) 1913 return formatted_output /usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns) 530 python_formatter = PythonFormatter(features=None) 531 if format_columns is None: --> 532 return formatter(pa_table, query_type=query_type) 533 elif query_type == "column": 534 if key in format_columns: /usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type) 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 280 if query_type == "row": --> 281 return self.format_row(pa_table) 282 elif query_type == "column": 283 return self.format_column(pa_table) /usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table) 310 row = self.python_arrow_extractor().extract_row(pa_table) 311 if self.decoded: --> 312 row = self.python_features_decoder.decode_row(row) 313 return row 314 /usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row) 219 220 def decode_row(self, row: dict) -> dict: --> 221 return self.features.decode_example(row) if self.features else row 222 223 def decode_column(self, column: list, column_name: str) -> list: /usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example) 1320 else value 1321 for column_name, (feature, value) in utils.zip_dict( -> 1322 {key: value for key, value in self.items() if key in example}, example 1323 ) 1324 } /usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0) 1319 if self._column_requires_decoding[column_name] 1320 else value -> 1321 for column_name, (feature, value) in utils.zip_dict( 1322 {key: value for key, value in self.items() if key in example}, example 1323 ) /usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj) 1053 # Object with special decoding: 1054 elif isinstance(schema, (Audio, Image)): -> 1055 return schema.decode_example(obj) if obj is not None else None 1056 return obj 1057 /usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value) 100 array, sampling_rate = self._decode_non_mp3_file_like(file) 101 else: --> 102 array, sampling_rate = self._decode_non_mp3_path_like(path) 103 return {"path": path, "array": array, "sampling_rate": sampling_rate} 104 /usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path) 143 144 with xopen(path, "rb") as f: --> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono) 146 return array, sampling_rate 147 /usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type) 110 111 y = [] --> 112 with audioread.audio_open(os.path.realpath(path)) as input_file: 113 sr_native = input_file.samplerate 114 n_channels = input_file.channels /usr/lib/python3.6/posixpath.py in realpath(filename) 392 """Return the canonical path of the specified filename, eliminating any 393 symbolic links encountered in the path.""" --> 394 filename = os.fspath(filename) 395 path, ok = _joinrealpath(filename[:0], filename, {}) 396 return abspath(path) TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader ``` ## Expected results ``` >>> raw_datasets[0]["audio"] {'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347, 0.01623535, 0.01724243]), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav', 'sampling_rate': 16000} ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3851/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3851/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3850
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3850/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3850/comments
https://api.github.com/repos/huggingface/datasets/issues/3850/events
https://github.com/huggingface/datasets/pull/3850
1,162,126,030
PR_kwDODunzps40FBx9
3,850
[feat] Add tqdm arguments
{ "avatar_url": "https://avatars.githubusercontent.com/u/28087825?v=4", "events_url": "https://api.github.com/users/penguinwang96825/events{/privacy}", "followers_url": "https://api.github.com/users/penguinwang96825/followers", "following_url": "https://api.github.com/users/penguinwang96825/following{/other_user}", "gists_url": "https://api.github.com/users/penguinwang96825/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/penguinwang96825", "id": 28087825, "login": "penguinwang96825", "node_id": "MDQ6VXNlcjI4MDg3ODI1", "organizations_url": "https://api.github.com/users/penguinwang96825/orgs", "received_events_url": "https://api.github.com/users/penguinwang96825/received_events", "repos_url": "https://api.github.com/users/penguinwang96825/repos", "site_admin": false, "starred_url": "https://api.github.com/users/penguinwang96825/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/penguinwang96825/subscriptions", "type": "User", "url": "https://api.github.com/users/penguinwang96825" }
[]
closed
false
null
[]
null
0
"2022-03-08T01:53:25Z"
"2022-12-16T05:34:07Z"
"2022-12-16T05:34:07Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3850.diff", "html_url": "https://github.com/huggingface/datasets/pull/3850", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3850.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3850" }
In this PR, tqdm arguments can be passed to `map()` and similar methods, to make progress reporting more flexible.
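For context, `Dataset.map()` already exposes a `desc` argument for the progress bar; the sketch below shows that existing usage, while the extra tqdm keyword arguments this PR proposes are only illustrated with a hypothetical `tqdm_kwargs` name (an assumption, not the actual new API):

```python
from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

# Existing behavior: only the progress-bar description is configurable.
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])}, desc="Counting characters")

# Hypothetical usage after this PR (the argument name is an assumption):
# ds = ds.map(fn, tqdm_kwargs={"ncols": 80, "leave": False})
```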
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3850/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3850/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3849
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3849/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3849/comments
https://api.github.com/repos/huggingface/datasets/issues/3849/events
https://github.com/huggingface/datasets/pull/3849
1,162,091,075
PR_kwDODunzps40E6sW
3,849
Add "Adversarial GLUE" dataset to datasets library
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12" }
[]
closed
false
null
[]
null
5
"2022-03-08T00:47:11Z"
"2022-03-28T11:17:14Z"
"2022-03-28T11:12:04Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3849.diff", "html_url": "https://github.com/huggingface/datasets/pull/3849", "merged_at": "2022-03-28T11:12:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/3849.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3849" }
Adds the Adversarial GLUE dataset: https://adversarialglue.github.io/ ```python >>> import datasets >>> >>> datasets.load_dataset('adv_glue') Using the latest cached version of the module from /home/jxm3/.cache/huggingface/modules/datasets_modules/datasets/adv_glue/26709a83facad2830d72d4419dd179c0be092f4ad3303ad0ebe815d0cdba5cb4 (last modified on Mon Mar 7 19:19:48 2022) since it couldn't be found locally at adv_glue., or remotely on the Hugging Face Hub. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jxm3/random/datasets/src/datasets/load.py", line 1657, in load_dataset builder_instance = load_dataset_builder( File "/home/jxm3/random/datasets/src/datasets/load.py", line 1510, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "/home/jxm3/random/datasets/src/datasets/builder.py", line 1021, in __init__ super().__init__(*args, **kwargs) File "/home/jxm3/random/datasets/src/datasets/builder.py", line 258, in __init__ self.config, self.config_id = self._create_builder_config( File "/home/jxm3/random/datasets/src/datasets/builder.py", line 337, in _create_builder_config raise ValueError( ValueError: Config name is missing. Please pick one among the available configs: ['adv_sst2', 'adv_qqp', 'adv_mnli', 'adv_mnli_mismatched', 'adv_qnli', 'adv_rte'] Example of usage: `load_dataset('adv_glue', 'adv_sst2')` >>> datasets.load_dataset('adv_glue', 'adv_sst2')['validation'][0] Reusing dataset adv_glue (/home/jxm3/.cache/huggingface/datasets/adv_glue/adv_sst2/1.0.0/3719a903f606f2c96654d87b421bc01114c37084057cdccae65cd7bc24b10933) 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 604.11it/s] {'sentence': "it 's an uneven treat that bores fun at the democratic exercise while also examining its significance for those who take part .", 'label': 1, 'idx': 0} ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3849/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3849/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3848
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3848/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3848/comments
https://api.github.com/repos/huggingface/datasets/issues/3848/events
https://github.com/huggingface/datasets/issues/3848
1,162,076,902
I_kwDODunzps5FQ-Lm
3,848
NonMatchingChecksumError when checksum is None
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
7
"2022-03-08T00:24:12Z"
"2022-03-15T14:37:26Z"
"2022-03-15T12:28:23Z"
CONTRIBUTOR
null
null
null
I ran into the following error when adding a new dataset: ```bash expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}} recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': 'efb4cbd3aa4a87bfaffc310ae951981cc0a36c6c71c6425dd74e5b55f2f325c9', 'num_bytes': 40662}} verification_name = 'dataset source files' def verify_checksums(expected_checksums: Optional[dict], recorded_checksums: dict, verification_name=None): if expected_checksums is None: logger.info("Unable to verify checksums.") return if len(set(expected_checksums) - set(recorded_checksums)) > 0: raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums))) if len(set(recorded_checksums) - set(expected_checksums)) > 0: raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums))) bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]] for_verification_name = " for " + verification_name if verification_name is not None else "" if len(bad_urls) > 0: error_msg = "Checksums didn't match" + for_verification_name + ":\n" > raise NonMatchingChecksumError(error_msg + str(bad_urls)) E datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: E ['https://adversarialglue.github.io/dataset/dev.zip'] src/datasets/utils/info_utils.py:40: NonMatchingChecksumError ``` ## Expected results The dataset downloads correctly, and there is no error. ## Actual results The datasets library expects a checksum of None, receives a non-None checksum, and throws an error. This is clearly a bug.
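To regenerate the expected checksums for a new dataset script (so the recorded value is no longer `None`), contributors typically run `datasets-cli test datasets/<name> --save_infos --all_configs`. As a temporary workaround at load time, verification can also be skipped; a sketch, using the config name from the related PR:

```python
from datasets import load_dataset

# Skips the checksum/size verification that raises NonMatchingChecksumError.
ds = load_dataset("adv_glue", "adv_sst2", ignore_verifications=True)
print(ds)
```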
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3848/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3848/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3847
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3847/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3847/comments
https://api.github.com/repos/huggingface/datasets/issues/3847/events
https://github.com/huggingface/datasets/issues/3847
1,161,856,417
I_kwDODunzps5FQIWh
3,847
Datasets' cache not re-used
{ "avatar_url": "https://avatars.githubusercontent.com/u/15106980?v=4", "events_url": "https://api.github.com/users/gejinchen/events{/privacy}", "followers_url": "https://api.github.com/users/gejinchen/followers", "following_url": "https://api.github.com/users/gejinchen/following{/other_user}", "gists_url": "https://api.github.com/users/gejinchen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gejinchen", "id": 15106980, "login": "gejinchen", "node_id": "MDQ6VXNlcjE1MTA2OTgw", "organizations_url": "https://api.github.com/users/gejinchen/orgs", "received_events_url": "https://api.github.com/users/gejinchen/received_events", "repos_url": "https://api.github.com/users/gejinchen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gejinchen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gejinchen/subscriptions", "type": "User", "url": "https://api.github.com/users/gejinchen" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
26
"2022-03-07T19:55:15Z"
"2023-11-20T18:14:37Z"
null
NONE
null
null
null
## Describe the bug For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing cache is not fully reused in the first few runs, although its `.arrow` cache files are in the cache directory. ## Steps to reproduce the bug Here is a reproducer. The GPT2 tokenizer works perfectly with caching, but not the RoBERTa tokenizer in this example. ```python from datasets import load_dataset from transformers import AutoTokenizer raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1") # tokenizer = AutoTokenizer.from_pretrained("gpt2") tokenizer = AutoTokenizer.from_pretrained("roberta-base") text_column_name = "text" column_names = raw_datasets["train"].column_names def tokenize_function(examples): return tokenizer(examples[text_column_name], return_special_tokens_mask=True) tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, remove_columns=column_names, load_from_cache_file=True, desc="Running tokenizer on every text in dataset", ) ``` ## Expected results No tokenization would be required after the 1st run. Everything should be loaded from the cache. ## Actual results Tokenization for some subsets is repeated on the 2nd and 3rd runs. Starting from the 4th run, everything is loaded from the cache. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Ubuntu 18.04.6 LTS - Python version: 3.6.9 - PyArrow version: 6.0.1
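One way to check whether the issue is fingerprinting (rather than the cache files themselves) is to hash the mapped function with the same helper `datasets` uses internally; if the hash differs from one Python process to the next, `map()` cannot match the cached `.arrow` files and recomputes. A diagnostic sketch under that assumption:

```python
from datasets.fingerprint import Hasher
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def tokenize_function(examples):
    return tokenizer(examples["text"], return_special_tokens_mask=True)

# Print this in two separate runs: if the value changes between runs,
# the cache lookup key changes too, and the cached files are not reused.
print(Hasher.hash(tokenize_function))
```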
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3847/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3847/timeline
null
reopened
false
https://api.github.com/repos/huggingface/datasets/issues/3846
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3846/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3846/comments
https://api.github.com/repos/huggingface/datasets/issues/3846/events
https://github.com/huggingface/datasets/pull/3846
1,161,810,226
PR_kwDODunzps40D-uh
3,846
Update faiss device docstring
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
1
"2022-03-07T19:06:59Z"
"2022-03-07T19:21:23Z"
"2022-03-07T19:21:22Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3846.diff", "html_url": "https://github.com/huggingface/datasets/pull/3846", "merged_at": "2022-03-07T19:21:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/3846.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3846" }
Following https://github.com/huggingface/datasets/pull/3721 I updated the docstring of the `device` argument of the FAISS related methods of `Dataset`
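For reference, a minimal sketch of the FAISS-related methods whose `device` docstring this updates; the column name and the random index data are assumptions, and `faiss-cpu` (or `faiss-gpu`) must be installed:

```python
import numpy as np
from datasets import Dataset

# Toy dataset with an embeddings column (assumed name and dimensionality).
ds = Dataset.from_dict({"embeddings": np.random.rand(100, 8).tolist()})

# device=None -> CPU, a positive integer -> that GPU, a list of ints -> several GPUs.
ds.add_faiss_index(column="embeddings", device=None)

query = np.random.rand(8).astype(np.float32)
scores, examples = ds.get_nearest_examples("embeddings", query, k=5)
print(scores)
```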
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3846/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3846/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3845
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3845/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3845/comments
https://api.github.com/repos/huggingface/datasets/issues/3845/events
https://github.com/huggingface/datasets/pull/3845
1,161,739,483
PR_kwDODunzps40DvqX
3,845
add RMSE and MAE metrics.
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
[]
closed
false
null
[]
null
6
"2022-03-07T17:53:24Z"
"2022-03-09T16:50:03Z"
"2022-03-09T16:50:03Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3845.diff", "html_url": "https://github.com/huggingface/datasets/pull/3845", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3845.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3845" }
This PR adds RMSE (Root Mean Squared Error) and MAE (Mean Absolute Error) to the metrics API. Both implementations are based on scikit-learn. Feature request here: Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608) Please suggest any changes if required. Thank you.
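A sketch of how the added metrics could be used, assuming they are registered under the names `rmse` and `mae` (the metric names and compute signature are assumptions based on this PR, not confirmed against the merged code):

```python
from datasets import load_metric

mae = load_metric("mae")    # assumed metric name from this PR
rmse = load_metric("rmse")  # assumed metric name from this PR

predictions = [2.5, 0.0, 2.0, 8.0]
references = [3.0, -0.5, 2.0, 7.0]

print(mae.compute(predictions=predictions, references=references))
print(rmse.compute(predictions=predictions, references=references))
```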
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3845/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3845/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3844
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3844/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3844/comments
https://api.github.com/repos/huggingface/datasets/issues/3844/events
https://github.com/huggingface/datasets/pull/3844
1,161,686,754
PR_kwDODunzps40DkYL
3,844
Add rmse and mae metrics.
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
[]
closed
false
null
[]
null
2
"2022-03-07T17:06:38Z"
"2022-03-07T17:24:32Z"
"2022-03-07T17:15:06Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3844.diff", "html_url": "https://github.com/huggingface/datasets/pull/3844", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3844.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3844" }
This PR adds RMSE (Root Mean Squared Error) and MAE (Mean Absolute Error) to the metrics API. Both implementations are based on scikit-learn. Feature request here: Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608) Any suggestions and required changes would be helpful.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3844/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3844/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3843
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3843/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3843/comments
https://api.github.com/repos/huggingface/datasets/issues/3843/events
https://github.com/huggingface/datasets/pull/3843
1,161,397,812
PR_kwDODunzps40Cm0D
3,843
Fix Google Drive URL to avoid Virus scan warning in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
2
"2022-03-07T13:09:19Z"
"2022-03-15T12:30:25Z"
"2022-03-15T12:30:23Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3843.diff", "html_url": "https://github.com/huggingface/datasets/pull/3843", "merged_at": "2022-03-15T12:30:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/3843.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3843" }
The streaming version of https://github.com/huggingface/datasets/pull/3787. Fix #3835 CC: @albertvillanova
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3843/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3843/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3842
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3842/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3842/comments
https://api.github.com/repos/huggingface/datasets/issues/3842/events
https://github.com/huggingface/datasets/pull/3842
1,161,336,483
PR_kwDODunzps40CZvE
3,842
Align IterableDataset.shuffle with Dataset.shuffle
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
3
"2022-03-07T12:10:46Z"
"2022-03-07T19:03:43Z"
"2022-03-07T19:03:42Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3842.diff", "html_url": "https://github.com/huggingface/datasets/pull/3842", "merged_at": "2022-03-07T19:03:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/3842.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3842" }
From #3444, Dataset.shuffle can have the same API as IterableDataset.shuffle (i.e. in streaming mode). Currently you can pass an optional seed to both if you want, but IterableDataset.shuffle always requires a buffer_size, used for approximate shuffling. I propose using a reasonable default value (maybe 1000) instead. In this PR, I set the default `buffer_size` value to 1,000, and I reorder the `IterableDataset.shuffle` arguments to match `Dataset.shuffle`, i.e. making `seed` the first argument.
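After this change, shuffling a streaming dataset could look like the sketch below, with `seed` first and `buffer_size` optional (falling back to the default of 1,000 when omitted); the dataset name is only an example:

```python
from datasets import load_dataset

ds = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)

# seed comes first, mirroring Dataset.shuffle; buffer_size now has a default.
shuffled = ds.shuffle(seed=42, buffer_size=10_000)

for example in shuffled.take(3):
    print(example["text"][:80])
```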
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3842/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3842/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3841/comments
https://api.github.com/repos/huggingface/datasets/issues/3841/events
https://github.com/huggingface/datasets/issues/3841
1,161,203,842
I_kwDODunzps5FNpCC
3,841
Pyright reportPrivateImportUsage when `from datasets import load_dataset`
{ "avatar_url": "https://avatars.githubusercontent.com/u/12573521?v=4", "events_url": "https://api.github.com/users/lkhphuc/events{/privacy}", "followers_url": "https://api.github.com/users/lkhphuc/followers", "following_url": "https://api.github.com/users/lkhphuc/following{/other_user}", "gists_url": "https://api.github.com/users/lkhphuc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lkhphuc", "id": 12573521, "login": "lkhphuc", "node_id": "MDQ6VXNlcjEyNTczNTIx", "organizations_url": "https://api.github.com/users/lkhphuc/orgs", "received_events_url": "https://api.github.com/users/lkhphuc/received_events", "repos_url": "https://api.github.com/users/lkhphuc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lkhphuc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lkhphuc/subscriptions", "type": "User", "url": "https://api.github.com/users/lkhphuc" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
6
"2022-03-07T10:24:04Z"
"2023-02-18T19:14:03Z"
"2023-02-13T13:48:41Z"
CONTRIBUTOR
null
null
null
## Describe the bug Pyright complains that the imported name is not exported from the module. ## Steps to reproduce the bug Use an editor/IDE with the Pyright language server in its default configuration: ```python from datasets import load_dataset ``` ## Expected results No complaint from Pyright ## Actual results Pyright complains as below: ``` `load_dataset` is not exported from module "datasets" Import from "datasets.load" instead [reportPrivateImportUsage] ``` Importing from `datasets.load` does indeed solve the problem, but I believe importing directly from the top-level `datasets` is the intended usage per the documentation. ## Environment info - `datasets` version: 1.18.3 - Platform: macOS-12.2.1-arm64-arm-64bit - Python version: 3.9.10 - PyArrow version: 7.0.0
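One possible library-side fix is to mark the names as explicitly re-exported in `datasets/__init__.py`, which is what Pyright looks for; on the user side, the rule can also be silenced with `"reportPrivateImportUsage": false` in `pyrightconfig.json`. A sketch of the re-export approach, not necessarily the fix that was shipped:

```python
# datasets/__init__.py (sketch)
# A redundant "X as X" alias, or listing the name in __all__, marks it as an
# explicit re-export for type checkers such as Pyright.
from .load import load_dataset as load_dataset
from .arrow_dataset import Dataset as Dataset

__all__ = ["load_dataset", "Dataset"]
```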
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/3841/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3841/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3840
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3840/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3840/comments
https://api.github.com/repos/huggingface/datasets/issues/3840/events
https://github.com/huggingface/datasets/pull/3840
1,161,183,773
PR_kwDODunzps40B8eu
3,840
Pin responses to fix CI for Windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-03-07T10:06:53Z"
"2022-03-07T10:12:36Z"
"2022-03-07T10:07:24Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3840.diff", "html_url": "https://github.com/huggingface/datasets/pull/3840", "merged_at": "2022-03-07T10:07:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/3840.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3840" }
Temporarily fix CI for Windows by pinning `responses`. See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355 Fix: #3839
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3840/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3840/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3839/comments
https://api.github.com/repos/huggingface/datasets/issues/3839/events
https://github.com/huggingface/datasets/issues/3839
1,161,183,482
I_kwDODunzps5FNkD6
3,839
CI is broken for Windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
0
"2022-03-07T10:06:42Z"
"2022-05-20T14:13:43Z"
"2022-03-07T10:07:24Z"
MEMBER
null
null
null
## Describe the bug See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355 ``` ___________________ test_datasetdict_from_text_split[test] ____________________ [gw0] win32 -- Python 3.7.11 C:\tools\miniconda3\envs\py37\python.exe split = 'test' text_path = 'C:\\Users\\circleci\\AppData\\Local\\Temp\\pytest-of-circleci\\pytest-0\\popen-gw0\\data6\\dataset.txt' tmp_path = WindowsPath('C:/Users/circleci/AppData/Local/Temp/pytest-of-circleci/pytest-0/popen-gw0/test_datasetdict_from_text_spl7') @pytest.mark.parametrize("split", [None, NamedSplit("train"), "train", "test"]) def test_datasetdict_from_text_split(split, text_path, tmp_path): if split: path = {split: text_path} else: split = "train" path = {"train": text_path, "test": text_path} cache_dir = tmp_path / "cache" expected_features = {"text": "string"} > dataset = TextDatasetReader(path, cache_dir=cache_dir).read() tests\io\test_text.py:118: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\io\text.py:43: in read use_auth_token=use_auth_token, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\builder.py:588: in download_and_prepare self._download_prepared_from_hf_gcs(dl_manager.download_config) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\builder.py:630: in _download_prepared_from_hf_gcs reader.download_from_hf_gcs(download_config, relative_data_dir) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\arrow_reader.py:260: in download_from_hf_gcs downloaded_dataset_info = cached_path(remote_dataset_info.replace(os.sep, "/")) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:301: in cached_path download_desc=download_config.download_desc, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:560: in get_from_cache headers=headers, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:476: in http_head max_retries=max_retries, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:397: in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\api.py:61: in request return session.request(method=method, url=url, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\sessions.py:529: in request resp = self.send(prep, **send_kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\sessions.py:645: in send r = adapter.send(request, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\responses\__init__.py:840: in unbound_on_send return self._on_request(adapter, request, *a, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\responses\__init__.py:780: in _on_request match, match_failed_reasons = self._find_match(request) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <responses.RequestsMock object at 0x000002048AD70588> request = <PreparedRequest [HEAD]> def _find_first_match(self, request): match_failed_reasons = [] > for i, match in enumerate(self._matches): E AttributeError: 'RequestsMock' object has no attribute '_matches' C:\tools\miniconda3\envs\py37\lib\site-packages\moto\core\models.py:289: AttributeError ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3839/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3839/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3838
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3838/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3838/comments
https://api.github.com/repos/huggingface/datasets/issues/3838/events
https://github.com/huggingface/datasets/issues/3838
1,161,137,406
I_kwDODunzps5FNYz-
3,838
Add a data type for labeled images (image segmentation)
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
0
"2022-03-07T09:38:15Z"
"2022-04-10T13:34:59Z"
null
CONTRIBUTOR
null
null
null
It might be a mix of Image and ClassLabel, and the color palette might be generated automatically. --- ### Example Every pixel in the images of the annotation column (in https://huggingface.co/datasets/scene_parse_150) has a value that gives its class, and the dataset itself is associated with a color palette (e.g. https://github.com/open-mmlab/mmsegmentation/blob/98a353b674c6052d319e7de4e5bcd65d670fcf84/mmseg/datasets/ade.py#L47) that maps every class to a color. So we might want to render the image as a colored image instead of a black and white one. <img width="785" alt="156741519-fbae6844-2606-4c28-837e-279d83d00865" src="https://user-images.githubusercontent.com/1676121/157005263-7058c584-2b70-465a-ad94-8a982f726cf4.png"> See https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/features/labeled_image.py for reference in TensorFlow
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3838/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3838/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3837/comments
https://api.github.com/repos/huggingface/datasets/issues/3837/events
https://github.com/huggingface/datasets/pull/3837
1,161,109,031
PR_kwDODunzps40BwE1
3,837
Release: 1.18.4
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2022-03-07T09:13:29Z"
"2022-03-07T11:07:35Z"
"2022-03-07T11:07:02Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3837.diff", "html_url": "https://github.com/huggingface/datasets/pull/3837", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3837.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3837" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3837/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3837/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3836
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3836/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3836/comments
https://api.github.com/repos/huggingface/datasets/issues/3836/events
https://github.com/huggingface/datasets/pull/3836
1,161,072,531
PR_kwDODunzps40Bobr
3,836
Logo float left
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
closed
false
null
[]
null
3
"2022-03-07T08:38:34Z"
"2022-03-07T20:21:11Z"
"2022-03-07T09:14:11Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3836.diff", "html_url": "https://github.com/huggingface/datasets/pull/3836", "merged_at": "2022-03-07T09:14:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/3836.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3836" }
<img width="1000" alt="Screenshot 2022-03-07 at 09 35 29" src="https://user-images.githubusercontent.com/11827707/156996422-339ba43e-932b-4849-babf-9321cb99c922.png">
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3836/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3836/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3835
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3835/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3835/comments
https://api.github.com/repos/huggingface/datasets/issues/3835/events
https://github.com/huggingface/datasets/issues/3835
1,161,029,205
I_kwDODunzps5FM-ZV
3,835
The link given on the gigaword does not work
{ "avatar_url": "https://avatars.githubusercontent.com/u/26357784?v=4", "events_url": "https://api.github.com/users/martin6336/events{/privacy}", "followers_url": "https://api.github.com/users/martin6336/followers", "following_url": "https://api.github.com/users/martin6336/following{/other_user}", "gists_url": "https://api.github.com/users/martin6336/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/martin6336", "id": 26357784, "login": "martin6336", "node_id": "MDQ6VXNlcjI2MzU3Nzg0", "organizations_url": "https://api.github.com/users/martin6336/orgs", "received_events_url": "https://api.github.com/users/martin6336/received_events", "repos_url": "https://api.github.com/users/martin6336/repos", "site_admin": false, "starred_url": "https://api.github.com/users/martin6336/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/martin6336/subscriptions", "type": "User", "url": "https://api.github.com/users/martin6336" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
0
"2022-03-07T07:56:42Z"
"2022-03-15T12:30:23Z"
"2022-03-15T12:30:23Z"
NONE
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3835/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3835/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3834/comments
https://api.github.com/repos/huggingface/datasets/issues/3834/events
https://github.com/huggingface/datasets/pull/3834
1,160,657,937
PR_kwDODunzps40ATVw
3,834
Fix dead dataset scripts creation link.
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
[]
closed
false
null
[]
null
0
"2022-03-06T16:45:48Z"
"2022-03-07T12:12:07Z"
"2022-03-07T12:12:07Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3834.diff", "html_url": "https://github.com/huggingface/datasets/pull/3834", "merged_at": "2022-03-07T12:12:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/3834.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3834" }
The previous link gives a 404 error. Updated with a new dataset scripts creation link.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3834/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3834/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3833/comments
https://api.github.com/repos/huggingface/datasets/issues/3833/events
https://github.com/huggingface/datasets/pull/3833
1,160,543,713
PR_kwDODunzps4z_99t
3,833
Small typos in How-to-train tutorial.
{ "avatar_url": "https://avatars.githubusercontent.com/u/12573521?v=4", "events_url": "https://api.github.com/users/lkhphuc/events{/privacy}", "followers_url": "https://api.github.com/users/lkhphuc/followers", "following_url": "https://api.github.com/users/lkhphuc/following{/other_user}", "gists_url": "https://api.github.com/users/lkhphuc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lkhphuc", "id": 12573521, "login": "lkhphuc", "node_id": "MDQ6VXNlcjEyNTczNTIx", "organizations_url": "https://api.github.com/users/lkhphuc/orgs", "received_events_url": "https://api.github.com/users/lkhphuc/received_events", "repos_url": "https://api.github.com/users/lkhphuc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lkhphuc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lkhphuc/subscriptions", "type": "User", "url": "https://api.github.com/users/lkhphuc" }
[]
closed
false
null
[]
null
0
"2022-03-06T07:49:49Z"
"2022-03-07T12:35:33Z"
"2022-03-07T12:13:17Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3833.diff", "html_url": "https://github.com/huggingface/datasets/pull/3833", "merged_at": "2022-03-07T12:13:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/3833.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3833" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3833/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3833/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3832
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3832/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3832/comments
https://api.github.com/repos/huggingface/datasets/issues/3832/events
https://github.com/huggingface/datasets/issues/3832
1,160,503,446
I_kwDODunzps5FK-CW
3,832
Making Hugging Face the place to go for Graph NNs datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/omarespejel", "id": 4755430, "login": "omarespejel", "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "repos_url": "https://api.github.com/users/omarespejel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "type": "User", "url": "https://api.github.com/users/omarespejel" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "7AFCAA", "default": false, "description": "Datasets for Graph Neural Networks", "id": 3898693527, "name": "graph", "node_id": "LA_kwDODunzps7oYVeX", "url": "https://api.github.com/repos/huggingface/datasets/labels/graph" } ]
open
false
null
[]
null
4
"2022-03-06T03:02:58Z"
"2022-03-14T07:45:38Z"
null
NONE
null
null
null
Let's make Hugging Face Datasets the central hub for GNN datasets :) **Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the GNN field. What are some datasets worth integrating into the Hugging Face hub? Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Special thanks to @napoles-uach for his collaboration on identifying the first ones: - [ ] [SNAP-Stanford OGB Datasets](https://github.com/snap-stanford/ogb). - [ ] [SNAP-Stanford Pretrained GNNs Chemistry and Biology Datasets](https://github.com/snap-stanford/pretrain-gnns). - [ ] [TUDatasets](https://chrsmrrs.github.io/datasets/) (A collection of benchmark datasets for graph classification and regression) cc @osanseviero
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 2, "laugh": 0, "rocket": 0, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/3832/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3832/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3831
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3831/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3831/comments
https://api.github.com/repos/huggingface/datasets/issues/3831/events
https://github.com/huggingface/datasets/issues/3831
1,160,501,000
I_kwDODunzps5FK9cI
3,831
when using to_tf_dataset with shuffle is true, not all completed batches are made
{ "avatar_url": "https://avatars.githubusercontent.com/u/42107709?v=4", "events_url": "https://api.github.com/users/greenned/events{/privacy}", "followers_url": "https://api.github.com/users/greenned/followers", "following_url": "https://api.github.com/users/greenned/following{/other_user}", "gists_url": "https://api.github.com/users/greenned/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/greenned", "id": 42107709, "login": "greenned", "node_id": "MDQ6VXNlcjQyMTA3NzA5", "organizations_url": "https://api.github.com/users/greenned/orgs", "received_events_url": "https://api.github.com/users/greenned/received_events", "repos_url": "https://api.github.com/users/greenned/repos", "site_admin": false, "starred_url": "https://api.github.com/users/greenned/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/greenned/subscriptions", "type": "User", "url": "https://api.github.com/users/greenned" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
4
"2022-03-06T02:43:50Z"
"2022-03-08T15:18:56Z"
"2022-03-08T15:18:56Z"
NONE
null
null
null
## Describe the bug When converting a dataset to a tf.data dataset by using to_tf_dataset with shuffle set to true, the remainder is not converted into one final batch ## Steps to reproduce the bug The sample code is below https://colab.research.google.com/drive/1_oRXWsR38ElO1EYF9ayFoCU7Ou1AAej4?usp=sharing ## Expected results Regardless of whether shuffle is true or not, a dataset of 67 rows should yield 5 batches when the batch size is 16. ## Actual results 4 batches ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3831/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3831/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3830
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3830/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3830/comments
https://api.github.com/repos/huggingface/datasets/issues/3830/events
https://github.com/huggingface/datasets/issues/3830
1,160,181,404
I_kwDODunzps5FJvac
3,830
Got error when load cnn_dailymail dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/78331051?v=4", "events_url": "https://api.github.com/users/wgong0510/events{/privacy}", "followers_url": "https://api.github.com/users/wgong0510/followers", "following_url": "https://api.github.com/users/wgong0510/following{/other_user}", "gists_url": "https://api.github.com/users/wgong0510/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wgong0510", "id": 78331051, "login": "wgong0510", "node_id": "MDQ6VXNlcjc4MzMxMDUx", "organizations_url": "https://api.github.com/users/wgong0510/orgs", "received_events_url": "https://api.github.com/users/wgong0510/received_events", "repos_url": "https://api.github.com/users/wgong0510/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wgong0510/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wgong0510/subscriptions", "type": "User", "url": "https://api.github.com/users/wgong0510" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
closed
false
null
[]
null
2
"2022-03-05T01:43:12Z"
"2022-03-07T06:53:41Z"
"2022-03-07T06:53:41Z"
NONE
null
null
null
When using the datasets.load_dataset method to load the cnn_dailymail dataset, I got the errors below: - Windows OS: FileNotFoundError: [WinError 3] 系统找不到指定的路径。 (The system cannot find the path specified.): 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\\cnn\\stories' - Google Colab: NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' The code used to load the dataset: Windows OS: ``` from datasets import load_dataset dataset = load_dataset("cnn_dailymail", "3.0.0", cache_dir="D:\\SourceCode\\DataScience\\HuggingFace\\Data") ``` Google Colab: ``` import datasets train_data = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train") ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3830/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3830/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3829/comments
https://api.github.com/repos/huggingface/datasets/issues/3829/events
https://github.com/huggingface/datasets/issues/3829
1,160,154,352
I_kwDODunzps5FJozw
3,829
[📄 Docs] Create a `datasets` performance guide.
{ "avatar_url": "https://avatars.githubusercontent.com/u/3712347?v=4", "events_url": "https://api.github.com/users/dynamicwebpaige/events{/privacy}", "followers_url": "https://api.github.com/users/dynamicwebpaige/followers", "following_url": "https://api.github.com/users/dynamicwebpaige/following{/other_user}", "gists_url": "https://api.github.com/users/dynamicwebpaige/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dynamicwebpaige", "id": 3712347, "login": "dynamicwebpaige", "node_id": "MDQ6VXNlcjM3MTIzNDc=", "organizations_url": "https://api.github.com/users/dynamicwebpaige/orgs", "received_events_url": "https://api.github.com/users/dynamicwebpaige/received_events", "repos_url": "https://api.github.com/users/dynamicwebpaige/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dynamicwebpaige/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dynamicwebpaige/subscriptions", "type": "User", "url": "https://api.github.com/users/dynamicwebpaige" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
1
"2022-03-05T00:28:06Z"
"2022-03-10T16:24:27Z"
null
NONE
null
null
null
## Brief Overview Downloading, saving, and preprocessing large datasets from the `datasets` library can often result in [performance bottlenecks](https://github.com/huggingface/datasets/issues/3735). These performance snags can be challenging to identify and to debug, especially for users who are less experienced with building deep learning experiments. ## Feature Request Could we create a performance guide for using `datasets`, similar to: * [Better performance with the `tf.data` API](https://github.com/huggingface/datasets/issues/3735) * [Analyze `tf.data` performance with the TF Profiler](https://www.tensorflow.org/guide/data_performance_analysis) This performance guide should detail practical options for improving performance with `datasets`, and enumerate any common best practices. It should also show how to use tools like the PyTorch Profiler or the TF Profiler to identify any performance bottlenecks (example below). ![image](https://user-images.githubusercontent.com/3712347/156859152-a3cb9565-3ec6-4d39-8e77-56d0a75a4954.png) ## Related Issues * [wiki_dpr pre-processing performance #1670](https://github.com/huggingface/datasets/issues/1670) * [Adjusting chunk size for streaming datasets #3499](https://github.com/huggingface/datasets/issues/3499) * [how large datasets are handled under the hood #1004](https://github.com/huggingface/datasets/issues/1004) * [using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? #1830](https://github.com/huggingface/datasets/issues/1830) * [Best way to batch a large dataset? #315](https://github.com/huggingface/datasets/issues/315) * [Saving processed dataset running infinitely #1911](https://github.com/huggingface/datasets/issues/1911)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3829/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3829/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3828
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3828/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3828/comments
https://api.github.com/repos/huggingface/datasets/issues/3828/events
https://github.com/huggingface/datasets/issues/3828
1,160,064,029
I_kwDODunzps5FJSwd
3,828
The Pile's _FEATURE spec seems to be incorrect
{ "avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4", "events_url": "https://api.github.com/users/dlwh/events{/privacy}", "followers_url": "https://api.github.com/users/dlwh/followers", "following_url": "https://api.github.com/users/dlwh/following{/other_user}", "gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dlwh", "id": 9633, "login": "dlwh", "node_id": "MDQ6VXNlcjk2MzM=", "organizations_url": "https://api.github.com/users/dlwh/orgs", "received_events_url": "https://api.github.com/users/dlwh/received_events", "repos_url": "https://api.github.com/users/dlwh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dlwh/subscriptions", "type": "User", "url": "https://api.github.com/users/dlwh" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
1
"2022-03-04T21:25:32Z"
"2022-03-08T09:30:49Z"
"2022-03-08T09:30:48Z"
NONE
null
null
null
## Describe the bug If you look at https://huggingface.co/datasets/the_pile/blob/main/the_pile.py: For "all" * the pile_set_name is never set for data * there's actually an id field inside of "meta" For subcorpora pubmed_central and hacker_news: * the meta is specified to be a string, but it's actually a dict with an id field inside. ## Steps to reproduce the bug ## Expected results Feature spec should match the data I'd think? ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3828/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3828/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3827/comments
https://api.github.com/repos/huggingface/datasets/issues/3827/events
https://github.com/huggingface/datasets/pull/3827
1,159,878,436
PR_kwDODunzps4z95dj
3,827
Remove deprecated `remove_columns` param in `filter`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
1
"2022-03-04T17:23:26Z"
"2022-03-07T12:37:52Z"
"2022-03-07T12:37:51Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3827.diff", "html_url": "https://github.com/huggingface/datasets/pull/3827", "merged_at": "2022-03-07T12:37:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/3827.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3827" }
A leftover from #3803.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3827/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3827/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3826/comments
https://api.github.com/repos/huggingface/datasets/issues/3826/events
https://github.com/huggingface/datasets/pull/3826
1,159,851,110
PR_kwDODunzps4z90JU
3,826
Add IterableDataset.filter
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
"2022-03-04T16:57:23Z"
"2022-03-09T17:23:13Z"
"2022-03-09T17:23:11Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3826.diff", "html_url": "https://github.com/huggingface/datasets/pull/3826", "merged_at": "2022-03-09T17:23:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/3826.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3826" }
_Needs https://github.com/huggingface/datasets/pull/3801 to be merged first_ I added `IterableDataset.filter` with an API that is a subset of `Dataset.filter`: ```python def filter(self, function, batched=False, batch_size=1000, with_indices=False, input_columns=None): ``` TODO: - [x] tests - [x] docs related to https://github.com/huggingface/datasets/issues/3444 and https://github.com/huggingface/datasets/issues/3753
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3826/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3826/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3825
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3825/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3825/comments
https://api.github.com/repos/huggingface/datasets/issues/3825/events
https://github.com/huggingface/datasets/pull/3825
1,159,802,345
PR_kwDODunzps4z9p4b
3,825
Update version and date in Wikipedia dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-03-04T16:05:27Z"
"2022-03-04T17:24:37Z"
"2022-03-04T17:24:36Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3825.diff", "html_url": "https://github.com/huggingface/datasets/pull/3825", "merged_at": "2022-03-04T17:24:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/3825.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3825" }
CC: @geohci
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3825/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3825/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3824
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3824/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3824/comments
https://api.github.com/repos/huggingface/datasets/issues/3824/events
https://github.com/huggingface/datasets/pull/3824
1,159,574,186
PR_kwDODunzps4z85SO
3,824
Allow not specifying feature cols other than `predictions`/`references` in `Metric.compute`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
1
"2022-03-04T12:04:40Z"
"2022-03-04T18:04:22Z"
"2022-03-04T18:04:21Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3824.diff", "html_url": "https://github.com/huggingface/datasets/pull/3824", "merged_at": "2022-03-04T18:04:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/3824.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3824" }
Fix #3818
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3824/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3824/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3823
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3823/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3823/comments
https://api.github.com/repos/huggingface/datasets/issues/3823/events
https://github.com/huggingface/datasets/issues/3823
1,159,497,844
I_kwDODunzps5FHIh0
3,823
500 internal server error when trying to open a dataset composed of Zarr stores
{ "avatar_url": "https://avatars.githubusercontent.com/u/7170359?v=4", "events_url": "https://api.github.com/users/jacobbieker/events{/privacy}", "followers_url": "https://api.github.com/users/jacobbieker/followers", "following_url": "https://api.github.com/users/jacobbieker/following{/other_user}", "gists_url": "https://api.github.com/users/jacobbieker/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jacobbieker", "id": 7170359, "login": "jacobbieker", "node_id": "MDQ6VXNlcjcxNzAzNTk=", "organizations_url": "https://api.github.com/users/jacobbieker/orgs", "received_events_url": "https://api.github.com/users/jacobbieker/received_events", "repos_url": "https://api.github.com/users/jacobbieker/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jacobbieker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jacobbieker/subscriptions", "type": "User", "url": "https://api.github.com/users/jacobbieker" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
4
"2022-03-04T10:37:14Z"
"2022-03-08T09:47:39Z"
"2022-03-08T09:47:39Z"
NONE
null
null
null
## Describe the bug The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code. The dataset doesn't have a loading script yet, and I did push two [xarray](https://docs.xarray.dev/en/stable/) Zarr stores of data there recentlyish. The Zarr stores are composed of lots of small files, which I am guessing is probably the problem, as we have another [OCF dataset](https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv) using xarray and Zarr, but with the Zarr stored on GCP public datasets instead of directly in HF datasets, and that one opens fine. In general, we were hoping to use HF datasets to release some more public geospatial datasets as benchmarks, which are commonly stored as Zarr stores as they can be compressed well and deal with the multi-dimensional data and coordinates fairly easily compared to other formats, but with this error, I'm assuming we should try a different format? For context, we are trying to have complete public model+data reimplementations of some SOTA weather and solar nowcasting models, like [MetNet, MetNet-2,](https://github.com/openclimatefix/metnet) [DGMR](https://github.com/openclimatefix/skillful_nowcasting), and [others](https://github.com/openclimatefix/graph_weather), which all have large, complex datasets. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("openclimatefix/mrms") ``` ## Expected results The dataset should be downloaded or open up ## Actual results A 500 internal server error ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.15.25-1-MANJARO-x86_64-with-glibc2.35 - Python version: 3.9.10 - PyArrow version: 7.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3823/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3823/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3822
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3822/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3822/comments
https://api.github.com/repos/huggingface/datasets/issues/3822/events
https://github.com/huggingface/datasets/issues/3822
1,159,395,728
I_kwDODunzps5FGvmQ
3,822
Add Biwi Kinect Head Pose Database
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" } ]
null
4
"2022-03-04T08:48:39Z"
"2022-06-01T13:00:47Z"
"2022-06-01T13:00:47Z"
MEMBER
null
null
null
## Adding a Dataset - **Name:** Biwi Kinect Head Pose Database - **Description:** Over 15K images of 20 people recorded with a Kinect while turning their heads around freely. For each frame, depth and RGB images are provided, together with ground truth in the form of the 3D location of the head and its rotation angles. - **Data:** [*link to the Github repository or current dataset location*](https://icu.ee.ethz.ch/research/datsets.html) - **Motivation:** Useful pose estimation dataset Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3822/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3822/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3821
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3821/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3821/comments
https://api.github.com/repos/huggingface/datasets/issues/3821/events
https://github.com/huggingface/datasets/pull/3821
1,159,371,927
PR_kwDODunzps4z8O5J
3,821
Update Wikipedia dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
3
"2022-03-04T08:19:21Z"
"2022-03-21T12:35:23Z"
"2022-03-21T12:31:00Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3821.diff", "html_url": "https://github.com/huggingface/datasets/pull/3821", "merged_at": "2022-03-21T12:31:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/3821.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3821" }
This PR combines all updates to the Wikipedia dataset. Once approved, this will be used to generate the pre-processed Wikipedia datasets. Finally, this PR will be able to be merged into master: - NOT using squash - BUT a regular MERGE (or REBASE+MERGE), so that all commits are preserved TODO: - [x] #3435 - [x] #3789 - [x] #3825 - [x] Run to get the pre-processed data for big languages (backward compatibility) - [x] #3958 CC: @geohci
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3821/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3821/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3820
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3820/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3820/comments
https://api.github.com/repos/huggingface/datasets/issues/3820/events
https://github.com/huggingface/datasets/issues/3820
1,159,106,603
I_kwDODunzps5FFpAr
3,820
`pubmed_qa` checksum mismatch
{ "avatar_url": "https://avatars.githubusercontent.com/u/41410219?v=4", "events_url": "https://api.github.com/users/jon-tow/events{/privacy}", "followers_url": "https://api.github.com/users/jon-tow/followers", "following_url": "https://api.github.com/users/jon-tow/following{/other_user}", "gists_url": "https://api.github.com/users/jon-tow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jon-tow", "id": 41410219, "login": "jon-tow", "node_id": "MDQ6VXNlcjQxNDEwMjE5", "organizations_url": "https://api.github.com/users/jon-tow/orgs", "received_events_url": "https://api.github.com/users/jon-tow/received_events", "repos_url": "https://api.github.com/users/jon-tow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jon-tow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jon-tow/subscriptions", "type": "User", "url": "https://api.github.com/users/jon-tow" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
closed
false
null
[]
null
1
"2022-03-04T00:28:08Z"
"2022-03-04T09:42:32Z"
"2022-03-04T09:42:32Z"
CONTRIBUTOR
null
null
null
## Describe the bug Loading [`pubmed_qa`](https://huggingface.co/datasets/pubmed_qa) results in a mismatched checksum error. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import datasets try: datasets.load_dataset("pubmed_qa", "pqa_labeled") except Exception as e: print(e) try: datasets.load_dataset("pubmed_qa", "pqa_unlabeled") except Exception as e: print(e) try: datasets.load_dataset("pubmed_qa", "pqa_artificial") except Exception as e: print(e) ``` ## Expected results Successful download. ## Actual results Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 665, in _download_and_prepare verify_checksums( File "/usr/local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1RsGLINVce-0GsDkCLDuLZmoLuzfmoCuQ', 'https://drive.google.com/uc?export=download&id=15v1x6aQDlZymaHGP7cZJZZYFfeJt2NdS'] ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: macOS - Python version: 3.8.1 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3820/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3820/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3819
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3819/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3819/comments
https://api.github.com/repos/huggingface/datasets/issues/3819/events
https://github.com/huggingface/datasets/pull/3819
1,158,848,288
PR_kwDODunzps4z6fvn
3,819
Fix typo in doc build yml
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
closed
false
null
[]
null
1
"2022-03-03T20:08:44Z"
"2022-03-04T13:07:41Z"
"2022-03-04T13:07:41Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3819.diff", "html_url": "https://github.com/huggingface/datasets/pull/3819", "merged_at": "2022-03-04T13:07:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/3819.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3819" }
cc: @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3819/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3819/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3818
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3818/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3818/comments
https://api.github.com/repos/huggingface/datasets/issues/3818/events
https://github.com/huggingface/datasets/issues/3818
1,158,788,545
I_kwDODunzps5FEbXB
3,818
Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI
{ "avatar_url": "https://avatars.githubusercontent.com/u/6901031?v=4", "events_url": "https://api.github.com/users/lmvasque/events{/privacy}", "followers_url": "https://api.github.com/users/lmvasque/followers", "following_url": "https://api.github.com/users/lmvasque/following{/other_user}", "gists_url": "https://api.github.com/users/lmvasque/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lmvasque", "id": 6901031, "login": "lmvasque", "node_id": "MDQ6VXNlcjY5MDEwMzE=", "organizations_url": "https://api.github.com/users/lmvasque/orgs", "received_events_url": "https://api.github.com/users/lmvasque/received_events", "repos_url": "https://api.github.com/users/lmvasque/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lmvasque/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lmvasque/subscriptions", "type": "User", "url": "https://api.github.com/users/lmvasque" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
3
"2022-03-03T18:57:54Z"
"2022-03-04T18:04:21Z"
"2022-03-04T18:04:21Z"
NONE
null
null
null
**Is your feature request related to a problem? Please describe.**

The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datasets/metric.py) do not work with the [SARI](https://github.com/huggingface/datasets/blob/master/metrics/sari/sari.py) metric. This metric relies not only on the predictions and references, but also on the input. For example, when the `add_batch` method is used, the `compute()` method fails:

```
metric = load_metric("sari")
metric.add_batch(
    predictions=["About 95 you now get in ."],
    references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]])
metric.compute()
> TypeError: _compute() missing 1 required positional argument: 'sources'
```

Therefore, the `compute()` method can only be used standalone:

```
metric = load_metric("sari")
result = metric.compute(
    sources=["About 95 species are currently accepted ."],
    predictions=["About 95 you now get in ."],
    references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]])
> {'sari': 26.953601953601954}
```

**Describe the solution you'd like**

Support for an additional parameter `sources` in the `add_batch` and `add` methods of the `Metric` class.

```
add_batch(*, sources=None, predictions=None, references=None, **kwargs)
add(*, sources=None, predictions=None, references=None, **kwargs)
compute()
```

**Describe alternatives you've considered**

I've tried to override `add_batch` and `add`; however, these are highly dependent on the `Metric` class. We could also write a simple function that computes the scores of a list of sentences, but then we lose the functionality of the original [add](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add) and [add_batch](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add_batch) methods.

**Additional context**

These methods are used in the transformers [pytorch examples](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization_no_trainer.py).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3818/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3818/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3817
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3817/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3817/comments
https://api.github.com/repos/huggingface/datasets/issues/3817/events
https://github.com/huggingface/datasets/pull/3817
1,158,592,335
PR_kwDODunzps4z5pQ7
3,817
Simplify Common Voice code
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
1
"2022-03-03T16:01:21Z"
"2022-03-04T14:51:48Z"
"2022-03-04T12:39:23Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3817.diff", "html_url": "https://github.com/huggingface/datasets/pull/3817", "merged_at": "2022-03-04T12:39:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/3817.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3817" }
In #3736 we introduced a separate method to generate examples when streaming, different from the one used when not streaming. In this PR I propose a new implementation which is simpler: it only has one function, based on `iter_archive`. And you still have access to local audio files when loading the dataset in non-streaming mode.

cc @patrickvonplaten @polinaeterna @anton-l @albertvillanova since this will become the template for many audio datasets to come. This change can also trivially be applied to the other audio datasets that already exist.

Using this line, you can get access to local files in non-streaming mode:
```python
local_extracted_archive = dl_manager.extract(archive_path) if not dl_manager.is_streaming else None
```
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3817/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3817/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3816/comments
https://api.github.com/repos/huggingface/datasets/issues/3816/events
https://github.com/huggingface/datasets/pull/3816
1,158,589,913
PR_kwDODunzps4z5owP
3,816
Doc new UI test workflows2
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
closed
false
null
[]
null
1
"2022-03-03T15:59:14Z"
"2022-10-04T09:35:53Z"
"2022-03-03T16:42:15Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3816.diff", "html_url": "https://github.com/huggingface/datasets/pull/3816", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3816.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3816" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3816/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3816/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3815
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3815/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3815/comments
https://api.github.com/repos/huggingface/datasets/issues/3815/events
https://github.com/huggingface/datasets/pull/3815
1,158,589,512
PR_kwDODunzps4z5oq-
3,815
Fix iter_archive getting reset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2022-03-03T15:58:52Z"
"2022-03-03T18:06:37Z"
"2022-03-03T18:06:13Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3815.diff", "html_url": "https://github.com/huggingface/datasets/pull/3815", "merged_at": "2022-03-03T18:06:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/3815.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3815" }
The `DownloadManager.iter_archive` method currently returns an iterator, which is **empty** once you have iterated over it once. This means you can't pass the same archive iterator to several splits.

To fix that, I changed the output of `DownloadManager.iter_archive` to be an iterable that you can iterate over several times, instead of a one-time-use iterator.

`StreamingDownloadManager.iter_archive` already returns an appropriate iterable, and the code added in this PR is inspired by the one in `streaming_download_manager.py`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3815/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3815/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3814/comments
https://api.github.com/repos/huggingface/datasets/issues/3814/events
https://github.com/huggingface/datasets/pull/3814
1,158,518,995
PR_kwDODunzps4z5Zk4
3,814
Handle Nones in PyArrow struct
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
1
"2022-03-03T15:03:35Z"
"2022-03-03T16:37:44Z"
"2022-03-03T16:37:43Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3814.diff", "html_url": "https://github.com/huggingface/datasets/pull/3814", "merged_at": "2022-03-03T16:37:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/3814.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3814" }
This PR fixes an issue introduced by #3575 where `None` values stored in PyArrow arrays/structs would get ignored by `cast_storage` or by the `pa.array(cast_to_python_objects(..))` pattern. To fix the former, it also bumps the minimum PyArrow version to v5.0.0 to use the `mask` param in `pa.StructArray`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3814/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3814/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3813
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3813/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3813/comments
https://api.github.com/repos/huggingface/datasets/issues/3813/events
https://github.com/huggingface/datasets/issues/3813
1,158,474,859
I_kwDODunzps5FDOxr
3,813
Add MetaShift dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" } ]
null
7
"2022-03-03T14:26:45Z"
"2022-04-10T13:39:59Z"
"2022-04-10T13:39:59Z"
MEMBER
null
null
null
## Adding a Dataset
- **Name:** MetaShift
- **Description:** collection of 12,868 sets of natural images across 410 classes
- **Paper:** https://arxiv.org/abs/2202.06523v1
- **Data:** https://github.com/weixin-liang/metashift

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3813/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3813/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3812
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3812/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3812/comments
https://api.github.com/repos/huggingface/datasets/issues/3812/events
https://github.com/huggingface/datasets/pull/3812
1,158,369,995
PR_kwDODunzps4z46C4
3,812
benchmark streaming speed with tar vs zip archives
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
closed
false
null
[]
null
1
"2022-03-03T12:48:41Z"
"2022-03-03T14:55:34Z"
"2022-03-03T14:55:33Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3812.diff", "html_url": "https://github.com/huggingface/datasets/pull/3812", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3812.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3812" }
# do not merge

## Hypothesis

Packing data into a single zip archive could allow us not to care about splitting data into several tar archives for efficient streaming, which is annoying (since data creators usually host the data in a single tar).

## Data

I host it [here](https://huggingface.co/datasets/polinaeterna/benchmark_dataset/).

## I checked three configurations:

1. All data in one zip archive, streaming only those files that exist in the split metadata file (we can access them directly with no need to iterate over the full archive), see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR196)
2. All data in three splits, the standard way to make streaming efficient, see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR174)
3. All data in a single tar, iterate over the full archive and take only files existing in the split metadata file, see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR150)

## Results

1. one zip
![image](https://user-images.githubusercontent.com/16348744/156567611-e3652087-7147-4cf0-9047-9cbc00ec71f5.png)
2. three tars
![image](https://user-images.githubusercontent.com/16348744/156567688-2a462107-f83e-4722-8ea3-71a13b56c998.png)
3. one tar
![image](https://user-images.githubusercontent.com/16348744/156567772-1bceb5f7-e7d9-4fa3-b31b-17fec5f9a5a7.png)

I didn't check on the full data as it's time-consuming, but it's pretty obvious that the one-zip approach is not a good idea. Here it's even worse than full iteration over the tar containing all three splits (but that would depend on the case).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3812/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3812/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3811
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3811/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3811/comments
https://api.github.com/repos/huggingface/datasets/issues/3811/events
https://github.com/huggingface/datasets/pull/3811
1,158,234,407
PR_kwDODunzps4z4dHS
3,811
Update dev doc gh workflows
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
closed
false
null
[]
null
0
"2022-03-03T10:29:01Z"
"2022-10-04T09:35:54Z"
"2022-03-03T10:45:54Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3811.diff", "html_url": "https://github.com/huggingface/datasets/pull/3811", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3811.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3811" }
Reflect changes from https://github.com/huggingface/transformers/pull/15891
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3811/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3811/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3810
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3810/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3810/comments
https://api.github.com/repos/huggingface/datasets/issues/3810/events
https://github.com/huggingface/datasets/pull/3810
1,158,202,093
PR_kwDODunzps4z4WUW
3,810
Update version of xcopa dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2022-03-03T09:58:25Z"
"2022-03-03T10:44:30Z"
"2022-03-03T10:44:29Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3810.diff", "html_url": "https://github.com/huggingface/datasets/pull/3810", "merged_at": "2022-03-03T10:44:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/3810.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3810" }
Note that there was a version update of the `xcopa` dataset: https://github.com/cambridgeltl/xcopa/releases

We updated our loading script, but we did not bump the version number:
- #3254

This PR updates our loading script version from `1.0.0` to `1.1.0`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3810/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3810/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3809
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3809/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3809/comments
https://api.github.com/repos/huggingface/datasets/issues/3809/events
https://github.com/huggingface/datasets/issues/3809
1,158,143,480
I_kwDODunzps5FB934
3,809
Checksums didn't match for datasets on Google Drive
{ "avatar_url": "https://avatars.githubusercontent.com/u/11507045?v=4", "events_url": "https://api.github.com/users/muelletm/events{/privacy}", "followers_url": "https://api.github.com/users/muelletm/followers", "following_url": "https://api.github.com/users/muelletm/following{/other_user}", "gists_url": "https://api.github.com/users/muelletm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/muelletm", "id": 11507045, "login": "muelletm", "node_id": "MDQ6VXNlcjExNTA3MDQ1", "organizations_url": "https://api.github.com/users/muelletm/orgs", "received_events_url": "https://api.github.com/users/muelletm/received_events", "repos_url": "https://api.github.com/users/muelletm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/muelletm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muelletm/subscriptions", "type": "User", "url": "https://api.github.com/users/muelletm" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
1
"2022-03-03T09:01:10Z"
"2022-03-03T09:24:58Z"
"2022-03-03T09:24:05Z"
NONE
null
null
null
## Describe the bug

Datasets hosted on Google Drive do not seem to work right now. Loading them fails with a checksum error.

## Steps to reproduce the bug

```python
from datasets import load_dataset

for dataset in ["head_qa", "yelp_review_full"]:
    try:
        load_dataset(dataset)
    except Exception as exception:
        print("Error", dataset, exception)
```

Here is a [colab](https://colab.research.google.com/drive/1wOtHBmL8I65NmUYakzPV5zhVCtHhi7uQ#scrollTo=cDzdCLlk-Bo4).

## Expected results

The datasets should be loaded.

## Actual results

```
Downloading and preparing dataset head_qa/es (download: 75.69 MiB, generated: 2.86 MiB, post-processed: Unknown size, total: 78.55 MiB) to /root/.cache/huggingface/datasets/head_qa/es/1.1.0/583ab408e8baf54aab378c93715fadc4d8aa51b393e27c3484a877e2ac0278e9...
Error head_qa Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t']

Downloading and preparing dataset yelp_review_full/yelp_review_full (download: 187.06 MiB, generated: 496.94 MiB, post-processed: Unknown size, total: 684.00 MiB) to /root/.cache/huggingface/datasets/yelp_review_full/yelp_review_full/1.0.0/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43...
Error yelp_review_full Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0']
```

## Environment info

- `datasets` version: 1.18.3
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 6.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3809/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3809/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3808
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3808/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3808/comments
https://api.github.com/repos/huggingface/datasets/issues/3808/events
https://github.com/huggingface/datasets/issues/3808
1,157,650,043
I_kwDODunzps5FAFZ7
3,808
Pre-Processing Cache Fails when using a Factory pattern
{ "avatar_url": "https://avatars.githubusercontent.com/u/9847335?v=4", "events_url": "https://api.github.com/users/Helw150/events{/privacy}", "followers_url": "https://api.github.com/users/Helw150/followers", "following_url": "https://api.github.com/users/Helw150/following{/other_user}", "gists_url": "https://api.github.com/users/Helw150/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Helw150", "id": 9847335, "login": "Helw150", "node_id": "MDQ6VXNlcjk4NDczMzU=", "organizations_url": "https://api.github.com/users/Helw150/orgs", "received_events_url": "https://api.github.com/users/Helw150/received_events", "repos_url": "https://api.github.com/users/Helw150/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Helw150/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Helw150/subscriptions", "type": "User", "url": "https://api.github.com/users/Helw150" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
3
"2022-03-02T20:18:43Z"
"2022-03-10T23:01:47Z"
"2022-03-10T23:01:47Z"
NONE
null
null
null
## Describe the bug

If you utilize a pre-processing function which is created using a factory pattern, the function hash changes on each run (even if the function is identical), and therefore the data will be re-processed each time.

## Steps to reproduce the bug

```python
def preprocess_function_factory(augmentation=None):
    def preprocess_function(examples):
        # Tokenize the texts
        if augmentation:
            conversions1 = [
                augmentation(example) for example in examples[sentence1_key]
            ]
            if sentence2_key is None:
                args = (conversions1,)
            else:
                conversions2 = [
                    augmentation(example) for example in examples[sentence2_key]
                ]
                args = (conversions1, conversions2)
        else:
            args = (
                (examples[sentence1_key],)
                if sentence2_key is None
                else (examples[sentence1_key], examples[sentence2_key])
            )
        result = tokenizer(
            *args, padding=padding, max_length=max_seq_length, truncation=True
        )
        # Map labels to IDs (not necessary for GLUE tasks)
        if label_to_id is not None and "label" in examples:
            result["label"] = [
                (label_to_id[l] if l != -1 else -1) for l in examples["label"]
            ]
        return result

    return preprocess_function

capitalize = lambda x: x.capitalize()
preprocess_function = preprocess_function_factory(augmentation=capitalize)
print(hash(preprocess_function))  # This will change on each run

raw_datasets = raw_datasets.map(
    preprocess_function,
    batched=True,
    load_from_cache_file=True,
    desc="Running transformation and tokenizer on dataset",
)
```

## Expected results

Running the code twice will cause the cache to be re-used.

## Actual results

Running the code twice causes the whole dataset to be re-processed.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3808/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3808/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3807
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3807/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3807/comments
https://api.github.com/repos/huggingface/datasets/issues/3807/events
https://github.com/huggingface/datasets/issues/3807
1,157,531,812
I_kwDODunzps5E_oik
3,807
NonMatchingChecksumError in xcopa dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/93286455?v=4", "events_url": "https://api.github.com/users/afcruzs-ms/events{/privacy}", "followers_url": "https://api.github.com/users/afcruzs-ms/followers", "following_url": "https://api.github.com/users/afcruzs-ms/following{/other_user}", "gists_url": "https://api.github.com/users/afcruzs-ms/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/afcruzs-ms", "id": 93286455, "login": "afcruzs-ms", "node_id": "U_kgDOBY9wNw", "organizations_url": "https://api.github.com/users/afcruzs-ms/orgs", "received_events_url": "https://api.github.com/users/afcruzs-ms/received_events", "repos_url": "https://api.github.com/users/afcruzs-ms/repos", "site_admin": false, "starred_url": "https://api.github.com/users/afcruzs-ms/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/afcruzs-ms/subscriptions", "type": "User", "url": "https://api.github.com/users/afcruzs-ms" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
6
"2022-03-02T18:10:19Z"
"2022-05-20T06:00:42Z"
"2022-03-03T17:40:31Z"
NONE
null
null
null
## Describe the bug

Loading the xcopa dataset doesn't work; it fails due to a mismatch in the checksum.

## Steps to reproduce the bug

```python
from datasets import load_dataset

dataset = load_dataset("xcopa", "it")
```

## Expected results

The dataset should be loaded correctly.

## Actual results

Fails with:

```python
in verify_checksums(expected_checksums, recorded_checksums, verification_name)
     38     if len(bad_urls) > 0:
     39         error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40         raise NonMatchingChecksumError(error_msg + str(bad_urls))
     41     logger.info("All the checksums matched successfully" + for_verification_name)
     42

NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/cambridgeltl/xcopa/archive/master.zip']
```

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3, and 1.18.4.dev0
- Platform:
- Python version: 3.8
- PyArrow version:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3807/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3807/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3806
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3806/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3806/comments
https://api.github.com/repos/huggingface/datasets/issues/3806/events
https://github.com/huggingface/datasets/pull/3806
1,157,505,826
PR_kwDODunzps4z2FeI
3,806
Fix Spanish data file URL in wiki_lingua dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2022-03-02T17:43:42Z"
"2022-03-03T08:38:17Z"
"2022-03-03T08:38:16Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3806.diff", "html_url": "https://github.com/huggingface/datasets/pull/3806", "merged_at": "2022-03-03T08:38:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/3806.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3806" }
This PR fixes the URL for the Spanish data file. Previously, Spanish had the same URL as the Vietnamese data file.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3806/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3806/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3805
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3805/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3805/comments
https://api.github.com/repos/huggingface/datasets/issues/3805/events
https://github.com/huggingface/datasets/pull/3805
1,157,454,884
PR_kwDODunzps4z16os
3,805
Remove decode: true for image feature in head_qa
{ "avatar_url": "https://avatars.githubusercontent.com/u/417568?v=4", "events_url": "https://api.github.com/users/craffel/events{/privacy}", "followers_url": "https://api.github.com/users/craffel/followers", "following_url": "https://api.github.com/users/craffel/following{/other_user}", "gists_url": "https://api.github.com/users/craffel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/craffel", "id": 417568, "login": "craffel", "node_id": "MDQ6VXNlcjQxNzU2OA==", "organizations_url": "https://api.github.com/users/craffel/orgs", "received_events_url": "https://api.github.com/users/craffel/received_events", "repos_url": "https://api.github.com/users/craffel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/craffel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/craffel/subscriptions", "type": "User", "url": "https://api.github.com/users/craffel" }
[]
closed
false
null
[]
null
0
"2022-03-02T16:58:34Z"
"2022-03-07T12:13:36Z"
"2022-03-07T12:13:35Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3805.diff", "html_url": "https://github.com/huggingface/datasets/pull/3805", "merged_at": "2022-03-07T12:13:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/3805.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3805" }
This was erroneously added in https://github.com/huggingface/datasets/commit/701f128de2594e8dc06c0b0427c0ba1e08be3054. This PR removes it.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3805/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3805/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3804/comments
https://api.github.com/repos/huggingface/datasets/issues/3804/events
https://github.com/huggingface/datasets/issues/3804
1,157,297,278
I_kwDODunzps5E-vR-
3,804
Text builder with custom separator line boundaries
{ "avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4", "events_url": "https://api.github.com/users/cronoik/events{/privacy}", "followers_url": "https://api.github.com/users/cronoik/followers", "following_url": "https://api.github.com/users/cronoik/following{/other_user}", "gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cronoik", "id": 18630848, "login": "cronoik", "node_id": "MDQ6VXNlcjE4NjMwODQ4", "organizations_url": "https://api.github.com/users/cronoik/orgs", "received_events_url": "https://api.github.com/users/cronoik/received_events", "repos_url": "https://api.github.com/users/cronoik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cronoik/subscriptions", "type": "User", "url": "https://api.github.com/users/cronoik" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
6
"2022-03-02T14:50:16Z"
"2022-03-16T15:53:59Z"
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.**

The current [Text](https://github.com/huggingface/datasets/blob/207be676bffe9d164740a41a883af6125edef135/src/datasets/packaged_modules/text/text.py#L23) builder implementation splits texts with `splitlines()`, which splits the text on several line boundaries. Not all of them are always wanted.

**Describe the solution you'd like**

```python
if self.config.sample_by == "line":
    batch_idx = 0
    while True:
        batch = f.read(self.config.chunksize)
        if not batch:
            break
        batch += f.readline()  # finish current line
        if self.config.custom_newline is None:
            batch = batch.splitlines(keepends=self.config.keep_linebreaks)
        else:
            batch = batch.split(self.config.custom_newline)[:-1]
        pa_table = pa.Table.from_arrays([pa.array(batch)], schema=schema)
        # Uncomment for debugging (will print the Arrow table size and elements)
        # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}")
        # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows)))
        yield (file_idx, batch_idx), pa_table
        batch_idx += 1
```

**A clear and concise description of what you want to happen.**

Creating the dataset rows with a subset of the `splitlines()` line boundaries.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3804/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3804/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3803
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3803/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3803/comments
https://api.github.com/repos/huggingface/datasets/issues/3803/events
https://github.com/huggingface/datasets/pull/3803
1,157,271,679
PR_kwDODunzps4z1T48
3,803
Remove deprecated methods/params (preparation for v2.0)
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
0
"2022-03-02T14:29:12Z"
"2022-03-02T14:53:21Z"
"2022-03-02T14:53:21Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3803.diff", "html_url": "https://github.com/huggingface/datasets/pull/3803", "merged_at": "2022-03-02T14:53:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/3803.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3803" }
This PR removes the following deprecated methods/params:

* `Dataset.cast_`/`DatasetDict.cast_`
* `Dataset.dictionary_encode_column_`/`DatasetDict.dictionary_encode_column_`
* `Dataset.remove_columns_`/`DatasetDict.remove_columns_`
* `Dataset.rename_columns_`/`DatasetDict.rename_columns_`
* `prepare_module`
* param `script_version` in `load_dataset`/`load_metric`
* param `version` in `hf_github_url`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3803/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3803/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3802
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3802/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3802/comments
https://api.github.com/repos/huggingface/datasets/issues/3802/events
https://github.com/huggingface/datasets/pull/3802
1,157,009,964
PR_kwDODunzps4z0biM
3,802
Release of FairLex dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iliaschalkidis", "id": 1626984, "login": "iliaschalkidis", "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "type": "User", "url": "https://api.github.com/users/iliaschalkidis" }
[]
closed
false
null
[]
null
11
"2022-03-02T10:40:18Z"
"2022-03-02T15:21:10Z"
"2022-03-02T15:18:54Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3802.diff", "html_url": "https://github.com/huggingface/datasets/pull/3802", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3802.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3802" }
**FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing** We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian, and Chinese), and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantees fairness or consistently mitigates group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP. *Ilias Chalkidis, Tommaso Pasini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, Anders Søgaard. FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing. 2022. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.* Note: Please review this initial commit, and I'll update the publication link once I have the arXiv version. Thanks!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3802/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3802/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3801
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3801/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3801/comments
https://api.github.com/repos/huggingface/datasets/issues/3801/events
https://github.com/huggingface/datasets/pull/3801
1,155,649,279
PR_kwDODunzps4zvqjN
3,801
[Breaking] Align `map` when streaming: update instead of overwrite + add missing parameters
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
1
"2022-03-01T18:06:43Z"
"2022-03-07T16:30:30Z"
"2022-03-07T16:30:29Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3801.diff", "html_url": "https://github.com/huggingface/datasets/pull/3801", "merged_at": "2022-03-07T16:30:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/3801.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3801" }
Currently, datasets in streaming mode and in non-streaming mode have two distinct APIs for `map` processing. In this PR I'm aligning the two by changing `map` in streaming mode. This includes a **major breaking change** and will require a major release of the library: **Datasets 2.0** In particular, `Dataset.map` adds new columns (with dict.update) BUT `IterableDataset.map` used to discard previous columns (it overwrites the dict). In this PR I'm changing `IterableDataset.map` to behave the same way as `Dataset.map`: it will update the examples instead of overwriting them. I'm also adding the missing parameters to streaming `map`: with_indices, input_columns, remove_columns ### TODO - [x] tests - [x] docs Related to https://github.com/huggingface/datasets/issues/3444
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3801/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3801/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3800
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3800/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3800/comments
https://api.github.com/repos/huggingface/datasets/issues/3800/events
https://github.com/huggingface/datasets/pull/3800
1,155,620,761
PR_kwDODunzps4zvkjA
3,800
Added computer vision tasks
{ "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/merveenoyan", "id": 53175384, "login": "merveenoyan", "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "repos_url": "https://api.github.com/users/merveenoyan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "type": "User", "url": "https://api.github.com/users/merveenoyan" }
[]
closed
false
null
[]
null
0
"2022-03-01T17:37:46Z"
"2022-03-04T07:15:55Z"
"2022-03-04T07:15:55Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3800.diff", "html_url": "https://github.com/huggingface/datasets/pull/3800", "merged_at": "2022-03-04T07:15:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/3800.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3800" }
The previous PR was in my fork, so I thought it'd be easier to do it from a branch. Added computer vision task datasets according to HF tasks.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3800/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3800/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3799
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3799/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3799/comments
https://api.github.com/repos/huggingface/datasets/issues/3799/events
https://github.com/huggingface/datasets/pull/3799
1,155,356,102
PR_kwDODunzps4zus9R
3,799
Xtreme-S Metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
3
"2022-03-01T13:42:28Z"
"2022-03-16T14:40:29Z"
"2022-03-16T14:40:26Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3799.diff", "html_url": "https://github.com/huggingface/datasets/pull/3799", "merged_at": "2022-03-16T14:40:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/3799.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3799" }
**Added datasets (TODO)**: - [x] MLS - [x] Covost2 - [x] Minds-14 - [x] Voxpopuli - [x] FLoRes (need data) **Metrics**: Done
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3799/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3799/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3798
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3798/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3798/comments
https://api.github.com/repos/huggingface/datasets/issues/3798/events
https://github.com/huggingface/datasets/pull/3798
1,154,411,066
PR_kwDODunzps4zrl5Y
3,798
Fix error message in CSV loader for newer Pandas versions
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
0
"2022-02-28T18:24:10Z"
"2022-02-28T18:51:39Z"
"2022-02-28T18:51:38Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3798.diff", "html_url": "https://github.com/huggingface/datasets/pull/3798", "merged_at": "2022-02-28T18:51:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/3798.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3798" }
Fix the error message in the CSV loader for `Pandas >= 1.4`. To fix this, I directly print the current file name in the for-loop. An alternative would be to use a check similar to this: ```python csv_file_reader.handle.handle if datasets.config.PANDAS_VERSION >= version.parse("1.4") else csv_file_reader.f ``` CC: @SBrandeis
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3798/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3798/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3797
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3797/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3797/comments
https://api.github.com/repos/huggingface/datasets/issues/3797/events
https://github.com/huggingface/datasets/pull/3797
1,154,383,063
PR_kwDODunzps4zrgAD
3,797
Reddit dataset card contribution
{ "avatar_url": "https://avatars.githubusercontent.com/u/56791604?v=4", "events_url": "https://api.github.com/users/anna-kay/events{/privacy}", "followers_url": "https://api.github.com/users/anna-kay/followers", "following_url": "https://api.github.com/users/anna-kay/following{/other_user}", "gists_url": "https://api.github.com/users/anna-kay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anna-kay", "id": 56791604, "login": "anna-kay", "node_id": "MDQ6VXNlcjU2NzkxNjA0", "organizations_url": "https://api.github.com/users/anna-kay/orgs", "received_events_url": "https://api.github.com/users/anna-kay/received_events", "repos_url": "https://api.github.com/users/anna-kay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anna-kay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anna-kay/subscriptions", "type": "User", "url": "https://api.github.com/users/anna-kay" }
[]
closed
false
null
[]
null
0
"2022-02-28T17:53:18Z"
"2023-03-09T22:08:58Z"
"2022-03-01T12:58:57Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3797.diff", "html_url": "https://github.com/huggingface/datasets/pull/3797", "merged_at": "2022-03-01T12:58:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/3797.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3797" }
Description tags for webis-tldr-17 added.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3797/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3797/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3796
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3796/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3796/comments
https://api.github.com/repos/huggingface/datasets/issues/3796/events
https://github.com/huggingface/datasets/pull/3796
1,154,298,629
PR_kwDODunzps4zrOQ4
3,796
Skip checksum computation if `ignore_verifications` is `True`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
0
"2022-02-28T16:28:45Z"
"2022-02-28T17:03:46Z"
"2022-02-28T17:03:46Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3796.diff", "html_url": "https://github.com/huggingface/datasets/pull/3796", "merged_at": "2022-02-28T17:03:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/3796.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3796" }
This will speed up the loading of datasets where the number of data files is large (this can easily happen with `imagefolder`, for instance).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3796/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3796/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3795
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3795/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3795/comments
https://api.github.com/repos/huggingface/datasets/issues/3795/events
https://github.com/huggingface/datasets/issues/3795
1,153,261,281
I_kwDODunzps5EvV7h
3,795
can not flatten natural_questions dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4", "events_url": "https://api.github.com/users/Hannibal046/events{/privacy}", "followers_url": "https://api.github.com/users/Hannibal046/followers", "following_url": "https://api.github.com/users/Hannibal046/following{/other_user}", "gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hannibal046", "id": 38466901, "login": "Hannibal046", "node_id": "MDQ6VXNlcjM4NDY2OTAx", "organizations_url": "https://api.github.com/users/Hannibal046/orgs", "received_events_url": "https://api.github.com/users/Hannibal046/received_events", "repos_url": "https://api.github.com/users/Hannibal046/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions", "type": "User", "url": "https://api.github.com/users/Hannibal046" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
2
"2022-02-27T13:57:40Z"
"2022-03-21T14:36:12Z"
"2022-03-21T14:36:12Z"
NONE
null
null
null
## Describe the bug After downloading the natural_questions dataset, the dataset cannot be flattened because there are `long answer` and `short answer` fields in `annotations`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('natural_questions',cache_dir = 'data/dataset_cache_dir') dataset['train'].flatten() ``` ## Expected results a dataset with `long_answer` as features ## Actual results Traceback (most recent call last): File "temp.py", line 5, in <module> dataset['train'].flatten() File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/fingerprint.py", line 413, in wrapper out = func(self, *args, **kwargs) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1296, in flatten dataset._data = update_metadata_with_features(dataset._data, dataset.features) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in update_metadata_with_features features = Features({col_name: features[col_name] for col_name in table.column_names}) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in <dictcomp> features = Features({col_name: features[col_name] for col_name in table.column_names}) KeyError: 'annotations.long_answer' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.13 - Platform: MBP - Python version: 3.8 - PyArrow version: 6.0.1
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3795/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3795/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3794
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3794/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3794/comments
https://api.github.com/repos/huggingface/datasets/issues/3794/events
https://github.com/huggingface/datasets/pull/3794
1,153,185,343
PR_kwDODunzps4zniT4
3,794
Add Mahalanobis distance metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/17574157?v=4", "events_url": "https://api.github.com/users/JoaoLages/events{/privacy}", "followers_url": "https://api.github.com/users/JoaoLages/followers", "following_url": "https://api.github.com/users/JoaoLages/following{/other_user}", "gists_url": "https://api.github.com/users/JoaoLages/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JoaoLages", "id": 17574157, "login": "JoaoLages", "node_id": "MDQ6VXNlcjE3NTc0MTU3", "organizations_url": "https://api.github.com/users/JoaoLages/orgs", "received_events_url": "https://api.github.com/users/JoaoLages/received_events", "repos_url": "https://api.github.com/users/JoaoLages/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JoaoLages/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoaoLages/subscriptions", "type": "User", "url": "https://api.github.com/users/JoaoLages" }
[]
closed
false
null
[]
null
0
"2022-02-27T10:56:31Z"
"2022-03-02T14:46:15Z"
"2022-03-02T14:46:15Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3794.diff", "html_url": "https://github.com/huggingface/datasets/pull/3794", "merged_at": "2022-03-02T14:46:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/3794.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3794" }
Mahalanobis distance is a very useful metric to measure the distance from one datapoint X to a distribution P. In this PR I implement the metric in a simple way with the help of numpy only. Similar to the [MAUVE implementation](https://github.com/huggingface/datasets/blob/master/metrics/mauve/mauve.py), we can make this metric accept texts as input and encode them with a featurizer model, if that is desirable.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3794/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3794/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3793
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3793/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3793/comments
https://api.github.com/repos/huggingface/datasets/issues/3793/events
https://github.com/huggingface/datasets/pull/3793
1,150,974,950
PR_kwDODunzps4zfdL0
3,793
Docs new UI actions no self hosted
{ "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LysandreJik", "id": 30755778, "login": "LysandreJik", "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "repos_url": "https://api.github.com/users/LysandreJik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "type": "User", "url": "https://api.github.com/users/LysandreJik" }
[]
closed
false
null
[]
null
8
"2022-02-25T23:48:55Z"
"2022-03-01T15:55:29Z"
"2022-03-01T15:55:28Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3793.diff", "html_url": "https://github.com/huggingface/datasets/pull/3793", "merged_at": "2022-03-01T15:55:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/3793.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3793" }
Removes the need to have a self-hosted runner for the dev documentation
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3793/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3793/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3792
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3792/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3792/comments
https://api.github.com/repos/huggingface/datasets/issues/3792/events
https://github.com/huggingface/datasets/issues/3792
1,150,812,404
I_kwDODunzps5EmAD0
3,792
Checksums didn't match for dataset source
{ "avatar_url": "https://avatars.githubusercontent.com/u/13174842?v=4", "events_url": "https://api.github.com/users/rafikg/events{/privacy}", "followers_url": "https://api.github.com/users/rafikg/followers", "following_url": "https://api.github.com/users/rafikg/following{/other_user}", "gists_url": "https://api.github.com/users/rafikg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rafikg", "id": 13174842, "login": "rafikg", "node_id": "MDQ6VXNlcjEzMTc0ODQy", "organizations_url": "https://api.github.com/users/rafikg/orgs", "received_events_url": "https://api.github.com/users/rafikg/received_events", "repos_url": "https://api.github.com/users/rafikg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rafikg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafikg/subscriptions", "type": "User", "url": "https://api.github.com/users/rafikg" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
[]
null
25
"2022-02-25T19:55:09Z"
"2022-10-12T13:33:26Z"
"2022-02-28T08:44:18Z"
NONE
null
null
null
## Dataset viewer issue for 'wiki_lingua*' **Link:** *link to the dataset viewer page* `data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]") ` *short description of the issue* ``` NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=11wMGqNVSwwk6zUnDaJEgm3qT71kAHeff'] ``` Am I the one who added this dataset? No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3792/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3792/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3791
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3791/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3791/comments
https://api.github.com/repos/huggingface/datasets/issues/3791/events
https://github.com/huggingface/datasets/pull/3791
1,150,733,475
PR_kwDODunzps4zevU2
3,791
Add `data_dir` to `data_files` resolution and misc improvements to HfFileSystem
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
0
"2022-02-25T18:26:35Z"
"2022-03-01T13:10:43Z"
"2022-03-01T13:10:42Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3791.diff", "html_url": "https://github.com/huggingface/datasets/pull/3791", "merged_at": "2022-03-01T13:10:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/3791.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3791" }
As discussed in https://github.com/huggingface/datasets/pull/2830#issuecomment-1048989764, this PR adds a QOL improvement to easily reference the files inside a directory in `load_dataset` using the `data_dir` param (very handy for ImageFolder because it avoids globbing, but also useful for the other loaders). Additionally, it fixes the issue with `HfFileSystem.isdir`, which would previously always return `False`, and aligns the path-handling logic in `HfFileSystem` with `fsspec.GitHubFileSystem`.
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3791/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3791/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3790
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3790/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3790/comments
https://api.github.com/repos/huggingface/datasets/issues/3790/events
https://github.com/huggingface/datasets/pull/3790
1,150,646,899
PR_kwDODunzps4zedMa
3,790
Add doc builder scripts
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
3
"2022-02-25T16:38:47Z"
"2022-03-01T15:55:42Z"
"2022-03-01T15:55:41Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3790.diff", "html_url": "https://github.com/huggingface/datasets/pull/3790", "merged_at": "2022-03-01T15:55:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/3790.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3790" }
I added the three scripts: - build_dev_documentation.yml - build_documentation.yml - delete_dev_documentation.yml I got them from `transformers` and made a few changes: - I removed the `transformers`-specific dependencies - I changed all the paths to be "datasets" instead of "transformers" - I passed the `--library_name datasets` arg to the `doc-builder build` command (according to https://github.com/huggingface/doc-builder/pull/94/files#diff-bcc33cf7c223511e498776684a9a433810b527a0a38f483b1487e8a42b6575d3R26) cc @LysandreJik @mishig25
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3790/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3790/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3789
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3789/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3789/comments
https://api.github.com/repos/huggingface/datasets/issues/3789/events
https://github.com/huggingface/datasets/pull/3789
1,150,587,404
PR_kwDODunzps4zeQpx
3,789
Add URL and ID fields to Wikipedia dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
3
"2022-02-25T15:34:37Z"
"2022-03-04T08:24:24Z"
"2022-03-04T08:24:23Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3789.diff", "html_url": "https://github.com/huggingface/datasets/pull/3789", "merged_at": "2022-03-04T08:24:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/3789.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3789" }
This PR adds the URL field, so that we conform to proper attribution, required by their license: provide credit to the authors by including a hyperlink (where possible) or URL to the page or pages you are re-using. About the conversion from title to URL, I found that apart from replacing blanks with underscores, some other special characters must also be percent-encoded (e.g. `"` to `%22`): https://meta.wikimedia.org/wiki/Help:URL Therefore, I have finally used the `urllib.parse.quote` function. This additionally percent-encodes non-ASCII characters, but Wikimedia docs say these are equivalent: > For the other characters either the code or the character can be used in internal and external links, they are equivalent. The system does a conversion when needed. > [[%C3%80_propos_de_M%C3%A9ta]] > is rendered as [À_propos_de_Méta](https://meta.wikimedia.org/wiki/%C3%80_propos_de_M%C3%A9ta), almost like [À propos de Méta](https://meta.wikimedia.org/wiki/%C3%80_propos_de_M%C3%A9ta), which leads to this page on Meta with in the address bar the URL > [http://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta](https://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta) > while [http://meta.wikipedia.org/wiki/À_propos_de_Méta](https://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta) leads to the same. Fix #3398. CC: @geohci
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3789/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3789/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3788
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3788/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3788/comments
https://api.github.com/repos/huggingface/datasets/issues/3788/events
https://github.com/huggingface/datasets/issues/3788
1,150,375,720
I_kwDODunzps5EkVco
3,788
Only-data dataset loaded unexpectedly as validation split
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
7
"2022-02-25T12:11:39Z"
"2022-02-28T11:22:22Z"
null
MEMBER
null
null
null
## Describe the bug As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as a VALIDATION split, even if this is not the desired behavior, e.g. for a file named `datosdevision.jsonl.gz`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3788/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3788/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3787
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3787/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3787/comments
https://api.github.com/repos/huggingface/datasets/issues/3787/events
https://github.com/huggingface/datasets/pull/3787
1,150,235,569
PR_kwDODunzps4zdE7b
3,787
Fix Google Drive URL to avoid Virus scan warning
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
3
"2022-02-25T09:35:12Z"
"2022-03-04T20:43:32Z"
"2022-02-25T11:56:35Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3787.diff", "html_url": "https://github.com/huggingface/datasets/pull/3787", "merged_at": "2022-02-25T11:56:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/3787.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3787" }
This PR fixes, in the datasets library instead of in every specific dataset, the issue of downloading the Virus scan warning page instead of the actual data file for Google Drive URLs. Fix #3786, fix #3784.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3787/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3787/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3786
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3786/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3786/comments
https://api.github.com/repos/huggingface/datasets/issues/3786/events
https://github.com/huggingface/datasets/issues/3786
1,150,233,067
I_kwDODunzps5Ejynr
3,786
Bug downloading Virus scan warning page from Google Drive URLs
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
1
"2022-02-25T09:32:23Z"
"2022-03-03T09:25:59Z"
"2022-02-25T11:56:35Z"
MEMBER
null
null
null
## Describe the bug Recently, some issues were reported with URLs from Google Drive, where we were downloading the Virus scan warning page instead of the data file itself. See: - #3758 - #3773 - #3784
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3786/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3786/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3785
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3785/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3785/comments
https://api.github.com/repos/huggingface/datasets/issues/3785/events
https://github.com/huggingface/datasets/pull/3785
1,150,069,801
PR_kwDODunzps4zciES
3,785
Fix: Bypass Virus Checks in Google Drive Links (CNN-DM dataset)
{ "avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4", "events_url": "https://api.github.com/users/AngadSethi/events{/privacy}", "followers_url": "https://api.github.com/users/AngadSethi/followers", "following_url": "https://api.github.com/users/AngadSethi/following{/other_user}", "gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AngadSethi", "id": 58678541, "login": "AngadSethi", "node_id": "MDQ6VXNlcjU4Njc4NTQx", "organizations_url": "https://api.github.com/users/AngadSethi/orgs", "received_events_url": "https://api.github.com/users/AngadSethi/received_events", "repos_url": "https://api.github.com/users/AngadSethi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions", "type": "User", "url": "https://api.github.com/users/AngadSethi" }
[]
closed
false
null
[]
null
8
"2022-02-25T05:48:57Z"
"2022-03-03T16:43:47Z"
"2022-03-03T14:03:37Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3785.diff", "html_url": "https://github.com/huggingface/datasets/pull/3785", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3785.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3785" }
This commit fixes the issue described in #3784. By adding an extra parameter to the end of Google Drive links, we are able to bypass the virus check and download the datasets. So, if the original link looked like https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ The new link now looks like https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ&confirm=t Fixes #3784
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3785/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3785/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3784
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3784/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3784/comments
https://api.github.com/repos/huggingface/datasets/issues/3784/events
https://github.com/huggingface/datasets/issues/3784
1,150,057,955
I_kwDODunzps5EjH3j
3,784
Unable to Download CNN-Dailymail Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4", "events_url": "https://api.github.com/users/AngadSethi/events{/privacy}", "followers_url": "https://api.github.com/users/AngadSethi/followers", "following_url": "https://api.github.com/users/AngadSethi/following{/other_user}", "gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AngadSethi", "id": 58678541, "login": "AngadSethi", "node_id": "MDQ6VXNlcjU4Njc4NTQx", "organizations_url": "https://api.github.com/users/AngadSethi/orgs", "received_events_url": "https://api.github.com/users/AngadSethi/received_events", "repos_url": "https://api.github.com/users/AngadSethi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions", "type": "User", "url": "https://api.github.com/users/AngadSethi" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4", "events_url": "https://api.github.com/users/AngadSethi/events{/privacy}", "followers_url": "https://api.github.com/users/AngadSethi/followers", "following_url": "https://api.github.com/users/AngadSethi/following{/other_user}", "gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AngadSethi", "id": 58678541, "login": "AngadSethi", "node_id": "MDQ6VXNlcjU4Njc4NTQx", "organizations_url": "https://api.github.com/users/AngadSethi/orgs", "received_events_url": "https://api.github.com/users/AngadSethi/received_events", "repos_url": "https://api.github.com/users/AngadSethi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions", "type": "User", "url": "https://api.github.com/users/AngadSethi" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4", "events_url": "https://api.github.com/users/AngadSethi/events{/privacy}", "followers_url": "https://api.github.com/users/AngadSethi/followers", "following_url": "https://api.github.com/users/AngadSethi/following{/other_user}", "gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AngadSethi", "id": 58678541, "login": "AngadSethi", "node_id": "MDQ6VXNlcjU4Njc4NTQx", "organizations_url": "https://api.github.com/users/AngadSethi/orgs", "received_events_url": "https://api.github.com/users/AngadSethi/received_events", "repos_url": "https://api.github.com/users/AngadSethi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions", "type": "User", "url": "https://api.github.com/users/AngadSethi" } ]
null
4
"2022-02-25T05:24:47Z"
"2022-03-03T14:05:17Z"
"2022-03-03T14:05:17Z"
NONE
null
null
null
## Describe the bug I am unable to download the CNN-Dailymail dataset. Upon closer investigation, I realised why this was happening: - The dataset sits in Google Drive, and both the CNN and DM datasets are large. - Google is unable to scan the folder for viruses, **so the link which would originally download the dataset, now downloads the source code of this web page:** ![image](https://user-images.githubusercontent.com/58678541/155658435-c2f497d7-7601-4332-94b1-18a62dd96422.png) - **This leads to the following error**: ```python NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` ## Steps to reproduce the bug ```python import datasets dataset = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train") ``` ## Expected results That the dataset is downloaded and processed just like other datasets. ## Actual results Hit with this error: ```python NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3784/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3784/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3783
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3783/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3783/comments
https://api.github.com/repos/huggingface/datasets/issues/3783/events
https://github.com/huggingface/datasets/pull/3783
1,149,256,744
PR_kwDODunzps4zZ1jR
3,783
Support passing str to iter_files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-02-24T12:58:15Z"
"2022-02-24T16:01:40Z"
"2022-02-24T16:01:40Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3783.diff", "html_url": "https://github.com/huggingface/datasets/pull/3783", "merged_at": "2022-02-24T16:01:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/3783.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3783" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3783/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3783/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3782
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3782/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3782/comments
https://api.github.com/repos/huggingface/datasets/issues/3782/events
https://github.com/huggingface/datasets/pull/3782
1,148,994,022
PR_kwDODunzps4zY-Xb
3,782
Error of writing with different schema, due to nonpreservation of nullability
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[]
closed
false
null
[]
null
1
"2022-02-24T08:23:07Z"
"2022-03-03T14:54:39Z"
"2022-03-03T14:54:39Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3782.diff", "html_url": "https://github.com/huggingface/datasets/pull/3782", "merged_at": "2022-03-03T14:54:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/3782.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3782" }
## 1. Case ``` dataset.map( batched=True, disable_nullable=True, ) ``` raises the following error here https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L516 `pyarrow.lib.ArrowInvalid: Tried to write record batch with different schema` ## 2. Debugging ### 2.1 Tracing During `_map_single`, the following are called https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_dataset.py#L2523 https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L511 ### 2.2 Observation The problem is that, even after `table_cast`, `pa_table.schema != self._schema` `pa_table.schema` (before/after `table_cast`) ``` input_ids: list<item: int32> child 0, item: int32 ``` `self._schema` ``` input_ids: list<item: int32> not null child 0, item: int32 ``` ### 2.3 Reason https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/table.py#L1121 Here we lose the nullability stored in `schema`, because `Features` is always nullable and doesn't store nullability. https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/table.py#L1103 So casting to a schema derived from such `Features` loses nullability, and eventually causes the error of writing with a different schema. ## 3. Solution 1. Let `Features` store nullability. 2. Cast the table directly with the original schema instead of the schema converted from `Features` (this PR). 3. Don't call `cast_table` in `write_table`
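A minimal, self-contained illustration of the mismatch described in 2.2 (this snippet is an added sketch, not part of the PR): two schemas that differ only in the `not null` flag do not compare equal, which is exactly what makes the writer raise `ArrowInvalid`.

```python
import pyarrow as pa

# Schema kept by the writer when `disable_nullable=True` (field marked `not null`)
writer_schema = pa.schema(
    [pa.field("input_ids", pa.list_(pa.int32()), nullable=False)]
)

# Schema obtained after casting with a schema rebuilt from `Features`,
# which does not record nullability, so the field comes back nullable
casted_schema = pa.schema(
    [pa.field("input_ids", pa.list_(pa.int32()), nullable=True)]
)

# The schemas differ only in nullability, yet they are not equal ->
# "Tried to write record batch with different schema"
print(writer_schema.equals(casted_schema))  # False
```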
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3782/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3782/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3781
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3781/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3781/comments
https://api.github.com/repos/huggingface/datasets/issues/3781/events
https://github.com/huggingface/datasets/pull/3781
1,148,599,680
PR_kwDODunzps4zXr_O
3,781
Reddit dataset card additions
{ "avatar_url": "https://avatars.githubusercontent.com/u/56791604?v=4", "events_url": "https://api.github.com/users/anna-kay/events{/privacy}", "followers_url": "https://api.github.com/users/anna-kay/followers", "following_url": "https://api.github.com/users/anna-kay/following{/other_user}", "gists_url": "https://api.github.com/users/anna-kay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anna-kay", "id": 56791604, "login": "anna-kay", "node_id": "MDQ6VXNlcjU2NzkxNjA0", "organizations_url": "https://api.github.com/users/anna-kay/orgs", "received_events_url": "https://api.github.com/users/anna-kay/received_events", "repos_url": "https://api.github.com/users/anna-kay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anna-kay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anna-kay/subscriptions", "type": "User", "url": "https://api.github.com/users/anna-kay" }
[]
closed
false
null
[]
null
1
"2022-02-23T21:29:16Z"
"2022-02-28T18:00:40Z"
"2022-02-28T11:21:14Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3781.diff", "html_url": "https://github.com/huggingface/datasets/pull/3781", "merged_at": "2022-02-28T11:21:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/3781.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3781" }
The proposed changes are based on the "TL;DR: Mining Reddit to Learn Automatic Summarization" paper and https://zenodo.org/record/1043504#.YhaKHpbQC38 It is indeed a Reddit dataset, but the name given to it by the authors is Webis-TLDR-17 (corpus), so perhaps the dataset name should be updated as well. The task the dataset is aimed at is abstractive summarization.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3781/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3781/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3780
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3780/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3780/comments
https://api.github.com/repos/huggingface/datasets/issues/3780/events
https://github.com/huggingface/datasets/pull/3780
1,148,186,272
PR_kwDODunzps4zWVSM
3,780
Add ElkarHizketak v1.0 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7646055?v=4", "events_url": "https://api.github.com/users/antxa/events{/privacy}", "followers_url": "https://api.github.com/users/antxa/followers", "following_url": "https://api.github.com/users/antxa/following{/other_user}", "gists_url": "https://api.github.com/users/antxa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/antxa", "id": 7646055, "login": "antxa", "node_id": "MDQ6VXNlcjc2NDYwNTU=", "organizations_url": "https://api.github.com/users/antxa/orgs", "received_events_url": "https://api.github.com/users/antxa/received_events", "repos_url": "https://api.github.com/users/antxa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/antxa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antxa/subscriptions", "type": "User", "url": "https://api.github.com/users/antxa" }
[]
closed
false
null
[]
null
1
"2022-02-23T14:44:17Z"
"2022-03-04T19:04:29Z"
"2022-03-04T19:04:29Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3780.diff", "html_url": "https://github.com/huggingface/datasets/pull/3780", "merged_at": "2022-03-04T19:04:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/3780.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3780" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3780/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3780/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3779
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3779/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3779/comments
https://api.github.com/repos/huggingface/datasets/issues/3779/events
https://github.com/huggingface/datasets/pull/3779
1,148,050,636
PR_kwDODunzps4zV4qr
3,779
Update manual download URL in newsroom dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2022-02-23T12:49:07Z"
"2022-02-23T13:26:41Z"
"2022-02-23T13:26:40Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3779.diff", "html_url": "https://github.com/huggingface/datasets/pull/3779", "merged_at": "2022-02-23T13:26:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/3779.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3779" }
Fix #3778.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3779/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3779/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3778
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3778/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3778/comments
https://api.github.com/repos/huggingface/datasets/issues/3778/events
https://github.com/huggingface/datasets/issues/3778
1,147,898,946
I_kwDODunzps5Ea4xC
3,778
Unable to download dataset - "Newsroom"
{ "avatar_url": "https://avatars.githubusercontent.com/u/61326242?v=4", "events_url": "https://api.github.com/users/Darshan2104/events{/privacy}", "followers_url": "https://api.github.com/users/Darshan2104/followers", "following_url": "https://api.github.com/users/Darshan2104/following{/other_user}", "gists_url": "https://api.github.com/users/Darshan2104/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Darshan2104", "id": 61326242, "login": "Darshan2104", "node_id": "MDQ6VXNlcjYxMzI2MjQy", "organizations_url": "https://api.github.com/users/Darshan2104/orgs", "received_events_url": "https://api.github.com/users/Darshan2104/received_events", "repos_url": "https://api.github.com/users/Darshan2104/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Darshan2104/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Darshan2104/subscriptions", "type": "User", "url": "https://api.github.com/users/Darshan2104" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
2
"2022-02-23T10:15:50Z"
"2022-02-23T17:05:04Z"
"2022-02-23T13:26:40Z"
NONE
null
null
null
Hello, I tried to download the **newsroom** dataset but it didn't work out for me. It told me to **download it manually**! But the manual download link didn't work either! It just shows some ad or something! If anybody has solved this issue please help me out, or if somebody has this dataset please share your Google Drive link; it would be a great help! Thanks Darshan Tank
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3778/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3778/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3777
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3777/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3777/comments
https://api.github.com/repos/huggingface/datasets/issues/3777/events
https://github.com/huggingface/datasets/pull/3777
1,147,232,875
PR_kwDODunzps4zTVrz
3,777
Start removing canonical datasets logic
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
3
"2022-02-22T18:23:30Z"
"2022-02-24T15:04:37Z"
"2022-02-24T15:04:36Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3777.diff", "html_url": "https://github.com/huggingface/datasets/pull/3777", "merged_at": "2022-02-24T15:04:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/3777.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3777" }
I updated the source code and the documentation to start removing the "canonical datasets" logic. This distinction makes the documentation confusing and we don't want it anymore in the future. Ideally users should share their datasets on the Hub directly. ### Changes - the documentation about dataset loading mentions the datasets on the Hub (no difference between canonical and community, since they all have their own repository now) - the documentation about adding a dataset doesn't explain the technical differences between canonical and community anymore, and only presents how to add a community dataset. There is still a small section at the bottom that mentions the datasets that are still on GitHub and redirects to the `ADD_NEW_DATASET.md` guide on GitHub about how to contribute a dataset to the `datasets` library - the source code doesn't mention "canonical" anywhere anymore. There is still a `GitHubDatasetModuleFactory` class left, but I updated its docstring to say that it will eventually be removed in favor of the `HubDatasetModuleFactory` classes that already exist Would love to have your feedback on this! cc @julien-c @thomwolf @SBrandeis
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 2, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/3777/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3777/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3776
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3776/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3776/comments
https://api.github.com/repos/huggingface/datasets/issues/3776/events
https://github.com/huggingface/datasets/issues/3776
1,146,932,871
I_kwDODunzps5EXM6H
3,776
Allow downloading only some files from the Wikipedia dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1514798?v=4", "events_url": "https://api.github.com/users/jvanz/events{/privacy}", "followers_url": "https://api.github.com/users/jvanz/followers", "following_url": "https://api.github.com/users/jvanz/following{/other_user}", "gists_url": "https://api.github.com/users/jvanz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jvanz", "id": 1514798, "login": "jvanz", "node_id": "MDQ6VXNlcjE1MTQ3OTg=", "organizations_url": "https://api.github.com/users/jvanz/orgs", "received_events_url": "https://api.github.com/users/jvanz/received_events", "repos_url": "https://api.github.com/users/jvanz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jvanz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jvanz/subscriptions", "type": "User", "url": "https://api.github.com/users/jvanz" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
1
"2022-02-22T13:46:41Z"
"2022-02-22T14:50:02Z"
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** The Wikipedia dataset can be really big. This is a problem if you want to use it locally on a laptop with the Apache Beam `DirectRunner`, even if your laptop has a considerable amount of memory (e.g. 32 GB). **Describe the solution you'd like** I would like to use the `data_files` argument of the `load_dataset` function to define which files of the Wikipedia dataset I would like to download. That way, I could work with the dataset on a smaller machine using the Apache Beam `DirectRunner`. **Describe alternatives you've considered** I've tried to use the `simple` Wikipedia dataset, but it's in English and I would like to use Portuguese texts in my model.
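A sketch of what the requested call could look like (purely hypothetical: `data_files` is not currently honoured by the `wikipedia` loading script, and the config and file names below are assumptions, not tested values):

```python
from datasets import load_dataset

# Hypothetical usage illustrating the feature request; this does NOT work today.
dataset = load_dataset(
    "wikipedia",
    "20200501.pt",  # assumed Portuguese config name
    data_files=["ptwiki-20200501-pages-articles1.xml.bz2"],  # assumed shard name
    beam_runner="DirectRunner",
)
```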
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3776/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3776/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3775
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3775/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3775/comments
https://api.github.com/repos/huggingface/datasets/issues/3775/events
https://github.com/huggingface/datasets/pull/3775
1,146,849,454
PR_kwDODunzps4zSEd4
3,775
Update gigaword card and info
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
3
"2022-02-22T12:27:16Z"
"2022-02-28T11:35:24Z"
"2022-02-28T11:35:24Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3775.diff", "html_url": "https://github.com/huggingface/datasets/pull/3775", "merged_at": "2022-02-28T11:35:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/3775.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3775" }
Reported on the forum: https://discuss.huggingface.co/t/error-loading-dataset/14999
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3775/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3775/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3774
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3774/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3774/comments
https://api.github.com/repos/huggingface/datasets/issues/3774/events
https://github.com/huggingface/datasets/pull/3774
1,146,843,177
PR_kwDODunzps4zSDHC
3,774
Fix reddit_tifu data URL
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2022-02-22T12:21:15Z"
"2022-02-22T12:38:45Z"
"2022-02-22T12:38:44Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3774.diff", "html_url": "https://github.com/huggingface/datasets/pull/3774", "merged_at": "2022-02-22T12:38:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/3774.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3774" }
Fix #3773.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3774/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3774/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3773
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3773/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3773/comments
https://api.github.com/repos/huggingface/datasets/issues/3773/events
https://github.com/huggingface/datasets/issues/3773
1,146,758,335
I_kwDODunzps5EWiS_
3,773
Checksum mismatch for the reddit_tifu dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/56791604?v=4", "events_url": "https://api.github.com/users/anna-kay/events{/privacy}", "followers_url": "https://api.github.com/users/anna-kay/followers", "following_url": "https://api.github.com/users/anna-kay/following{/other_user}", "gists_url": "https://api.github.com/users/anna-kay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anna-kay", "id": 56791604, "login": "anna-kay", "node_id": "MDQ6VXNlcjU2NzkxNjA0", "organizations_url": "https://api.github.com/users/anna-kay/orgs", "received_events_url": "https://api.github.com/users/anna-kay/received_events", "repos_url": "https://api.github.com/users/anna-kay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anna-kay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anna-kay/subscriptions", "type": "User", "url": "https://api.github.com/users/anna-kay" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
4
"2022-02-22T10:57:07Z"
"2022-02-25T19:27:49Z"
"2022-02-22T12:38:44Z"
CONTRIBUTOR
null
null
null
## Describe the bug A checksum mismatch occurs when downloading the reddit_tifu data (both long & short). ## Steps to reproduce the bug `reddit_tifu_dataset = load_dataset('reddit_tifu', 'long')` ## Expected results The expected result is for the dataset to be downloaded and cached locally. ## Actual results File "/.../lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF'] ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 7.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3773/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3773/timeline
null
completed
false