url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (sequence) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (null) | comments (sequence) | created_at (int64) | updated_at (int64) | closed_at (int64, nullable) | author_association (string) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string, nullable) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | is_pull_request (bool)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3378 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3378/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3378/comments | https://api.github.com/repos/huggingface/datasets/issues/3378/events | https://github.com/huggingface/datasets/pull/3378 | 1,070,580,126 | PR_kwDODunzps4vXF1D | 3,378 | Add The Pile subsets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,638,537,294,000 | 1,638,537,294,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3378",
"html_url": "https://github.com/huggingface/datasets/pull/3378",
"diff_url": "https://github.com/huggingface/datasets/pull/3378.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3378.patch",
"merged_at": null
} | Add The Pile subsets:
- pubmed
- ubuntu_irc
- europarl
- hacker_news
- nih_exporter
Close bigscience-workshop/data_tooling#301. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3378/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3377/comments | https://api.github.com/repos/huggingface/datasets/issues/3377/events | https://github.com/huggingface/datasets/pull/3377 | 1,070,562,907 | PR_kwDODunzps4vXCHn | 3,377 | COCO 🥥 on the 🤗 Hub? | {
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"@mariosasko I fixed couple of bugs",
"TO-DO: \r\n- [x] Add unlabeled 2017 splits, train and validation splits of 2015\r\n- [ ] Add Class Labels as list instead"
] | 1,638,536,127,000 | 1,638,543,147,000 | null | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3377",
"html_url": "https://github.com/huggingface/datasets/pull/3377",
"diff_url": "https://github.com/huggingface/datasets/pull/3377.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3377.patch",
"merged_at": null
} | This is a draft PR since I ran into few small problems.
I referred to this TFDS code: https://github.com/tensorflow/datasets/blob/2538a08c184d53b37bfcf52cc21dd382572a88f4/tensorflow_datasets/object_detection/coco.py
cc: @mariosasko | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3377/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3377/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3376/comments | https://api.github.com/repos/huggingface/datasets/issues/3376/events | https://github.com/huggingface/datasets/pull/3376 | 1,070,522,979 | PR_kwDODunzps4vW5sB | 3,376 | Update clue benchmark | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,638,533,161,000 | 1,638,533,161,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3376",
"html_url": "https://github.com/huggingface/datasets/pull/3376",
"diff_url": "https://github.com/huggingface/datasets/pull/3376.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3376.patch",
"merged_at": null
} | Fix #3374 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3376/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3375/comments | https://api.github.com/repos/huggingface/datasets/issues/3375/events | https://github.com/huggingface/datasets/pull/3375 | 1,070,454,913 | PR_kwDODunzps4vWrXz | 3,375 | Support streaming zipped dataset repo by passing only repo name | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,638,528,185,000 | 1,638,528,185,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3375",
"html_url": "https://github.com/huggingface/datasets/pull/3375",
"diff_url": "https://github.com/huggingface/datasets/pull/3375.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3375.patch",
"merged_at": null
} | Fix #3373. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3375/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3374/comments | https://api.github.com/repos/huggingface/datasets/issues/3374/events | https://github.com/huggingface/datasets/issues/3374 | 1,070,426,462 | I_kwDODunzps4_zWle | 3,374 | NonMatchingChecksumError for the CLUE:cluewsc2020, chid, c3 and tnews | {
"login": "Namco0816",
"id": 34687537,
"node_id": "MDQ6VXNlcjM0Njg3NTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/34687537?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Namco0816",
"html_url": "https://github.com/Namco0816",
"followers_url": "https://api.github.com/users/Namco0816/followers",
"following_url": "https://api.github.com/users/Namco0816/following{/other_user}",
"gists_url": "https://api.github.com/users/Namco0816/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Namco0816/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Namco0816/subscriptions",
"organizations_url": "https://api.github.com/users/Namco0816/orgs",
"repos_url": "https://api.github.com/users/Namco0816/repos",
"events_url": "https://api.github.com/users/Namco0816/events{/privacy}",
"received_events_url": "https://api.github.com/users/Namco0816/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,638,526,254,000 | 1,638,532,337,000 | null | NONE | null | null | null | Hi, it seems like there are updates in cluewsc2020, chid, c3 and tnews, since i could not load them due to the checksum error. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3374/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3373/comments | https://api.github.com/repos/huggingface/datasets/issues/3373/events | https://github.com/huggingface/datasets/issues/3373 | 1,070,406,391 | I_kwDODunzps4_zRr3 | 3,373 | Support streaming zipped CSV dataset repo by passing only repo name | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,638,524,904,000 | 1,638,526,949,000 | null | MEMBER | null | null | null | Given a community 🤗 dataset repository containing only a zipped CSV file (only raw data, no loading script), I would like to load it in streaming mode without passing `data_files`:
```
ds_name = "bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab"
ds = load_dataset(ds_name, split="train", streaming=True, use_auth_token=True)
item = next(iter(ds))
```
Currently, it gives a `FileNotFoundError` because there is no glob (no "\*" after "zip://": "zip://*") in the passed URL:
```
'zip://::https://huggingface.co/datasets/bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab/resolve/e5d45f1bd9a8a798cc14f0a45ebc1ce91907c792/poems_dataset.zip'
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3373/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3372 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3372/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3372/comments | https://api.github.com/repos/huggingface/datasets/issues/3372/events | https://github.com/huggingface/datasets/issues/3372 | 1,069,948,178 | I_kwDODunzps4_xh0S | 3,372 | [SEO improvement] Add Dataset Metadata to make datasets indexable | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,638,476,467,000 | 1,638,476,467,000 | null | NONE | null | null | null | Some people who host datasets on github seem to include a table of metadata at the end of their README.md to make the dataset indexable by [Google Dataset Search](https://datasetsearch.research.google.com/) (See [here](https://github.com/google-research/google-research/tree/master/goemotions#dataset-metadata) and [here](https://github.com/cvdfoundation/google-landmark#dataset-metadata)). This could be a useful addition to canonical datasets; perhaps even community datasets.
I'll include a screenshot (as opposed to markdown) as an example so as not to have a github issue indexed as a dataset:
> ![image](https://user-images.githubusercontent.com/3664563/144496173-953428cf-633a-4571-b75b-f099c6b2ed65.png)
**_PS: It might very well be the case that this is already covered by some other markdown magic I'm not aware of._**
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3372/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3371/comments | https://api.github.com/repos/huggingface/datasets/issues/3371/events | https://github.com/huggingface/datasets/pull/3371 | 1,069,821,335 | PR_kwDODunzps4vUnbp | 3,371 | feat: add americas nli dataset | {
"login": "fdschmidt93",
"id": 39233597,
"node_id": "MDQ6VXNlcjM5MjMzNTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/39233597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fdschmidt93",
"html_url": "https://github.com/fdschmidt93",
"followers_url": "https://api.github.com/users/fdschmidt93/followers",
"following_url": "https://api.github.com/users/fdschmidt93/following{/other_user}",
"gists_url": "https://api.github.com/users/fdschmidt93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fdschmidt93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fdschmidt93/subscriptions",
"organizations_url": "https://api.github.com/users/fdschmidt93/orgs",
"repos_url": "https://api.github.com/users/fdschmidt93/repos",
"events_url": "https://api.github.com/users/fdschmidt93/events{/privacy}",
"received_events_url": "https://api.github.com/users/fdschmidt93/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,638,467,099,000 | 1,638,473,264,000 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3371",
"html_url": "https://github.com/huggingface/datasets/pull/3371",
"diff_url": "https://github.com/huggingface/datasets/pull/3371.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3371.patch",
"merged_at": null
} | This PR adds the [Americas NLI](https://arxiv.org/abs/2104.08726) dataset, extension of XNLI to 10 low-resource indigenous languages spoken in the Americas: Ashaninka, Aymara, Bribri, Guarani, Nahuatl, Otomi, Quechua, Raramuri, Shipibo-Konibo, and Wixarika.
One odd thing (not sure) is that I had to set
`datasets-cli dummy_data ./datasets/americas_nli/ --auto_generate --n_lines 7500`
`n_lines` very large to successfully generate the dummy files for all the subsets. Happy to get some guidance here.
Otherwise, I hope everything is in order :)
e: missed a step, onto fixing the tests
e2: there you go -- hope it's ok to have added more languages with their ISO codes to `languages.json`, need those tests to pass :laughing: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3371/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3370 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3370/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3370/comments | https://api.github.com/repos/huggingface/datasets/issues/3370/events | https://github.com/huggingface/datasets/pull/3370 | 1,069,735,423 | PR_kwDODunzps4vUVA3 | 3,370 | Document a training loop for streaming dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,638,461,820,000 | 1,638,538,475,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3370",
"html_url": "https://github.com/huggingface/datasets/pull/3370",
"diff_url": "https://github.com/huggingface/datasets/pull/3370.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3370.patch",
"merged_at": 1638538474000
} | I added some docs about streaming dataset. In particular I added two subsections:
- one on how to use `map` for preprocessing
- one on how to use a streaming dataset in a pytorch training loop
cc @patrickvonplaten @stevhliu if you have some comments
cc @Rocketknight1 later we can add the one for TF and I might need your help ^^' | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3370/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3369/comments | https://api.github.com/repos/huggingface/datasets/issues/3369/events | https://github.com/huggingface/datasets/issues/3369 | 1,069,587,674 | I_kwDODunzps4_wJza | 3,369 | [Audio] Allow resampling for audio datasets in streaming mode | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,638,453,897,000 | 1,638,453,908,000 | null | MEMBER | null | null | null | Many audio datasets like Common Voice always need to be resampled. This can very easily be done in non-streaming mode as follows:
```python
from datasets import load_dataset
ds = load_dataset("common_voice", "ab", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```
However in streaming mode it fails currently:
```python
from datasets import load_dataset
ds = load_dataset("common_voice", "ab", split="test", streaming=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```
with the following error:
```
AttributeError: 'IterableDataset' object has no attribute 'cast_column'
```
It would be great if we could add such a feature (I'm not 100% sure though how complex this would be) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3369/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3368/comments | https://api.github.com/repos/huggingface/datasets/issues/3368/events | https://github.com/huggingface/datasets/pull/3368 | 1,069,403,624 | PR_kwDODunzps4vTObo | 3,368 | Fix dict source_datasets tagset validator | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,638,442,340,000 | 1,638,460,118,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3368",
"html_url": "https://github.com/huggingface/datasets/pull/3368",
"diff_url": "https://github.com/huggingface/datasets/pull/3368.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3368.patch",
"merged_at": 1638460117000
} | Currently, the `source_datasets` tag validation does not support passing a dict with configuration keys.
This PR:
- Extends `tagset_validator` to support regex tags
- Uses `tagset_validator` to validate dict `source_datasets` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3368/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3367/comments | https://api.github.com/repos/huggingface/datasets/issues/3367/events | https://github.com/huggingface/datasets/pull/3367 | 1,069,241,274 | PR_kwDODunzps4vSsfk | 3,367 | Fix typo in other-structured-to-text task tag | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,638,432,147,000 | 1,638,461,234,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3367",
"html_url": "https://github.com/huggingface/datasets/pull/3367",
"diff_url": "https://github.com/huggingface/datasets/pull/3367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3367.patch",
"merged_at": 1638461233000
} | Fix typo in task tag:
- `other-stuctured-to-text` (before)
- `other-structured-to-text` (now) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3367/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3366/comments | https://api.github.com/repos/huggingface/datasets/issues/3366/events | https://github.com/huggingface/datasets/issues/3366 | 1,069,214,022 | I_kwDODunzps4_uulG | 3,366 | Add multimodal datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,638,429,844,000 | 1,638,430,413,000 | null | MEMBER | null | null | null | Epic issue to track the addition of multimodal datasets:
- [ ] #2526
- [ ] #1842
- [ ] #1810
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
@VictorSanh feel free to add and sort by priority any interesting dataset. I have added the multimodal dataset requests which were already present as issues. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3366/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/3366/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3365/comments | https://api.github.com/repos/huggingface/datasets/issues/3365/events | https://github.com/huggingface/datasets/issues/3365 | 1,069,195,887 | I_kwDODunzps4_uqJv | 3,365 | Add task tags for multimodal datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,638,428,300,000 | 1,638,430,389,000 | null | MEMBER | null | null | null | ## **Is your feature request related to a problem? Please describe.**
Currently, task tags are either exclusively related to text or speech processing:
- https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/tasks.json
## **Describe the solution you'd like**
We should also add tasks related to:
- multimodality
- image
- video
CC: @VictorSanh @lewtun @lhoestq @merveenoyan @SBrandeis | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3365/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3365/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3364/comments | https://api.github.com/repos/huggingface/datasets/issues/3364/events | https://github.com/huggingface/datasets/pull/3364 | 1,068,851,196 | PR_kwDODunzps4vRaxq | 3,364 | Use the Audio feature in the AutomaticSpeechRecognition template | {
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,638,391,346,000 | 1,638,449,170,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3364",
"html_url": "https://github.com/huggingface/datasets/pull/3364",
"diff_url": "https://github.com/huggingface/datasets/pull/3364.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3364.patch",
"merged_at": null
} | This updates the ASR template and all supported datasets to use the `Audio` feature | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3364/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3363/comments | https://api.github.com/repos/huggingface/datasets/issues/3363/events | https://github.com/huggingface/datasets/pull/3363 | 1,068,824,340 | PR_kwDODunzps4vRVCl | 3,363 | Update URL of Jeopardy! dataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Closing this PR in favor of #3266."
] | 1,638,389,290,000 | 1,638,534,901,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3363",
"html_url": "https://github.com/huggingface/datasets/pull/3363",
"diff_url": "https://github.com/huggingface/datasets/pull/3363.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3363.patch",
"merged_at": null
} | Updates the URL of the Jeopardy! dataset.
Fix #3361 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3363/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3362/comments | https://api.github.com/repos/huggingface/datasets/issues/3362/events | https://github.com/huggingface/datasets/pull/3362 | 1,068,809,768 | PR_kwDODunzps4vRR2r | 3,362 | Adapt image datasets | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,638,388,321,000 | 1,638,553,377,000 | null | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3362",
"html_url": "https://github.com/huggingface/datasets/pull/3362",
"diff_url": "https://github.com/huggingface/datasets/pull/3362.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3362.patch",
"merged_at": null
} | This PR:
* adapts the ImageClassification template to use the new Image feature
* adapts the following datasets to use the new Image feature:
* beans (+ fixes streaming)
* cats_vs_dogs (+ fixes streaming)
* cifar10
* cifar100
* fashion_mnist
* mnist
* head_qa
cc @nateraw | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3362/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3361 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3361/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3361/comments | https://api.github.com/repos/huggingface/datasets/issues/3361/events | https://github.com/huggingface/datasets/issues/3361 | 1,068,736,268 | I_kwDODunzps4_s58M | 3,361 | Jeopardy _URL access denied | {
"login": "tianjianjiang",
"id": 4812544,
"node_id": "MDQ6VXNlcjQ4MTI1NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tianjianjiang",
"html_url": "https://github.com/tianjianjiang",
"followers_url": "https://api.github.com/users/tianjianjiang/followers",
"following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}",
"gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions",
"organizations_url": "https://api.github.com/users/tianjianjiang/orgs",
"repos_url": "https://api.github.com/users/tianjianjiang/repos",
"events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/tianjianjiang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,638,382,893,000 | 1,638,382,972,000 | null | NONE | null | null | null | ## Describe the bug
http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz returns Access Denied now.
However, https://drive.google.com/file/d/0BwT5wj_P7BKXb2hfM3d2RHU1ckE/view?usp=sharing from the original Reddit post https://www.reddit.com/r/datasets/comments/1uyd0t/200000_jeopardy_questions_in_a_json_file/ may work.
## Steps to reproduce the bug
```shell
> python
Python 3.7.12 (default, Sep 5 2021, 08:34:29)
[Clang 11.0.3 (clang-1103.0.32.62)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
```
```python
>>> from datasets import load_dataset
>>> load_dataset("jeopardy")
```
## Expected results
The download completes.
## Actual results
```shell
Downloading: 4.18kB [00:00, 1.60MB/s]
Downloading: 2.03kB [00:00, 1.04MB/s]
Using custom data configuration default
Downloading and preparing dataset jeopardy/default (download: 12.13 MiB, generated: 34.46 MiB, post-processed: Unknown size, total: 46.59 MiB) to /Users/mike/.cache/huggingface/datasets/jeopardy/default/0.1.0/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/load.py", line 1632, in load_dataset
use_auth_token=use_auth_token,
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 608, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 675, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/mike/.cache/huggingface/modules/datasets_modules/datasets/jeopardy/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810/jeopardy.py", line 72, in _split_generators
filepath = dl_manager.download_and_extract(_DATA_URL)
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 197, in download
download_func, url_or_urls, map_tuple=True, num_proc=download_config.num_proc, disable_tqdm=False
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 197, in map_nested
return function(data_struct)
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 305, in cached_path
use_auth_token=download_config.use_auth_token,
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz
```
---
```shell
> curl http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz
```
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>70Y9R36XNPEQXMGV</RequestId><HostId>G6F5AK4qo7JdaEdKGMtS0P6gdLPeFOdEfSEfvTOZEfk9km0/jAfp08QLfKSTFFj1oWIKoAoBehM=</HostId></Error>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.14.0
- Platform: macOS Catalina 10.15.7
- Python version: 3.7.12
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3361/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3360/comments | https://api.github.com/repos/huggingface/datasets/issues/3360/events | https://github.com/huggingface/datasets/pull/3360 | 1,068,724,697 | PR_kwDODunzps4vQ_16 | 3,360 | Add The Pile USPTO subset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,638,382,085,000 | 1,638,531,929,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3360",
"html_url": "https://github.com/huggingface/datasets/pull/3360",
"diff_url": "https://github.com/huggingface/datasets/pull/3360.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3360.patch",
"merged_at": 1638531927000
} | Add:
- USPTO subset of The Pile: "uspto" config
Close bigscience-workshop/data_tooling#297.
CC: @StellaAthena | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3360/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3359/comments | https://api.github.com/repos/huggingface/datasets/issues/3359/events | https://github.com/huggingface/datasets/pull/3359 | 1,068,638,213 | PR_kwDODunzps4vQtI0 | 3,359 | Add The Pile Free Law subset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@albertvillanova Is there a specific reason you’re adding the Pile under “the” instead of under “pile”? That does not appear to be consistent with other datasets.",
"Hi @StellaAthena,\r\n\r\nI asked myself the same question, but at the end I decided to be consistent with previously added Pile subsets:\r\n- #2817\r\n\r\nI guess the reason is to stress that the definite article is always used before the name of the dataset (your site says: \"The Pile. An 800GB Dataset of Diverse Text for Language Modeling\"). Other datasets are not usually preceded by the definite article, like \"the SQuAD\" or \"the GLUE\" or \"the Common Voice\"...\r\n\r\nCC: @lhoestq "
] | 1,638,377,164,000 | 1,638,438,019,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3359",
"html_url": "https://github.com/huggingface/datasets/pull/3359",
"diff_url": "https://github.com/huggingface/datasets/pull/3359.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3359.patch",
"merged_at": 1638379843000
} | Add:
- Free Law subset of The Pile: "free_law" config
Close bigscience-workshop/data_tooling#75.
CC: @StellaAthena | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3359/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3358/comments | https://api.github.com/repos/huggingface/datasets/issues/3358/events | https://github.com/huggingface/datasets/issues/3358 | 1,068,623,216 | I_kwDODunzps4_seVw | 3,358 | add new field, and get errors | {
"login": "yanllearnn",
"id": 38966558,
"node_id": "MDQ6VXNlcjM4OTY2NTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/38966558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanllearnn",
"html_url": "https://github.com/yanllearnn",
"followers_url": "https://api.github.com/users/yanllearnn/followers",
"following_url": "https://api.github.com/users/yanllearnn/following{/other_user}",
"gists_url": "https://api.github.com/users/yanllearnn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanllearnn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanllearnn/subscriptions",
"organizations_url": "https://api.github.com/users/yanllearnn/orgs",
"repos_url": "https://api.github.com/users/yanllearnn/repos",
"events_url": "https://api.github.com/users/yanllearnn/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanllearnn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi, \r\n\r\ncould you please post this question on our [Forum](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests? ",
"> Hi,\r\n> \r\n> could you please post this question on our [Forum](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests?\r\n\r\nok."
] | 1,638,376,538,000 | 1,638,411,982,000 | null | NONE | null | null | null | After adding the new field **tokenized_examples["example_id"]**, I get the errors below.
I think this is because the data is converted to tensors, and **tokenized_examples["example_id"]** is a list of strings.
**all fields**
```
***************** train_dataset 1: Dataset({
features: ['attention_mask', 'end_positions', 'example_id', 'input_ids', 'start_positions', 'token_type_ids'],
num_rows: 87714
})
```
**Errors**
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 705, in convert_to_tensors
tensor = as_tensor(value)
ValueError: too many dimensions 'str'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3358/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3357/comments | https://api.github.com/repos/huggingface/datasets/issues/3357/events | https://github.com/huggingface/datasets/pull/3357 | 1,068,607,382 | PR_kwDODunzps4vQmcL | 3,357 | Update README.md | {
"login": "apergo-ai",
"id": 68908804,
"node_id": "MDQ6VXNlcjY4OTA4ODA0",
"avatar_url": "https://avatars.githubusercontent.com/u/68908804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apergo-ai",
"html_url": "https://github.com/apergo-ai",
"followers_url": "https://api.github.com/users/apergo-ai/followers",
"following_url": "https://api.github.com/users/apergo-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/apergo-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apergo-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apergo-ai/subscriptions",
"organizations_url": "https://api.github.com/users/apergo-ai/orgs",
"repos_url": "https://api.github.com/users/apergo-ai/repos",
"events_url": "https://api.github.com/users/apergo-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/apergo-ai/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,638,375,646,000 | 1,638,375,646,000 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3357",
"html_url": "https://github.com/huggingface/datasets/pull/3357",
"diff_url": "https://github.com/huggingface/datasets/pull/3357.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3357.patch",
"merged_at": null
} | After having worked a bit with the dataset:
As far as I know, it is solely in English (en-US). There are only a few emails in Spanish, French, or German (fewer than a dozen, I would estimate). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3357/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3356 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3356/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3356/comments | https://api.github.com/repos/huggingface/datasets/issues/3356/events | https://github.com/huggingface/datasets/pull/3356 | 1,068,503,932 | PR_kwDODunzps4vQQLD | 3,356 | to_tf_dataset() refactor | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Also, please don't merge yet - I need to make sure all the code samples and notebooks have a collate_fn specified, since we're removing the ability for this method to work without one!"
] | 1,638,370,470,000 | 1,638,541,032,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3356",
"html_url": "https://github.com/huggingface/datasets/pull/3356",
"diff_url": "https://github.com/huggingface/datasets/pull/3356.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3356.patch",
"merged_at": null
} | This is the promised cleanup to `to_tf_dataset()` now that the course is out of the way! The main changes are:
- A collator is always required (there was way too much hackiness making things like labels work without it)
- Lots of cleanup and a lot of code moved to `_get_output_signature`
- Should now handle it gracefully when the data collator adds unexpected columns | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3356/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3356/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3355 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3355/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3355/comments | https://api.github.com/repos/huggingface/datasets/issues/3355/events | https://github.com/huggingface/datasets/pull/3355 | 1,068,468,573 | PR_kwDODunzps4vQIoy | 3,355 | Extend support for streaming datasets that use pd.read_excel | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,638,368,563,000 | 1,638,460,006,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3355",
"html_url": "https://github.com/huggingface/datasets/pull/3355",
"diff_url": "https://github.com/huggingface/datasets/pull/3355.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3355.patch",
"merged_at": null
} | This PR fixes error:
```
ValueError: Cannot seek streaming HTTP file
```
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3355/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3354 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3354/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3354/comments | https://api.github.com/repos/huggingface/datasets/issues/3354/events | https://github.com/huggingface/datasets/pull/3354 | 1,068,307,271 | PR_kwDODunzps4vPl9d | 3,354 | Remove duplicate name from dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,638,359,140,000 | 1,638,364,470,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3354",
"html_url": "https://github.com/huggingface/datasets/pull/3354",
"diff_url": "https://github.com/huggingface/datasets/pull/3354.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3354.patch",
"merged_at": 1638364469000
} | Remove duplicate name from dataset card for:
- ajgt_twitter_ar
- emotone_ar | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3354/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3353/comments | https://api.github.com/repos/huggingface/datasets/issues/3353/events | https://github.com/huggingface/datasets/issues/3353 | 1,068,173,783 | I_kwDODunzps4_qwnX | 3,353 | add one field "example_id", but I can't see it in the "comput_loss" function | {
"login": "yanllearnn",
"id": 38966558,
"node_id": "MDQ6VXNlcjM4OTY2NTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/38966558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanllearnn",
"html_url": "https://github.com/yanllearnn",
"followers_url": "https://api.github.com/users/yanllearnn/followers",
"following_url": "https://api.github.com/users/yanllearnn/following{/other_user}",
"gists_url": "https://api.github.com/users/yanllearnn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanllearnn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanllearnn/subscriptions",
"organizations_url": "https://api.github.com/users/yanllearnn/orgs",
"repos_url": "https://api.github.com/users/yanllearnn/repos",
"events_url": "https://api.github.com/users/yanllearnn/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanllearnn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Your function looks fine, I used to map `squad` locally and it indeed added the `example_id` field correctly.\r\n\r\nHowever I think that in the `compute_loss` method only a subset of the fields are available: the model inputs. Since `example_id` is not a model input (it's not passed as a parameter to the model), the data loader doesn't need to return it by default.\r\n\r\nHowever you can disable this behavior by setting `remove_unused_columns` to `False` to your training arguments. In this case in `compute_loss` you will get the full item with all the fields.\r\n\r\nNote that since the model doesn't take `example_id` as input, you will have to remove it from the inputs when `model(**inputs)` is called",
"Hi, I have set **args.remove_unused_columns=False** and **training_args.remove_unused_columns=False**, but the field doesn't been contained yet.\r\n```\r\ndef main():\r\n argp = HfArgumentParser(TrainingArguments)\r\n # The HfArgumentParser object collects command-line arguments into an object (and provides default values for unspecified arguments).\r\n # In particular, TrainingArguments has several keys that you'll need/want to specify (when you call run.py from the command line):\r\n # --do_train\r\n # When included, this argument tells the script to train a model.\r\n # See docstrings for \"--task\" and \"--dataset\" for how the training dataset is selected.\r\n # --do_eval\r\n # When included, this argument tells the script to evaluate the trained/loaded model on the validation split of the selected dataset.\r\n # --per_device_train_batch_size <int, default=8>\r\n # This is the training batch size.\r\n # If you're running on GPU, you should try to make this as large as you can without getting CUDA out-of-memory errors.\r\n # For reference, with --max_length=128 and the default ELECTRA-small model, a batch size of 32 should fit in 4gb of GPU memory.\r\n # --num_train_epochs <float, default=3.0>\r\n # How many passes to do through the training data.\r\n # --output_dir <path>\r\n # Where to put the trained model checkpoint(s) and any eval predictions.\r\n # *This argument is required*.\r\n\r\n argp.add_argument('--model', type=str,\r\n default='google/electra-small-discriminator',\r\n help=\"\"\"This argument specifies the base model to fine-tune.\r\n This should either be a HuggingFace model ID (see https://huggingface.co/models)\r\n or a path to a saved model checkpoint (a folder containing config.json and pytorch_model.bin).\"\"\")\r\n argp.add_argument('--task', type=str, choices=['nli', 'qa'], required=True,\r\n help=\"\"\"This argument specifies which task to train/evaluate on.\r\n Pass \"nli\" for natural language inference or \"qa\" for question answering.\r\n By default, \"nli\" will use the SNLI dataset, and \"qa\" will use the SQuAD dataset.\"\"\")\r\n argp.add_argument('--dataset', type=str, default=None,\r\n help=\"\"\"This argument overrides the default dataset used for the specified task.\"\"\")\r\n argp.add_argument('--max_length', type=int, default=128,\r\n help=\"\"\"This argument limits the maximum sequence length used during training/evaluation.\r\n Shorter sequence lengths need less memory and computation time, but some examples may end up getting truncated.\"\"\")\r\n argp.add_argument('--max_train_samples', type=int, default=None,\r\n help='Limit the number of examples to train on.')\r\n argp.add_argument('--max_eval_samples', type=int, default=None,\r\n help='Limit the number of examples to evaluate on.')\r\n\r\n argp.remove_unused_columns = False\r\n training_args, args = argp.parse_args_into_dataclasses()\r\n args.remove_unused_columns=False\r\n training_args.remove_unused_columns=False\r\n```\r\n\r\n\r\n```\r\n**************** train_dataset: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n})\r\n\r\n\r\n**************** train_dataset_featurized: Dataset({\r\n features: ['attention_mask', 'end_positions', 'input_ids', 'start_positions', 'token_type_ids'],\r\n num_rows: 87714\r\n})\r\n```",
"Hi, I print the value, all are set to False, but don't work.\r\n```\r\n********************* training_args: TrainingArguments(\r\n_n_gpu=1,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.999,\r\nadam_epsilon=1e-08,\r\ndataloader_drop_last=False,\r\ndataloader_num_workers=0,\r\ndataloader_pin_memory=True,\r\nddp_find_unused_parameters=None,\r\ndebug=[],\r\ndeepspeed=None,\r\ndisable_tqdm=False,\r\ndo_eval=False,\r\ndo_predict=False,\r\ndo_train=True,\r\neval_accumulation_steps=None,\r\neval_steps=None,\r\nevaluation_strategy=IntervalStrategy.NO,\r\nfp16=False,\r\nfp16_backend=auto,\r\nfp16_full_eval=False,\r\nfp16_opt_level=O1,\r\ngradient_accumulation_steps=1,\r\ngreater_is_better=None,\r\ngroup_by_length=False,\r\nignore_data_skip=False,\r\nlabel_names=None,\r\nlabel_smoothing_factor=0.0,\r\nlearning_rate=5e-05,\r\nlength_column_name=length,\r\nload_best_model_at_end=False,\r\nlocal_rank=-1,\r\nlog_level=-1,\r\nlog_level_replica=-1,\r\nlog_on_each_node=True,\r\nlogging_dir=./re_trained_model/runs/Dec01_14-15-08_399b9290604c,\r\nlogging_first_step=False,\r\nlogging_steps=500,\r\nlogging_strategy=IntervalStrategy.STEPS,\r\nlr_scheduler_type=SchedulerType.LINEAR,\r\nmax_grad_norm=1.0,\r\nmax_steps=-1,\r\nmetric_for_best_model=None,\r\nmp_parameters=,\r\nno_cuda=False,\r\nnum_train_epochs=3.0,\r\noutput_dir=./re_trained_model,\r\noverwrite_output_dir=False,\r\npast_index=-1,\r\nper_device_eval_batch_size=8,\r\nper_device_train_batch_size=8,\r\nprediction_loss_only=False,\r\npush_to_hub=False,\r\npush_to_hub_model_id=re_trained_model,\r\npush_to_hub_organization=None,\r\npush_to_hub_token=None,\r\nremove_unused_columns=False,\r\nreport_to=['tensorboard'],\r\nresume_from_checkpoint=None,\r\nrun_name=./re_trained_model,\r\nsave_on_each_node=False,\r\nsave_steps=500,\r\nsave_strategy=IntervalStrategy.STEPS,\r\nsave_total_limit=None,\r\nseed=42,\r\nsharded_ddp=[],\r\nskip_memory_metrics=True,\r\ntpu_metrics_debug=False,\r\ntpu_num_cores=None,\r\nuse_legacy_prediction_loop=False,\r\nwarmup_ratio=0.0,\r\nwarmup_steps=0,\r\nweight_decay=0.0,\r\n)\r\n```\r\n```\r\n********************* args: Namespace(dataset='squad', max_eval_samples=None, max_length=128, max_train_samples=None, model='google/electra-small-discriminator', remove_unused_columns=False, task='qa')\r\n2021-12-01 14:15:10,048 - WARNING - datasets.builder - Reusing dataset squad (/root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\r\nSome weights of the model checkpoint at google/electra-small-discriminator were not used when initializing ElectraForQuestionAnswering: ['discriminator_predictions.dense_prediction.weight', 'discriminator_predictions.dense_prediction.bias', 'discriminator_predictions.dense.weight', 'discriminator_predictions.dense.bias']\r\n- This IS expected if you are initializing ElectraForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing ElectraForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of ElectraForQuestionAnswering were not initialized from the model checkpoint at google/electra-small-discriminator and are newly initialized: ['qa_outputs.bias', 'qa_outputs.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nPreprocessing data... (this takes a little bit, should only happen once per dataset)\r\n```\",
"Hmmm, it might be because the default data collator removes all the fields with `string` type:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4c0dd199c8305903564c2edeae23d294edd4b321/src/transformers/data/data_collator.py#L107-L112\r\n\r\nI guess you also need a custom data collator that doesn't remove them.",
"can you give a tutorial about how to do this?",
"I overwrite **get_train_dataloader**, and remove **_remove_unused_columns**, but it doesn't work.\r\n\r\n```\r\n def get_train_dataloader(self) -> DataLoader:\r\n \"\"\"\r\n Returns the training :class:`~torch.utils.data.DataLoader`.\r\n\r\n Will use no sampler if :obj:`self.train_dataset` does not implement :obj:`__len__`, a random sampler (adapted\r\n to distributed training if necessary) otherwise.\r\n\r\n Subclass and override this method if you want to inject some custom behavior.\r\n \"\"\"\r\n if self.train_dataset is None:\r\n raise ValueError(\"Trainer: training requires a train_dataset.\")\r\n\r\n train_dataset = self.train_dataset\r\n # if is_datasets_available() and isinstance(train_dataset, datasets.Dataset):\r\n # train_dataset = self._remove_unused_columns(train_dataset, description=\"training\")\r\n\r\n if isinstance(train_dataset, torch.utils.data.IterableDataset):\r\n if self.args.world_size > 1:\r\n train_dataset = IterableDatasetShard(\r\n train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n drop_last=self.args.dataloader_drop_last,\r\n num_processes=self.args.world_size,\r\n process_index=self.args.process_index,\r\n )\r\n\r\n return DataLoader(\r\n train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n collate_fn=self.data_collator,\r\n num_workers=self.args.dataloader_num_workers,\r\n pin_memory=self.args.dataloader_pin_memory,\r\n )\r\n\r\n train_sampler = self._get_train_sampler()\r\n\r\n return DataLoader(\r\n train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n sampler=train_sampler,\r\n collate_fn=self.data_collator,\r\n drop_last=self.args.dataloader_drop_last,\r\n num_workers=self.args.dataloader_num_workers,\r\n pin_memory=self.args.dataloader_pin_memory,\r\n )\r\n```",
"Hi, it works now, thank you.\r\n1. **args.remove_unused_columns=False** and **training_args.remove_unused_columns=False**\r\n2. overwrite **get_train_dataloader**, and remove **_remove_unused_columns**\r\n3. add new fields, and can be got in **inputs**. "
] | 1,638,351,309,000 | 1,638,374,559,000 | null | NONE | null | null | null | Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]], device='cuda:0'), 'end_positions': tensor([ 25, 97, 93, 44, 25, 112, 109, 134], device='cuda:0'), 'input_ids': tensor([[ 101, 2054, 2390, ..., 0, 0, 0],
[ 101, 2054, 2515, ..., 0, 0, 0],
[ 101, 2054, 2106, ..., 0, 0, 0],
...,
[ 101, 2339, 2001, ..., 0, 0, 0],
[ 101, 2054, 2515, ..., 0, 0, 0],
[ 101, 2054, 2003, ..., 0, 0, 0]], device='cuda:0'), 'start_positions': tensor([ 20, 90, 89, 41, 25, 96, 106, 132], device='cuda:0'), 'token_type_ids': tensor([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]], device='cuda:0')}
```
```
# This function preprocesses a question answering dataset, tokenizing the question and context text
# and finding the right offsets for the answer spans in the tokenized context (to use as labels).
# Adapted from https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py
def prepare_train_dataset_qa(examples, tokenizer, max_seq_length=None):
questions = [q.lstrip() for q in examples["question"]]
max_seq_length = tokenizer.model_max_length
# tokenize both questions and the corresponding context
# if the context length is longer than max_length, we split it to several
# chunks of max_length
tokenized_examples = tokenizer(
questions,
examples["context"],
truncation="only_second",
max_length=max_seq_length,
stride=min(max_seq_length // 2, 128),
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length"
)
# Since one example might give us several features if it has a long context,
# we need a map from a feature to its corresponding example.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# The offset mappings will give us a map from token to character position
# in the original context. This will help us compute the start_positions
# and end_positions to get the final answer string.
offset_mapping = tokenized_examples.pop("offset_mapping")
tokenized_examples["start_positions"] = []
tokenized_examples["end_positions"] = []
tokenized_examples["example_id"] = []
for i, offsets in enumerate(offset_mapping):
input_ids = tokenized_examples["input_ids"][i]
# We will label features not containing the answer the index of the CLS token.
cls_index = input_ids.index(tokenizer.cls_token_id)
sequence_ids = tokenized_examples.sequence_ids(i)
# from the feature idx to sample idx
sample_index = sample_mapping[i]
# get the answer for a feature
answers = examples["answers"][sample_index]
tokenized_examples["example_id"].append(examples["id"][sample_index])
if len(answers["answer_start"]) == 0:
tokenized_examples["start_positions"].append(cls_index)
tokenized_examples["end_positions"].append(cls_index)
else:
# Start/end character index of the answer in the text.
start_char = answers["answer_start"][0]
end_char = start_char + len(answers["text"][0])
# Start token index of the current span in the text.
token_start_index = 0
while sequence_ids[token_start_index] != 1:
token_start_index += 1
# End token index of the current span in the text.
token_end_index = len(input_ids) - 1
while sequence_ids[token_end_index] != 1:
token_end_index -= 1
# Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
if not (offsets[token_start_index][0] <= start_char and
offsets[token_end_index][1] >= end_char):
tokenized_examples["start_positions"].append(cls_index)
tokenized_examples["end_positions"].append(cls_index)
else:
# Otherwise move the token_start_index and token_end_index to the two ends of the answer.
# Note: we could go after the last offset if the answer is the last word (edge case).
while token_start_index < len(offsets) and \
offsets[token_start_index][0] <= start_char:
token_start_index += 1
tokenized_examples["start_positions"].append(
token_start_index - 1)
while offsets[token_end_index][1] >= end_char:
token_end_index -= 1
tokenized_examples["end_positions"].append(token_end_index + 1)
return tokenized_examples
```
_Originally posted by @yanllearnn in https://github.com/huggingface/datasets/issues/3333#issuecomment-983457161_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3353/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3352/comments | https://api.github.com/repos/huggingface/datasets/issues/3352/events | https://github.com/huggingface/datasets/pull/3352 | 1,068,102,994 | PR_kwDODunzps4vO6uZ | 3,352 | Make LABR dataset streamable | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,638,346,947,000 | 1,638,355,742,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3352",
"html_url": "https://github.com/huggingface/datasets/pull/3352",
"diff_url": "https://github.com/huggingface/datasets/pull/3352.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3352.patch",
"merged_at": 1638355741000
} | Fix LABR dataset to make it streamable.
Related to: #3350. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3352/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3351 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3351/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3351/comments | https://api.github.com/repos/huggingface/datasets/issues/3351/events | https://github.com/huggingface/datasets/pull/3351 | 1,068,094,873 | PR_kwDODunzps4vO5AS | 3,351 | Add VCTK dataset | {
"login": "jaketae",
"id": 25360440,
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaketae",
"html_url": "https://github.com/jaketae",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"repos_url": "https://api.github.com/users/jaketae/repos",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hello @patrickvonplaten, I hope it's okay to ping you with a (dumb) question!\r\n\r\nI've been trying to get `dl_manager.download_and_extract(_DL_URL)` to work with no avail. I verified that this is a problem on two different machines (lab server, GCP), so I doubt it's an issue with network connectivity. Here is the full trace.\r\n\r\n```\r\n(venv) (base) jaketae@jake-gpu1:~/documents/datasets$ datasets-cli test datasets/vctk --save_infos --all_configs\r\nTesting builder 'main' (1/1)\r\nDownloading and preparing dataset vctk/main to /home/jaketae/.cache/huggingface/datasets/vctk/main/0.9.2/2bfa52a93469fa9d6d4b1831c6511db5442b9f4e48620aef2bc3890d7a5268a8...\r\nTraceback (most recent call last):\r\n File \"/home/jaketae/documents/datasets/venv/bin/datasets-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"/home/jaketae/documents/datasets/src/datasets/commands/datasets_cli.py\", line 33, in main\r\n service.run()\r\n File \"/home/jaketae/documents/datasets/src/datasets/commands/test.py\", line 146, in run\r\n builder.download_and_prepare(\r\n File \"/home/jaketae/documents/datasets/src/datasets/builder.py\", line 593, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/jaketae/documents/datasets/src/datasets/builder.py\", line 659, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/jaketae/.cache/huggingface/modules/datasets_modules/datasets/vctk/2bfa52a93469fa9d6d4b1831c6511db5442b9f4e48620aef2bc3890d7a5268a8/vctk.py\", line 76, in _split_generators\r\n root_path = dl_manager.download_and_extract(_DL_URL)\r\n File \"/home/jaketae/documents/datasets/src/datasets/utils/download_manager.py\", line 283, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/jaketae/documents/datasets/src/datasets/utils/download_manager.py\", line 195, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/jaketae/documents/datasets/src/datasets/utils/py_utils.py\", line 234, in map_nested\r\n return function(data_struct)\r\n File \"/home/jaketae/documents/datasets/src/datasets/utils/download_manager.py\", line 216, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/jaketae/documents/datasets/src/datasets/utils/file_utils.py\", line 298, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/jaketae/documents/datasets/src/datasets/utils/file_utils.py\", line 608, in get_from_cache\r\n raise ConnectionError(f\"Couldn't reach {url}\")\r\nConnectionError: Couldn't reach https://datashare.is.ed.ac.uk/bitstream/handle/10283/3443/VCTK-Corpus-0.92.zip\r\n```\r\n\r\nOn my local, however, the URL correctly points to the download zip file. My admittedly naive guess is that the website is web-crawler or scraper proof (requiring specific headers, etc.), but I also think I might have just missed a very basic step in the process.\r\n\r\nApologies for the delayed PR, and TIA for the help!",
"Hey @jaketae, \r\n\r\nHmm, yeah I don't know really either - the link also works correctly for me when doing:\r\n\r\n```\r\nwget https://datashare.is.ed.ac.uk/bitstream/handle/10283/3443/VCTK-Corpus-0.92.zip\r\n```\r\n\r\nI think however that I had a similar problem previously with Edinburgh's (`.ed.ac.uk`) websites which I solved with the following hack. Not sure if this could be the same problem here...\r\nhttps://github.com/huggingface/datasets/blob/e1104ad5d3e83f8b1571e0d6fef4fdabf0a1fde5/datasets/ami/ami.py#L364\r\n\r\n",
"The AMI dataset is stored under a different website though it seems: `\"https://groups.inf.ed.ac.uk/ami/AMICorpusMirror//amicorpus/{}/audio/{}\"`\r\n\r\nso not 100p sure if this solves the problem",
"Hi @patrickvonplaten,\r\n\r\nThanks for the feedback! Sadly, disabling multi-processing didn't cut it for me. \r\n\r\nI've been looking at VCTK code in [`torchaudio`](https://pytorch.org/audio/stable/_modules/torchaudio/datasets/vctk.html) and [`tfds`](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/audio/vctk.py). I don't think they're using a hack to accomplish this, so I'll try to look into it to see if I can pinpoint the cause. I'll keep you in the loop here. Thank you!",
"Hi @patrickvonplaten, \r\n\r\nAfter more investigation, I found that simply increasing `etag_timeout` in `get_from_cache` from 10 to 100 solved it. However, unless I'm missing something, an issue is that `etag_timeout` is basically hard-coded as a default parameter because `cached_path`, which calls `get_from_cache` has no way of modifying the default. \r\n\r\nhttps://github.com/huggingface/datasets/blob/b25ac1d62670e7b339ed552ecc37846d2abd30c7/src/datasets/utils/file_utils.py#L298-L310\r\n\r\nhttps://github.com/huggingface/datasets/blob/b25ac1d62670e7b339ed552ecc37846d2abd30c7/src/datasets/utils/file_utils.py#L497-L510\r\n\r\n\r\nI can think of two solutions.\r\n\r\n* Simply increase the default to 100\r\n* Allow `etag_timeout` to be modifiable on a per-dataset basis by integrating it to `download_config` (maybe this is already supported?)\r\n\r\nThank you!"
] | 1,638,346,397,000 | 1,638,479,591,000 | null | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3351",
"html_url": "https://github.com/huggingface/datasets/pull/3351",
"diff_url": "https://github.com/huggingface/datasets/pull/3351.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3351.patch",
"merged_at": null
} | Fixes #1837. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3351/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3350/comments | https://api.github.com/repos/huggingface/datasets/issues/3350/events | https://github.com/huggingface/datasets/pull/3350 | 1,068,078,160 | PR_kwDODunzps4vO1aj | 3,350 | Avoid content-encoding issue while streaming datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,638,345,408,000 | 1,638,346,501,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3350",
"html_url": "https://github.com/huggingface/datasets/pull/3350",
"diff_url": "https://github.com/huggingface/datasets/pull/3350.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3350.patch",
"merged_at": 1638346500000
} | This PR will fix streaming of datasets served with gzip content-encoding:
```
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
Fix #2918.
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3350/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3349/comments | https://api.github.com/repos/huggingface/datasets/issues/3349/events | https://github.com/huggingface/datasets/pull/3349 | 1,067,853,601 | PR_kwDODunzps4vOF-s | 3,349 | raise exception instead of using assertions. | {
"login": "manisnesan",
"id": 153142,
"node_id": "MDQ6VXNlcjE1MzE0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/153142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manisnesan",
"html_url": "https://github.com/manisnesan",
"followers_url": "https://api.github.com/users/manisnesan/followers",
"following_url": "https://api.github.com/users/manisnesan/following{/other_user}",
"gists_url": "https://api.github.com/users/manisnesan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manisnesan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manisnesan/subscriptions",
"organizations_url": "https://api.github.com/users/manisnesan/orgs",
"repos_url": "https://api.github.com/users/manisnesan/repos",
"events_url": "https://api.github.com/users/manisnesan/events{/privacy}",
"received_events_url": "https://api.github.com/users/manisnesan/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"@mariosasko - Thanks for the review & suggestions. Updated as per the suggestions. "
] | 1,638,322,671,000 | 1,638,380,421,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3349",
"html_url": "https://github.com/huggingface/datasets/pull/3349",
"diff_url": "https://github.com/huggingface/datasets/pull/3349.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3349.patch",
"merged_at": null
} | Fix for the remaining files from https://github.com/huggingface/datasets/issues/3171. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3349/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3348/comments | https://api.github.com/repos/huggingface/datasets/issues/3348/events | https://github.com/huggingface/datasets/pull/3348 | 1,067,831,113 | PR_kwDODunzps4vOBOQ | 3,348 | Match key names to correspond with filename | {
"login": "jaehlee",
"id": 11873078,
"node_id": "MDQ6VXNlcjExODczMDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/11873078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaehlee",
"html_url": "https://github.com/jaehlee",
"followers_url": "https://api.github.com/users/jaehlee/followers",
"following_url": "https://api.github.com/users/jaehlee/following{/other_user}",
"gists_url": "https://api.github.com/users/jaehlee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaehlee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaehlee/subscriptions",
"organizations_url": "https://api.github.com/users/jaehlee/orgs",
"repos_url": "https://api.github.com/users/jaehlee/repos",
"events_url": "https://api.github.com/users/jaehlee/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaehlee/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Thanks for the suggestion! I think the current checked-in `CHECKPOINT_URLS` is already not working. I believe anyone who tried using the new ckpts (`BLEURT-20-X`) can't unless this fix is in. The zip file from bleurt side unzips to directory name matching the filename (capitalized for new ones). For example without current changes I'd get the following error\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n<ipython-input-5-f6832fe20f84> in <module>()\r\n 1 predictions = [\"hello there\", \"general kenobi\"]\r\n 2 references = [\"hello there\", \"general kenobi\"]\r\n----> 3 bleurt = datasets.load_metric(\"bleurt\", \"bleurt-20\")\r\n 4 results = bleurt.compute(predictions=predictions, references=references)\r\n\r\n4 frames\r\n/usr/local/lib/python3.7/dist-packages/bleurt/checkpoint.py in read_bleurt_config(path)\r\n 84 \"\"\"Reads and checks config file from a BLEURT checkpoint.\"\"\"\r\n 85 assert tf.io.gfile.exists(path), \\\r\n---> 86 \"Could not find BLEURT checkpoint {}\".format(path)\r\n 87 config_path = os.path.join(path, CONFIG_FILE)\r\n 88 assert tf.io.gfile.exists(config_path), \\\r\n\r\nAssertionError: Could not find BLEURT checkpoint /root/.cache/huggingface/metrics/bleurt/bleurt-20/downloads/extracted/e34c60f1a05394ecda54e253a10413ca7b5d59f9a23f3cc73258c6b78ffa2f50/bleurt-20\r\n```\r\ninspecting specified path I see that directory name is `BLEURT-20` instead of `bleurt-20`. \r\nOther solution similar to your suggestion is meddle with `dl_manager.download_and_extract` to unzip to paths with lowering all the paths but I imagine this will affect other parts of the library. "
] | 1,638,320,478,000 | 1,638,466,434,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3348",
"html_url": "https://github.com/huggingface/datasets/pull/3348",
"diff_url": "https://github.com/huggingface/datasets/pull/3348.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3348.patch",
"merged_at": null
} | In order to properly locate the downloaded ckpt files, the key name needs to match the filename. This corrects the change introduced in #3235. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3348/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3347/comments | https://api.github.com/repos/huggingface/datasets/issues/3347/events | https://github.com/huggingface/datasets/pull/3347 | 1,067,738,902 | PR_kwDODunzps4vNthw | 3,347 | iter_archive for zip files | {
"login": "Mehdi2402",
"id": 56029953,
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehdi2402",
"html_url": "https://github.com/Mehdi2402",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"And also don't always try streaming with Google Drive - it can have issues because of how Google Drive works (with quotas, restrictions, etc.) and it can indeed cause `BlockSizeError`.\r\n\r\nFeel free to host your test data elsewhere, such as in a dataset repository on https://huggingface.co (see [here](https://huggingface.co/docs/datasets/upload_dataset.html#upload-your-files) for a tutorial on how to upload files)"
] | 1,638,311,657,000 | 1,638,371,314,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3347",
"html_url": "https://github.com/huggingface/datasets/pull/3347",
"diff_url": "https://github.com/huggingface/datasets/pull/3347.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3347.patch",
"merged_at": null
} | * In this PR, I added the option to iterate through zipfiles for `download_manager.py` only.
* The next PR will apply the same change to `streaming_download_manager.py`.
* Related issue #3272.
## Comments :
* There is no `.isreg()` equivalent in the zipfile library to check whether a file is regular, so I used `.is_dir()` instead to skip directories.
* For now, I have `streaming_download_manager.py` working for local zip files, but not for URLs. I get the following error when I test it on an archive on Google Drive, so I am still working on it: `BlockSizeError: Got more bytes so far (>2112) than requested (22)`
## Tasks :
- [x] download_manager.py
- [ ] streaming_download_manager.py | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3347/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3347/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3346/comments | https://api.github.com/repos/huggingface/datasets/issues/3346/events | https://github.com/huggingface/datasets/issues/3346 | 1,067,632,365 | I_kwDODunzps4_osbt | 3,346 | Failed to convert `string` with pyarrow for QED since 1.15.0 | {
"login": "tianjianjiang",
"id": 4812544,
"node_id": "MDQ6VXNlcjQ4MTI1NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tianjianjiang",
"html_url": "https://github.com/tianjianjiang",
"followers_url": "https://api.github.com/users/tianjianjiang/followers",
"following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}",
"gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions",
"organizations_url": "https://api.github.com/users/tianjianjiang/orgs",
"repos_url": "https://api.github.com/users/tianjianjiang/repos",
"events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/tianjianjiang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Scratch that, probably the old and incompatible usage of dataset builder from promptsource."
] | 1,638,303,102,000 | 1,638,304,581,000 | null | NONE | null | null | null | ## Describe the bug
Loading QED was fine until 1.15.0.
related: bigscience-workshop/promptsource#659, bigscience-workshop/promptsource#670
Not sure where the root cause is, but here are some candidates:
- #3158
- #3120
- #3196
- #2891
## Steps to reproduce the bug
```python
load_dataset("qed")
```
## Expected results
Loading completed.
## Actual results
```shell
ArrowInvalid: Could not convert in with type str: tried to convert to boolean
Traceback:
File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/script_runner.py", line 354, in _run_script
exec(code, module.__dict__)
File "/Users/s0s0cr3/Documents/GitHub/promptsource/promptsource/app.py", line 260, in <module>
dataset = get_dataset(dataset_key, str(conf_option.name) if conf_option else None)
File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/caching.py", line 543, in wrapped_func
return get_or_create_cached_value()
File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/caching.py", line 527, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/Users/s0s0cr3/Documents/GitHub/promptsource/promptsource/utils.py", line 49, in get_dataset
builder_instance.download_and_prepare()
File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 697, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 1106, in _prepare_split
num_examples, num_bytes = writer.finalize()
File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 456, in finalize
self.write_examples_on_file()
File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 325, in write_examples_on_file
pa_array = pa.array(typed_sequence)
File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 121, in __arrow_array__
out = pa.array(cast_to_python_objects(self.data, only_1d_for_numpy=True), type=type)
File "pyarrow/array.pxi", line 305, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.0, 1.16.1
- Platform: macOS 1.15.7 or above
- Python version: 3.7.12 and 3.9
- PyArrow version: 3.0.0, 5.0.0, 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3346/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3345/comments | https://api.github.com/repos/huggingface/datasets/issues/3345/events | https://github.com/huggingface/datasets/issues/3345 | 1,067,622,951 | I_kwDODunzps4_oqIn | 3,345 | Failed to download species_800 from Google Drive zip file | {
"login": "tianjianjiang",
"id": 4812544,
"node_id": "MDQ6VXNlcjQ4MTI1NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tianjianjiang",
"html_url": "https://github.com/tianjianjiang",
"followers_url": "https://api.github.com/users/tianjianjiang/followers",
"following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}",
"gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions",
"organizations_url": "https://api.github.com/users/tianjianjiang/orgs",
"repos_url": "https://api.github.com/users/tianjianjiang/repos",
"events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/tianjianjiang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthe dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?",
"> Hi,\r\n> \r\n> the dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?\r\n\r\nI have tried that many times with both load_dataset() and a browser almost simultaneously. The browser always works for me while load_dataset() fails.",
"@mariosasko \r\n> the dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?\r\n\r\nI've tried yet again just a moment ago. This time I realize that, the step `(... post-processed: Unknown size, total: 20.89 MiB) to /Users/mike/.cache/huggingface/datasets/species800/species_800/1.0.0/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976...` and the one after seem unstable. If I want to retry, I will have to delete it (and probably other cache lock files). It **_sometimes_** works.\r\n\r\nBut I didn't try `download_mode=\"force_redownload\"` yet.\r\n\r\nAnyway, I suppose this isn't really a pressing issue for the time being, so I'm going to close this. Thank you.\r\n\r\n"
] | 1,638,302,428,000 | 1,638,381,195,000 | null | NONE | null | null | null | ## Describe the bug
One can manually download the zip file on Google Drive, but `load_dataset()` cannot.
related: #3248
## Steps to reproduce the bug
```shell
> python
Python 3.7.12 (default, Sep 5 2021, 08:34:29)
[Clang 11.0.3 (clang-1103.0.32.62)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
```
```python
>>> from datasets import load_dataset
>>> s800 = load_dataset("species_800")
```
## Expected results
species_800 downloaded.
## Actual results
```shell
Downloading: 5.68kB [00:00, 1.22MB/s]
Downloading: 2.70kB [00:00, 691kB/s]
Downloading and preparing dataset species800/species_800 (download: 17.36 MiB, generated: 3.53 MiB, post-processed: Unknown size, total: 20.89 MiB) to /Users/mike/.cache/huggingface/datasets/species800/species_800/1.0.0/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976...
0%| | 0/1 [00:00<?, ?it/s]Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/load.py", line 1632, in load_dataset
use_auth_token=use_auth_token,
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 608, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 675, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/mike/.cache/huggingface/modules/datasets_modules/datasets/species_800/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976/species_800.py", line 104, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 197, in download
download_func, url_or_urls, map_tuple=True, num_proc=download_config.num_proc, disable_tqdm=False
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 209, in map_nested
for obj in utils.tqdm(iterable, disable=disable_tqdm)
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 209, in <listcomp>
for obj in utils.tqdm(iterable, disable=disable_tqdm)
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 143, in _single_map_nested
return function(data_struct)
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 305, in cached_path
use_auth_token=download_config.use_auth_token,
File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.14,0 1.15.0, 1.16.1
- Platform: macOS Catalina 10.15.7
- Python version: 3.7.12
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3345/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3344/comments | https://api.github.com/repos/huggingface/datasets/issues/3344/events | https://github.com/huggingface/datasets/pull/3344 | 1,067,567,603 | PR_kwDODunzps4vNJwd | 3,344 | Add ArrayXD docs | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,638,298,411,000 | 1,638,389,763,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3344",
"html_url": "https://github.com/huggingface/datasets/pull/3344",
"diff_url": "https://github.com/huggingface/datasets/pull/3344.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3344.patch",
"merged_at": 1638387332000
} | Documents support for dynamic first dimension in `ArrayXD` from #2891, and explain the `ArrayXD` feature in general.
Let me know if I'm missing anything @lhoestq :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3344/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3343/comments | https://api.github.com/repos/huggingface/datasets/issues/3343/events | https://github.com/huggingface/datasets/pull/3343 | 1,067,505,507 | PR_kwDODunzps4vM8yB | 3,343 | Better error message when download fails | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,638,293,930,000 | 1,638,358,079,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3343",
"html_url": "https://github.com/huggingface/datasets/pull/3343",
"diff_url": "https://github.com/huggingface/datasets/pull/3343.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3343.patch",
"merged_at": 1638358078000
} | From our discussions in https://github.com/huggingface/datasets/issues/3269 and https://github.com/huggingface/datasets/issues/3282 it would be nice to have better messages if a download fails.
In particular the error now shows:
- the error from the HEAD request if there's one
- otherwise the response code of the HEAD request
I also added an error to tell users to pass `use_auth_token` when the Hugging Face Hub returns 401 (Unauthorized).
While paying around with this I also fixed a minor issue with the `force_download` parameter that was not always taken into account | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3343/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3342/comments | https://api.github.com/repos/huggingface/datasets/issues/3342/events | https://github.com/huggingface/datasets/pull/3342 | 1,067,481,390 | PR_kwDODunzps4vM3wh | 3,342 | Fix ASSET dataset data URLs | {
"login": "tianjianjiang",
"id": 4812544,
"node_id": "MDQ6VXNlcjQ4MTI1NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tianjianjiang",
"html_url": "https://github.com/tianjianjiang",
"followers_url": "https://api.github.com/users/tianjianjiang/followers",
"following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}",
"gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions",
"organizations_url": "https://api.github.com/users/tianjianjiang/orgs",
"repos_url": "https://api.github.com/users/tianjianjiang/repos",
"events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/tianjianjiang/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"> Hi @tianjianjiang, thanks for the fix.\r\n> The links should also be updated in the `dataset_infos.json` file.\r\n> The failing tests are due to the missing tag in the header of the `README.md` file:\r\n\r\nHi @albertvillanova, thank you for the info! My apologies for the messy PR.\r\n"
] | 1,638,292,410,000 | 1,638,301,948,000 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3342",
"html_url": "https://github.com/huggingface/datasets/pull/3342",
"diff_url": "https://github.com/huggingface/datasets/pull/3342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3342.patch",
"merged_at": null
} | Change the branch name "master" to "main" in the data URLs, since facebookresearch has changed that. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3342/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3341/comments | https://api.github.com/repos/huggingface/datasets/issues/3341/events | https://github.com/huggingface/datasets/issues/3341 | 1,067,449,569 | I_kwDODunzps4_n_zh | 3,341 | Mirror the canonical datasets to the Hugging Face Hub | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I created a GitHub project to keep track of what needs to be done:\r\nhttps://github.com/huggingface/datasets/projects/3\r\n\r\nI also store my code in a (private for now) repository at https://github.com/huggingface/mirror_canonical_datasets_on_hub"
] | 1,638,290,525,000 | 1,638,387,128,000 | null | CONTRIBUTOR | null | null | null | - [ ] create a repo on https://hf.co/datasets for every canonical dataset
- [ ] on every commit related to a dataset, update the hf.co repo
See https://github.com/huggingface/moon-landing/pull/1562
@SBrandeis: I let you edit this description if needed to precise the intent. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3341/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3341/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3340/comments | https://api.github.com/repos/huggingface/datasets/issues/3340/events | https://github.com/huggingface/datasets/pull/3340 | 1,067,292,636 | PR_kwDODunzps4vMP6Z | 3,340 | Fix JSON ClassLabel casting for integers | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,638,281,994,000 | 1,638,358,050,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3340",
"html_url": "https://github.com/huggingface/datasets/pull/3340",
"diff_url": "https://github.com/huggingface/datasets/pull/3340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3340.patch",
"merged_at": 1638358050000
} | Loading a JSON dataset with ClassLabel feature types currently fails if the JSON data already has integers. Indeed currently it tries to convert the strings to integers without even checking if the data are not integers already.
For example this currently fails:
```python
from datasets import load_dataset, Features, ClassLabel
path = "data.json"
f = Features({"a": ClassLabel(names=["neg", "pos"])})
d = load_dataset("json", data_files=path, features=f)
```
data.json
```json
{"a": 0}
{"a": 1}
```
I fixed that by adding a line that checks the type of the JSON data before trying to convert them
cc @albertvillanova let me know if it sounds good to you | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3340/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3339/comments | https://api.github.com/repos/huggingface/datasets/issues/3339/events | https://github.com/huggingface/datasets/issues/3339 | 1,066,662,477 | I_kwDODunzps4_k_pN | 3,339 | to_tf_dataset fails on TPU | {
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"This might be related to https://github.com/tensorflow/tensorflow/issues/38762 , what do you think @Rocketknight1 ?\r\n> Dataset.from_generator is expected to not work with TPUs as it uses py_function underneath which is incompatible with Cloud TPU 2VM setup. If you would like to read from large datasets, maybe try to materialize it on disk and use TFRecordDataest instead.",
"Hi @lhoestq @nbroad1881, I think it's very similar, yes. Unfortunately `to_tf_dataset` uses `tf.numpy_function` which can't be compiled - this is a necessary evil to load from the underlying Arrow dataset. We need to update the notebooks/examples to clarify that this won't work, or to identify a workaround. You may be able to get it to work on an actual cloud TPU VM, but those are quite new and we haven't tested it yet. ",
"Thank you for the explanation. I didn't realize the nuances of `tf.numpy_function`. In this scenario, would it be better to use `export(format='tfrecord')` ? It's not quite the same, but for very large datasets that don't fit in memory it looks like it is the only option. I haven't used `export` before, but I do recall reading that there are suggestions for how big and how many tfrecords there should be to not bottleneck the TPU. It might be nice if there were a way for the `export` method to split the files up into appropriate chunk sizes depending on the size of the dataset and the number of devices. And if that is too much, it would be nice to be able to specify the number of files that would be created when using `export`. Well... maybe the user should just do the chunking themselves and call `export` a bunch of times. Whatever the case, you have been helpful. Thanks Tensorflow boy ;-) ",
"Yeah, this is something we really should have a proper guide on. I'll make a note to test some things and make a 'TF TPU best practices' notebook at some point, but in the meantime I think your solution of exporting TFRecords will probably work. ",
"Also: I knew that tweet would haunt me"
] | 1,638,233,452,000 | 1,638,454,887,000 | null | NONE | null | null | null | Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs.
## Steps to reproduce the bug
I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGouFxqD4OuWfnycW_1TaT276z?usp=sharing
## Expected results
dataset from `to_tf_dataset` works in `model.fit`
Right below the first error in the colab I use `tf.data.Dataset.from_tensor_slices` and `model.fit` works just fine. This is the desired outcome.
## Actual results
```
InternalError: 5 root error(s) found.
(0) INTERNAL: {{function_node __inference_train_function_30558}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
:{"created":"@1638231897.932218653","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":3151,"referenced_errors":[{"created":"@1638231897.932216754","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/lib/transport/error_utils.cc","file_line":161,"grpc_status":14}]}
[[{{node StatefulPartitionedCall}}]]
[[MultiDeviceIteratorGetNextFromShard]]
Executing non-communication op <MultiDeviceIteratorGetNextFromShard> originally returned UnavailableError, and was replaced by InternalError to avoid invoking TF network error handling logic.
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[tpu_compile_succeeded_assert/_14023832043698465348/_7/_439]]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
- Tensorflow 2.7.0
- `transformers` 4.12.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3339/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3338/comments | https://api.github.com/repos/huggingface/datasets/issues/3338/events | https://github.com/huggingface/datasets/pull/3338 | 1,066,371,235 | PR_kwDODunzps4vJRFM | 3,338 | [WIP] Add doctests for tutorials | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,638,211,246,000 | 1,638,227,808,000 | null | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3338",
"html_url": "https://github.com/huggingface/datasets/pull/3338",
"diff_url": "https://github.com/huggingface/datasets/pull/3338.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3338.patch",
"merged_at": null
} | Opening a PR as discussed with @LysandreJik for some help with doctest issues. The goal is to add doctests for each of the tutorials in the documentation to make sure the code samples work as shown.
### Issues
A doctest has been added in the docstring of the `load_dataset_builder` function in `load.py` to handle variable outputs with the `ELLIPSIS` directive. When I run doctest on the `load_hub.rst` file, doctest should recognize the expected output from the docstring, and the corresponding code sample in `load_hub.rst` should pass. I am having the same issue with handling tracebacks in the `load_dataset` function.
From the docstring:
```
>>> dataset_builder.cache_dir #doctest: +ELLIPSIS
/Users/.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/...
```
Test result:
```
Failed example:
dataset_builder.cache_dir
Expected:
/Users/.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/...
Got:
/Users/steven/.cache/huggingface/datasets/imdb/plain_text/1.0.0/2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1
```
I am able to get the doctest to pass by adding the doctest directives (`ELLIPSIS` and `NORMALIZE_WHITESPACE`) to the code samples in the `rst` file directly. But my understanding is that these directives should also work in the docstrings of the functions. I am running the test from the root of the directory:
```
python -m doctest -v docs/source/load_hub.rst
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3338/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3337/comments | https://api.github.com/repos/huggingface/datasets/issues/3337/events | https://github.com/huggingface/datasets/issues/3337 | 1,066,232,936 | I_kwDODunzps4_jWxo | 3,337 | Typing of Dataset.__getitem__ could be improved. | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! Thanks for the suggestion, I didn't know about this decorator.\r\n\r\nIf you are interesting in contributing, feel free to open a pull request to add the overload methods for each typing combination :) To assign you to this issue, you can comment `#self-assign` in this thread.\r\n\r\n`Dataset.__getitem__` is defined right here: https://github.com/huggingface/datasets/blob/e6f1352fe19679de897f3d962e616936a17094f5/src/datasets/arrow_dataset.py#L1840",
"#self-assign"
] | 1,638,202,811,000 | 1,638,208,108,000 | null | CONTRIBUTOR | null | null | null | ## Describe the bug
The newly added typing for Dataset.__getitem__ is Union[Dict, List]. This makes tools like mypy a bit awkward to use as we need to check the type manually. We could use type overloading to make this easier. [Documentation](https://docs.python.org/3/library/typing.html#typing.overload)
## Steps to reproduce the bug
Let's have a file `test.py`
```python
from typing import List, Dict, Any
from datasets import Dataset
ds = Dataset.from_dict({
'a': [1,2,3],
'b': ["1", "2", "3"]
})
one_colum: List[str] = ds['a']
some_index: Dict[Any, Any] = ds[1]
```
## Expected results
Running `mypy test.py` should not give any error.
## Actual results
```
test.py:10: error: Incompatible types in assignment (expression has type "Union[Dict[Any, Any], List[Any]]", variable has type "List[str]")
test.py:11: error: Incompatible types in assignment (expression has type "Union[Dict[Any, Any], List[Any]]", variable has type "Dict[Any, Any]")
Found 2 errors in 1 file (checked 1 source file)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.13.3
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.8
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3337/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3337/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3336/comments | https://api.github.com/repos/huggingface/datasets/issues/3336/events | https://github.com/huggingface/datasets/pull/3336 | 1,066,208,436 | PR_kwDODunzps4vIwUE | 3,336 | Add support for multiple dynamic dimensions and to_pandas conversion for dynamic arrays | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,638,201,539,000 | 1,638,201,539,000 | null | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3336",
"html_url": "https://github.com/huggingface/datasets/pull/3336",
"diff_url": "https://github.com/huggingface/datasets/pull/3336.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3336.patch",
"merged_at": null
} | Add support for multiple dynamic dimensions (e.g. `(None, None, 3)` for arbitrary sized images) and `to_pandas()` conversion for dynamic arrays.
TODOs:
* [ ] Cleaner code
* [ ] Formatting issues (if NumPy doesn't allow broadcasting even though dtype is np.object)
* [ ] Fix some issues with zero-dim tensors
* [ ] Tests
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3336/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3335/comments | https://api.github.com/repos/huggingface/datasets/issues/3335/events | https://github.com/huggingface/datasets/pull/3335 | 1,066,064,126 | PR_kwDODunzps4vISGy | 3,335 | WIP:add Speech commands dataset | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"OMG I hope it's more or less clear what I mean...",
"@anton-l ping",
"@lhoestq \r\nHi Quentin! Thank you for your feedback and suggestions! 🤗\r\n\r\nYes, that was actually what I wanted to do next - I mean the steaming stuff :)\r\nAlso, I need to make some changes to the readme (to account for the updated features set).\r\n\r\nHopefully, I will be done by tomorrow afternoon if that's ok. \r\n"
] | 1,638,193,967,000 | 1,638,558,707,000 | null | NONE | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3335",
"html_url": "https://github.com/huggingface/datasets/pull/3335",
"diff_url": "https://github.com/huggingface/datasets/pull/3335.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3335.patch",
"merged_at": null
} | I'm confused with the correct train / val / test splits of both versions of this dataset. But know I got the solution on how to address that and will do this tomorrow.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3335/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3334/comments | https://api.github.com/repos/huggingface/datasets/issues/3334/events | https://github.com/huggingface/datasets/issues/3334 | 1,065,983,923 | I_kwDODunzps4_iZ-z | 3,334 | Integrate Polars library | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"If possible, a neat API could be something like `Dataset.to_polars()`, as well as `Dataset.set_format(\"polars\")`",
"Note they use a \"custom\" implementation of Arrow: [Arrow2](https://github.com/jorgecarleitao/arrow2)."
] | 1,638,189,114,000 | 1,638,190,872,000 | null | MEMBER | null | null | null | Check potential integration of the Polars library: https://github.com/pola-rs/polars
- Benchmark: https://h2oai.github.io/db-benchmark/
CC: @thomwolf @lewtun
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3334/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3334/timeline | null | false |