column                      type        values
url                         string      lengths 58 to 61
repository_url              string      1 value
labels_url                  string      lengths 72 to 75
comments_url                string      lengths 67 to 70
events_url                  string      lengths 65 to 68
html_url                    string      lengths 46 to 51
id                          int64       599M to 1.5B
node_id                     string      lengths 18 to 32
number                      int64       1 to 5.38k
title                       string      lengths 1 to 276
user                        dict
labels                      list
state                       string      2 values
locked                      bool        1 class
assignee                    dict
assignees                   list
milestone                   dict
comments                    sequence
created_at                  string      lengths 20 to 20
updated_at                  string      lengths 20 to 20
closed_at                   string      lengths 20 to 20 (nullable)
author_association          string      3 values
active_lock_reason          null
draft                       bool        2 classes
pull_request                dict
body                        string      lengths 0 to 228k (nullable)
reactions                   dict
timeline_url                string      lengths 67 to 70
performed_via_github_app    null
state_reason                string      3 values
is_pull_request             bool        1 class
https://api.github.com/repos/huggingface/datasets/issues/697
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/697/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/697/comments
https://api.github.com/repos/huggingface/datasets/issues/697/events
https://github.com/huggingface/datasets/pull/697
712,979,029
MDExOlB1bGxSZXF1ZXN0NDk2MzczNDU5
697
Update README.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/71011306?v=4", "events_url": "https://api.github.com/users/bishug/events{/privacy}", "followers_url": "https://api.github.com/users/bishug/followers", "following_url": "https://api.github.com/users/bishug/following{/other_user}", "gists_url": "https://api.github.com/users/bishug/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bishug", "id": 71011306, "login": "bishug", "node_id": "MDQ6VXNlcjcxMDExMzA2", "organizations_url": "https://api.github.com/users/bishug/orgs", "received_events_url": "https://api.github.com/users/bishug/received_events", "repos_url": "https://api.github.com/users/bishug/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bishug/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bishug/subscriptions", "type": "User", "url": "https://api.github.com/users/bishug" }
[]
closed
false
null
[]
null
[]
2020-10-01T16:02:42Z
2020-10-01T16:12:00Z
2020-10-01T16:12:00Z
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/697.diff", "html_url": "https://github.com/huggingface/datasets/pull/697", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/697.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/697" }
Hey, I was just telling my subscribers to check out your repositories. Thank you
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/697/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/697/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/696/comments
https://api.github.com/repos/huggingface/datasets/issues/696/events
https://github.com/huggingface/datasets/pull/696
712,942,977
MDExOlB1bGxSZXF1ZXN0NDk2MzQzMjEy
696
Elasticsearch index docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-10-01T15:18:58Z
2020-10-02T07:48:19Z
2020-10-02T07:48:18Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/696.diff", "html_url": "https://github.com/huggingface/datasets/pull/696", "merged_at": "2020-10-02T07:48:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/696.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/696" }
I added the docs for ES indexes. I also added a `load_elasticsearch_index` method to load an index that has already been built. I checked the tests for the ES index and we have tests that mock ElasticSearch. I think this is good for now but at some point it would be cool to have an end-to-end test with a real ES running.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/696/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/696/timeline
null
null
true
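The Elasticsearch docs PR above (#696) mentions a `load_elasticsearch_index` method for reusing an already-built index. A minimal sketch of that workflow is below, assuming an Elasticsearch server reachable at localhost:9200; the dataset, column, and index names are placeholders, not taken from the PR.

```python
# Hedged sketch of the Elasticsearch index workflow documented in #696.
# Assumes a running Elasticsearch server; host, port, and index names are placeholders.
from datasets import load_dataset

squad = load_dataset("squad", split="validation")

# Build an ES index over the "context" column.
squad.add_elasticsearch_index(
    "context", host="localhost", port="9200", es_index_name="hf_squad_context"
)

# Retrieve the examples whose indexed column best matches a query.
scores, retrieved = squad.get_nearest_examples("context", "machine learning", k=5)

# In a later session, reuse the index that was already built instead of rebuilding it.
squad.load_elasticsearch_index(
    "context", es_index_name="hf_squad_context", host="localhost", port="9200"
)
```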
https://api.github.com/repos/huggingface/datasets/issues/695
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/695/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/695/comments
https://api.github.com/repos/huggingface/datasets/issues/695/events
https://github.com/huggingface/datasets/pull/695
712,843,949
MDExOlB1bGxSZXF1ZXN0NDk2MjU5NTM0
695
Update XNLI download link
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-10-01T13:27:22Z
2020-10-01T14:01:15Z
2020-10-01T14:01:14Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/695.diff", "html_url": "https://github.com/huggingface/datasets/pull/695", "merged_at": "2020-10-01T14:01:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/695.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/695" }
The old link isn't working anymore. I updated it with the new official link. Fix #690
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/695/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/695/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/694
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/694/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/694/comments
https://api.github.com/repos/huggingface/datasets/issues/694/events
https://github.com/huggingface/datasets/pull/694
712,827,751
MDExOlB1bGxSZXF1ZXN0NDk2MjQ1NzU0
694
Use GitHub instead of aws in remote dataset tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-10-01T13:07:50Z
2020-10-02T07:47:28Z
2020-10-02T07:47:27Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/694.diff", "html_url": "https://github.com/huggingface/datasets/pull/694", "merged_at": "2020-10-02T07:47:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/694.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/694" }
Recently we switched from aws s3 to github to download dataset scripts. However in the tests, the dummy data were still downloaded from s3. So I changed that to download them from github instead, in the MockDownloadManager. Moreover I noticed that `anli`'s dummy data were quite heavy (18MB compressed, i.e. the entire dataset) so I replaced them with dummy data containing only a few examples.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/694/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/694/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/693
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/693/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/693/comments
https://api.github.com/repos/huggingface/datasets/issues/693/events
https://github.com/huggingface/datasets/pull/693
712,822,200
MDExOlB1bGxSZXF1ZXN0NDk2MjQxMjUw
693
Rachel ker add dataset/mlsum
{ "avatar_url": "https://avatars.githubusercontent.com/u/32742136?v=4", "events_url": "https://api.github.com/users/pdhg/events{/privacy}", "followers_url": "https://api.github.com/users/pdhg/followers", "following_url": "https://api.github.com/users/pdhg/following{/other_user}", "gists_url": "https://api.github.com/users/pdhg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pdhg", "id": 32742136, "login": "pdhg", "node_id": "MDQ6VXNlcjMyNzQyMTM2", "organizations_url": "https://api.github.com/users/pdhg/orgs", "received_events_url": "https://api.github.com/users/pdhg/received_events", "repos_url": "https://api.github.com/users/pdhg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pdhg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdhg/subscriptions", "type": "User", "url": "https://api.github.com/users/pdhg" }
[]
closed
false
null
[]
null
[]
2020-10-01T13:01:10Z
2020-10-01T17:01:13Z
2020-10-01T17:01:13Z
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/693.diff", "html_url": "https://github.com/huggingface/datasets/pull/693", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/693.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/693" }
.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/693/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/693/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/692
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/692/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/692/comments
https://api.github.com/repos/huggingface/datasets/issues/692/events
https://github.com/huggingface/datasets/pull/692
712,818,968
MDExOlB1bGxSZXF1ZXN0NDk2MjM4NzIw
692
Update README.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/62796466?v=4", "events_url": "https://api.github.com/users/mayank1897/events{/privacy}", "followers_url": "https://api.github.com/users/mayank1897/followers", "following_url": "https://api.github.com/users/mayank1897/following{/other_user}", "gists_url": "https://api.github.com/users/mayank1897/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mayank1897", "id": 62796466, "login": "mayank1897", "node_id": "MDQ6VXNlcjYyNzk2NDY2", "organizations_url": "https://api.github.com/users/mayank1897/orgs", "received_events_url": "https://api.github.com/users/mayank1897/received_events", "repos_url": "https://api.github.com/users/mayank1897/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mayank1897/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mayank1897/subscriptions", "type": "User", "url": "https://api.github.com/users/mayank1897" }
[]
closed
false
null
[]
null
[]
2020-10-01T12:57:22Z
2020-10-02T11:01:59Z
2020-10-02T11:01:59Z
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/692.diff", "html_url": "https://github.com/huggingface/datasets/pull/692", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/692.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/692" }
{ "+1": 0, "-1": 4, "confused": 2, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/692/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/692/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/691
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/691/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/691/comments
https://api.github.com/repos/huggingface/datasets/issues/691/events
https://github.com/huggingface/datasets/issues/691
712,389,499
MDU6SXNzdWU3MTIzODk0OTk=
691
Add UI filter to filter datasets based on task
{ "avatar_url": "https://avatars.githubusercontent.com/u/7589415?v=4", "events_url": "https://api.github.com/users/praateekmahajan/events{/privacy}", "followers_url": "https://api.github.com/users/praateekmahajan/followers", "following_url": "https://api.github.com/users/praateekmahajan/following{/other_user}", "gists_url": "https://api.github.com/users/praateekmahajan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/praateekmahajan", "id": 7589415, "login": "praateekmahajan", "node_id": "MDQ6VXNlcjc1ODk0MTU=", "organizations_url": "https://api.github.com/users/praateekmahajan/orgs", "received_events_url": "https://api.github.com/users/praateekmahajan/received_events", "repos_url": "https://api.github.com/users/praateekmahajan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/praateekmahajan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/praateekmahajan/subscriptions", "type": "User", "url": "https://api.github.com/users/praateekmahajan" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[]
2020-10-01T00:56:18Z
2022-02-15T10:46:50Z
2022-02-15T10:46:50Z
NONE
null
null
null
This is great work, so huge shoutout to contributors and huggingface. The [/nlp/viewer](https://huggingface.co/nlp/viewer/) is great and the [/datasets](https://huggingface.co/datasets) page is great. I was wondering if in both or either places we can have a filter that selects if a dataset is good for the following tasks (non exhaustive list) - Classification - Multi label - Multi class - Q&A - Summarization - Translation I believe this feature might have some value, for folks trying to find datasets for a particular task, and then testing their model capabilities. Thank you :)
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/691/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/691/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/690/comments
https://api.github.com/repos/huggingface/datasets/issues/690/events
https://github.com/huggingface/datasets/issues/690
712,150,321
MDU6SXNzdWU3MTIxNTAzMjE=
690
XNLI dataset: NonMatchingChecksumError
{ "avatar_url": "https://avatars.githubusercontent.com/u/13307358?v=4", "events_url": "https://api.github.com/users/xiey1/events{/privacy}", "followers_url": "https://api.github.com/users/xiey1/followers", "following_url": "https://api.github.com/users/xiey1/following{/other_user}", "gists_url": "https://api.github.com/users/xiey1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xiey1", "id": 13307358, "login": "xiey1", "node_id": "MDQ6VXNlcjEzMzA3MzU4", "organizations_url": "https://api.github.com/users/xiey1/orgs", "received_events_url": "https://api.github.com/users/xiey1/received_events", "repos_url": "https://api.github.com/users/xiey1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xiey1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiey1/subscriptions", "type": "User", "url": "https://api.github.com/users/xiey1" }
[]
closed
false
null
[]
null
[]
2020-09-30T17:50:03Z
2020-10-01T17:15:08Z
2020-10-01T14:01:14Z
NONE
null
null
null
Hi, I tried to download "xnli" dataset in colab using `xnli = load_dataset(path='xnli')` but got 'NonMatchingChecksumError' error `NonMatchingChecksumError Traceback (most recent call last) <ipython-input-27-a87bedc82eeb> in <module>() ----> 1 xnli = load_dataset(path='xnli') 3 frames /usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 37 if len(bad_urls) > 0: 38 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 40 logger.info("All the checksums matched successfully" + for_verification_name) 41 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']` The same code worked well several days ago in colab but stopped working now. Thanks!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/690/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/690/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/689
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/689/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/689/comments
https://api.github.com/repos/huggingface/datasets/issues/689/events
https://github.com/huggingface/datasets/pull/689
712,095,262
MDExOlB1bGxSZXF1ZXN0NDk1NjMzNjMy
689
Switch to pandas reader for text dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-30T16:28:12Z
2020-09-30T16:45:32Z
2020-09-30T16:45:31Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/689.diff", "html_url": "https://github.com/huggingface/datasets/pull/689", "merged_at": "2020-09-30T16:45:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/689.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/689" }
Following the discussion in #622 , it appears that there's no appropriate way to use the pyarrow csv reader to read text files because of the separator. In this PR I switched to pandas to read the file. Moreover pandas allows reading the file in chunks, which means that you can build the arrow dataset from a text file that is bigger than RAM (we used to have to shard text files as mentioned in https://github.com/huggingface/datasets/issues/610#issuecomment-691672919). From a test that I did locally on a 1GB text file, the pyarrow reader used to run in 150ms while the new one takes 650ms (multithreading off for pyarrow). This is probably due to chunking, since I see the same speed difference between calling `read()` and calling `read(chunksize)` + `readline()` on the text file.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/689/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/689/timeline
null
null
true
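The chunked-read idea from #689 can be illustrated with a small sketch: stream a large text file with pandas and convert each chunk to an Arrow table so memory stays bounded. The file path, chunk size, and separator choice here are illustrative assumptions, not the library's actual implementation.

```python
# Minimal sketch of chunked text reading with pandas + pyarrow (assumptions:
# "large_corpus.txt" exists and the bell character never appears in the text,
# so every physical line becomes a single "text" cell).
import csv

import pandas as pd
import pyarrow as pa

reader = pd.read_csv(
    "large_corpus.txt",
    sep="\a",                 # one-char delimiter that normal text never contains
    names=["text"],
    header=None,
    dtype=str,
    quoting=csv.QUOTE_NONE,   # treat quotes as ordinary characters
    chunksize=100_000,        # rows held in memory at a time
)

tables = [pa.Table.from_pandas(chunk, preserve_index=False) for chunk in reader]
table = pa.concat_tables(tables)
print(table.num_rows)
```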
https://api.github.com/repos/huggingface/datasets/issues/688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/688/comments
https://api.github.com/repos/huggingface/datasets/issues/688/events
https://github.com/huggingface/datasets/pull/688
711,804,828
MDExOlB1bGxSZXF1ZXN0NDk1MzkwMTc1
688
Disable tokenizers parallelism in multiprocessed map
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-30T09:53:34Z
2020-10-01T08:45:46Z
2020-10-01T08:45:45Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/688.diff", "html_url": "https://github.com/huggingface/datasets/pull/688", "merged_at": "2020-10-01T08:45:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/688.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/688" }
It was reported in #620 that using multiprocessing with a tokenizer shows this message: ``` The current process just got forked. Disabling parallelism to avoid deadlocks... To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false) ``` This message is shown when TOKENIZERS_PARALLELISM is unset. Moreover, if it is set to `true`, then the program just hangs. To hide the message (if TOKENIZERS_PARALLELISM is unset) and avoid hanging (if TOKENIZERS_PARALLELISM is `true`), I set TOKENIZERS_PARALLELISM to `false` when forking the process. After forking it gets set back to its original value. Also I added a warning if TOKENIZERS_PARALLELISM was `true` and is set to `false`: ``` Setting TOKENIZERS_PARALLELISM=false for forked processes. ``` cc @n1t0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/688/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/688/timeline
null
null
true
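The environment-variable guard described in #688 can be sketched as a small context manager: force TOKENIZERS_PARALLELISM to false while forked workers run, then restore whatever value was set before. This mirrors the PR description only; the helper name and the wrapping pattern are illustrative, not the library's code.

```python
# Sketch of the guard described in #688 (names are hypothetical).
import os
from contextlib import contextmanager


@contextmanager
def tokenizers_parallelism_disabled():
    previous = os.environ.get("TOKENIZERS_PARALLELISM")
    if previous == "true":
        print("Setting TOKENIZERS_PARALLELISM=false for forked processes.")
    os.environ["TOKENIZERS_PARALLELISM"] = "false"
    try:
        yield
    finally:
        # Restore the original value (or unset it if it was never set).
        if previous is None:
            os.environ.pop("TOKENIZERS_PARALLELISM", None)
        else:
            os.environ["TOKENIZERS_PARALLELISM"] = previous


# Usage: wrap the multiprocessed map call.
# with tokenizers_parallelism_disabled():
#     dataset.map(tokenize_fn, batched=True, num_proc=4)
```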
https://api.github.com/repos/huggingface/datasets/issues/687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/687/comments
https://api.github.com/repos/huggingface/datasets/issues/687/events
https://github.com/huggingface/datasets/issues/687
711,664,810
MDU6SXNzdWU3MTE2NjQ4MTA=
687
`ArrowInvalid` occurs while running `Dataset.map()` function
{ "avatar_url": "https://avatars.githubusercontent.com/u/5601012?v=4", "events_url": "https://api.github.com/users/peinan/events{/privacy}", "followers_url": "https://api.github.com/users/peinan/followers", "following_url": "https://api.github.com/users/peinan/following{/other_user}", "gists_url": "https://api.github.com/users/peinan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/peinan", "id": 5601012, "login": "peinan", "node_id": "MDQ6VXNlcjU2MDEwMTI=", "organizations_url": "https://api.github.com/users/peinan/orgs", "received_events_url": "https://api.github.com/users/peinan/received_events", "repos_url": "https://api.github.com/users/peinan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/peinan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peinan/subscriptions", "type": "User", "url": "https://api.github.com/users/peinan" }
[]
closed
false
null
[]
null
[]
2020-09-30T06:16:50Z
2020-09-30T09:53:03Z
2020-09-30T09:53:03Z
NONE
null
null
null
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error. Code: ```python # train_ds = Dataset(features: { # 'title': Value(dtype='string', id=None), # 'score': Value(dtype='float64', id=None) # }, num_rows: 99999) # suggested in #665 class PicklableTokenizer(BertJapaneseTokenizer): def __getstate__(self): state = dict(self.__dict__) state['do_lower_case'] = self.word_tokenizer.do_lower_case state['never_split'] = self.word_tokenizer.never_split del state['word_tokenizer'] return state def __setstate__(self, state): do_lower_case = state.pop('do_lower_case') never_split = state.pop('never_split') self.__dict__ = state self.word_tokenizer = MecabTokenizer( do_lower_case=do_lower_case, never_split=never_split ) t = PicklableTokenizer.from_pretrained('bert-base-japanese-whole-word-masking') encoded = train_ds.map( lambda examples: {'tokens': t.encode(examples['title'], max_length=1000)}, batched=True, batch_size=1000 ) ``` Error Message: ``` 99% 99/100 [00:22<00:00, 39.07ba/s] --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) <timed exec> in <module> /usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1242 fn_kwargs=fn_kwargs, 1243 new_fingerprint=new_fingerprint, -> 1244 update_data=update_data, 1245 ) 1246 else: /usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 151 "output_all_columns": self._output_all_columns, 152 } --> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 154 if new_format["columns"] is not None: 155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names)) /usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 161 # Call actual function 162 --> 163 out = func(self, *args, **kwargs) 164 165 # Update fingerprint of in-place transforms + update in-place history of transforms /usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data) 1496 if update_data: 1497 batch = cast_to_python_objects(batch) -> 1498 writer.write_batch(batch) 1499 if update_data: 1500 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file /usr/local/lib/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 271 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type) 272 typed_sequence_examples[col] = typed_sequence --> 273 pa_table = pa.Table.from_pydict(typed_sequence_examples) 274 self.write_table(pa_table) 275 /usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict() /usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays() /usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.validate() /usr/local/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Column 4 named tokens expected length 999 but got length 1000 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/687/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/687/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/686/comments
https://api.github.com/repos/huggingface/datasets/issues/686/events
https://github.com/huggingface/datasets/issues/686
711,385,739
MDU6SXNzdWU3MTEzODU3Mzk=
686
Dataset browser url is still https://huggingface.co/nlp/viewer/
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen" }
[]
closed
false
null
[]
null
[]
2020-09-29T19:21:52Z
2021-01-08T18:29:26Z
2021-01-08T18:29:26Z
CONTRIBUTOR
null
null
null
Might be worth updating to https://huggingface.co/datasets/viewer/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/686/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/686/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/685
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/685/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/685/comments
https://api.github.com/repos/huggingface/datasets/issues/685/events
https://github.com/huggingface/datasets/pull/685
711,182,185
MDExOlB1bGxSZXF1ZXN0NDk0ODg1NjIz
685
Add features parameter to CSV
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-29T14:43:36Z
2020-09-30T08:39:56Z
2020-09-30T08:39:54Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/685.diff", "html_url": "https://github.com/huggingface/datasets/pull/685", "merged_at": "2020-09-30T08:39:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/685.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/685" }
Add support for the `features` parameter when loading a csv dataset: ```python from datasets import load_dataset, Features features = Features({...}) csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features) ``` I added tests to make sure that it is also compatible with the caching system. Fix #623
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/685/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/685/timeline
null
null
true
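The snippet in #685 elides the actual `Features` definition. A hypothetical example of what that argument might look like is below; the column names and label names are placeholders, not taken from the PR.

```python
# Hypothetical Features for the csv loading example in #685.
from datasets import ClassLabel, Features, Value, load_dataset

features = Features({
    "text": Value("string"),
    "label": ClassLabel(names=["negative", "positive"]),
})

csv_dataset = load_dataset(
    "csv",
    data_files=["path/to/my/file.csv"],  # placeholder path from the PR body
    features=features,
)
```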
https://api.github.com/repos/huggingface/datasets/issues/684
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/684/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/684/comments
https://api.github.com/repos/huggingface/datasets/issues/684/events
https://github.com/huggingface/datasets/pull/684
711,080,947
MDExOlB1bGxSZXF1ZXN0NDk0ODA2NjE1
684
Fix column order issue in cast
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-29T12:49:13Z
2020-09-29T15:56:46Z
2020-09-29T15:56:45Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/684.diff", "html_url": "https://github.com/huggingface/datasets/pull/684", "merged_at": "2020-09-29T15:56:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/684.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/684" }
Previously, the order of the columns in the features passed to `cast_` mattered. Even when the features passed to `cast_` had the same order as the dataset features, it could fail because the schema that was built was always in alphabetical order. This issue was reported by @lewtun in #623. To fix that I changed the schema to follow the order of the arrow table columns. I also added the possibility to give features that are not ordered the same way as the dataset features.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/684/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/684/timeline
null
null
true
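The behavior fixed in #684 can be shown with a small example: after the fix, the `Features` keys passed to the in-place `cast_` (later superseded by `cast`) no longer need to match the column order of the dataset. Column names and dtypes here are made up for illustration.

```python
# Illustrative use of cast_ after #684: note the reversed key order.
from datasets import Dataset, Features, Value

ds = Dataset.from_dict({"idx": [0, 1, 2], "score": [0.1, 0.2, 0.3]})

# Features listed in a different order than the dataset's columns.
ds.cast_(Features({"score": Value("float32"), "idx": Value("int32")}))
print(ds.features)
```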
https://api.github.com/repos/huggingface/datasets/issues/683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/683/comments
https://api.github.com/repos/huggingface/datasets/issues/683/events
https://github.com/huggingface/datasets/pull/683
710,942,704
MDExOlB1bGxSZXF1ZXN0NDk0NzAwNzY1
683
Fix wrong delimiter in text dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-29T09:43:24Z
2021-05-05T18:24:31Z
2020-09-29T09:44:06Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/683.diff", "html_url": "https://github.com/huggingface/datasets/pull/683", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/683.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/683" }
The delimiter is set to the bell character as it is usually used nowhere in text files. However in the text dataset the delimiter was set to `\b`, which is backspace in Python, while the bell character is `\a`. I replaced `\b` by `\a`. Hopefully this fixes the issues mentioned by some users in #622
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/683/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/683/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/682
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/682/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/682/comments
https://api.github.com/repos/huggingface/datasets/issues/682/events
https://github.com/huggingface/datasets/pull/682
710,325,399
MDExOlB1bGxSZXF1ZXN0NDk0MTkzMzEw
682
Update navbar chapter titles color
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-28T14:35:17Z
2020-09-28T17:30:13Z
2020-09-28T17:30:12Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/682.diff", "html_url": "https://github.com/huggingface/datasets/pull/682", "merged_at": "2020-09-28T17:30:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/682.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/682" }
Consistency with the color change that was done in transformers at https://github.com/huggingface/transformers/pull/7423 It makes the background-color of the chapter titles in the docs navbar darker, to differentiate them from the inner sections. see changes [here](https://691-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/682/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/682/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/681/comments
https://api.github.com/repos/huggingface/datasets/issues/681/events
https://github.com/huggingface/datasets/pull/681
710,075,721
MDExOlB1bGxSZXF1ZXN0NDkzOTkwMjEz
681
Adding missing @property (+2 small flake8 fixes).
{ "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Narsil", "id": 204321, "login": "Narsil", "node_id": "MDQ6VXNlcjIwNDMyMQ==", "organizations_url": "https://api.github.com/users/Narsil/orgs", "received_events_url": "https://api.github.com/users/Narsil/received_events", "repos_url": "https://api.github.com/users/Narsil/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "type": "User", "url": "https://api.github.com/users/Narsil" }
[]
closed
false
null
[]
null
[]
2020-09-28T08:53:53Z
2020-09-28T10:26:13Z
2020-09-28T10:26:09Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/681.diff", "html_url": "https://github.com/huggingface/datasets/pull/681", "merged_at": "2020-09-28T10:26:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/681.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/681" }
Fixes #678
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/681/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/681/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/680
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/680/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/680/comments
https://api.github.com/repos/huggingface/datasets/issues/680/events
https://github.com/huggingface/datasets/pull/680
710,066,138
MDExOlB1bGxSZXF1ZXN0NDkzOTgyMjY4
680
Fix bug related to boolean in GAP dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/14996977?v=4", "events_url": "https://api.github.com/users/otakumesi/events{/privacy}", "followers_url": "https://api.github.com/users/otakumesi/followers", "following_url": "https://api.github.com/users/otakumesi/following{/other_user}", "gists_url": "https://api.github.com/users/otakumesi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/otakumesi", "id": 14996977, "login": "otakumesi", "node_id": "MDQ6VXNlcjE0OTk2OTc3", "organizations_url": "https://api.github.com/users/otakumesi/orgs", "received_events_url": "https://api.github.com/users/otakumesi/received_events", "repos_url": "https://api.github.com/users/otakumesi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/otakumesi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/otakumesi/subscriptions", "type": "User", "url": "https://api.github.com/users/otakumesi" }
[]
closed
false
null
[]
null
[]
2020-09-28T08:39:39Z
2020-09-29T15:54:47Z
2020-09-29T15:54:47Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/680.diff", "html_url": "https://github.com/huggingface/datasets/pull/680", "merged_at": "2020-09-29T15:54:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/680.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/680" }
### Why I did The values in `row["A-coref"]` and `row["B-coref"]` are `'TRUE'` or `'FALSE'`. Their type is `string`, and `bool('FALSE')` is equal to `True` in Python, so both fields were being converted to `True`. I fixed this problem. ### What I did I changed `bool(row["A-coref"])` and `bool(row["B-coref"])` to `row["A-coref"] == "TRUE"` and `row["B-coref"] == "TRUE"`. Thank you!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/680/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/680/timeline
null
null
true
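The Python pitfall behind #680 is easy to demonstrate in isolation: `bool()` on any non-empty string is `True`, so the TSV values have to be compared as strings.

```python
# Why #680 was needed: any non-empty string is truthy, so bool("FALSE") is True.
row = {"A-coref": "FALSE", "B-coref": "TRUE"}

print(bool(row["A-coref"]))        # True  -- the wrong reading of the field
print(row["A-coref"] == "TRUE")    # False -- the comparison used in the fix
print(row["B-coref"] == "TRUE")    # True
```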
https://api.github.com/repos/huggingface/datasets/issues/679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/679/comments
https://api.github.com/repos/huggingface/datasets/issues/679/events
https://github.com/huggingface/datasets/pull/679
710,065,838
MDExOlB1bGxSZXF1ZXN0NDkzOTgyMDMx
679
Fix negative ids when slicing with an array
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-28T08:39:08Z
2020-09-28T14:42:20Z
2020-09-28T14:42:19Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/679.diff", "html_url": "https://github.com/huggingface/datasets/pull/679", "merged_at": "2020-09-28T14:42:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/679.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/679" }
```python from datasets import Dataset d = Dataset.from_dict({"a": range(10)}) print(d[[0, -1]]) # OverflowError ``` raises an error because of the negative id. This PR fixes that. Fix #668
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/679/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/679/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/678/comments
https://api.github.com/repos/huggingface/datasets/issues/678/events
https://github.com/huggingface/datasets/issues/678
710,060,497
MDU6SXNzdWU3MTAwNjA0OTc=
678
The download instructions for c4 datasets are not contained in the error message
{ "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Narsil", "id": 204321, "login": "Narsil", "node_id": "MDQ6VXNlcjIwNDMyMQ==", "organizations_url": "https://api.github.com/users/Narsil/orgs", "received_events_url": "https://api.github.com/users/Narsil/received_events", "repos_url": "https://api.github.com/users/Narsil/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "type": "User", "url": "https://api.github.com/users/Narsil" }
[]
closed
false
null
[]
null
[]
2020-09-28T08:30:54Z
2020-09-28T10:26:09Z
2020-09-28T10:26:09Z
CONTRIBUTOR
null
null
null
The manual download instructions are not clear ```The dataset c4 with config en requires manual data. Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff8c5969760>>. Manual data can be loaded with `datasets.load_dataset(c4, data_dir='<path/to/manual/data>') ``` Either `@property` could be added to C4.manual_download_instructions (or make it a real property), or the manual_download_instructions method needs to be called, I think. Let me know if you want a PR for this, but I'm not sure which possible fix is the correct one.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/678/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/678/timeline
null
completed
true
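The fix direction suggested in #678 (and merged as #681) is to expose `manual_download_instructions` as a property so that formatting it into the error message yields the instruction text rather than a `<bound method ...>` repr. A minimal sketch is below; the class name and wording are illustrative, not the actual builder code.

```python
# Sketch of the @property fix suggested in #678 (class and text are hypothetical).
class C4Builder:
    @property
    def manual_download_instructions(self):
        return (
            "Download the C4 data manually and pass its location via data_dir, "
            "e.g. load_dataset('c4', data_dir='<path/to/manual/data>')."
        )


builder = C4Builder()
# With the property, the message interpolates the instructions string itself.
print(f"Please follow the manual download instructions: {builder.manual_download_instructions}")
```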
https://api.github.com/repos/huggingface/datasets/issues/677
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/677/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/677/comments
https://api.github.com/repos/huggingface/datasets/issues/677/events
https://github.com/huggingface/datasets/pull/677
710,055,239
MDExOlB1bGxSZXF1ZXN0NDkzOTczNDE3
677
Move cache dir root creation in builder's init
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-28T08:22:46Z
2020-09-28T14:42:43Z
2020-09-28T14:42:42Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/677.diff", "html_url": "https://github.com/huggingface/datasets/pull/677", "merged_at": "2020-09-28T14:42:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/677.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/677" }
We use lock files in the builder initialization, but sometimes the cache directory where they're supposed to live was not created yet. To fix that I moved the creation of the builder's cache dir root into the builder's init. Fix #671
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/677/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/677/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/676
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/676/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/676/comments
https://api.github.com/repos/huggingface/datasets/issues/676/events
https://github.com/huggingface/datasets/issues/676
710,014,319
MDU6SXNzdWU3MTAwMTQzMTk=
676
train_test_split returns empty dataset item
{ "avatar_url": "https://avatars.githubusercontent.com/u/26648528?v=4", "events_url": "https://api.github.com/users/mojave-pku/events{/privacy}", "followers_url": "https://api.github.com/users/mojave-pku/followers", "following_url": "https://api.github.com/users/mojave-pku/following{/other_user}", "gists_url": "https://api.github.com/users/mojave-pku/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mojave-pku", "id": 26648528, "login": "mojave-pku", "node_id": "MDQ6VXNlcjI2NjQ4NTI4", "organizations_url": "https://api.github.com/users/mojave-pku/orgs", "received_events_url": "https://api.github.com/users/mojave-pku/received_events", "repos_url": "https://api.github.com/users/mojave-pku/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mojave-pku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mojave-pku/subscriptions", "type": "User", "url": "https://api.github.com/users/mojave-pku" }
[]
closed
false
null
[]
null
[]
2020-09-28T07:19:33Z
2020-10-07T13:46:33Z
2020-10-07T13:38:06Z
NONE
null
null
null
I tried to split my dataset with `train_test_split`, but afterwards the items in the `train` and `test` `Dataset` are empty. The code: ``` yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp') print(yelp_data[0]) yelp_data = yelp_data.train_test_split(test_size=0.1) print(yelp_data) print(yelp_data['test']) print(yelp_data['test'][0]) ``` The outputs: ``` {'stars': 2.0, 'text': 'xxxx'} Loading cached split indices for dataset at /home/ssd4/huanglianzhe/test_yelp/cache-f9b22d8b9d5a7346.arrow and /home/ssd4/huanglianzhe/test_yelp/cache-4aa26fa4005059d1.arrow DatasetDict({'train': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 7219009), 'test': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)}) Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113) {} # yelp_data['test'][0] is empty ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/676/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/676/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/675/comments
https://api.github.com/repos/huggingface/datasets/issues/675/events
https://github.com/huggingface/datasets/issues/675
709,818,725
MDU6SXNzdWU3MDk4MTg3MjU=
675
Add custom dataset to NLP?
{ "avatar_url": "https://avatars.githubusercontent.com/u/6556710?v=4", "events_url": "https://api.github.com/users/timpal0l/events{/privacy}", "followers_url": "https://api.github.com/users/timpal0l/followers", "following_url": "https://api.github.com/users/timpal0l/following{/other_user}", "gists_url": "https://api.github.com/users/timpal0l/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/timpal0l", "id": 6556710, "login": "timpal0l", "node_id": "MDQ6VXNlcjY1NTY3MTA=", "organizations_url": "https://api.github.com/users/timpal0l/orgs", "received_events_url": "https://api.github.com/users/timpal0l/received_events", "repos_url": "https://api.github.com/users/timpal0l/repos", "site_admin": false, "starred_url": "https://api.github.com/users/timpal0l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timpal0l/subscriptions", "type": "User", "url": "https://api.github.com/users/timpal0l" }
[]
closed
false
null
[]
null
[]
2020-09-27T21:22:50Z
2020-10-20T09:08:49Z
2020-10-20T09:08:49Z
CONTRIBUTOR
null
null
null
Is it possible to add a custom dataset such as a .csv to the NLP library? Thanks.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/675/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/675/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/674
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/674/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/674/comments
https://api.github.com/repos/huggingface/datasets/issues/674/events
https://github.com/huggingface/datasets/issues/674
709,661,006
MDU6SXNzdWU3MDk2NjEwMDY=
674
load_dataset() won't download in Windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/34422661?v=4", "events_url": "https://api.github.com/users/ThisDavehead/events{/privacy}", "followers_url": "https://api.github.com/users/ThisDavehead/followers", "following_url": "https://api.github.com/users/ThisDavehead/following{/other_user}", "gists_url": "https://api.github.com/users/ThisDavehead/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ThisDavehead", "id": 34422661, "login": "ThisDavehead", "node_id": "MDQ6VXNlcjM0NDIyNjYx", "organizations_url": "https://api.github.com/users/ThisDavehead/orgs", "received_events_url": "https://api.github.com/users/ThisDavehead/received_events", "repos_url": "https://api.github.com/users/ThisDavehead/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ThisDavehead/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ThisDavehead/subscriptions", "type": "User", "url": "https://api.github.com/users/ThisDavehead" }
[]
closed
false
null
[]
null
[]
2020-09-27T03:56:25Z
2020-10-05T08:28:18Z
2020-10-05T08:28:18Z
NONE
null
null
null
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've waited upwards of 18 hours to download the 'multi-news' dataset (which isn't very big), and still nothing. I've tried running it through different IDEs and the command line, but it had the same behavior. I've also tried it with all virus and malware protection turned off. I've made sure python and all IDEs are exceptions to the firewall and all the requisite permissions are enabled. Additionally, I checked to see if other packages could download content such as an nltk corpus, and they could. I've also run the same script using Ubuntu and it downloaded fine (and quickly). When I copied the downloaded datasets from my Ubuntu drive to my Windows .cache folder it worked fine by reusing the already-downloaded dataset, but it's cumbersome to do that for every dataset I want to try in my Windows environment. Could this be a bug, or is there something I'm doing wrong or not thinking of? Thanks.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/674/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/674/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/673/comments
https://api.github.com/repos/huggingface/datasets/issues/673/events
https://github.com/huggingface/datasets/issues/673
709,603,989
MDU6SXNzdWU3MDk2MDM5ODk=
673
blog_authorship_corpus crashed
{ "avatar_url": "https://avatars.githubusercontent.com/u/7553188?v=4", "events_url": "https://api.github.com/users/Moshiii/events{/privacy}", "followers_url": "https://api.github.com/users/Moshiii/followers", "following_url": "https://api.github.com/users/Moshiii/following{/other_user}", "gists_url": "https://api.github.com/users/Moshiii/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Moshiii", "id": 7553188, "login": "Moshiii", "node_id": "MDQ6VXNlcjc1NTMxODg=", "organizations_url": "https://api.github.com/users/Moshiii/orgs", "received_events_url": "https://api.github.com/users/Moshiii/received_events", "repos_url": "https://api.github.com/users/Moshiii/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Moshiii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Moshiii/subscriptions", "type": "User", "url": "https://api.github.com/users/Moshiii" }
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
[]
null
[]
2020-09-26T20:15:28Z
2022-02-15T10:47:58Z
2022-02-15T10:47:58Z
NONE
null
null
null
This is just to report that when I pick blog_authorship_corpus in https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus, I get this: ![image](https://user-images.githubusercontent.com/7553188/94349542-4364f300-0013-11eb-897d-b25660a449f0.png)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/673/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/673/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/672
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/672/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/672/comments
https://api.github.com/repos/huggingface/datasets/issues/672/events
https://github.com/huggingface/datasets/issues/672
709,575,527
MDU6SXNzdWU3MDk1NzU1Mjc=
672
Questions about XSUM
{ "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/danyaljj", "id": 2441454, "login": "danyaljj", "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "repos_url": "https://api.github.com/users/danyaljj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "type": "User", "url": "https://api.github.com/users/danyaljj" }
[]
closed
false
null
[]
null
[]
2020-09-26T17:16:24Z
2022-10-04T17:30:17Z
2022-10-04T17:30:17Z
CONTRIBUTOR
null
null
null
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions about it. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 204017) >>> data['test'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 11333) ``` The first issue is that the instance counts don't match what I see on [the dataset's website](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset#what-builds-the-xsum-dataset) (11,333 vs 11,334 for the test set; 204,017 vs 204,045 for the training set): ``` ... training (90%, 204,045), validation (5%, 11,332), and test (5%, 11,334) set. ``` Any thoughts why? Perhaps @mariamabarham could help here, since she recently had a PR on this dataset https://github.com/huggingface/datasets/pull/289 (reviewed by @patrickvonplaten). Another issue is that the instances don't seem to have IDs. The original dataset provides IDs for the instances: https://github.com/EdinburghNLP/XSum/blob/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json but to be able to use them, the dataset sizes need to match. CC @jbragg
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/672/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/672/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/671
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/671/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/671/comments
https://api.github.com/repos/huggingface/datasets/issues/671/events
https://github.com/huggingface/datasets/issues/671
709,093,151
MDU6SXNzdWU3MDkwOTMxNTE=
671
[BUG] No such file or directory
{ "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "events_url": "https://api.github.com/users/jbragg/events{/privacy}", "followers_url": "https://api.github.com/users/jbragg/followers", "following_url": "https://api.github.com/users/jbragg/following{/other_user}", "gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jbragg", "id": 2238344, "login": "jbragg", "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "organizations_url": "https://api.github.com/users/jbragg/orgs", "received_events_url": "https://api.github.com/users/jbragg/received_events", "repos_url": "https://api.github.com/users/jbragg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbragg/subscriptions", "type": "User", "url": "https://api.github.com/users/jbragg" }
[]
closed
false
null
[]
null
[]
2020-09-25T16:38:54Z
2020-09-28T14:42:42Z
2020-09-28T14:42:42Z
CONTRIBUTOR
null
null
null
This happens when both: 1. the Huggingface datasets cache dir does not exist, and 2. you try to load a local dataset script. builder.py then throws an error when trying to create a filelock in a directory (cache/datasets) that does not exist: https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L177 Tested on v1.0.2 @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/671/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/671/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/670
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/670/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/670/comments
https://api.github.com/repos/huggingface/datasets/issues/670/events
https://github.com/huggingface/datasets/pull/670
709,061,231
MDExOlB1bGxSZXF1ZXN0NDkzMTc4OTQw
670
Fix SQuAD metric kwargs description
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-25T16:08:57Z
2020-09-29T15:57:39Z
2020-09-29T15:57:38Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/670.diff", "html_url": "https://github.com/huggingface/datasets/pull/670", "merged_at": "2020-09-29T15:57:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/670.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/670" }
The `answer_start` field was missing in the kwargs docstring. This should fix #657. FYI, another fix was proposed by @tshrjn in #658 and suggests removing this field. However, IMO `answer_start` is useful to match the squad dataset format for consistency, even though it is not used in the metric computation. I think it's better to keep it this way, so that you can just give references=squad["answers"] to .compute(). Let me know what sounds best to you
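For reference, a small usage sketch (based on the squad metric's documented reference format; the example values are illustrative) showing why keeping `answer_start` is convenient — references can be passed in the same shape as the dataset's `answers` field:

```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "001", "prediction_text": "Denver Broncos"}]
references = [{
    "id": "001",
    # answer_start is accepted because it matches the squad dataset format,
    # even though only the answer texts are compared by the metric.
    "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
}]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```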
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/670/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/670/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/669/comments
https://api.github.com/repos/huggingface/datasets/issues/669/events
https://github.com/huggingface/datasets/issues/669
708,857,595
MDU6SXNzdWU3MDg4NTc1OTU=
669
How to skip an example when running dataset.map
{ "avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4", "events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}", "followers_url": "https://api.github.com/users/xixiaoyao/followers", "following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}", "gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xixiaoyao", "id": 24541791, "login": "xixiaoyao", "node_id": "MDQ6VXNlcjI0NTQxNzkx", "organizations_url": "https://api.github.com/users/xixiaoyao/orgs", "received_events_url": "https://api.github.com/users/xixiaoyao/received_events", "repos_url": "https://api.github.com/users/xixiaoyao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions", "type": "User", "url": "https://api.github.com/users/xixiaoyao" }
[]
closed
false
null
[]
null
[]
2020-09-25T11:17:53Z
2022-06-17T21:45:03Z
2020-10-05T16:28:13Z
NONE
null
null
null
In my processing function I detect some invalid examples, which I do not want to be added to the train dataset. However, I could not find how to skip these invalid examples when doing dataset.map.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/669/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/669/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/668/comments
https://api.github.com/repos/huggingface/datasets/issues/668/events
https://github.com/huggingface/datasets/issues/668
708,310,956
MDU6SXNzdWU3MDgzMTA5NTY=
668
OverflowError when slicing with an array containing negative ids
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-24T16:27:14Z
2020-09-28T14:42:19Z
2020-09-28T14:42:19Z
MEMBER
null
null
null
```python from datasets import Dataset d = Dataset.from_dict({"a": range(10)}) print(d[0]) # {'a': 0} print(d[-1]) # {'a': 9} print(d[[0, -1]]) # OverflowError ``` results in ``` --------------------------------------------------------------------------- OverflowError Traceback (most recent call last) <ipython-input-5-863dc3555598> in <module> ----> 1 d[[0, -1]] ~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in __getitem__(self, key) 1070 format_columns=self._format_columns, 1071 output_all_columns=self._output_all_columns, -> 1072 format_kwargs=self._format_kwargs, 1073 ) 1074 ~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs) 1025 indices = key 1026 -> 1027 indices_array = pa.array([int(i) for i in indices], type=pa.uint64()) 1028 1029 # Check if we need to convert indices ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array() ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() OverflowError: can't convert negative value to unsigned int ```
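Until the fix lands, a possible workaround (purely a sketch) is to normalize negative indices to their positive counterparts before indexing:

```python
from datasets import Dataset

d = Dataset.from_dict({"a": list(range(10))})

def normalize_indices(indices, length):
    # Map negative indices to positive ones so they fit in an unsigned array.
    return [i + length if i < 0 else i for i in indices]

print(d[normalize_indices([0, -1], len(d))])  # {'a': [0, 9]}
```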
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/668/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/668/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/667
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/667/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/667/comments
https://api.github.com/repos/huggingface/datasets/issues/667/events
https://github.com/huggingface/datasets/issues/667
708,258,392
MDU6SXNzdWU3MDgyNTgzOTI=
667
Loss does not decrease with Datasets and Transformers
{ "avatar_url": "https://avatars.githubusercontent.com/u/23032865?v=4", "events_url": "https://api.github.com/users/wangcongcong123/events{/privacy}", "followers_url": "https://api.github.com/users/wangcongcong123/followers", "following_url": "https://api.github.com/users/wangcongcong123/following{/other_user}", "gists_url": "https://api.github.com/users/wangcongcong123/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wangcongcong123", "id": 23032865, "login": "wangcongcong123", "node_id": "MDQ6VXNlcjIzMDMyODY1", "organizations_url": "https://api.github.com/users/wangcongcong123/orgs", "received_events_url": "https://api.github.com/users/wangcongcong123/received_events", "repos_url": "https://api.github.com/users/wangcongcong123/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wangcongcong123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wangcongcong123/subscriptions", "type": "User", "url": "https://api.github.com/users/wangcongcong123" }
[]
closed
false
null
[]
null
[]
2020-09-24T15:14:43Z
2021-01-01T20:01:25Z
2021-01-01T20:01:25Z
NONE
null
null
null
Hi, The following script is used to fine-tune a BertForSequenceClassification model on SST2. The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) that presents an example of fine-tuning BertForQuestionAnswering using the squad dataset. In that colab, the loss works fine. When I adapt it to SST2, the loss fails to decrease as it should. I attach the adapted script below and would appreciate anyone pointing out what I missed. ```python import torch from datasets import load_dataset from transformers import BertForSequenceClassification from transformers import BertTokenizerFast # Load our training dataset and tokenizer dataset = load_dataset("glue", 'sst2') tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased') del dataset["test"] # let's remove it in this demo # Tokenize our training dataset def convert_to_features(example_batch): encodings = tokenizer(example_batch["sentence"]) encodings.update({"labels": example_batch["label"]}) return encodings encoded_dataset = dataset.map(convert_to_features, batched=True) # Format our dataset to output torch.Tensor to train a pytorch model columns = ['input_ids', 'token_type_ids', 'attention_mask', 'labels'] encoded_dataset.set_format(type='torch', columns=columns) # Instantiate a PyTorch Dataloader around our dataset # Let's do dynamic batching (pad on the fly with our own collate_fn) def collate_fn(examples): return tokenizer.pad(examples, return_tensors='pt') dataloader = torch.utils.data.DataLoader(encoded_dataset['train'], collate_fn=collate_fn, batch_size=8) # Now let's train our model device = 'cuda' if torch.cuda.is_available() else 'cpu' # Let's load a pretrained Bert model and a simple optimizer model = BertForSequenceClassification.from_pretrained('bert-base-cased', return_dict=True) optimizer = torch.optim.Adam(model.parameters(), lr=1e-5) model.train().to(device) for i, batch in enumerate(dataloader): batch.to(device) outputs = model(**batch) loss = outputs.loss loss.backward() optimizer.step() model.zero_grad() print(f'Step {i} - loss: {loss:.3}') ``` In case it's needed: - datasets == 1.0.2 - transformers == 3.2.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/667/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/667/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/666
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/666/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/666/comments
https://api.github.com/repos/huggingface/datasets/issues/666/events
https://github.com/huggingface/datasets/issues/666
707,608,578
MDU6SXNzdWU3MDc2MDg1Nzg=
666
Do both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT?
{ "avatar_url": "https://avatars.githubusercontent.com/u/31090427?v=4", "events_url": "https://api.github.com/users/wahab4114/events{/privacy}", "followers_url": "https://api.github.com/users/wahab4114/followers", "following_url": "https://api.github.com/users/wahab4114/following{/other_user}", "gists_url": "https://api.github.com/users/wahab4114/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wahab4114", "id": 31090427, "login": "wahab4114", "node_id": "MDQ6VXNlcjMxMDkwNDI3", "organizations_url": "https://api.github.com/users/wahab4114/orgs", "received_events_url": "https://api.github.com/users/wahab4114/received_events", "repos_url": "https://api.github.com/users/wahab4114/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wahab4114/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wahab4114/subscriptions", "type": "User", "url": "https://api.github.com/users/wahab4114" }
[]
closed
false
null
[]
null
[]
2020-09-23T19:02:25Z
2020-10-27T15:19:25Z
2020-10-27T15:19:25Z
NONE
null
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/666/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/666/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/665
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/665/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/665/comments
https://api.github.com/repos/huggingface/datasets/issues/665/events
https://github.com/huggingface/datasets/issues/665
707,037,738
MDU6SXNzdWU3MDcwMzc3Mzg=
665
running dataset.map raises TypeError: can't pickle Tokenizer objects
{ "avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4", "events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}", "followers_url": "https://api.github.com/users/xixiaoyao/followers", "following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}", "gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xixiaoyao", "id": 24541791, "login": "xixiaoyao", "node_id": "MDQ6VXNlcjI0NTQxNzkx", "organizations_url": "https://api.github.com/users/xixiaoyao/orgs", "received_events_url": "https://api.github.com/users/xixiaoyao/received_events", "repos_url": "https://api.github.com/users/xixiaoyao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions", "type": "User", "url": "https://api.github.com/users/xixiaoyao" }
[]
closed
false
null
[]
null
[]
2020-09-23T04:28:14Z
2020-10-08T09:32:16Z
2020-10-08T09:32:16Z
NONE
null
null
null
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [example['question'], example['context']] encodings = tokenizer.encode_plus(input_pairs, pad_to_max_length=True, max_length=512) context_encodings = tokenizer.encode_plus(example['context']) # Compute start and end tokens for labels using Transformers's fast tokenizers alignement methodes. # this will give us the position of answer span in the context text start_idx, end_idx = get_correct_alignement(example['context'], example['answers']) start_positions_context = context_encodings.char_to_token(start_idx) end_positions_context = context_encodings.char_to_token(end_idx-1) # here we will compute the start and end position of the answer in the whole example # as the example is encoded like this <s> question</s></s> context</s> # and we know the postion of the answer in the context # we can just find out the index of the sep token and then add that to position + 1 (+1 because there are two sep tokens) # this will give us the position of the answer span in whole example sep_idx = encodings['input_ids'].index(tokenizer.sep_token_id) start_positions = start_positions_context + sep_idx + 1 end_positions = end_positions_context + sep_idx + 1 if end_positions > 512: start_positions, end_positions = 0, 0 encodings.update({'start_positions': start_positions, 'end_positions': end_positions, 'attention_mask': encodings['attention_mask']}) return encodings ``` Then I run `dataset.map(convert_to_features)`, it raise ``` In [59]: a.map(convert_to_features) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-59-c453b508761d> in <module> ----> 1 a.map(convert_to_features) /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1242 fn_kwargs=fn_kwargs, 1243 new_fingerprint=new_fingerprint, -> 1244 update_data=update_data, 1245 ) 1246 else: /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 151 "output_all_columns": self._output_all_columns, 152 } --> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 154 if new_format["columns"] is not None: 155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names)) /opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name 157 kwargs[fingerprint_name] = update_fingerprint( --> 158 self._fingerprint, transform, kwargs_for_fingerprint 159 ) 160 /opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args) 103 for key in sorted(transform_args): 104 hasher.update(key) --> 105 hasher.update(transform_args[key]) 106 return hasher.hexdigest() 107 /opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in update(self, value) 55 def update(self, value): 56 self.m.update(f"=={type(value)}==".encode("utf8")) ---> 57 self.m.update(self.hash(value).encode("utf-8")) 58 59 def hexdigest(self): /opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in 
hash(cls, value) 51 return cls.dispatch[type(value)](cls, value) 52 else: ---> 53 return cls.hash_default(value) 54 55 def update(self, value): /opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in hash_default(cls, value) 44 @classmethod 45 def hash_default(cls, value): ---> 46 return cls.hash_bytes(dumps(value)) 47 48 @classmethod /opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in dumps(obj) 365 file = StringIO() 366 with _no_cache_fields(obj): --> 367 dump(obj, file) 368 return file.getvalue() 369 /opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in dump(obj, file) 337 def dump(obj, file): 338 """pickle an object to a file""" --> 339 Pickler(file, recurse=True).dump(obj) 340 return 341 /opt/conda/lib/python3.7/site-packages/dill/_dill.py in dump(self, obj) 444 raise PicklingError(msg) 445 else: --> 446 StockPickler.dump(self, obj) 447 stack.clear() # clear record of 'recursion-sensitive' pickled objects 448 return /opt/conda/lib/python3.7/pickle.py in dump(self, obj) 435 if self.proto >= 4: 436 self.framer.start_framing() --> 437 self.save(obj) 438 self.write(STOP) 439 self.framer.end_framing() /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 /opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_function(pickler, obj) 1436 globs, obj.__name__, 1437 obj.__defaults__, obj.__closure__, -> 1438 obj.__dict__, fkwdefaults), obj=obj) 1439 else: 1440 _super = ('super' in getattr(obj.func_code,'co_names',())) and (_byref is not None) and getattr(pickler, '_recurse', False) /opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj) 636 else: 637 save(func) --> 638 save(args) 639 write(REDUCE) 640 /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 /opt/conda/lib/python3.7/pickle.py in save_tuple(self, obj) 787 write(MARK) 788 for element in obj: --> 789 save(element) 790 791 if id(obj) in memo: /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 /opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 931 # we only care about session the first pass thru 932 pickler._session = False --> 933 StockPickler.save_dict(pickler, obj) 934 log.info("# D2") 935 return /opt/conda/lib/python3.7/pickle.py in save_dict(self, obj) 857 858 self.memoize(obj) --> 859 self._batch_setitems(obj.items()) 860 861 dispatch[dict] = save_dict /opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items) 883 for k, v in tmp: 884 save(k) --> 885 save(v) 886 write(SETITEMS) 887 elif n: /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 547 548 # Save the reduce() output and finally memoize the object --> 549 self.save_reduce(obj=obj, *rv) 550 551 def persistent_id(self, obj): /opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj) 660 661 if state is not None: --> 662 save(state) 663 write(BUILD) 664 /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 
505 return 506 /opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 931 # we only care about session the first pass thru 932 pickler._session = False --> 933 StockPickler.save_dict(pickler, obj) 934 log.info("# D2") 935 return /opt/conda/lib/python3.7/pickle.py in save_dict(self, obj) 857 858 self.memoize(obj) --> 859 self._batch_setitems(obj.items()) 860 861 dispatch[dict] = save_dict /opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items) 883 for k, v in tmp: 884 save(k) --> 885 save(v) 886 write(SETITEMS) 887 elif n: /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 547 548 # Save the reduce() output and finally memoize the object --> 549 self.save_reduce(obj=obj, *rv) 550 551 def persistent_id(self, obj): /opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj) 660 661 if state is not None: --> 662 save(state) 663 write(BUILD) 664 /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 /opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 931 # we only care about session the first pass thru 932 pickler._session = False --> 933 StockPickler.save_dict(pickler, obj) 934 log.info("# D2") 935 return /opt/conda/lib/python3.7/pickle.py in save_dict(self, obj) 857 858 self.memoize(obj) --> 859 self._batch_setitems(obj.items()) 860 861 dispatch[dict] = save_dict /opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items) 883 for k, v in tmp: 884 save(k) --> 885 save(v) 886 write(SETITEMS) 887 elif n: /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 522 reduce = getattr(obj, "__reduce_ex__", None) 523 if reduce is not None: --> 524 rv = reduce(self.proto) 525 else: 526 reduce = getattr(obj, "__reduce__", None) TypeError: can't pickle Tokenizer objects ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/665/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/665/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/664
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/664/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/664/comments
https://api.github.com/repos/huggingface/datasets/issues/664/events
https://github.com/huggingface/datasets/issues/664
707,017,791
MDU6SXNzdWU3MDcwMTc3OTE=
664
load_dataset from local squad.py raises error: TypeError: 'NoneType' object is not callable
{ "avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4", "events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}", "followers_url": "https://api.github.com/users/xixiaoyao/followers", "following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}", "gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xixiaoyao", "id": 24541791, "login": "xixiaoyao", "node_id": "MDQ6VXNlcjI0NTQxNzkx", "organizations_url": "https://api.github.com/users/xixiaoyao/orgs", "received_events_url": "https://api.github.com/users/xixiaoyao/received_events", "repos_url": "https://api.github.com/users/xixiaoyao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions", "type": "User", "url": "https://api.github.com/users/xixiaoyao" }
[]
closed
false
null
[]
null
[]
2020-09-23T03:53:36Z
2020-10-20T09:06:13Z
2020-10-20T09:06:13Z
NONE
null
null
null
version: 1.0.2 ``` train_dataset = datasets.load_dataset('squad') ``` The above code works. However, when I downloaded squad.py from your server and saved it locally as `my_squad.py`, running the following raises an error. ``` train_dataset = datasets.load_dataset('./my_squad.py') ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-28-25a84b4d1581> in <module> ----> 1 train_dataset = nlp.load_dataset('./my_squad.py') /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 602 hash=hash, 603 features=features, --> 604 **config_kwargs, 605 ) 606 TypeError: 'NoneType' object is not callable
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/664/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/664/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/663/comments
https://api.github.com/repos/huggingface/datasets/issues/663/events
https://github.com/huggingface/datasets/pull/663
706,732,636
MDExOlB1bGxSZXF1ZXN0NDkxMjI3NzUz
663
Created dataset card snli.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mcmillanmajora", "id": 26722925, "login": "mcmillanmajora", "node_id": "MDQ6VXNlcjI2NzIyOTI1", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "type": "User", "url": "https://api.github.com/users/mcmillanmajora" }
[ { "color": "72f99f", "default": false, "description": "Discussions on the datasets", "id": 2067401494, "name": "Dataset discussion", "node_id": "MDU6TGFiZWwyMDY3NDAxNDk0", "url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" } ]
null
[]
2020-09-22T22:29:37Z
2020-10-13T17:05:20Z
2020-10-12T20:26:52Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/663.diff", "html_url": "https://github.com/huggingface/datasets/pull/663", "merged_at": "2020-10-12T20:26:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/663.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/663" }
First draft of a dataset card using the SNLI corpus as an example. This is mostly based on the [Google Doc draft](https://docs.google.com/document/d/1dKPGP-dA2W0QoTRGfqQ5eBp0CeSsTy7g2yM8RseHtos/edit), but I added a few sections and moved some things around. - I moved **Who Was Involved** to follow **Language**, both because I thought the authors should be presented more towards the front and because I think it makes sense to present the speakers close to the language so it doesn't have to be repeated. - I created a section I called **Data Characteristics** by pulling some things out of the other sections. I was thinking that this would be more about the language use in context of the specific task construction. That name isn't very descriptive though and could probably be improved. -- Domain and language type out of **Language**. I particularly wanted to keep the Language section as simple and as abstracted from the task as possible. -- 'How was the data collected' out of **Who Was Involved** -- Normalization out of **Features/Dataset Structure** -- I also added an annotation process section. - I kept the **Features** section mostly the same as the Google Doc, but I renamed it **Dataset Structure** to more clearly separate it from the language use, and added some links to the documentation pages. - I also kept **Tasks Supported**, **Known Limitations**, and **Licensing Information** mostly the same. Looking at it again though, maybe **Tasks Supported** should come before **Data Characteristics**? The trickiest part about writing a dataset card for the SNLI corpus specifically is that it's built on datasets which are themselves built on datasets so I had to dig in a lot of places to find information. I think this will be easier with other datasets and once there is more uptake of dataset cards so they can just link to each other. (Maybe that needs to be an added section?) I also made an effort not to repeat information across the sections or to refer to a previous section if the information was relevant in a later one. Is there too much repetition still?
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/663/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/663/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/662/comments
https://api.github.com/repos/huggingface/datasets/issues/662/events
https://github.com/huggingface/datasets/pull/662
706,689,866
MDExOlB1bGxSZXF1ZXN0NDkxMTkyNTM3
662
Created dataset card snli.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mcmillanmajora", "id": 26722925, "login": "mcmillanmajora", "node_id": "MDQ6VXNlcjI2NzIyOTI1", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "type": "User", "url": "https://api.github.com/users/mcmillanmajora" }
[ { "color": "72f99f", "default": false, "description": "Discussions on the datasets", "id": 2067401494, "name": "Dataset discussion", "node_id": "MDU6TGFiZWwyMDY3NDAxNDk0", "url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion" } ]
closed
false
null
[]
null
[]
2020-09-22T21:00:17Z
2020-09-22T21:26:21Z
2020-09-22T21:26:21Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/662.diff", "html_url": "https://github.com/huggingface/datasets/pull/662", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/662.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/662" }
First draft of a dataset card using the SNLI corpus as an example
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/662/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/662/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/661
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/661/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/661/comments
https://api.github.com/repos/huggingface/datasets/issues/661/events
https://github.com/huggingface/datasets/pull/661
706,465,936
MDExOlB1bGxSZXF1ZXN0NDkxMDA3NjEw
661
Replace pa.OSFile by open
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-22T15:05:59Z
2021-05-05T18:24:36Z
2020-09-22T15:15:25Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/661.diff", "html_url": "https://github.com/huggingface/datasets/pull/661", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/661.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/661" }
It should fix #643
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/661/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/661/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/660/comments
https://api.github.com/repos/huggingface/datasets/issues/660/events
https://github.com/huggingface/datasets/pull/660
706,324,032
MDExOlB1bGxSZXF1ZXN0NDkwODkyMjQ0
660
add openwebtext
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[]
closed
false
null
[]
null
[]
2020-09-22T12:05:22Z
2020-10-06T09:20:10Z
2020-09-28T09:07:26Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/660.diff", "html_url": "https://github.com/huggingface/datasets/pull/660", "merged_at": "2020-09-28T09:07:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/660.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/660" }
This adds [The OpenWebText Corpus](https://skylion007.github.io/OpenWebTextCorpus/), which is a clean and large text corpus for nlp pretraining. It is an open source effort to reproduce OpenAI's WebText dataset used by GPT-2, and it is also needed to reproduce ELECTRA. It solves #132. ### Besides the dataset building script, I made some changes to the library. 1. Extract a large number of compressed files with multiprocessing. I added a `num_proc` argument to `DownloadManager.extract` and pass this `num_proc` to `map_nested`, so I can decompress 20 thousand compressed files faster. The `num_proc` argument I added defaults to `None`, so it shouldn't break anything else. 2. In `cached_path`, I changed the order in which the different kinds of compressed files (zip, tar, gzip) are handled. Because there is no way to detect with 100% certainty whether a file is a zip file (see [this](https://stackoverflow.com/questions/18194688/how-can-i-determine-if-a-file-is-a-zip-file)), it wrongly detected `'./datasets/downloads/extracted/58764bd6898fa339b25d92e7fbbc3d8dbf64fb504edff1a30a1d7d99d1561027/openwebtext/urlsf_subset13-630_data.xz'` as a zip and tried to decompress it with zip, which of course raised an error. So I made it check whether the file is tar or gzip first, and check for zip last. 3. `MockDownloadManager.extract`. Because I pass `num_proc` to `DownloadManager.extract`, I also had to make `MockDownloadManager.extract` accept extra keyword arguments. So I made it `extract(path, *args, **kwargs)`, but it still just returns the path as in the original implementation. **Note**: If there is a better way to do the points mentioned above, though I would like to help, unless we can solve point 4 (make dataset building fast) I may not be able to afford rebuilding the dataset again after a change to the dataset script (building the dataset cost me 4 days). ### There is something I think we can improve 4. Long time to decompress compressed files. Even decompressing those 20 thousand compressed files with 12 processes on my 16-core 3.x GHz server still took about 3~4 days to complete the dataset building. Most of the time was spent decompressing those files. ### Info about the source data The source data is a tar.xz file with the following structure; the files/directories below the compressed file are what we get after decompressing it. ``` openwebtext.tar.xz |__ openwebtext |__subset000.xz | |__ ....txt | |__ ....txt | ... |__ subset001.xz | .... ``` And this is the structure of the dummy data, same as the original one. ``` dummy_data.zip |__ dummy_data |__ openwebtext |__fake_subset-1_data-dirxz # actually it is a directory | |__ ....txt | |__ ....txt |__ fake_subset-2_data-dirxz |__ ....txt |__ ....txt ```
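A rough sketch of the detection order described in point 2 (the function below is illustrative and not the actual `cached_path` code): tar and gzip are checked first, and zip only as a last resort, because `zipfile.is_zipfile` can report false positives on `.xz` files:

```python
import gzip
import tarfile
import zipfile

def detect_archive_type(path: str) -> str:
    # Check tar and gzip before zip, since zip detection is unreliable.
    if tarfile.is_tarfile(path):
        return "tar"
    try:
        with gzip.open(path, "rb") as f:
            f.read(1)  # raises OSError if this is not a gzip file
        return "gzip"
    except OSError:
        pass
    if zipfile.is_zipfile(path):
        return "zip"
    return "unknown"
```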
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 1, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/660/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/660/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/659/comments
https://api.github.com/repos/huggingface/datasets/issues/659/events
https://github.com/huggingface/datasets/pull/659
706,231,506
MDExOlB1bGxSZXF1ZXN0NDkwODE4NTY1
659
Keep new columns in transmit format
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-22T09:47:23Z
2020-09-22T10:07:22Z
2020-09-22T10:07:20Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/659.diff", "html_url": "https://github.com/huggingface/datasets/pull/659", "merged_at": "2020-09-22T10:07:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/659.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/659" }
When a dataset is formatted with a list of columns that `__getitem__` should return, calling `map` to add new columns doesn't add the new columns to this list. It caused `KeyError` issues in #620. I changed the logic to add those new columns to the list that `__getitem__` should return.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/659/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/659/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/658/comments
https://api.github.com/repos/huggingface/datasets/issues/658/events
https://github.com/huggingface/datasets/pull/658
706,206,247
MDExOlB1bGxSZXF1ZXN0NDkwNzk4MDc0
658
Fix squad metric's Features
{ "avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4", "events_url": "https://api.github.com/users/tshrjn/events{/privacy}", "followers_url": "https://api.github.com/users/tshrjn/followers", "following_url": "https://api.github.com/users/tshrjn/following{/other_user}", "gists_url": "https://api.github.com/users/tshrjn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tshrjn", "id": 8372098, "login": "tshrjn", "node_id": "MDQ6VXNlcjgzNzIwOTg=", "organizations_url": "https://api.github.com/users/tshrjn/orgs", "received_events_url": "https://api.github.com/users/tshrjn/received_events", "repos_url": "https://api.github.com/users/tshrjn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tshrjn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tshrjn/subscriptions", "type": "User", "url": "https://api.github.com/users/tshrjn" }
[]
closed
false
null
[]
null
[]
2020-09-22T09:09:52Z
2020-09-29T15:58:30Z
2020-09-29T15:58:30Z
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/658.diff", "html_url": "https://github.com/huggingface/datasets/pull/658", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/658.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/658" }
Resolves issue [657](https://github.com/huggingface/datasets/issues/657).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/658/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/658/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/657
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/657/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/657/comments
https://api.github.com/repos/huggingface/datasets/issues/657/events
https://github.com/huggingface/datasets/issues/657
706,204,383
MDU6SXNzdWU3MDYyMDQzODM=
657
Squad Metric Description & Feature Mismatch
{ "avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4", "events_url": "https://api.github.com/users/tshrjn/events{/privacy}", "followers_url": "https://api.github.com/users/tshrjn/followers", "following_url": "https://api.github.com/users/tshrjn/following{/other_user}", "gists_url": "https://api.github.com/users/tshrjn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tshrjn", "id": 8372098, "login": "tshrjn", "node_id": "MDQ6VXNlcjgzNzIwOTg=", "organizations_url": "https://api.github.com/users/tshrjn/orgs", "received_events_url": "https://api.github.com/users/tshrjn/received_events", "repos_url": "https://api.github.com/users/tshrjn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tshrjn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tshrjn/subscriptions", "type": "User", "url": "https://api.github.com/users/tshrjn" }
[]
closed
false
null
[]
null
[]
2020-09-22T09:07:00Z
2020-10-13T02:16:56Z
2020-09-29T15:57:38Z
NONE
null
null
null
The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However, the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/657/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/657/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/656
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/656/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/656/comments
https://api.github.com/repos/huggingface/datasets/issues/656/events
https://github.com/huggingface/datasets/pull/656
705,736,319
MDExOlB1bGxSZXF1ZXN0NDkwNDEwODAz
656
Use multiprocess from pathos for multiprocessing
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-21T16:12:19Z
2020-09-28T14:45:40Z
2020-09-28T14:45:39Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/656.diff", "html_url": "https://github.com/huggingface/datasets/pull/656", "merged_at": "2020-09-28T14:45:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/656.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/656" }
[Multiprocess](https://github.com/uqfoundation/multiprocess) (from the [pathos](https://github.com/uqfoundation/pathos) project) allows the use of lambda functions in a multiprocessed `map`. Using it was suggested by @kandorm. We're already using dill, which is its only dependency.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/656/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/656/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/655/comments
https://api.github.com/repos/huggingface/datasets/issues/655/events
https://github.com/huggingface/datasets/pull/655
705,672,208
MDExOlB1bGxSZXF1ZXN0NDkwMzU4OTQ3
655
added Winogrande debiased subset
{ "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao" }
[]
closed
false
null
[]
null
[]
2020-09-21T14:51:08Z
2020-09-21T16:20:40Z
2020-09-21T16:16:04Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/655.diff", "html_url": "https://github.com/huggingface/datasets/pull/655", "merged_at": "2020-09-21T16:16:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/655.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/655" }
The [Winogrande](https://arxiv.org/abs/1907.10641) paper mentions a `debiased` subset that wasn't in the first release; this PR adds it.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/655/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/655/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/654/comments
https://api.github.com/repos/huggingface/datasets/issues/654/events
https://github.com/huggingface/datasets/pull/654
705,511,058
MDExOlB1bGxSZXF1ZXN0NDkwMjI1Nzk3
654
Allow empty inputs in metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-21T11:26:36Z
2020-10-06T03:51:48Z
2020-09-21T16:13:38Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/654.diff", "html_url": "https://github.com/huggingface/datasets/pull/654", "merged_at": "2020-09-21T16:13:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/654.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/654" }
There was an Arrow error when trying to compute a metric with empty inputs. The error occurred when reading the arrow file, before calling `metric._compute`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/654/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/654/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/653/comments
https://api.github.com/repos/huggingface/datasets/issues/653/events
https://github.com/huggingface/datasets/pull/653
705,482,391
MDExOlB1bGxSZXF1ZXN0NDkwMjAxOTg4
653
handle data alteration when trying type
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-21T10:41:49Z
2020-09-21T16:13:06Z
2020-09-21T16:13:05Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/653.diff", "html_url": "https://github.com/huggingface/datasets/pull/653", "merged_at": "2020-09-21T16:13:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/653.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/653" }
Fix #649 The bug came from the type inference, which didn't handle a weird case in PyArrow. Indeed, this code runs without error but alters the data in arrow: ```python import pyarrow as pa type = pa.struct({"a": pa.struct({"b": pa.string()})}) array_with_altered_data = pa.array([{"a": {"b": "foo", "c": "bar"}}] * 10, type=type) print(array_with_altered_data[0].as_py()) # {'a': {'b': 'foo'}} -> the sub-field "c" is missing ``` (I don't know if this is intended in pyarrow, to be honest.) We didn't take this case into account during type inference: by default it kept the old features and could alter the data. To fix that, I added a line that checks that the first element of the array is not altered.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/653/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/653/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/652
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/652/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/652/comments
https://api.github.com/repos/huggingface/datasets/issues/652/events
https://github.com/huggingface/datasets/pull/652
705,390,850
MDExOlB1bGxSZXF1ZXN0NDkwMTI3MjIx
652
handle connection error in download_prepared_from_hf_gcs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-21T08:21:11Z
2020-09-21T08:28:43Z
2020-09-21T08:28:42Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/652.diff", "html_url": "https://github.com/huggingface/datasets/pull/652", "merged_at": "2020-09-21T08:28:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/652.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/652" }
Fix #647
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/652/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/652/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/651
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/651/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/651/comments
https://api.github.com/repos/huggingface/datasets/issues/651/events
https://github.com/huggingface/datasets/issues/651
705,212,034
MDU6SXNzdWU3MDUyMTIwMzQ=
651
Problem with JSON dataset format
{ "avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4", "events_url": "https://api.github.com/users/vikigenius/events{/privacy}", "followers_url": "https://api.github.com/users/vikigenius/followers", "following_url": "https://api.github.com/users/vikigenius/following{/other_user}", "gists_url": "https://api.github.com/users/vikigenius/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vikigenius", "id": 12724810, "login": "vikigenius", "node_id": "MDQ6VXNlcjEyNzI0ODEw", "organizations_url": "https://api.github.com/users/vikigenius/orgs", "received_events_url": "https://api.github.com/users/vikigenius/received_events", "repos_url": "https://api.github.com/users/vikigenius/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vikigenius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vikigenius/subscriptions", "type": "User", "url": "https://api.github.com/users/vikigenius" }
[]
open
false
null
[]
null
[]
2020-09-20T23:57:14Z
2020-09-21T12:14:24Z
null
NONE
null
null
null
I have a local JSON dataset with the following form. { 'id01234': {'key1': value1, 'key2': value2, 'key3': value3}, 'id01235': {'key1': value1, 'key2': value2, 'key3': value3}, . . . 'id09999': {'key1': value1, 'key2': value2, 'key3': value3} } Note that instead of a list of records it's basically a dictionary of key-value pairs, with the keys being the record IDs and the values being the corresponding records. Reading this with json: ``` data = datasets.load('json', data_files='path_to_local.json') ``` throws an error and asks me to choose a field. What's the right way to handle this?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/651/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/651/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/650/comments
https://api.github.com/repos/huggingface/datasets/issues/650/events
https://github.com/huggingface/datasets/issues/650
704,861,844
MDU6SXNzdWU3MDQ4NjE4NDQ=
650
dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators`
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[]
closed
false
null
[]
null
[]
2020-09-19T11:07:03Z
2020-09-22T11:54:10Z
2020-09-22T11:54:09Z
CONTRIBUTOR
null
null
null
Hi, I recently wanted to add a dataset whose source data looks like this ``` openwebtext.tar.xz |__ openwebtext |__subset000.xz | |__ ....txt | |__ ....txt | ... |__ subset001.xz | .... ``` So I wrote `openwebtext.py` like this ``` def _split_generators(self, dl_manager): dl_dir = dl_manager.download_and_extract(_URL) owt_dir = os.path.join(dl_dir, 'openwebtext') subset_xzs = [ os.path.join(owt_dir, file_name) for file_name in os.listdir(owt_dir) if file_name.endswith('xz') # filter out ...xz.lock ] ex_dirs = dl_manager.extract(subset_xzs, num_proc=round(os.cpu_count()*0.75)) nested_txt_files = [ [ os.path.join(ex_dir,txt_file_name) for txt_file_name in os.listdir(ex_dir) if txt_file_name.endswith('txt') ] for ex_dir in ex_dirs ] txt_files = chain(*nested_txt_files) return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={"txt_files": txt_files} ), ] ``` All went well: I can load and use the real openwebtext, except when I try to test with dummy data. The problem is that `MockDownloadManager.extract` does nothing, so `ex_dirs = dl_manager.extract(subset_xzs)` won't decompress the `subset_xxx.xz` files for me. What should I do? Or could you modify `MockDownloadManager` to make it behave like a real `DownloadManager`?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/650/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/650/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/649/comments
https://api.github.com/repos/huggingface/datasets/issues/649/events
https://github.com/huggingface/datasets/issues/649
704,838,415
MDU6SXNzdWU3MDQ4Mzg0MTU=
649
Inconsistent behavior in map
{ "avatar_url": "https://avatars.githubusercontent.com/u/10166085?v=4", "events_url": "https://api.github.com/users/krandiash/events{/privacy}", "followers_url": "https://api.github.com/users/krandiash/followers", "following_url": "https://api.github.com/users/krandiash/following{/other_user}", "gists_url": "https://api.github.com/users/krandiash/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/krandiash", "id": 10166085, "login": "krandiash", "node_id": "MDQ6VXNlcjEwMTY2MDg1", "organizations_url": "https://api.github.com/users/krandiash/orgs", "received_events_url": "https://api.github.com/users/krandiash/received_events", "repos_url": "https://api.github.com/users/krandiash/repos", "site_admin": false, "starred_url": "https://api.github.com/users/krandiash/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/krandiash/subscriptions", "type": "User", "url": "https://api.github.com/users/krandiash" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
2020-09-19T08:41:12Z
2020-09-21T16:13:05Z
2020-09-21T16:13:05Z
NONE
null
null
null
I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem. ```python import datasets # Dataset with a single feature called 'field' consisting of two examples dataset = datasets.Dataset.from_dict({'field': ['a', 'b']}) print(dataset[0]) # outputs {'field': 'a'} # Map this dataset to create another feature called 'otherfield', which is a dictionary containing a key called 'capital' dataset = dataset.map(lambda example: {'otherfield': {'capital': example['field'].capitalize()}}) print(dataset[0]) # output is okay {'field': 'a', 'otherfield': {'capital': 'A'}} # Now I want to map again to modify 'otherfield', by adding another key called 'append_x' to the dictionary under 'otherfield' print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x'}})[0]) # printing out the first example after applying the map shows that the new key 'append_x' doesn't get added # it also messes up the value stored at 'capital' {'field': 'a', 'otherfield': {'capital': None}} # Instead, I try to do the same thing by using a different mapped fn print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}})[0]) # this preserves the value under capital, but still no 'append_x' {'field': 'a', 'otherfield': {'capital': 'A'}} # Instead, I try to pass 'otherfield' to remove_columns print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}}, remove_columns=['otherfield'])[0]) # this still doesn't fix the problem {'field': 'a', 'otherfield': {'capital': 'A'}} # Alternately, here's what happens if I just directly map both 'capital' and 'append_x' on a fresh dataset. # Recreate the dataset dataset = datasets.Dataset.from_dict({'field': ['a', 'b']}) # Now map the entire 'otherfield' dict directly, instead of incrementally as before print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['field'].capitalize()}})[0]) # This looks good! {'field': 'a', 'otherfield': {'append_x': 'ax', 'capital': 'A'}} ``` This might be a new issue, because I didn't see this behavior in the `nlp` library. Any help is appreciated!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/649/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/649/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/648/comments
https://api.github.com/repos/huggingface/datasets/issues/648/events
https://github.com/huggingface/datasets/issues/648
704,753,123
MDU6SXNzdWU3MDQ3NTMxMjM=
648
offset overflow when multiprocessing batched map on large datasets.
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2020-09-19T02:15:11Z
2020-09-19T16:47:07Z
2020-09-19T16:46:31Z
CONTRIBUTOR
null
null
null
It only happens when "multiprocessing" + "batched" + "large dataset" are combined. ``` def bprocess(examples): examples['len'] = [] for text in examples['text']: examples['len'].append(len(text)) return examples wiki.map(bprocess, batched=True, num_proc=8) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 153, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/yisiang/datasets/src/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1486, in _map_single batch = self[i : i + batch_size] File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1071, in __getitem__ format_kwargs=self._format_kwargs, File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 972, in _getitem data_subset = self._data.take(indices_array) File "pyarrow/table.pxi", line 1145, in pyarrow.lib.Table.take File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/pyarrow/compute.py", line 268, in take return call_function('take', [data, indices], options) File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays """ The above exception was the direct cause of the following exception: ArrowInvalid Traceback (most recent call last) in 30 owt = datasets.load_dataset('/home/yisiang/datasets/datasets/openwebtext/openwebtext.py', cache_dir='./datasets')['train'] 31 print('load/create data from OpenWebText Corpus for ELECTRA') ---> 32 e_owt = ELECTRAProcessor(owt, apply_cleaning=False).map(cache_file_name=f"electra_owt_{c.max_length}.arrow") 33 dsets.append(e_owt) 34 ~/Reexamine_Attention/electra_pytorch/_utils/utils.py in map(self, **kwargs) 126 writer_batch_size=10**4, 127 num_proc=num_proc, --> 128 **kwargs 129 ) 130 ~/hugdatafast/hugdatafast/transform.py in my_map(self, *args, **kwargs) 21 if not cache_file_name.endswith('.arrow'): cache_file_name += '.arrow' 22 if '/' not in cache_file_name: cache_file_name = os.path.join(self.cache_directory(), cache_file_name) ---> 23 return self.map(*args, cache_file_name=cache_file_name, **kwargs) 24 25 @patch ~/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1285 logger.info("Spawning {} processes".format(num_proc)) 1286 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard] -> 1287 transformed_shards = [r.get() for r in results] 1288 logger.info("Concatenating {} shards from multiprocessing".format(num_proc)) 1289 result = concatenate_datasets(transformed_shards) ~/datasets/src/datasets/arrow_dataset.py in (.0) 1285 logger.info("Spawning {} processes".format(num_proc)) 1286 results = 
[pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard] -> 1287 transformed_shards = [r.get() for r in results] 1288 logger.info("Concatenating {} shards from multiprocessing".format(num_proc)) 1289 result = concatenate_datasets(transformed_shards) ~/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py in get(self, timeout) 655 return self._value 656 else: --> 657 raise self._value 658 659 def _set(self, i, obj): ArrowInvalid: offset overflow while concatenating arrays ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/648/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/648/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/647/comments
https://api.github.com/repos/huggingface/datasets/issues/647/events
https://github.com/huggingface/datasets/issues/647
704,734,764
MDU6SXNzdWU3MDQ3MzQ3NjQ=
647
Cannot download dataset_info.json
{ "avatar_url": "https://avatars.githubusercontent.com/u/33407613?v=4", "events_url": "https://api.github.com/users/chiyuzhang94/events{/privacy}", "followers_url": "https://api.github.com/users/chiyuzhang94/followers", "following_url": "https://api.github.com/users/chiyuzhang94/following{/other_user}", "gists_url": "https://api.github.com/users/chiyuzhang94/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/chiyuzhang94", "id": 33407613, "login": "chiyuzhang94", "node_id": "MDQ6VXNlcjMzNDA3NjEz", "organizations_url": "https://api.github.com/users/chiyuzhang94/orgs", "received_events_url": "https://api.github.com/users/chiyuzhang94/received_events", "repos_url": "https://api.github.com/users/chiyuzhang94/repos", "site_admin": false, "starred_url": "https://api.github.com/users/chiyuzhang94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chiyuzhang94/subscriptions", "type": "User", "url": "https://api.github.com/users/chiyuzhang94" }
[]
closed
false
null
[]
null
[]
2020-09-19T01:35:15Z
2020-09-21T08:28:42Z
2020-09-21T08:28:42Z
NONE
null
null
null
I am running my job on a cloud server that does not provide connections from the standard compute nodes to outside resources. Hence, when I use `datasets.load_dataset()` to load data, I get an error like this: ``` ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text/default-53ee3045f07ba8ca/0.0.0/dataset_info.json ``` I tried to open this link manually, but I cannot access this file. How can I download this file and pass it to `datasets.load_dataset()` manually? Versions: Python version 3.7.3 PyTorch version 1.6.0 TensorFlow version 2.3.0 datasets version: 1.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/647/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/647/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/646/comments
https://api.github.com/repos/huggingface/datasets/issues/646/events
https://github.com/huggingface/datasets/pull/646
704,607,371
MDExOlB1bGxSZXF1ZXN0NDg5NTAyMTM3
646
Fix docs typos
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
2020-09-18T19:32:27Z
2020-09-21T16:30:54Z
2020-09-21T16:14:12Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/646.diff", "html_url": "https://github.com/huggingface/datasets/pull/646", "merged_at": "2020-09-21T16:14:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/646.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/646" }
This PR fixes a few typos in the docs and the error in the code snippet in the set_format section of docs/source/torch_tensorflow.rst. `torch.utils.data.DataLoader` expects padded batches, so it throws an error because it cannot stack the unpadded tensors. If we follow the Quick Tour from the docs, where the `truncation=True, padding='max_length'` arguments are added to the tokenizer before passing data to the DataLoader, we can easily fix the issue.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/646/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/646/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/645/comments
https://api.github.com/repos/huggingface/datasets/issues/645/events
https://github.com/huggingface/datasets/pull/645
704,542,234
MDExOlB1bGxSZXF1ZXN0NDg5NDQ5MjAx
645
Don't use take on dataset table in pyarrow 1.0.x
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-18T17:31:34Z
2020-09-19T16:46:32Z
2020-09-19T16:46:31Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/645.diff", "html_url": "https://github.com/huggingface/datasets/pull/645", "merged_at": "2020-09-19T16:46:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/645.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/645" }
Fix #615
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/645/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/645/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/644/comments
https://api.github.com/repos/huggingface/datasets/issues/644/events
https://github.com/huggingface/datasets/pull/644
704,534,501
MDExOlB1bGxSZXF1ZXN0NDg5NDQzMTk1
644
Better windows support
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-18T17:17:36Z
2020-09-25T14:02:30Z
2020-09-25T14:02:28Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/644.diff", "html_url": "https://github.com/huggingface/datasets/pull/644", "merged_at": "2020-09-25T14:02:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/644.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/644" }
There are a few differences in the behavior of Python and pyarrow on Windows. For example, there are restrictions when accessing/deleting files that are open. Fix #590
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/644/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/644/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/643/comments
https://api.github.com/repos/huggingface/datasets/issues/643/events
https://github.com/huggingface/datasets/issues/643
704,477,164
MDU6SXNzdWU3MDQ0NzcxNjQ=
643
Caching processed dataset at wrong folder
{ "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mrm8488", "id": 3653789, "login": "mrm8488", "node_id": "MDQ6VXNlcjM2NTM3ODk=", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "repos_url": "https://api.github.com/users/mrm8488/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "type": "User", "url": "https://api.github.com/users/mrm8488" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2020-09-18T15:41:26Z
2022-02-16T14:53:29Z
2022-02-16T14:53:29Z
CONTRIBUTOR
null
null
null
Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncation=True, padding='max_length') dataset = dataset.map(encode, batched=True) ``` The file is about 4 GB, so I cannot process it on the Colab HD because there is not enough space. So I decided to mount my Google Drive filesystem and do it there. The dataset is cached in the right place, but processing it (applying the `encode` function) seems to use a different folder, because the Colab HD starts to grow and it crashes, when everything should happen on the Drive filesystem. What drives me crazy is that it prints that it is processing/encoding the dataset in the right folder: ``` Testing the mapped function outputs Testing finished, running the mapping function on the dataset Caching processed dataset at /content/drive/My Drive/text/default-ad3e69d6242ee916/0.0.0/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/cache-b16341780a59747d.arrow ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/643/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/643/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/642/comments
https://api.github.com/repos/huggingface/datasets/issues/642/events
https://github.com/huggingface/datasets/pull/642
704,397,499
MDExOlB1bGxSZXF1ZXN0NDg5MzMwMDAx
642
Rename wnut fields
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-18T13:51:31Z
2020-09-18T17:18:31Z
2020-09-18T17:18:30Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/642.diff", "html_url": "https://github.com/huggingface/datasets/pull/642", "merged_at": "2020-09-18T17:18:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/642.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/642" }
As mentioned in #641, it would be cool to have it follow the naming of the other NER datasets.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/642/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/642/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/641/comments
https://api.github.com/repos/huggingface/datasets/issues/641/events
https://github.com/huggingface/datasets/pull/641
704,373,940
MDExOlB1bGxSZXF1ZXN0NDg5MzExOTU3
641
Add Polyglot-NER Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joeddav", "id": 9353833, "login": "joeddav", "node_id": "MDQ6VXNlcjkzNTM4MzM=", "organizations_url": "https://api.github.com/users/joeddav/orgs", "received_events_url": "https://api.github.com/users/joeddav/received_events", "repos_url": "https://api.github.com/users/joeddav/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "type": "User", "url": "https://api.github.com/users/joeddav" }
[]
closed
false
null
[]
null
[]
2020-09-18T13:21:44Z
2020-09-20T03:04:43Z
2020-09-20T03:04:43Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/641.diff", "html_url": "https://github.com/huggingface/datasets/pull/641", "merged_at": "2020-09-20T03:04:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/641.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/641" }
Adds the [Polyglot-NER dataset](https://sites.google.com/site/rmyeid/projects/polylgot-ner) with named entity tags for 40 languages. I include separate configs for each language as well as a `combined` config which lumps them all together.
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 2, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/641/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/641/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/640
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/640/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/640/comments
https://api.github.com/repos/huggingface/datasets/issues/640/events
https://github.com/huggingface/datasets/pull/640
704,311,758
MDExOlB1bGxSZXF1ZXN0NDg5MjYwNTc1
640
Make shuffle compatible with temp_seed
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-18T11:38:58Z
2020-09-18T11:47:51Z
2020-09-18T11:47:50Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/640.diff", "html_url": "https://github.com/huggingface/datasets/pull/640", "merged_at": "2020-09-18T11:47:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/640.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/640" }
This code used to return a different dataset at each run: ```python import datasets as ds dataset = ... with ds.temp_seed(42): shuffled = dataset.shuffle() ``` Now it returns the same one, since the seed is set.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/640/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/640/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/639/comments
https://api.github.com/repos/huggingface/datasets/issues/639/events
https://github.com/huggingface/datasets/pull/639
704,217,963
MDExOlB1bGxSZXF1ZXN0NDg5MTgxOTY3
639
Update glue QQP checksum
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-18T09:08:15Z
2020-09-18T11:37:08Z
2020-09-18T11:37:07Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/639.diff", "html_url": "https://github.com/huggingface/datasets/pull/639", "merged_at": "2020-09-18T11:37:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/639.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/639" }
Fix #638
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/639/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/639/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/638/comments
https://api.github.com/repos/huggingface/datasets/issues/638/events
https://github.com/huggingface/datasets/issues/638
704,146,956
MDU6SXNzdWU3MDQxNDY5NTY=
638
GLUE/QQP dataset: NonMatchingChecksumError
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[]
closed
false
null
[]
null
[]
2020-09-18T07:09:10Z
2020-09-18T11:37:07Z
2020-09-18T11:37:07Z
CONTRIBUTOR
null
null
null
Hi @lhoestq, I know you are busy and there are also other important issues. But if this is easy to fix, I am shamelessly wondering if you can give me some help, so I can evaluate my models and resume my development cycle asap. 😚 datasets version: editable install of master at 9/17 `datasets.load_dataset('glue','qqp', cache_dir='./datasets')` ``` Downloading and preparing dataset glue/qqp (download: 57.73 MiB, generated: 107.02 MiB, post-processed: Unknown size, total: 164.75 MiB) to ./datasets/glue/qqp/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4... --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) in ----> 1 datasets.load_dataset('glue','qqp', cache_dir='./datasets') ~/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 ~/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 467 if not downloaded_from_gcs: 468 self._download_and_prepare( --> 469 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 470 ) 471 # Sync info ~/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 527 if verify_infos: 528 verify_checksums( --> 529 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" 530 ) 531 ~/datasets/src/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 37 if len(bad_urls) > 0: 38 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 40 logger.info("All the checksums matched successfully" + for_verification_name) 41 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://dl.fbaipublicfiles.com/glue/data/QQP-clean.zip'] ```
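A workaround sketch that is not part of the report above, using the `ignore_verifications` flag visible in the `load_dataset` signature in the traceback; it only skips the failing verification rather than fixing the stale checksum metadata:

```python
from datasets import load_dataset

# Assumes the QQP download itself is fine and only the recorded checksum is out of date;
# skipping verification bypasses the NonMatchingChecksumError shown above.
dataset = load_dataset("glue", "qqp", cache_dir="./datasets", ignore_verifications=True)
print(dataset)
```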
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/638/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/638/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/637
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/637/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/637/comments
https://api.github.com/repos/huggingface/datasets/issues/637/events
https://github.com/huggingface/datasets/pull/637
703,539,909
MDExOlB1bGxSZXF1ZXN0NDg4NjMwNzk4
637
Add MATINF
{ "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JetRunner", "id": 22514219, "login": "JetRunner", "node_id": "MDQ6VXNlcjIyNTE0MjE5", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "repos_url": "https://api.github.com/users/JetRunner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "type": "User", "url": "https://api.github.com/users/JetRunner" }
[]
closed
false
null
[]
null
[]
2020-09-17T12:24:53Z
2020-09-17T13:23:18Z
2020-09-17T13:23:17Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/637.diff", "html_url": "https://github.com/huggingface/datasets/pull/637", "merged_at": "2020-09-17T13:23:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/637.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/637" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/637/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/637/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/636
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/636/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/636/comments
https://api.github.com/repos/huggingface/datasets/issues/636/events
https://github.com/huggingface/datasets/pull/636
702,883,989
MDExOlB1bGxSZXF1ZXN0NDg4MDg3OTA5
636
Consistent ner features
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-16T15:56:25Z
2020-09-17T09:52:59Z
2020-09-17T09:52:58Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/636.diff", "html_url": "https://github.com/huggingface/datasets/pull/636", "merged_at": "2020-09-17T09:52:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/636.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/636" }
As discussed in #613, this PR aims at making NER feature names consistent across datasets. I changed the feature names of LinCE and XTREME/PAN-X.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/636/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/636/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/635
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/635/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/635/comments
https://api.github.com/repos/huggingface/datasets/issues/635/events
https://github.com/huggingface/datasets/pull/635
702,822,439
MDExOlB1bGxSZXF1ZXN0NDg4MDM2OTE5
635
Loglevel
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-16T14:37:53Z
2020-09-17T09:52:19Z
2020-09-17T09:52:18Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/635.diff", "html_url": "https://github.com/huggingface/datasets/pull/635", "merged_at": "2020-09-17T09:52:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/635.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/635" }
Continuation of #618
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/635/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/635/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/634/comments
https://api.github.com/repos/huggingface/datasets/issues/634/events
https://github.com/huggingface/datasets/pull/634
702,676,041
MDExOlB1bGxSZXF1ZXN0NDg3OTEzOTk4
634
Add ConLL-2000 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "events_url": "https://api.github.com/users/vblagoje/events{/privacy}", "followers_url": "https://api.github.com/users/vblagoje/followers", "following_url": "https://api.github.com/users/vblagoje/following{/other_user}", "gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vblagoje", "id": 458335, "login": "vblagoje", "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "organizations_url": "https://api.github.com/users/vblagoje/orgs", "received_events_url": "https://api.github.com/users/vblagoje/received_events", "repos_url": "https://api.github.com/users/vblagoje/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions", "type": "User", "url": "https://api.github.com/users/vblagoje" }
[]
closed
false
null
[]
null
[]
2020-09-16T11:14:11Z
2020-09-17T10:38:10Z
2020-09-17T10:38:10Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/634.diff", "html_url": "https://github.com/huggingface/datasets/pull/634", "merged_at": "2020-09-17T10:38:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/634.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/634" }
Adds the ConLL-2000 dataset used for text chunking. See https://www.clips.uantwerpen.be/conll2000/chunking/ for details and the [motivation](https://github.com/huggingface/transformers/pull/7041#issuecomment-692710948) behind this PR.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/634/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/634/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/633/comments
https://api.github.com/repos/huggingface/datasets/issues/633/events
https://github.com/huggingface/datasets/issues/633
702,440,484
MDU6SXNzdWU3MDI0NDA0ODQ=
633
Load large text file for LM pre-training resulting in OOM
{ "avatar_url": "https://avatars.githubusercontent.com/u/29704017?v=4", "events_url": "https://api.github.com/users/leethu2012/events{/privacy}", "followers_url": "https://api.github.com/users/leethu2012/followers", "following_url": "https://api.github.com/users/leethu2012/following{/other_user}", "gists_url": "https://api.github.com/users/leethu2012/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leethu2012", "id": 29704017, "login": "leethu2012", "node_id": "MDQ6VXNlcjI5NzA0MDE3", "organizations_url": "https://api.github.com/users/leethu2012/orgs", "received_events_url": "https://api.github.com/users/leethu2012/received_events", "repos_url": "https://api.github.com/users/leethu2012/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leethu2012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leethu2012/subscriptions", "type": "User", "url": "https://api.github.com/users/leethu2012" }
[]
open
false
null
[]
null
[]
2020-09-16T04:33:15Z
2021-02-16T12:02:01Z
null
NONE
null
null
null
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues when loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator used for language modeling based on DataCollatorForLazyLanguageModeling - collates batches of tensors, honoring their tokenizer's pad_token - preprocesses batches for masked language modeling """ block_size: int = 512 def __call__(self, examples: List[dict]) -> Dict[str, torch.Tensor]: examples = [example['text'] for example in examples] batch, attention_mask = self._tensorize_batch(examples) if self.mlm: inputs, labels = self.mask_tokens(batch) return {"input_ids": inputs, "labels": labels} else: labels = batch.clone().detach() if self.tokenizer.pad_token_id is not None: labels[labels == self.tokenizer.pad_token_id] = -100 return {"input_ids": batch, "labels": labels} def _tensorize_batch(self, examples: List[str]) -> Tuple[torch.Tensor, torch.Tensor]: if self.tokenizer._pad_token is None: raise ValueError( "You are attempting to pad samples but the tokenizer you are using" f" ({self.tokenizer.__class__.__name__}) does not have one." ) tensor_examples = self.tokenizer.batch_encode_plus( [ex for ex in examples if ex], max_length=self.block_size, return_tensors="pt", pad_to_max_length=True, return_attention_mask=True, truncation=True, ) input_ids, attention_mask = tensor_examples["input_ids"], tensor_examples["attention_mask"] return input_ids, attention_mask dataset = load_dataset('text', data_files='train.txt', cache_dir="./", split='train') data_collator = DataCollatorForDatasetsLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15, block_size=tokenizer.max_len) trainer = Trainer(model=model, args=args, data_collator=data_collator, train_dataset=dataset, prediction_loss_only=True, ) trainer.train(model_path=model_path) ``` This train.txt is about 1.1GB and has 90k lines where each line is a sequence of 4k words. During training, the memory usage increased rapidly, as shown in the following graph, and resulted in OOM before training finished. ![image](https://user-images.githubusercontent.com/29704017/93292112-5576b280-f817-11ea-8da2-b2db9bf35665.png) Could you please give me any suggestions on why this happened and how to fix it? Thanks.
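A mitigation sketch that is not part of the report above: tokenize the memory-mapped dataset once with `Dataset.map` and keep only Arrow-backed columns, instead of collating raw text strings at training time. The `tokenizer` and `train.txt` are the ones assumed in the snippet above.

```python
from datasets import load_dataset

dataset = load_dataset("text", data_files="train.txt", split="train")

def tokenize(batch):
    # Tokenize in batches; the result is written to an on-disk Arrow cache
    # rather than being held as Python strings in RAM.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)

dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])
dataset.set_format(type="torch", columns=["input_ids", "attention_mask"])
```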
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/633/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/633/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/632/comments
https://api.github.com/repos/huggingface/datasets/issues/632/events
https://github.com/huggingface/datasets/pull/632
702,358,124
MDExOlB1bGxSZXF1ZXN0NDg3NjQ5OTQ2
632
Fix typos in the loading datasets docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
2020-09-16T00:27:41Z
2020-09-21T16:31:11Z
2020-09-16T06:52:44Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/632.diff", "html_url": "https://github.com/huggingface/datasets/pull/632", "merged_at": "2020-09-16T06:52:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/632.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/632" }
This PR fixes two typos in the loading datasets docs, one of them being a broken link to the `load_dataset` function.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/632/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/632/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/631/comments
https://api.github.com/repos/huggingface/datasets/issues/631/events
https://github.com/huggingface/datasets/pull/631
701,711,255
MDExOlB1bGxSZXF1ZXN0NDg3MTE3OTA0
631
Fix text delimiter
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-15T08:08:42Z
2020-09-22T15:03:06Z
2020-09-15T08:26:25Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/631.diff", "html_url": "https://github.com/huggingface/datasets/pull/631", "merged_at": "2020-09-15T08:26:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/631.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/631" }
I changed the delimiter in the `text` dataset script. It should fix the `pyarrow.lib.ArrowInvalid: CSV parse error` from #622. I changed the delimiter to an unused ASCII character that is not present in text files: `\b`.
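A minimal illustration of the idea, not taken from the PR itself: with the default `,` delimiter any comma in a line makes pyarrow see extra columns, while a control character such as `\b` never occurs in normal text, so every line stays a single column. The file name is hypothetical.

```python
import pyarrow.csv as pac

# Each line of the text file becomes one row in a single "text" column.
table = pac.read_csv(
    "data.txt",
    read_options=pac.ReadOptions(column_names=["text"]),
    parse_options=pac.ParseOptions(delimiter="\b"),  # \b is not expected to appear in text files
)
print(table.num_rows)
```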
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/631/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/631/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/630/comments
https://api.github.com/repos/huggingface/datasets/issues/630/events
https://github.com/huggingface/datasets/issues/630
701,636,350
MDU6SXNzdWU3MDE2MzYzNTA=
630
Text dataset not working with large files
{ "avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4", "events_url": "https://api.github.com/users/ksjae/events{/privacy}", "followers_url": "https://api.github.com/users/ksjae/followers", "following_url": "https://api.github.com/users/ksjae/following{/other_user}", "gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ksjae", "id": 17930170, "login": "ksjae", "node_id": "MDQ6VXNlcjE3OTMwMTcw", "organizations_url": "https://api.github.com/users/ksjae/orgs", "received_events_url": "https://api.github.com/users/ksjae/received_events", "repos_url": "https://api.github.com/users/ksjae/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ksjae/subscriptions", "type": "User", "url": "https://api.github.com/users/ksjae" }
[]
closed
false
null
[]
null
[]
2020-09-15T06:02:36Z
2020-09-25T22:21:43Z
2020-09-25T22:21:43Z
NONE
null
null
null
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None File "examples/language-modeling/run_language_modeling.py", line 144, in get_dataset dataset = load_dataset("text", data_files=file_path, split='train+test') File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 469, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 546, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 888, in _prepare_split for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose): File "/home/ksjae/.local/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__ for obj in iterable: File "/home/ksjae/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py", line 104, in _generate_tables convert_options=self.config.convert_options, File "pyarrow/_csv.pyx", line 714, in pyarrow._csv.read_csv File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status ``` **pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** It gives the same message for both 200MB and 10GB .txt files but not for a 700MB file. Can't upload due to size & copyright problems, sorry.
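A hedged illustration of what the error message itself suggests (increasing the block size), not a confirmed fix for this file: when reading through `pyarrow.csv` directly (the same reader the `text` loader uses in the traceback above), a larger block size keeps a single long line from straddling two read blocks.

```python
import pyarrow.csv as pac

# Standalone sketch with a hypothetical file; a real text file would also need a
# delimiter that never occurs in the data, as discussed in #631.
read_options = pac.ReadOptions(column_names=["text"], block_size=1 << 25)  # ~32 MiB blocks
table = pac.read_csv("train.txt", read_options=read_options)
```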
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/630/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/630/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/629/comments
https://api.github.com/repos/huggingface/datasets/issues/629/events
https://github.com/huggingface/datasets/issues/629
701,517,550
MDU6SXNzdWU3MDE1MTc1NTA=
629
straddling object straddles two block boundaries
{ "avatar_url": "https://avatars.githubusercontent.com/u/17970177?v=4", "events_url": "https://api.github.com/users/bharaniabhishek123/events{/privacy}", "followers_url": "https://api.github.com/users/bharaniabhishek123/followers", "following_url": "https://api.github.com/users/bharaniabhishek123/following{/other_user}", "gists_url": "https://api.github.com/users/bharaniabhishek123/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bharaniabhishek123", "id": 17970177, "login": "bharaniabhishek123", "node_id": "MDQ6VXNlcjE3OTcwMTc3", "organizations_url": "https://api.github.com/users/bharaniabhishek123/orgs", "received_events_url": "https://api.github.com/users/bharaniabhishek123/received_events", "repos_url": "https://api.github.com/users/bharaniabhishek123/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bharaniabhishek123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bharaniabhishek123/subscriptions", "type": "User", "url": "https://api.github.com/users/bharaniabhishek123" }
[]
closed
false
null
[]
null
[]
2020-09-15T00:30:46Z
2020-09-15T00:36:17Z
2020-09-15T00:32:17Z
NONE
null
null
null
I am trying to read JSON data (it's an array with lots of dictionaries) and getting a block boundaries issue as below. I tried calling read_json with ReadOptions but no luck. ``` table = json.read_json(fn) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "pyarrow/_json.pyx", line 246, in pyarrow._json.read_json File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ```
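A hedged sketch of the workaround the error message points at (a larger block size for the JSON reader); whether it is enough for this particular file is an assumption.

```python
import pyarrow.json as paj

fn = "data.json"  # hypothetical path to the large JSON array of dictionaries
read_options = paj.ReadOptions(block_size=1 << 26)  # ~64 MiB blocks instead of the default
table = paj.read_json(fn, read_options=read_options)
```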
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/629/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/629/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/628/comments
https://api.github.com/repos/huggingface/datasets/issues/628/events
https://github.com/huggingface/datasets/pull/628
701,496,053
MDExOlB1bGxSZXF1ZXN0NDg2OTQyNzgx
628
Update docs links in the contribution guideline
{ "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/M-Salti", "id": 9285264, "login": "M-Salti", "node_id": "MDQ6VXNlcjkyODUyNjQ=", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "repos_url": "https://api.github.com/users/M-Salti/repos", "site_admin": false, "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "type": "User", "url": "https://api.github.com/users/M-Salti" }
[]
closed
false
null
[]
null
[]
2020-09-14T23:27:19Z
2020-11-02T21:03:23Z
2020-09-15T06:19:35Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/628.diff", "html_url": "https://github.com/huggingface/datasets/pull/628", "merged_at": "2020-09-15T06:19:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/628.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/628" }
Fixed the `add a dataset` and `share a dataset` links in the contribution guideline to refer to the new docs website.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/628/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/628/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/627
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/627/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/627/comments
https://api.github.com/repos/huggingface/datasets/issues/627/events
https://github.com/huggingface/datasets/pull/627
701,411,661
MDExOlB1bGxSZXF1ZXN0NDg2ODcxMTg2
627
fix (#619) MLQA features names
{ "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/M-Salti", "id": 9285264, "login": "M-Salti", "node_id": "MDQ6VXNlcjkyODUyNjQ=", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "repos_url": "https://api.github.com/users/M-Salti/repos", "site_admin": false, "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "type": "User", "url": "https://api.github.com/users/M-Salti" }
[]
closed
false
null
[]
null
[]
2020-09-14T20:41:59Z
2020-11-02T21:04:32Z
2020-09-16T06:54:11Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/627.diff", "html_url": "https://github.com/huggingface/datasets/pull/627", "merged_at": "2020-09-16T06:54:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/627.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/627" }
Fixed the feature names as suggested in #619 in the `_generate_examples` and `_info` methods in the MLQA loading script, and also changed the names in the `dataset_infos.json` file.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/627/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/627/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/626/comments
https://api.github.com/repos/huggingface/datasets/issues/626/events
https://github.com/huggingface/datasets/pull/626
701,352,605
MDExOlB1bGxSZXF1ZXN0NDg2ODIzMTY1
626
Update GLUE URLs (now hosted on FB)
{ "avatar_url": "https://avatars.githubusercontent.com/u/57466294?v=4", "events_url": "https://api.github.com/users/jeswan/events{/privacy}", "followers_url": "https://api.github.com/users/jeswan/followers", "following_url": "https://api.github.com/users/jeswan/following{/other_user}", "gists_url": "https://api.github.com/users/jeswan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jeswan", "id": 57466294, "login": "jeswan", "node_id": "MDQ6VXNlcjU3NDY2Mjk0", "organizations_url": "https://api.github.com/users/jeswan/orgs", "received_events_url": "https://api.github.com/users/jeswan/received_events", "repos_url": "https://api.github.com/users/jeswan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jeswan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeswan/subscriptions", "type": "User", "url": "https://api.github.com/users/jeswan" }
[]
closed
false
null
[]
null
[]
2020-09-14T19:05:39Z
2020-09-16T06:53:18Z
2020-09-16T06:53:18Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/626.diff", "html_url": "https://github.com/huggingface/datasets/pull/626", "merged_at": "2020-09-16T06:53:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/626.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/626" }
NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112. Note: rebased on huggingface/datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/626/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/626/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/625
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/625/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/625/comments
https://api.github.com/repos/huggingface/datasets/issues/625/events
https://github.com/huggingface/datasets/issues/625
701,057,799
MDU6SXNzdWU3MDEwNTc3OTk=
625
dtype of tensors should be preserved
{ "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BramVanroy", "id": 2779410, "login": "BramVanroy", "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "repos_url": "https://api.github.com/users/BramVanroy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "type": "User", "url": "https://api.github.com/users/BramVanroy" }
[]
closed
false
null
[]
null
[]
2020-09-14T12:38:05Z
2021-08-17T08:30:04Z
2021-08-17T08:30:04Z
CONTRIBUTOR
null
null
null
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-have-the-same-dtype-float32/96221)). As a user I did not expect this bug. I have a `map` function that I call on the Dataset that looks like this: ```python def preprocess(sentences: List[str]): token_ids = [[vocab.to_index(t) for t in s.split()] for s in sentences] sembeddings = stransformer.encode(sentences) print(sembeddings.dtype) return {"input_ids": token_ids, "sembedding": sembeddings} ``` Given a list of `sentences` (`List[str]`), it converts those into token_ids on the one hand (list of lists of ints; `List[List[int]]`) and into sentence embeddings on the other (Tensor of dtype `torch.float32`). That means that I actually set the column "sembedding" to a tensor that I as a user expect to be a float32. It appears though that behind the scenes, this tensor is converted into a **list**. I did not find this documented anywhere but I might have missed it. From a user's perspective this is incredibly important though, because it means you cannot do any data_type or tensor casting yourself in a mapping function! Furthermore, this can lead to issues, as was my case. My model expected float32 precision, which I thought `sembedding` was because that is what `stransformer.encode` outputs. But behind the scenes this tensor is first cast to a list, and when we then set its format, as below, this column is cast not to float32 but to double precision float64. ```python dataset.set_format(type="torch", columns=["input_ids", "sembedding"]) ``` This happens because apparently there is an intermediate step of casting to a **numpy** array (?) **whose dtype creation/deduction is different from torch dtypes** (see the snippet below). As you can see, this means that the dtype is not preserved: if I got it right, the dataset goes from torch.float32 -> list -> float64 (numpy) -> torch.float64. ```python import torch import numpy as np l = [-0.03010837361216545, -0.035979013890028, -0.016949838027358055] torch_tensor = torch.tensor(l) np_array = np.array(l) np_to_torch = torch.from_numpy(np_array) print(torch_tensor.dtype) # torch.float32 print(np_array.dtype) # float64 print(np_to_torch.dtype) # torch.float64 ``` This might lead to unwanted behaviour. I understand that the whole library is probably built around casting from numpy to other frameworks, so this might be difficult to solve. Perhaps `set_format` should include a `dtypes` option where for each input column the user can specify the wanted precision. The alternative is that the user needs to cast manually after loading data from the dataset but that does not seem user-friendly, makes the dataset less portable, and might use more space in memory as well as on disk than is actually needed.
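A small workaround sketch, not proposed in the issue itself: until dtypes are preserved end to end, cast the affected column back to the intended precision when examples come out of the formatted dataset (the `dataset` below is the one from the snippet above).

```python
import torch

example = dataset[0]
# "sembedding" arrives as torch.float64 after the numpy round-trip described above;
# cast it back to the precision the model expects.
sembedding = example["sembedding"].to(torch.float32)
```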
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/625/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/625/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/624/comments
https://api.github.com/repos/huggingface/datasets/issues/624/events
https://github.com/huggingface/datasets/issues/624
700,541,628
MDU6SXNzdWU3MDA1NDE2Mjg=
624
Add learningq dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/17561003?v=4", "events_url": "https://api.github.com/users/krrishdholakia/events{/privacy}", "followers_url": "https://api.github.com/users/krrishdholakia/followers", "following_url": "https://api.github.com/users/krrishdholakia/following{/other_user}", "gists_url": "https://api.github.com/users/krrishdholakia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/krrishdholakia", "id": 17561003, "login": "krrishdholakia", "node_id": "MDQ6VXNlcjE3NTYxMDAz", "organizations_url": "https://api.github.com/users/krrishdholakia/orgs", "received_events_url": "https://api.github.com/users/krrishdholakia/received_events", "repos_url": "https://api.github.com/users/krrishdholakia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/krrishdholakia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/krrishdholakia/subscriptions", "type": "User", "url": "https://api.github.com/users/krrishdholakia" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
[]
2020-09-13T10:20:27Z
2020-09-14T09:50:02Z
null
NONE
null
null
null
Hi, thank you again for this amazing repo. Would it be possible for y'all to add the LearningQ dataset - https://github.com/AngusGLChen/LearningQ?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/624/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/624/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/623/comments
https://api.github.com/repos/huggingface/datasets/issues/623/events
https://github.com/huggingface/datasets/issues/623
700,235,308
MDU6SXNzdWU3MDAyMzUzMDg=
623
Custom feature types in `load_dataset` from CSV
{ "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lvwerra", "id": 8264887, "login": "lvwerra", "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "repos_url": "https://api.github.com/users/lvwerra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "type": "User", "url": "https://api.github.com/users/lvwerra" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[]
2020-09-12T13:21:34Z
2020-09-30T19:51:43Z
2020-09-30T08:39:54Z
MEMBER
null
null
null
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotion dataset. To get the data you can use the following code: ```Python from pathlib import Path import wget EMOTION_PATH = Path("./data/emotion") DOWNLOAD_URLS = [ "https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1", "https://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt?dl=1", "https://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt?dl=1", ] if not Path.is_dir(EMOTION_PATH): Path.mkdir(EMOTION_PATH) for url in DOWNLOAD_URLS: wget.download(url, str(EMOTION_PATH)) ``` The first five lines of the train set are: ``` i didnt feel humiliated;sadness i can go from feeling so hopeless to so damned hopeful just from being around someone who cares and is awake;sadness im grabbing a minute to post i feel greedy wrong;anger i am ever feeling nostalgic about the fireplace i will know that it is still on the property;love i am feeling grouchy;anger ``` Here the code to reproduce the issue: ```Python from datasets import Features, Value, ClassLabel, load_dataset class_names = ["sadness", "joy", "love", "anger", "fear", "surprise"] emotion_features = Features({'text': Value('string'), 'label': ClassLabel(names=class_names)}) file_dict = {'train': EMOTION_PATH/'train.txt'} dataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label'], features=emotion_features) ``` **Observed behaviour:** ```Python dataset['train'].features ``` ```Python {'text': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)} ``` **Expected behaviour:** ```Python dataset['train'].features ``` ```Python {'text': Value(dtype='string', id=None), 'label': ClassLabel(num_classes=6, names=['sadness', 'joy', 'love', 'anger', 'fear', 'surprise'], names_file=None, id=None)} ``` **Things I've tried:** - deleting the cache - trying other types such as `int64` Am I missing anything? Thanks for any pointer in the right direction.
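A workaround sketch, not an answer from the thread: load the CSV with plain string columns first, then map the label strings to class ids and attach the intended `Features` to the mapped output. It reuses the `file_dict` and `emotion_features` defined above.

```python
dataset = load_dataset("csv", data_files=file_dict, delimiter=";", column_names=["text", "label"])

train = dataset["train"].map(
    # str2int maps "sadness"/"joy"/... to their integer ids in the ClassLabel
    lambda ex: {"label": emotion_features["label"].str2int(ex["label"])},
    features=emotion_features,  # the mapped dataset keeps these feature types
)
print(train.features)
```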
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/623/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/623/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/622/comments
https://api.github.com/repos/huggingface/datasets/issues/622/events
https://github.com/huggingface/datasets/issues/622
700,225,826
MDU6SXNzdWU3MDAyMjU4MjY=
622
load_dataset for text files not working
{ "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BramVanroy", "id": 2779410, "login": "BramVanroy", "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "repos_url": "https://api.github.com/users/BramVanroy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "type": "User", "url": "https://api.github.com/users/BramVanroy" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
2020-09-12T12:49:28Z
2020-10-28T11:07:31Z
2020-10-28T11:07:30Z
CONTRIBUTOR
null
null
null
Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that you can use a string as input for data_files, but the signature is `Union[Dict, List]`.) The problem on Linux is that the script crashes with a CSV error (even though it isn't a CSV file). On Windows the script just seems to freeze or get stuck after loading the config file. Linux stack trace: ``` PyTorch version 1.6.0+cu101 available. Checking /home/bram/.cache/huggingface/datasets/b1d50a0e74da9a7b9822cea8ff4e4f217dd892e09eb14f6274a2169e5436e2ea.30c25842cda32b0540d88b7195147decf9671ee442f4bc2fb6ad74016852978e.py for additional imports. Found main folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7 Found script file from https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py to /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py Couldn't find dataset infos file at https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/dataset_infos.json Found metadata file for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.json Using custom data configuration default Generating dataset text (/home/bram/.cache/huggingface/datasets/text/default-0907112cc6cd2a38/0.0.0/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7) Downloading and preparing dataset text/default-0907112cc6cd2a38 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bram/.cache/huggingface/datasets/text/default-0907112cc6cd2a38/0.0.0/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7... Dataset not on Hf google storage. Downloading and preparing it from source Downloading took 0.0 min Checksum Computation took 0.0 min Unable to verify checksums. 
Generating split train Traceback (most recent call last): File "/home/bram/Python/projects/dutch-simplification/utils.py", line 45, in prepare_data dataset = load_dataset("text", data_files=dataset_f) File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/load.py", line 608, in load_dataset builder_instance.download_and_prepare( File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/builder.py", line 468, in download_and_prepare self._download_and_prepare( File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/builder.py", line 546, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/builder.py", line 888, in _prepare_split for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose): File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/tqdm/std.py", line 1130, in __iter__ for obj in iterable: File "/home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py", line 100, in _generate_tables pa_table = pac.read_csv( File "pyarrow/_csv.pyx", line 714, in pyarrow._csv.read_csv File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: CSV parse error: Expected 1 columns, got 2 ``` Windows just seems to get stuck. Even with a tiny dataset of 10 lines, it has been stuck for 15 minutes already at this message: ``` Checking C:\Users\bramv\.cache\huggingface\datasets\b1d50a0e74da9a7b9822cea8ff4e4f217dd892e09eb14f6274a2169e5436e2ea.30c25842cda32b0540d88b7195147decf9671ee442f4bc2fb6ad74016852978e.py for additional imports. Found main folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7 Found script file from https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py to C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\text.py Couldn't find dataset infos file at https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text\dataset_infos.json Found metadata file for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\text.json Using custom data configuration default ```
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/622/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/622/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/621/comments
https://api.github.com/repos/huggingface/datasets/issues/621/events
https://github.com/huggingface/datasets/pull/621
700,171,097
MDExOlB1bGxSZXF1ZXN0NDg1ODQ3ODYz
621
[docs] Index: The native emoji looks kinda ugly in large size
{ "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/julien-c", "id": 326577, "login": "julien-c", "node_id": "MDQ6VXNlcjMyNjU3Nw==", "organizations_url": "https://api.github.com/users/julien-c/orgs", "received_events_url": "https://api.github.com/users/julien-c/received_events", "repos_url": "https://api.github.com/users/julien-c/repos", "site_admin": false, "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "type": "User", "url": "https://api.github.com/users/julien-c" }
[]
closed
false
null
[]
null
[]
2020-09-12T09:48:40Z
2020-09-15T06:20:03Z
2020-09-15T06:20:02Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/621.diff", "html_url": "https://github.com/huggingface/datasets/pull/621", "merged_at": "2020-09-15T06:20:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/621.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/621" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/621/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/621/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/620
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/620/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/620/comments
https://api.github.com/repos/huggingface/datasets/issues/620/events
https://github.com/huggingface/datasets/issues/620
699,815,135
MDU6SXNzdWU2OTk4MTUxMzU=
620
map/filter multiprocessing raises errors and corrupts datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/timothyjlaurent", "id": 2000204, "login": "timothyjlaurent", "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "site_admin": false, "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "type": "User", "url": "https://api.github.com/users/timothyjlaurent" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
2020-09-11T22:30:06Z
2020-10-08T16:31:47Z
2020-10-08T16:31:46Z
NONE
null
null
null
After upgrading to the 1.0 release I started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_ds_dict["test"] rel_ds_dict = rel_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) rel_ds_dict["validation"] = rel_ds_dict["test"] return ner_ds_dict, rel_ds_dict ``` The first train_test_split, `ner_ds`/`ner_ds_dict`, returns a `train` and `test` split that are iterable. The second, `rel_ds`/`rel_ds_dict` in this case, returns a Dataset dict that has rows, but selecting from or slicing into it returns an empty dictionary, e.g. `rel_ds_dict['train'][0] == {}` and `rel_ds_dict['train'][0:100] == {}`. Ok, I think I know the problem -- the rel_ds was mapped through a mapper with `num_proc=12`. If I remove `num_proc`, the dataset loads. I also see errors with other map and filter functions when `num_proc` is set. ``` Done writing 67 indices in 536 bytes . Done writing 67 indices in 536 bytes . Fatal Python error: PyCOND_WAIT(gil_cond) failed ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/620/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/620/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/619
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/619/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/619/comments
https://api.github.com/repos/huggingface/datasets/issues/619/events
https://github.com/huggingface/datasets/issues/619
699,733,612
MDU6SXNzdWU2OTk3MzM2MTI=
619
Mistakes in MLQA features names
{ "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/M-Salti", "id": 9285264, "login": "M-Salti", "node_id": "MDQ6VXNlcjkyODUyNjQ=", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "repos_url": "https://api.github.com/users/M-Salti/repos", "site_admin": false, "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "type": "User", "url": "https://api.github.com/users/M-Salti" }
[]
closed
false
null
[]
null
[]
2020-09-11T20:46:23Z
2020-09-16T06:59:19Z
2020-09-16T06:59:19Z
CONTRIBUTOR
null
null
null
I think the following features in MLQA shouldn't be named the way they are: 1. `questions` (should be `question`) 2. `ids` (should be `id`) 3. `start` (should be `answer_start`) The reasons I'm suggesting these features be renamed are: * To make them consistent with other QA datasets like SQuAD, XQuAD, TyDiQA etc. and hence make it easier to concatenate multiple QA datasets. * The feature names are not the same as the ones provided in the original MLQA datasets (it uses the names I suggested). I know these columns can be renamed using `Dataset.rename_column_`; `questions` and `ids` can be easily renamed, but `start`, on the other hand, is annoying to rename since it's nested inside the feature `answers`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/619/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/619/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/618/comments
https://api.github.com/repos/huggingface/datasets/issues/618/events
https://github.com/huggingface/datasets/pull/618
699,684,831
MDExOlB1bGxSZXF1ZXN0NDg1NDAxMzI5
618
sync logging utils with transformers
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
[]
closed
false
null
[]
null
[]
2020-09-11T19:46:13Z
2020-09-17T15:40:59Z
2020-09-17T09:53:47Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/618.diff", "html_url": "https://github.com/huggingface/datasets/pull/618", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/618.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/618" }
sync the docs/code with the recent changes in transformers' `logging` utils: 1. change the default level to `WARNING` 2. add `DATASETS_VERBOSITY` env var 3. expand docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/618/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/618/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/617
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/617/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/617/comments
https://api.github.com/repos/huggingface/datasets/issues/617/events
https://github.com/huggingface/datasets/issues/617
699,472,596
MDU6SXNzdWU2OTk0NzI1OTY=
617
Compare different Rouge implementations
{ "avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4", "events_url": "https://api.github.com/users/ibeltagy/events{/privacy}", "followers_url": "https://api.github.com/users/ibeltagy/followers", "following_url": "https://api.github.com/users/ibeltagy/following{/other_user}", "gists_url": "https://api.github.com/users/ibeltagy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ibeltagy", "id": 2287797, "login": "ibeltagy", "node_id": "MDQ6VXNlcjIyODc3OTc=", "organizations_url": "https://api.github.com/users/ibeltagy/orgs", "received_events_url": "https://api.github.com/users/ibeltagy/received_events", "repos_url": "https://api.github.com/users/ibeltagy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ibeltagy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ibeltagy/subscriptions", "type": "User", "url": "https://api.github.com/users/ibeltagy" }
[]
closed
false
null
[]
null
[]
2020-09-11T15:49:32Z
2021-03-31T17:28:33Z
2020-10-02T09:52:18Z
NONE
null
null
null
I used the RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but are very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example. Can you make sure the google-research implementation you are using matches the official perl implementation? There are a couple of python wrappers around the perl implementation: [this](https://pypi.org/project/pyrouge/) has been commonly used, and [this](https://github.com/pltrdy/files2rouge) is used in fairseq. There's also a python reimplementation [here](https://github.com/pltrdy/rouge) but its RougeL numbers are way off.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/617/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/617/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/616
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/616/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/616/comments
https://api.github.com/repos/huggingface/datasets/issues/616/events
https://github.com/huggingface/datasets/issues/616
699,462,293
MDU6SXNzdWU2OTk0NjIyOTM=
616
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
{ "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BramVanroy", "id": 2779410, "login": "BramVanroy", "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "repos_url": "https://api.github.com/users/BramVanroy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "type": "User", "url": "https://api.github.com/users/BramVanroy" }
[]
open
false
null
[]
null
[]
2020-09-11T15:39:16Z
2021-07-22T21:12:21Z
null
CONTRIBUTOR
null
null
null
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this strange UserWarning without a stack trace: > Set __getitem__(key) output type to torch for ['input_ids', 'sembedding'] columns (when key is int or slice) and don't output other (un-formatted) columns. > C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\datasets\arrow_dataset.py:835: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:141.) > return torch.tensor(x, **format_kwargs) The first one might not be related to the warning, but it is odd that it is shown, too. It is unclear whether that is something that I should do or something that the program is doing at that moment. Snippet: ``` dataset = Dataset.from_dict(torch.load("data/dummy.pt.pt")) print(dataset) tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") keys_to_retain = {"input_ids", "sembedding"} dataset = dataset.map(lambda example: tokenizer(example["text"], padding='max_length'), batched=True) dataset.remove_columns_(set(dataset.column_names) - keys_to_retain) dataset.set_format(type="torch", columns=["input_ids", "sembedding"]) dataloader = torch.utils.data.DataLoader(dataset, batch_size=2) print(next(iter(dataloader))) ``` PS: the input type for `remove_columns_` should probably be an Iterable rather than just a List.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 4, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/616/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/616/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/615/comments
https://api.github.com/repos/huggingface/datasets/issues/615/events
https://github.com/huggingface/datasets/issues/615
699,410,773
MDU6SXNzdWU2OTk0MTA3NzM=
615
Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-11T14:50:38Z
2022-06-09T09:40:48Z
2020-09-19T16:46:31Z
MEMBER
null
null
null
How to reproduce: ```python from datasets import load_dataset wiki = load_dataset("wikipedia", "20200501.en", split="train") wiki[[0]] --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) <ipython-input-13-381aedc9811b> in <module> ----> 1 wikipedia[[0]] ~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in __getitem__(self, key) 1069 format_columns=self._format_columns, 1070 output_all_columns=self._output_all_columns, -> 1071 format_kwargs=self._format_kwargs, 1072 ) 1073 ~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs) 1037 ) 1038 else: -> 1039 data_subset = self._data.take(indices_array) 1040 1041 if format_type is not None: ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.take() ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/compute.py in take(data, indices, boundscheck) 266 """ 267 options = TakeOptions(boundscheck) --> 268 return call_function('take', [data, indices], options) 269 270 ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function() ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call() ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: offset overflow while concatenating arrays ``` It seems to work fine with small datasets or with pyarrow 0.17.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/615/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/615/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/614/comments
https://api.github.com/repos/huggingface/datasets/issues/614/events
https://github.com/huggingface/datasets/pull/614
699,177,110
MDExOlB1bGxSZXF1ZXN0NDg0OTQ2MzA1
614
[doc] Update deploy.sh
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[]
2020-09-11T11:06:13Z
2020-09-14T08:49:19Z
2020-09-14T08:49:17Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/614.diff", "html_url": "https://github.com/huggingface/datasets/pull/614", "merged_at": "2020-09-14T08:49:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/614.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/614" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/614/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/614/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/613
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/613/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/613/comments
https://api.github.com/repos/huggingface/datasets/issues/613/events
https://github.com/huggingface/datasets/pull/613
699,117,070
MDExOlB1bGxSZXF1ZXN0NDg0ODkyMTUx
613
Add CoNLL-2003 shared task dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "events_url": "https://api.github.com/users/vblagoje/events{/privacy}", "followers_url": "https://api.github.com/users/vblagoje/followers", "following_url": "https://api.github.com/users/vblagoje/following{/other_user}", "gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vblagoje", "id": 458335, "login": "vblagoje", "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "organizations_url": "https://api.github.com/users/vblagoje/orgs", "received_events_url": "https://api.github.com/users/vblagoje/received_events", "repos_url": "https://api.github.com/users/vblagoje/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions", "type": "User", "url": "https://api.github.com/users/vblagoje" }
[]
closed
false
null
[]
null
[]
2020-09-11T10:02:30Z
2020-10-05T10:43:05Z
2020-09-17T10:36:38Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/613.diff", "html_url": "https://github.com/huggingface/datasets/pull/613", "merged_at": "2020-09-17T10:36:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/613.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/613" }
Please consider adding the CoNLL-2003 shared task dataset as it's beneficial for token classification tasks. The motivation behind this PR is the [PR](https://github.com/huggingface/transformers/pull/7041) in the transformers project. This dataset would be useful not only for the usual run-of-the-mill NER tasks but also for syntactic chunking and part-of-speech (POS) tagging.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/613/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/613/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/612
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/612/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/612/comments
https://api.github.com/repos/huggingface/datasets/issues/612/events
https://github.com/huggingface/datasets/pull/612
699,008,644
MDExOlB1bGxSZXF1ZXN0NDg0Nzk2Mjg5
612
add multi-proc to dataset dict
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[]
2020-09-11T08:18:13Z
2020-09-11T10:20:13Z
2020-09-11T10:20:11Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/612.diff", "html_url": "https://github.com/huggingface/datasets/pull/612", "merged_at": "2020-09-11T10:20:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/612.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/612" }
Add multi-proc to `DatasetDict`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/612/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/612/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/611/comments
https://api.github.com/repos/huggingface/datasets/issues/611/events
https://github.com/huggingface/datasets/issues/611
698,863,988
MDU6SXNzdWU2OTg4NjM5ODg=
611
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
{ "avatar_url": "https://avatars.githubusercontent.com/u/32364921?v=4", "events_url": "https://api.github.com/users/sangyx/events{/privacy}", "followers_url": "https://api.github.com/users/sangyx/followers", "following_url": "https://api.github.com/users/sangyx/following{/other_user}", "gists_url": "https://api.github.com/users/sangyx/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sangyx", "id": 32364921, "login": "sangyx", "node_id": "MDQ6VXNlcjMyMzY0OTIx", "organizations_url": "https://api.github.com/users/sangyx/orgs", "received_events_url": "https://api.github.com/users/sangyx/received_events", "repos_url": "https://api.github.com/users/sangyx/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sangyx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sangyx/subscriptions", "type": "User", "url": "https://api.github.com/users/sangyx" }
[]
closed
false
null
[]
null
[]
2020-09-11T05:29:12Z
2022-06-01T15:11:43Z
2022-06-01T15:11:43Z
NONE
null
null
null
Hi, I'm trying to load a dataset from Dataframe, but I get the error: ```bash --------------------------------------------------------------------------- ArrowCapacityError Traceback (most recent call last) <ipython-input-7-146b6b495963> in <module> ----> 1 dataset = Dataset.from_pandas(emb) ~/miniconda3/envs/dev/lib/python3.7/site-packages/nlp/arrow_dataset.py in from_pandas(cls, df, features, info, split) 223 info.features = features 224 pa_table: pa.Table = pa.Table.from_pandas( --> 225 df=df, schema=pa.schema(features.type) if features is not None else None 226 ) 227 return cls(pa_table, info=info, split=split) ~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pandas() ~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/pandas_compat.py in dataframe_to_arrays(df, schema, preserve_index, nthreads, columns, safe) 591 for i, maybe_fut in enumerate(arrays): 592 if isinstance(maybe_fut, futures.Future): --> 593 arrays[i] = maybe_fut.result() 594 595 types = [x.type for x in arrays] ~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/_base.py in result(self, timeout) 426 raise CancelledError() 427 elif self._state == FINISHED: --> 428 return self.__get_result() 429 430 self._condition.wait(timeout) ~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/_base.py in __get_result(self) 382 def __get_result(self): 383 if self._exception: --> 384 raise self._exception 385 else: 386 return self._result ~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/thread.py in run(self) 55 56 try: ---> 57 result = self.fn(*self.args, **self.kwargs) 58 except BaseException as exc: 59 self.future.set_exception(exc) ~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/pandas_compat.py in convert_column(col, field) 557 558 try: --> 559 result = pa.array(col, type=type_, from_pandas=True, safe=safe) 560 except (pa.ArrowInvalid, 561 pa.ArrowNotImplementedError, ~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array() ~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._ndarray_to_array() ~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 ``` My code is : ```python from nlp import Dataset dataset = Dataset.from_pandas(emb) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/611/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/611/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/610/comments
https://api.github.com/repos/huggingface/datasets/issues/610/events
https://github.com/huggingface/datasets/issues/610
698,349,388
MDU6SXNzdWU2OTgzNDkzODg=
610
Load text file for RoBERTa pre-training.
{ "avatar_url": "https://avatars.githubusercontent.com/u/33407613?v=4", "events_url": "https://api.github.com/users/chiyuzhang94/events{/privacy}", "followers_url": "https://api.github.com/users/chiyuzhang94/followers", "following_url": "https://api.github.com/users/chiyuzhang94/following{/other_user}", "gists_url": "https://api.github.com/users/chiyuzhang94/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/chiyuzhang94", "id": 33407613, "login": "chiyuzhang94", "node_id": "MDQ6VXNlcjMzNDA3NjEz", "organizations_url": "https://api.github.com/users/chiyuzhang94/orgs", "received_events_url": "https://api.github.com/users/chiyuzhang94/received_events", "repos_url": "https://api.github.com/users/chiyuzhang94/repos", "site_admin": false, "starred_url": "https://api.github.com/users/chiyuzhang94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chiyuzhang94/subscriptions", "type": "User", "url": "https://api.github.com/users/chiyuzhang94" }
[]
closed
false
null
[]
null
[]
2020-09-10T18:41:38Z
2022-11-22T13:51:24Z
2022-11-22T13:51:23Z
NONE
null
null
null
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file. This test.txt is a simple sample where each line is a sentence. ``` from datasets import load_dataset dataset = load_dataset('text', data_files='test.txt',cache_dir="./") dataset.set_format(type='torch',columns=["text"]) dataloader = torch.utils.data.DataLoader(dataset, batch_size=8) next(iter(dataloader)) ``` But dataload cannot yield sample and error is: ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-12-388aca337e2f> in <module> ----> 1 next(iter(dataloader)) /Library/Python/3.7/site-packages/torch/utils/data/dataloader.py in __next__(self) 361 362 def __next__(self): --> 363 data = self._next_data() 364 self._num_yielded += 1 365 if self._dataset_kind == _DatasetKind.Iterable and \ /Library/Python/3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self) 401 def _next_data(self): 402 index = self._next_index() # may raise StopIteration --> 403 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 404 if self._pin_memory: 405 data = _utils.pin_memory.pin_memory(data) /Library/Python/3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] /Library/Python/3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] KeyError: 0 ``` `dataset.set_format(type='torch',columns=["text"])` returns a log says: ``` Set __getitem__(key) output type to torch for ['text'] columns (when key is int or slice) and don't output other (un-formatted) columns. ``` I noticed the dataset is `DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None)}, num_rows: 44)})`. Each sample can be accessed by `dataset["train"]["text"]` instead of `dataset["text"]`. Could you please give me any suggestions on how to modify this code to load the text file? Versions: Python version 3.7.3 PyTorch version 1.6.0 TensorFlow version 2.3.0 datasets version: 1.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/610/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/610/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/609
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/609/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/609/comments
https://api.github.com/repos/huggingface/datasets/issues/609/events
https://github.com/huggingface/datasets/pull/609
698,323,989
MDExOlB1bGxSZXF1ZXN0NDg0MTc4Nzky
609
Update GLUE URLs (now hosted on FB)
{ "avatar_url": "https://avatars.githubusercontent.com/u/57466294?v=4", "events_url": "https://api.github.com/users/jeswan/events{/privacy}", "followers_url": "https://api.github.com/users/jeswan/followers", "following_url": "https://api.github.com/users/jeswan/following{/other_user}", "gists_url": "https://api.github.com/users/jeswan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jeswan", "id": 57466294, "login": "jeswan", "node_id": "MDQ6VXNlcjU3NDY2Mjk0", "organizations_url": "https://api.github.com/users/jeswan/orgs", "received_events_url": "https://api.github.com/users/jeswan/received_events", "repos_url": "https://api.github.com/users/jeswan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jeswan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeswan/subscriptions", "type": "User", "url": "https://api.github.com/users/jeswan" }
[]
closed
false
null
[]
null
[]
2020-09-10T18:16:32Z
2020-09-14T19:06:02Z
2020-09-14T19:06:01Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/609.diff", "html_url": "https://github.com/huggingface/datasets/pull/609", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/609.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/609" }
NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/609/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/609/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/608
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/608/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/608/comments
https://api.github.com/repos/huggingface/datasets/issues/608/events
https://github.com/huggingface/datasets/issues/608
698,291,156
MDU6SXNzdWU2OTgyOTExNTY=
608
Don't use the old NYU GLUE dataset URLs
{ "avatar_url": "https://avatars.githubusercontent.com/u/57466294?v=4", "events_url": "https://api.github.com/users/jeswan/events{/privacy}", "followers_url": "https://api.github.com/users/jeswan/followers", "following_url": "https://api.github.com/users/jeswan/following{/other_user}", "gists_url": "https://api.github.com/users/jeswan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jeswan", "id": 57466294, "login": "jeswan", "node_id": "MDQ6VXNlcjU3NDY2Mjk0", "organizations_url": "https://api.github.com/users/jeswan/orgs", "received_events_url": "https://api.github.com/users/jeswan/received_events", "repos_url": "https://api.github.com/users/jeswan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jeswan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeswan/subscriptions", "type": "User", "url": "https://api.github.com/users/jeswan" }
[]
closed
false
null
[]
null
[]
2020-09-10T17:47:02Z
2020-09-16T06:53:18Z
2020-09-16T06:53:18Z
CONTRIBUTOR
null
null
null
NYU is switching dataset hosting from Google to FB. Initial changes to `datasets` are in https://github.com/jeswan/nlp/commit/b7d4a071d432592ded971e30ef73330529de25ce. What tests do you suggest I run before opening a PR? See: https://github.com/jiant-dev/jiant/issues/161 and https://github.com/nyu-mll/jiant/pull/1112
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/608/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/608/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/607
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/607/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/607/comments
https://api.github.com/repos/huggingface/datasets/issues/607/events
https://github.com/huggingface/datasets/pull/607
698,094,442
MDExOlB1bGxSZXF1ZXN0NDgzOTcyMDg4
607
Add transmit_format wrapper and tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-10T15:03:50Z
2020-09-10T15:21:48Z
2020-09-10T15:21:47Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/607.diff", "html_url": "https://github.com/huggingface/datasets/pull/607", "merged_at": "2020-09-10T15:21:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/607.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/607" }
Same as #605 but using a decorator on top of dataset transforms that are not in place.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/607/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/607/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/606
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/606/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/606/comments
https://api.github.com/repos/huggingface/datasets/issues/606/events
https://github.com/huggingface/datasets/pull/606
698,050,442
MDExOlB1bGxSZXF1ZXN0NDgzOTMzMDA1
606
Quick fix :)
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[]
2020-09-10T14:32:06Z
2020-09-10T16:18:32Z
2020-09-10T16:18:30Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/606.diff", "html_url": "https://github.com/huggingface/datasets/pull/606", "merged_at": "2020-09-10T16:18:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/606.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/606" }
`nlp` => `datasets`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 1, "laugh": 1, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/606/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/606/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/605
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/605/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/605/comments
https://api.github.com/repos/huggingface/datasets/issues/605/events
https://github.com/huggingface/datasets/pull/605
697,887,401
MDExOlB1bGxSZXF1ZXN0NDgzNzg1Mjc1
605
[Datasets] Transmit format to children
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[]
2020-09-10T12:30:18Z
2020-09-10T16:15:21Z
2020-09-10T16:15:21Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/605.diff", "html_url": "https://github.com/huggingface/datasets/pull/605", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/605.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/605" }
Transmit the format to child datasets obtained when processing a dataset. Added a test. When concatenating datasets, if the formats are disparate, the concatenated dataset has its format reset to defaults.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/605/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/605/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/604/comments
https://api.github.com/repos/huggingface/datasets/issues/604/events
https://github.com/huggingface/datasets/pull/604
697,774,581
MDExOlB1bGxSZXF1ZXN0NDgzNjgxNTc0
604
Update bucket prefix
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-10T11:01:13Z
2020-09-10T12:45:33Z
2020-09-10T12:45:32Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/604.diff", "html_url": "https://github.com/huggingface/datasets/pull/604", "merged_at": "2020-09-10T12:45:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/604.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/604" }
cc @julien-c
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/604/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/604/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/603
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/603/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/603/comments
https://api.github.com/repos/huggingface/datasets/issues/603/events
https://github.com/huggingface/datasets/pull/603
697,758,750
MDExOlB1bGxSZXF1ZXN0NDgzNjY2ODk5
603
Set scripts version to master
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-10T10:47:44Z
2020-09-10T11:02:05Z
2020-09-10T11:02:04Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/603.diff", "html_url": "https://github.com/huggingface/datasets/pull/603", "merged_at": "2020-09-10T11:02:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/603.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/603" }
By default the scripts version is master, so that if the library is installed with ``` pip install git+http://github.com/huggingface/nlp.git ``` or ``` git clone http://github.com/huggingface/nlp.git pip install -e ./nlp ``` it will use the latest scripts, and not the ones from the previous version.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/603/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/603/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/602
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/602/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/602/comments
https://api.github.com/repos/huggingface/datasets/issues/602/events
https://github.com/huggingface/datasets/pull/602
697,636,605
MDExOlB1bGxSZXF1ZXN0NDgzNTU3NDM0
602
apply offset to indices in multiprocessed map
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-10T08:54:30Z
2020-09-10T11:03:39Z
2020-09-10T11:03:37Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/602.diff", "html_url": "https://github.com/huggingface/datasets/pull/602", "merged_at": "2020-09-10T11:03:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/602.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/602" }
Fix #597 I fixed the indices by applying an offset. I added the case to our tests to make sure it doesn't happen again. I also added the message proposed by @thomwolf in #597 ```python >>> d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2, load_from_cache_file=False) Done writing 10 indices in 80 bytes . Testing the mapped function outputs [0, 1] Testing finished, running the mapping function on the dataset Done writing 5 indices in 41 bytes . Done writing 5 indices in 41 bytes . Spawning 2 processes [0, 1, 2, 3, 4] [5, 6, 7, 8, 9] #0: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 377.90ba/s] #1: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 378.92ba/s] Concatenating 2 shards from multiprocessing # Dataset(features: {'label': ClassLabel(num_classes=2, names=['neg', 'pos'], names_file=None, id=None), 'text': Value(dtype='string', id=None)}, num_rows: 10) ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/602/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/602/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/601
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/601/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/601/comments
https://api.github.com/repos/huggingface/datasets/issues/601/events
https://github.com/huggingface/datasets/pull/601
697,574,848
MDExOlB1bGxSZXF1ZXN0NDgzNTAzMjAw
601
check if transformers has PreTrainedTokenizerBase
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-09-10T07:54:56Z
2020-09-10T11:01:37Z
2020-09-10T11:01:36Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/601.diff", "html_url": "https://github.com/huggingface/datasets/pull/601", "merged_at": "2020-09-10T11:01:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/601.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/601" }
Fix #598
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/601/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/601/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/600/comments
https://api.github.com/repos/huggingface/datasets/issues/600/events
https://github.com/huggingface/datasets/issues/600
697,496,913
MDU6SXNzdWU2OTc0OTY5MTM=
600
Pickling error when loading dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/17310286?v=4", "events_url": "https://api.github.com/users/kandorm/events{/privacy}", "followers_url": "https://api.github.com/users/kandorm/followers", "following_url": "https://api.github.com/users/kandorm/following{/other_user}", "gists_url": "https://api.github.com/users/kandorm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kandorm", "id": 17310286, "login": "kandorm", "node_id": "MDQ6VXNlcjE3MzEwMjg2", "organizations_url": "https://api.github.com/users/kandorm/orgs", "received_events_url": "https://api.github.com/users/kandorm/received_events", "repos_url": "https://api.github.com/users/kandorm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kandorm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kandorm/subscriptions", "type": "User", "url": "https://api.github.com/users/kandorm" }
[]
closed
false
null
[]
null
[]
2020-09-10T06:28:08Z
2020-09-25T14:31:54Z
2020-09-25T14:31:54Z
NONE
null
null
null
Hi, I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as: ``` # line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size) dataset = load_dataset("text", data_files=file_path, split="train") dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size), batched=True) dataset.set_format(type='torch', columns=['input_ids']) return dataset ``` When I run this with transformers (3.1.0) and nlp (0.4.0), I get the following error: ``` Traceback (most recent call last): File "src/run_language_modeling.py", line 319, in <module> main() File "src/run_language_modeling.py", line 248, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None File "src/run_language_modeling.py", line 139, in get_dataset dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size), batched=True) File "/data/nlp/src/nlp/arrow_dataset.py", line 1136, in map new_fingerprint=new_fingerprint, File "/data/nlp/src/nlp/fingerprint.py", line 158, in wrapper self._fingerprint, transform, kwargs_for_fingerprint File "/data/nlp/src/nlp/fingerprint.py", line 105, in update_fingerprint hasher.update(transform_args[key]) File "/data/nlp/src/nlp/fingerprint.py", line 57, in update self.m.update(self.hash(value).encode("utf-8")) File "/data/nlp/src/nlp/fingerprint.py", line 53, in hash return cls.hash_default(value) File "/data/nlp/src/nlp/fingerprint.py", line 46, in hash_default return cls.hash_bytes(dumps(value)) File "/data/nlp/src/nlp/utils/py_utils.py", line 362, in dumps dump(obj, file) File "/data/nlp/src/nlp/utils/py_utils.py", line 339, in dump Pickler(file, recurse=True).dump(obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 446, in dump StockPickler.dump(self, obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 409, in dump self.save(obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1438, in save_function obj.__dict__, fkwdefaults), obj=obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 751, in save_tuple save(element) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1170, in save_cell pickler.save_reduce(_create_cell, (f,), obj=obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", 
line 736, in save_tuple save(element) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 521, in save self.save_reduce(obj=obj, *rv) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 605, in save_reduce save(cls) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1365, in save_type obj.__bases__, _dict), obj=obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 751, in save_tuple save(element) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict StockPickler.save_dict(pickler, obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 847, in _batch_setitems save(v) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict StockPickler.save_dict(pickler, obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 847, in _batch_setitems save(v) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 507, in save self.save_global(obj, rv) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 927, in save_global (obj, module_name, name)) _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/600/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/600/timeline
null
completed
true
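The issue body in the record above reports a fingerprint-pickling error when mapping a tokenizer lambda over a text dataset. As a hedged restatement of that pattern (the file path and tokenizer name below are placeholders, not values from the original report, and this sketch is not claimed to be the fix for the reported error), the same pipeline can be written with a named, module-level function, which keeps the mapped callable simpler to hash and pickle:

```python
# Hedged sketch of the reported pattern; "train.txt" and the tokenizer name are assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # A named function avoids closing over locals the way the original lambda did.
    return tokenizer(batch["text"], add_special_tokens=True, truncation=True, max_length=512)

dataset = load_dataset("text", data_files="train.txt", split="train")
dataset = dataset.map(tokenize, batched=True)
dataset.set_format(type="torch", columns=["input_ids"])
```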
https://api.github.com/repos/huggingface/datasets/issues/599
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/599/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/599/comments
https://api.github.com/repos/huggingface/datasets/issues/599/events
https://github.com/huggingface/datasets/pull/599
697,377,786
MDExOlB1bGxSZXF1ZXN0NDgzMzI3ODQ5
599
Add MATINF dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JetRunner", "id": 22514219, "login": "JetRunner", "node_id": "MDQ6VXNlcjIyNTE0MjE5", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "repos_url": "https://api.github.com/users/JetRunner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "type": "User", "url": "https://api.github.com/users/JetRunner" }
[]
closed
false
null
[]
null
[]
2020-09-10T03:31:09Z
2020-09-17T12:17:25Z
2020-09-17T12:17:25Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/599.diff", "html_url": "https://github.com/huggingface/datasets/pull/599", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/599.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/599" }
@lhoestq The command to create metadata failed. I guess it's because the zip is not downloaded from a remote address? How to solve that? Also the CI fails and I don't know how to fix that :(
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/599/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/599/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/598/comments
https://api.github.com/repos/huggingface/datasets/issues/598/events
https://github.com/huggingface/datasets/issues/598
697,156,501
MDU6SXNzdWU2OTcxNTY1MDE=
598
The current version of the package on GitHub has an error when loading a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/43428393?v=4", "events_url": "https://api.github.com/users/zeyuyun1/events{/privacy}", "followers_url": "https://api.github.com/users/zeyuyun1/followers", "following_url": "https://api.github.com/users/zeyuyun1/following{/other_user}", "gists_url": "https://api.github.com/users/zeyuyun1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zeyuyun1", "id": 43428393, "login": "zeyuyun1", "node_id": "MDQ6VXNlcjQzNDI4Mzkz", "organizations_url": "https://api.github.com/users/zeyuyun1/orgs", "received_events_url": "https://api.github.com/users/zeyuyun1/received_events", "repos_url": "https://api.github.com/users/zeyuyun1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zeyuyun1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zeyuyun1/subscriptions", "type": "User", "url": "https://api.github.com/users/zeyuyun1" }
[]
closed
false
null
[]
null
[]
2020-09-09T21:03:23Z
2020-09-10T06:25:21Z
2020-09-09T22:57:28Z
NONE
null
null
null
Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine): To recreate the error: First, installing nlp directly from source: ``` git clone https://github.com/huggingface/nlp.git cd nlp pip install -e . ``` Then run: ``` from nlp import load_dataset dataset = load_dataset('wikitext', 'wikitext-2-v1',split = 'train') ``` will give error: ``` >>> dataset = load_dataset('wikitext', 'wikitext-2-v1',split = 'train') Checking /home/zeyuy/.cache/huggingface/datasets/84a754b488511b109e2904672d809c041008416ae74e38f9ee0c80a8dffa1383.2e21f48d63b5572d19c97e441fbb802257cf6a4c03fbc5ed8fae3d2c2273f59e.py for additional imports. Found main folder for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d Found script file from https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py to /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/wikitext.py Found dataset infos file from https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/dataset_infos.json to /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/dataset_infos.json Found metadata file for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/wikitext.json Loading Dataset Infos from /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d Overwrite dataset info from restored data version. 
Loading Dataset info from /home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d Reusing dataset wikitext (/home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d) Constructing Dataset for split train, from /home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/load.py", line 600, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 611, in as_dataset datasets = utils.map_nested( File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 216, in map_nested return function(data_struct) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 631, in _build_single_dataset ds = self._as_dataset( File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 704, in _as_dataset return Dataset(**dataset_kwargs) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/arrow_dataset.py", line 188, in __init__ self._fingerprint = generate_fingerprint(self) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 91, in generate_fingerprint hasher.update(key) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 57, in update self.m.update(self.hash(value).encode("utf-8")) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 53, in hash return cls.hash_default(value) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 46, in hash_default return cls.hash_bytes(dumps(value)) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 361, in dumps with _no_cache_fields(obj): File "/home/zeyuy/miniconda3/lib/python3.8/contextlib.py", line 113, in __enter__ return next(self.gen) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 348, in _no_cache_fields if isinstance(obj, tr.PreTrainedTokenizerBase) and hasattr(obj, "cache") and isinstance(obj.cache, dict): AttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase' ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/598/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/598/timeline
null
completed
true
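The record above (issue #598) and the PR titled "check if transformers has PreTrainedTokenizerBase" (#601) describe guarding against transformers versions that do not expose `PreTrainedTokenizerBase` at the top level. The sketch below is illustrative only and is not the library's actual implementation; it shows the kind of defensive attribute check the PR title implies.

```python
# Hedged sketch of the guard implied by PR #601's title; not the real datasets code.
import transformers

def _is_tokenizer_with_cache(obj):
    base = getattr(transformers, "PreTrainedTokenizerBase", None)
    # Older transformers releases lack this attribute, so return False
    # instead of raising AttributeError as in the traceback above.
    if base is None:
        return False
    return isinstance(obj, base) and isinstance(getattr(obj, "cache", None), dict)
```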