url (stringlengths 61-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 75-75) | comments_url (stringlengths 70-70) | events_url (stringlengths 68-68) | html_url (stringlengths 49-51) | id (int64 778M-1.87B) | node_id (stringlengths 18-32) | number (int64 1.68k-6.18k) | title (stringlengths 1-290) | user (dict) | labels (listlengths 0-4) | state (stringclasses 2 values) | locked (bool 1 class) | assignee (dict) | assignees (listlengths 0-4) | milestone (dict) | comments (sequencelengths 0-30) | created_at (unknown) | updated_at (unknown) | closed_at (unknown) | author_association (stringclasses 3 values) | active_lock_reason (float64) | body (stringlengths 0-228k ⌀) | reactions (dict) | timeline_url (stringlengths 70-70) | performed_via_github_app (float64) | state_reason (stringclasses 3 values) | draft (float64 0-1 ⌀) | pull_request (dict) | is_pull_request (bool 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1984/comments | https://api.github.com/repos/huggingface/datasets/issues/1984/events | https://github.com/huggingface/datasets/issues/1984 | 821,816,588 | MDU6SXNzdWU4MjE4MTY1ODg= | 1,984 | Add tests for WMT datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Dummy data generation is deprecated now. Closing."
] | "2021-03-04T06:46:42Z" | "2022-11-04T14:19:16Z" | "2022-11-04T14:19:16Z" | MEMBER | null | As requested in #1981, we need tests for WMT datasets, using dummy data. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1984/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1984/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1983/comments | https://api.github.com/repos/huggingface/datasets/issues/1983/events | https://github.com/huggingface/datasets/issues/1983 | 821,746,008 | MDU6SXNzdWU4MjE3NDYwMDg= | 1,983 | The size of CoNLL-2003 is not consistent with the official release. | {
"avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4",
"events_url": "https://api.github.com/users/h-peng17/events{/privacy}",
"followers_url": "https://api.github.com/users/h-peng17/followers",
"following_url": "https://api.github.com/users/h-peng17/following{/other_user}",
"gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/h-peng17",
"id": 39556019,
"login": "h-peng17",
"node_id": "MDQ6VXNlcjM5NTU2MDE5",
"organizations_url": "https://api.github.com/users/h-peng17/orgs",
"received_events_url": "https://api.github.com/users/h-peng17/received_events",
"repos_url": "https://api.github.com/users/h-peng17/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions",
"type": "User",
"url": "https://api.github.com/users/h-peng17"
} | [] | closed | false | null | [] | null | [
"Hi,\r\n\r\nif you inspect the raw data, you can find there are 946 occurrences of `-DOCSTART- -X- -X- O` in the train split and `14041 + 946 = 14987`, which is exactly the number of sentences the authors report. `-DOCSTART-` is a special line that acts as a boundary between two different documents and is filtered out in our implementation.\r\n\r\n@lhoestq What do you think about including these lines? ([Link](https://github.com/flairNLP/flair/issues/1097) to a similar issue in the flairNLP repo)",
"We should mention in the Conll2003 dataset card that these lines have been removed indeed.\r\n\r\nIf some users are interested in using these lines (maybe to recombine documents ?) then we can add a parameter to the conll2003 dataset to include them.\r\n\r\nBut IMO the default config should stay the current one (without the `-DOCSTART-` stuff), so that you can directly train NER models without additional preprocessing. Let me know what you think",
"@lhoestq Yes, I agree adding a small note should be sufficient.\r\n\r\nCurrently, NLTK's `ConllCorpusReader` ignores the `-DOCSTART-` lines so I think it's ok if we do the same. If there is an interest in the future to use these lines, then we can include them.",
"I added a mention of this in conll2003's dataset card:\r\nhttps://github.com/huggingface/datasets/blob/fc9796920da88486c3b97690969aabf03d6b4088/datasets/conll2003/README.md#conll2003\r\n\r\nEdit: just saw your PR @mariosasko (noticed it too late ^^)\r\nLet me take a look at it :)"
] | "2021-03-04T04:41:34Z" | "2022-10-05T13:13:26Z" | "2022-10-05T13:13:26Z" | NONE | null | Thanks for sharing the dataset! But when I use conll-2003, I have some questions.
The statistics of conll-2003 in this repo are:
\#train 14041 \#dev 3250 \#test 3453
While the official statistics are:
\#train 14987 \#dev 3466 \#test 3684
Looking forward to your reply. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1983/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1983/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1982/comments | https://api.github.com/repos/huggingface/datasets/issues/1982/events | https://github.com/huggingface/datasets/pull/1982 | 821,448,791 | MDExOlB1bGxSZXF1ZXN0NTg0MjM2NzQ0 | 1,982 | Fix NestedDataStructure.data for empty dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"I validated that this fixed the problem, thank you, @albertvillanova!\r\n",
"still facing the same issue or similar:\r\nfrom datasets import load_dataset\r\nwtm14_test = load_dataset('wmt14',\"de-en\",cache_dir='./datasets')\r\n\r\n~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\wmt14\\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\\wmt_utils.py in _split_generators(self, dl_manager)\r\n 758 # Extract manually downloaded files.\r\n 759 manual_files = dl_manager.extract(manual_paths_dict)\r\n--> 760 extraction_map = dict(downloaded_files, **manual_files)\r\n 761 \r\n 762 for language in self.config.language_pair:\r\n\r\nTypeError: type object argument after ** must be a mapping, not list",
"Hi @sabania \r\nWe released a patch version that fixes this issue (1.4.1), can you try with the new version please ?\r\n```\r\npip install --upgrade datasets\r\n```",
"I re-validated with the hotfix and the problem is no more.",
"It's working. thanks a lot."
] | "2021-03-03T20:16:51Z" | "2021-03-04T16:46:04Z" | "2021-03-03T22:48:36Z" | MEMBER | null | Fix #1981 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1982/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1982/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1982.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1982",
"merged_at": "2021-03-03T22:48:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1982.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1982"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1981/comments | https://api.github.com/repos/huggingface/datasets/issues/1981/events | https://github.com/huggingface/datasets/issues/1981 | 821,411,109 | MDU6SXNzdWU4MjE0MTExMDk= | 1,981 | wmt datasets fail to load | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"@stas00 Mea culpa... May I fix this tomorrow morning?",
"yes, of course, I reverted to the version before that and it works ;)\r\n\r\nbut since a new release was just made you will probably need to make a hotfix.\r\n\r\nand add the wmt to the tests?",
"Sure, I will implement a regression test!",
"@stas00 it is fixed. @lhoestq are you releasing the hot fix or would you prefer me to do it?",
"I'll do a patch release for this issue early tomorrow.\r\n\r\nAnd yes we absolutly need tests for the wmt datasets: The missing tests for wmt are an artifact from the early development of the lib but now we have tools to generate automatically the dummy data used for tests :)",
"still facing the same issue or similar:\r\nfrom datasets import load_dataset\r\nwtm14_test = load_dataset('wmt14',\"de-en\",cache_dir='./datasets')\r\n\r\n~.cache\\huggingface\\modules\\datasets_modules\\datasets\\wmt14\\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\\wmt_utils.py in _split_generators(self, dl_manager)\r\n758 # Extract manually downloaded files.\r\n759 manual_files = dl_manager.extract(manual_paths_dict)\r\n--> 760 extraction_map = dict(downloaded_files, **manual_files)\r\n761\r\n762 for language in self.config.language_pair:\r\n\r\nTypeError: type object argument after ** must be a mapping, not list"
] | "2021-03-03T19:21:39Z" | "2021-03-04T14:16:47Z" | "2021-03-03T22:48:36Z" | MEMBER | null | on master:
```
python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")'
Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 578, in download_and_prepare
self._download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 634, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt14/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e/wmt_utils.py", line 760, in _split_generators
extraction_map = dict(downloaded_files, **manual_files)
```
It worked fine recently. The same problem occurs if I try wmt16.
git bisect points to this commit from Feb 25 as the culprit https://github.com/huggingface/datasets/commit/792f1d9bb1c5361908f73e2ef7f0181b2be409fa
@albertvillanova | {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1981/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1981/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1980/comments | https://api.github.com/repos/huggingface/datasets/issues/1980/events | https://github.com/huggingface/datasets/pull/1980 | 821,312,810 | MDExOlB1bGxSZXF1ZXN0NTg0MTI1OTUy | 1,980 | Loading all answers from drop | {
"avatar_url": "https://avatars.githubusercontent.com/u/25499439?v=4",
"events_url": "https://api.github.com/users/KaijuML/events{/privacy}",
"followers_url": "https://api.github.com/users/KaijuML/followers",
"following_url": "https://api.github.com/users/KaijuML/following{/other_user}",
"gists_url": "https://api.github.com/users/KaijuML/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KaijuML",
"id": 25499439,
"login": "KaijuML",
"node_id": "MDQ6VXNlcjI1NDk5NDM5",
"organizations_url": "https://api.github.com/users/KaijuML/orgs",
"received_events_url": "https://api.github.com/users/KaijuML/received_events",
"repos_url": "https://api.github.com/users/KaijuML/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KaijuML/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaijuML/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KaijuML"
} | [] | closed | false | null | [] | null | [
"Nice thanks for the change !\r\nThis looks all good to me\r\n\r\nBefore we merge can you just update the dataset_infos.json file of drop ? You can do it by running\r\n```\r\ndatasets-cli test ./datasets/drop --all_configs --save_infos --ignore_verifications\r\n```",
"Done!"
] | "2021-03-03T17:13:07Z" | "2021-03-15T11:27:26Z" | "2021-03-15T11:27:26Z" | CONTRIBUTOR | null | Hello all,
I propose this change to the DROP loading script so that all answers are loaded no matter their type. Currently, only "span" answers are loaded, which excludes a significant number of answers from DROP (i.e. the "number" and "date" types).
I updated the script with the version I use for my work. However, I couldn't find a way to verify that everything works when integrated with the datasets repo, since the `load_dataset` method seems to always download the script from GitHub rather than use local files.
Note that 9 items from the train set have no answers, as well as 1 from the validation set. The script I propose simply does not load them.
Let me know if there is anything else I can do,
Clément | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1980/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1980/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1980.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1980",
"merged_at": "2021-03-15T11:27:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1980.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1980"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1979/comments | https://api.github.com/repos/huggingface/datasets/issues/1979/events | https://github.com/huggingface/datasets/pull/1979 | 820,977,853 | MDExOlB1bGxSZXF1ZXN0NTgzODQ3MTk3 | 1,979 | Add article_id and process test set template for semeval 2020 task 11… | {
"avatar_url": "https://avatars.githubusercontent.com/u/8195444?v=4",
"events_url": "https://api.github.com/users/hemildesai/events{/privacy}",
"followers_url": "https://api.github.com/users/hemildesai/followers",
"following_url": "https://api.github.com/users/hemildesai/following{/other_user}",
"gists_url": "https://api.github.com/users/hemildesai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hemildesai",
"id": 8195444,
"login": "hemildesai",
"node_id": "MDQ6VXNlcjgxOTU0NDQ=",
"organizations_url": "https://api.github.com/users/hemildesai/orgs",
"received_events_url": "https://api.github.com/users/hemildesai/received_events",
"repos_url": "https://api.github.com/users/hemildesai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hemildesai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hemildesai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hemildesai"
} | [] | closed | false | null | [] | null | [
"Thanks !\r\nNow to fix the CI the only thing left is to add a dummy `test-task-tc-template.out` file inside the `dummy_data.zip` at `./datasets/sem_eval_2020_task_11/dummy/1.1.0`\r\nIt must contain the labels template for each dummy article of the test set included in `dummy_data.zip`\r\n\r\nAfter that we should be good to merge this one :)",
"@lhoestq Made the changes! The failure now seems to be unrelated to the changes. Any idea what's going on?",
"This is a bug on master that we're investigating. You can ignore it"
] | "2021-03-03T10:34:32Z" | "2021-03-13T10:59:40Z" | "2021-03-12T13:10:50Z" | CONTRIBUTOR | null | … dataset
- `article_id` is needed to create the submission file for the task at https://propaganda.qcri.org/semeval2020-task11/
- The `technique classification` task provides the span indices in a template for the test set that is necessary to complete the task. This PR implements processing of that template for the dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1979/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1979/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1979.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1979",
"merged_at": "2021-03-12T13:10:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1979.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1979"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1978/comments | https://api.github.com/repos/huggingface/datasets/issues/1978/events | https://github.com/huggingface/datasets/pull/1978 | 820,956,806 | MDExOlB1bGxSZXF1ZXN0NTgzODI5Njgz | 1,978 | Adding ro sts dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/36982089?v=4",
"events_url": "https://api.github.com/users/lorinczb/events{/privacy}",
"followers_url": "https://api.github.com/users/lorinczb/followers",
"following_url": "https://api.github.com/users/lorinczb/following{/other_user}",
"gists_url": "https://api.github.com/users/lorinczb/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lorinczb",
"id": 36982089,
"login": "lorinczb",
"node_id": "MDQ6VXNlcjM2OTgyMDg5",
"organizations_url": "https://api.github.com/users/lorinczb/orgs",
"received_events_url": "https://api.github.com/users/lorinczb/received_events",
"repos_url": "https://api.github.com/users/lorinczb/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lorinczb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorinczb/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lorinczb"
} | [] | closed | false | null | [] | null | [
"@lhoestq thank you very much for the quick review and useful comments! \r\n\r\nI have tried to address them all, and a few comments that you left for ro_sts I have applied to the ro_sts_parallel as well (in read-me: fixed source_datasets, links to homepage, repository, leaderboard, thanks to me message, in ro_sts_parallel.py changed to camel case as well). In the ro_sts_parallel I have changed the order on the languages, also in the example, as you said order doesn't matter, but just to have them listed in the readme in the same order.\r\n\r\nI have commented above on why we would like to keep them as separate datasets, hope it makes sense.\r\n\r\nIf there is anything else I should change please let me know.\r\n\r\nThanks again!",
"@lhoestq I tried to adjust the ro_sts_parallel, locally when I run the tests they are passing, but somewhere it has the old name of rosts-parallel-ro-en which I am trying to change to ro_sts_parallel. I don't think I have left anything related to rosts-parallel-ro-en, but when the dataset_infos.json is regenerated it adds it. Could you please help me out, how can I fix this? Thanks in advance!",
"Great, thanks for all your help! "
] | "2021-03-03T10:08:53Z" | "2021-03-05T10:00:14Z" | "2021-03-05T09:33:55Z" | CONTRIBUTOR | null | Adding [RO-STS](https://github.com/dumitrescustefan/RO-STS) dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1978/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1978/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1978.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1978",
"merged_at": "2021-03-05T09:33:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1978.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1978"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1977 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1977/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1977/comments | https://api.github.com/repos/huggingface/datasets/issues/1977/events | https://github.com/huggingface/datasets/issues/1977 | 820,312,022 | MDU6SXNzdWU4MjAzMTIwMjI= | 1,977 | ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | open | false | null | [] | null | [
"I sometimes also get this error with other languages of the same dataset:\r\n\r\n File \"/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py\", line 322, in read_table\r\n stream = stream_from(filename)\r\n File \"pyarrow/io.pxi\", line 782, in pyarrow.lib.memory_map\r\n File \"pyarrow/io.pxi\", line 743, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: Memory mapping file failed: Cannot allocate memory\r\n\r\n@lhoestq \r\n",
"Hi ! Thanks for reporting\r\nSome wikipedia configurations do require the user to have `apache_beam` in order to parse the wikimedia data.\r\n\r\nOn the other hand regarding your second issue\r\n```\r\nOSError: Memory mapping file failed: Cannot allocate memory\r\n```\r\nI've never experienced this, can you open a new issue for this specific error and provide more details please ?\r\nFor example what script did you use to get this, what language did you use, what's your environment details (os, python version, pyarrow version).."
] | "2021-03-02T19:21:28Z" | "2021-03-03T10:17:40Z" | null | NONE | null | Hi
I am trying to run the run_mlm.py code [1] from huggingface with the following "wikipedia" / "20200501.aa" dataset:
`python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.aa --do_train --do_eval --output_dir /tmp/test-mlm --max_seq_length 256
`
I am getting this error, but as per the documentation, huggingface datasets provides a processed version of this dataset and users can load it without having to set up extra settings for apache-beam. Could you please help me load this dataset?
Do you think I can run run_mlm.py with this dataset? Or is there any way I could subsample it and train the model? I would greatly appreciate it if the processed version of all languages for this dataset were provided, which would allow users to use them without setting up apache-beam. Thanks.
I really appreciate your help.
@lhoestq
thanks.
[1] https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
error I get:
```
>>> import datasets
>>> datasets.load_dataset("wikipedia", "20200501.aa")
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /dara/temp/cache_home_2/datasets/wikipedia/20200501.aa/1.0.0/4021357e28509391eab2f8300d9b689e7e8f3a877ebb3d354b01577d497ebc63...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 746, in load_dataset
use_auth_token=use_auth_token,
File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 573, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 1099, in _download_and_prepare
import apache_beam as beam
ModuleNotFoundError: No module named 'apache_beam'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1977/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1977/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1976/comments | https://api.github.com/repos/huggingface/datasets/issues/1976/events | https://github.com/huggingface/datasets/pull/1976 | 820,228,538 | MDExOlB1bGxSZXF1ZXN0NTgzMjA3NDI4 | 1,976 | Add datasets full offline mode with HF_DATASETS_OFFLINE | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-03-02T17:26:59Z" | "2021-03-03T15:45:31Z" | "2021-03-03T15:45:30Z" | MEMBER | null | Add the HF_DATASETS_OFFLINE environment variable for users who want to use `datasets` offline without having to wait for the network timeouts/retries to happen. This was requested in https://github.com/huggingface/datasets/issues/1939
cc @stas00 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1976/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1976/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1976.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1976",
"merged_at": "2021-03-03T15:45:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1976.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1976"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1975/comments | https://api.github.com/repos/huggingface/datasets/issues/1975/events | https://github.com/huggingface/datasets/pull/1975 | 820,205,485 | MDExOlB1bGxSZXF1ZXN0NTgzMTg4NjM3 | 1,975 | Fix flake8 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-03-02T16:59:13Z" | "2021-03-04T10:43:22Z" | "2021-03-04T10:43:22Z" | MEMBER | null | Fix flake8 style. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1975/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1975/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1975.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1975",
"merged_at": "2021-03-04T10:43:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1975.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1975"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1974/comments | https://api.github.com/repos/huggingface/datasets/issues/1974/events | https://github.com/huggingface/datasets/pull/1974 | 820,122,223 | MDExOlB1bGxSZXF1ZXN0NTgzMTE5MDI0 | 1,974 | feat(docs): navigate with left/right arrow keys | {
"avatar_url": "https://avatars.githubusercontent.com/u/32727188?v=4",
"events_url": "https://api.github.com/users/ydcjeff/events{/privacy}",
"followers_url": "https://api.github.com/users/ydcjeff/followers",
"following_url": "https://api.github.com/users/ydcjeff/following{/other_user}",
"gists_url": "https://api.github.com/users/ydcjeff/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ydcjeff",
"id": 32727188,
"login": "ydcjeff",
"node_id": "MDQ6VXNlcjMyNzI3MTg4",
"organizations_url": "https://api.github.com/users/ydcjeff/orgs",
"received_events_url": "https://api.github.com/users/ydcjeff/received_events",
"repos_url": "https://api.github.com/users/ydcjeff/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ydcjeff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydcjeff/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ydcjeff"
} | [] | closed | false | null | [] | null | [] | "2021-03-02T15:24:50Z" | "2021-03-04T10:44:12Z" | "2021-03-04T10:42:48Z" | NONE | null | Enables docs navigation with left/right arrow keys. It can be useful for the ones who navigate with keyboard a lot.
More info : https://github.com/sphinx-doc/sphinx/pull/2064
You can try here : https://29353-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1974/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1974/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1974.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1974",
"merged_at": "2021-03-04T10:42:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1974.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1974"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1973/comments | https://api.github.com/repos/huggingface/datasets/issues/1973/events | https://github.com/huggingface/datasets/issues/1973 | 820,077,312 | MDU6SXNzdWU4MjAwNzczMTI= | 1,973 | Question: what gets stored in the datasets cache and why is it so huge? | {
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ioana-blue",
"id": 17202292,
"login": "ioana-blue",
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ioana-blue"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Echo'ing this observation: I have a few datasets in the neighborhood of 2GB CSVs uncompressed, and when I use something like `Dataset.save_to_disk()` it's ~18GB on disk.\r\n\r\nIf this is unexpected behavior, would be happy to help run debugging as needed.",
"Thanks @ioana-blue for pointing out this problem (and thanks also @justin-yan). You are right that current implementation of the datasets caching files take too much memory. We are definitely changing this and optimizing the defaults, so that the file sizes are considerably reduced. I will come back to you as soon as this is fixed.",
"Thank you! Also I noticed that the files don't seem to be cleaned after the jobs finish. Last night I had only 3 jobs running, but the cache was still at 180GB. ",
"And to clarify, it's not memory, it's disk space. Thank you!",
"Hi ! As Albert said they can sometimes take more space that expected but we'll fix that soon.\r\n\r\nAlso, to give more details about caching: computations on a dataset are cached by default so that you don't have to recompute them the next time you run them.\r\n\r\nSo by default the cache files stay on your disk when you job is finished (so that if you re-execute it, it will be reloaded from the cache).\r\nFeel free to clear your cache after your job has finished, or disable caching using\r\n```python\r\nimport datasets\r\n\r\ndatasets.set_caching_enabled(False)\r\n```",
"Thanks for the tip, this is useful. ",
"Hi @ioana-blue, we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs.",
"Thank you!"
] | "2021-03-02T14:35:53Z" | "2021-03-30T14:03:59Z" | "2021-03-16T09:44:00Z" | NONE | null | I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G, which seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before, and it seems to be related to the new version of the datasets library. Any insight? Thank you! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1973/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1973/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1972 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1972/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1972/comments | https://api.github.com/repos/huggingface/datasets/issues/1972/events | https://github.com/huggingface/datasets/issues/1972 | 819,752,761 | MDU6SXNzdWU4MTk3NTI3NjE= | 1,972 | 'Dataset' object has no attribute 'rename_column' | {
"avatar_url": "https://avatars.githubusercontent.com/u/23195502?v=4",
"events_url": "https://api.github.com/users/farooqzaman1/events{/privacy}",
"followers_url": "https://api.github.com/users/farooqzaman1/followers",
"following_url": "https://api.github.com/users/farooqzaman1/following{/other_user}",
"gists_url": "https://api.github.com/users/farooqzaman1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/farooqzaman1",
"id": 23195502,
"login": "farooqzaman1",
"node_id": "MDQ6VXNlcjIzMTk1NTAy",
"organizations_url": "https://api.github.com/users/farooqzaman1/orgs",
"received_events_url": "https://api.github.com/users/farooqzaman1/received_events",
"repos_url": "https://api.github.com/users/farooqzaman1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/farooqzaman1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farooqzaman1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/farooqzaman1"
} | [] | closed | false | null | [] | null | [
"Hi ! `rename_column` has been added recently and will be available in the next release"
] | "2021-03-02T08:01:49Z" | "2022-06-01T16:08:47Z" | "2022-06-01T16:08:47Z" | NONE | null | 'Dataset' object has no attribute 'rename_column' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1972/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1972/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1971 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1971/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1971/comments | https://api.github.com/repos/huggingface/datasets/issues/1971/events | https://github.com/huggingface/datasets/pull/1971 | 819,714,231 | MDExOlB1bGxSZXF1ZXN0NTgyNzgyNTU0 | 1,971 | Fix ArrowWriter closes stream at exit | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Oh nice thanks for adding the context manager ! All the streams and RecordBatchWriter will be properly closed now. Hopefully this gives a better experience on windows on which it's super important to close stuff.\r\n\r\nNot sure about the error, it looks like a process crashed silently.\r\nLet me take a look",
"> Hopefully this gives a better experience on windows on which it's super important to close stuff.\r\n\r\nExactly! On Windows, you got:\r\n> PermissionError: [WinError 32] The process cannot access the file because it is being used by another process\r\n\r\nwhen trying to access the unclosed `stream` file, e.g. by `with incomplete_dir(self._cache_dir) as tmp_data_dir`: `shutil.rmtree(tmp_dir)`\r\n\r\nThe reason is: https://docs.python.org/3/library/os.html#os.remove\r\n\r\n> On Windows, attempting to remove a file that is in use causes an exception to be raised; on Unix, the directory entry is removed but the storage allocated to the file is not made available until the original file is no longer in use.\r\n\r\n\r\n",
"The test passes on my windows. This was probably a circleCI issue. I re-ran the circleCI tests",
"NICE! It passed!",
"Maybe you can merge master into this branch and check the CI before merging ?",
"@lhoestq done! ;)",
"Thanks ! merging"
] | "2021-03-02T07:12:34Z" | "2021-03-10T16:36:57Z" | "2021-03-10T16:36:57Z" | MEMBER | null | Current implementation of ArrowWriter does not properly release the `stream` resource (by closing it) if its `finalize()` method is not called and/or an Exception is raised before/during the call to its `finalize()` method.
Therefore, ArrowWriter should be used as a context manager that properly closes its `stream` resource at exit. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1971/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1971/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1971.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1971",
"merged_at": "2021-03-10T16:36:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1971.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1971"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1970/comments | https://api.github.com/repos/huggingface/datasets/issues/1970/events | https://github.com/huggingface/datasets/pull/1970 | 819,500,620 | MDExOlB1bGxSZXF1ZXN0NTgyNjAzMzEw | 1,970 | Fixing the URL filtering for bad MLSUM examples in GEM | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [] | "2021-03-02T01:22:58Z" | "2021-03-02T03:19:06Z" | "2021-03-02T02:01:33Z" | MEMBER | null | This updates the code and metadata to use the updated `gem_mlsum_bad_ids_fixed.json` file provided by @juand-r
cc @sebastianGehrmann | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1970/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1970/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1970.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1970",
"merged_at": "2021-03-02T02:01:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1970.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1970"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1967/comments | https://api.github.com/repos/huggingface/datasets/issues/1967/events | https://github.com/huggingface/datasets/pull/1967 | 819,129,568 | MDExOlB1bGxSZXF1ZXN0NTgyMjc5OTEx | 1,967 | Add Turkish News Category Dataset - 270K - Lite Version | {
"avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4",
"events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}",
"followers_url": "https://api.github.com/users/yavuzKomecoglu/followers",
"following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yavuzKomecoglu",
"id": 5150963,
"login": "yavuzKomecoglu",
"node_id": "MDQ6VXNlcjUxNTA5NjM=",
"organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs",
"received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events",
"repos_url": "https://api.github.com/users/yavuzKomecoglu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yavuzKomecoglu"
} | [] | closed | false | null | [] | null | [
"Thanks for the change, merging now !"
] | "2021-03-01T18:21:59Z" | "2021-03-02T17:25:00Z" | "2021-03-02T17:25:00Z" | CONTRIBUTOR | null | This PR adds the Turkish News Categories Dataset (270K - Lite Version), a text classification dataset by me, @basakbuluz and @serdarakyol.
This dataset contains the same news as the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr), but with less information and fewer OCR errors; the news can be easily separated and were rearranged into 10 classes ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem"). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1967/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1967/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1967.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1967",
"merged_at": "2021-03-02T17:25:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1967.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1967"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1966 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1966/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1966/comments | https://api.github.com/repos/huggingface/datasets/issues/1966/events | https://github.com/huggingface/datasets/pull/1966 | 819,101,253 | MDExOlB1bGxSZXF1ZXN0NTgyMjU2MzE0 | 1,966 | Fix metrics collision in separate multiprocessed experiments | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Since the failure was originally intermittent, there is no 100% telling that the problem is gone. \r\nBut if my artificial race condition setup https://github.com/huggingface/datasets/issues/1942#issuecomment-787124529 is to be the litmus test then the problem has been fixed, as with this PR branch that particular race condition is taken care of correctly.\r\n\r\nThank you for taking care of this, @lhoestq - locking can be very tricky to do right!"
] | "2021-03-01T17:45:18Z" | "2021-03-02T13:05:45Z" | "2021-03-02T13:05:44Z" | MEMBER | null | As noticed in #1942 , there's a issue with locks if you run multiple separate evaluation experiments in a multiprocessed setup.
Indeed there is a time span in Metric._finalize() where the process 0 loses its lock before re-acquiring it. This is bad since the lock of the process 0 tells the other process that the corresponding cache file is available for writing/reading/deleting: we end up having one metric cache that collides with another one. This can raise FileNotFound errors when a metric tries to read the cache file and if the second conflicting metric deleted it.
To fix that I made sure that the lock file of the process 0 stays acquired from the cache file creation to the end of the metric computation. This way the other metrics can simply sample a new hashing name in order to avoid the collision.
Finally I added missing tests for separate experiments in distributed setup. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1966/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1966/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1966.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1966",
"merged_at": "2021-03-02T13:05:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1966.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1966"
} | true |
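A minimal sketch of the locking pattern that #1966 describes, using the `filelock` package (which `datasets` uses under the hood); the file names below are hypothetical and this is not the actual `Metric._finalize()` code:

```python
from filelock import FileLock

cache_file = "cache-mrpc-experiment-process0.arrow"  # hypothetical metric cache file
lock = FileLock(cache_file + ".lock")

# Process 0 acquires the lock when the cache file is created and keeps holding it
# until the metric computation is finished. As long as the lock is held, other
# experiments cannot acquire it, so they sample a different cache name instead
# of colliding with (or deleting) this one.
lock.acquire()
try:
    with open(cache_file, "w") as f:
        f.write("predictions and references would be written here")
    # ... gather the cache files of all processes and compute the metric ...
finally:
    lock.release()  # only released once the computation is done
```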
https://api.github.com/repos/huggingface/datasets/issues/1965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1965/comments | https://api.github.com/repos/huggingface/datasets/issues/1965/events | https://github.com/huggingface/datasets/issues/1965 | 818,833,460 | MDU6SXNzdWU4MTg4MzM0NjA= | 1,965 | Can we parallelized the add_faiss_index process over dataset shards ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [] | closed | false | null | [] | null | [
"Hi !\r\nAs far as I know not all faiss indexes can be computed in parallel and then merged. \r\nFor example [here](https://github.com/facebookresearch/faiss/wiki/Special-operations-on-indexes#splitting-and-merging-indexes) is is mentioned that only IndexIVF indexes can be merged.\r\nMoreover faiss already works using multithreading to parallelize the workload over your different CPU cores. You can find more info [here](https://github.com/facebookresearch/faiss/wiki/Threads-and-asynchronous-calls#internal-threading)\r\nSo I feel like the gains we would get by implementing a parallel `add_faiss_index` would not be that important, but let me know what you think.\r\n",
"Actually, you are right. I also had the same idea. I am trying this in the context of end-ton-end retrieval training in RAG. So far I have parallelized the embedding re-computation within the training loop by using datasets shards. \r\n\r\nThen I was thinking of can I calculate the indexes for each shard and combined them with **concatenate** before I save.",
"@lhoestq As you mentioned faiss is already using multiprocessing. I tried to do the add_index with faiss for a dataset object inside a RAY actor and the process became very slow... if fact it takes so much time. It is because a ray actor comes with a single CPU core unless we assign it more. I also tried assigning more cores but still running add_index in the main process is very fast. "
] | "2021-03-01T12:47:34Z" | "2021-03-04T19:40:56Z" | "2021-03-04T19:40:42Z" | NONE | null | I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them before (dataset.concatenate) saving the faiss.index file ?
I feel theoretically this will reduce the accuracy of retrieval since it affects the indexing process.
@lhoestq
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1965/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1965/timeline | null | completed | null | null | false |
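For context on the sharding idea in #1965, a minimal sketch of the existing `Dataset.shard` and `add_faiss_index` APIs; the toy embeddings are hypothetical, `faiss` must be installed, and note that merging per-shard indexes back into one is only supported for certain FAISS index types:

```python
import numpy as np
from datasets import Dataset

# toy dataset with an "embeddings" column of float vectors (hypothetical data)
ds = Dataset.from_dict({
    "text": ["a", "b", "c", "d"],
    "embeddings": np.random.rand(4, 8).astype("float32").tolist(),
})

# split into shards; each shard could in principle be indexed by a separate worker
shards = [ds.shard(num_shards=2, index=i) for i in range(2)]
for shard in shards:
    shard.add_faiss_index(column="embeddings")  # faiss already multithreads internally here

# query one shard's index
query = np.random.rand(8).astype("float32")
scores, examples = shards[0].get_nearest_examples("embeddings", query, k=2)
print(examples["text"])
```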
https://api.github.com/repos/huggingface/datasets/issues/1964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1964/comments | https://api.github.com/repos/huggingface/datasets/issues/1964/events | https://github.com/huggingface/datasets/issues/1964 | 818,624,864 | MDU6SXNzdWU4MTg2MjQ4NjQ= | 1,964 | Datasets.py function load_dataset does not match squad dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/44536699?v=4",
"events_url": "https://api.github.com/users/LeopoldACC/events{/privacy}",
"followers_url": "https://api.github.com/users/LeopoldACC/followers",
"following_url": "https://api.github.com/users/LeopoldACC/following{/other_user}",
"gists_url": "https://api.github.com/users/LeopoldACC/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LeopoldACC",
"id": 44536699,
"login": "LeopoldACC",
"node_id": "MDQ6VXNlcjQ0NTM2Njk5",
"organizations_url": "https://api.github.com/users/LeopoldACC/orgs",
"received_events_url": "https://api.github.com/users/LeopoldACC/received_events",
"repos_url": "https://api.github.com/users/LeopoldACC/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LeopoldACC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeopoldACC/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LeopoldACC"
} | [] | closed | false | null | [] | null | [
"Hi !\r\n\r\nTo fix 1, an you try to run this code ?\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"squad\", download_mode=\"force_redownload\")\r\n```\r\nMaybe the file your downloaded was corrupted, in this case redownloading this way should fix your issue 1.\r\n\r\nRegarding your 2nd point, you're right that loading the raw json this way doesn't give you a dataset with the column \"context\", \"question\" and \"answers\". Indeed the squad format is a very nested format so you have to preprocess the data. You can do it this way:\r\n```python\r\ndef process_squad(examples):\r\n \"\"\"\r\n Process a dataset in the squad format with columns \"title\" and \"paragraphs\"\r\n to return the dataset with columns \"context\", \"question\" and \"answers\".\r\n \"\"\"\r\n out = {\"context\": [], \"question\": [], \"answers\":[]} \r\n for paragraphs in examples[\"paragraphs\"]: \r\n for paragraph in paragraphs: \r\n for qa in paragraph[\"qas\"]: \r\n answers = [{\"answer_start\": answer[\"answer_start\"], \"text\": answer[\"text\"].strip()} for answer in qa[\"answers\"]] \r\n out[\"context\"].append(paragraph[\"context\"].strip()) \r\n out[\"question\"].append(qa[\"question\"].strip()) \r\n out[\"answers\"].append(answers) \r\n return out\r\n\r\ndatasets = load_dataset(extension, data_files=data_files, field=\"data\")\r\ncolumn_names = datasets[\"train\"].column_names\r\n\r\nif set(column_names) == {\"title\", \"paragraphs\"}:\r\n datasets = datasets.map(process_squad, batched=True, remove_columns=column_names)\r\n```\r\n\r\nHope that helps :)",
"Thks for quickly answering!\r\n### 1 I try the first way,but seems not work \r\n```\r\nTraceback (most recent call last):\r\n File \"examples/question-answering/run_qa.py\", line 503, in <module>\r\n main()\r\n File \"examples/question-answering/run_qa.py\", line 218, in main\r\n datasets = load_dataset(data_args.dataset_name, download_mode=\"force_redownload\")\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/load.py\", line 746, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py\", line 573, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py\", line 633, in _download_and_prepare\r\n self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 39, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json']\r\n```\r\n### 2 I try the second way,and run the examples/question-answering/run_qa.py,it lead to another bug orz..\r\n```\r\nTraceback (most recent call last):\r\n File \"examples/question-answering/run_qa.py\", line 523, in <module>\r\n main()\r\n File \"examples/question-answering/run_qa.py\", line 379, in main\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1120, in map\r\n update_data = does_function_return_dict(test_inputs, test_indices)\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1091, in does_function_return_dict\r\n function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"examples/question-answering/run_qa.py\", line 339, in prepare_train_features\r\n if len(answers[\"answer_start\"]) == 0:\r\nTypeError: list indices must be integers or slices, not str\r\n```\r\n## may be the function prepare_train_features in run_qa.py need to fix,I think is that the prep\r\n```python\r\nfor i, offsets in enumerate(offset_mapping):\r\n # We will label impossible answers with the index of the CLS token.\r\n input_ids = tokenized_examples[\"input_ids\"][i]\r\n cls_index = input_ids.index(tokenizer.cls_token_id)\r\n\r\n # Grab the sequence corresponding to that example (to know what is the context and what is the question).\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n\r\n # One example can give several spans, this is the index of the example containing this span of text.\r\n sample_index = sample_mapping[i]\r\n answers = examples[answer_column_name][sample_index]\r\n print(examples,answers)\r\n # If no answers are given, set the cls_index as answer.\r\n if len(answers[\"answer_start\"]) == 0:\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Start/end character index of the answer in the text.\r\n start_char = answers[\"answer_start\"][0]\r\n end_char = start_char + len(answers[\"text\"][0])\r\n\r\n # Start token index of the current span in the 
text.\r\n token_start_index = 0\r\n while sequence_ids[token_start_index] != (1 if pad_on_right else 0):\r\n token_start_index += 1\r\n\r\n # End token index of the current span in the text.\r\n token_end_index = len(input_ids) - 1\r\n while sequence_ids[token_end_index] != (1 if pad_on_right else 0):\r\n token_end_index -= 1\r\n\r\n # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).\r\n if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Otherwise move the token_start_index and token_end_index to the two ends of the answer.\r\n # Note: we could go after the last offset if the answer is the last word (edge case).\r\n while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\r\n token_start_index += 1\r\n tokenized_examples[\"start_positions\"].append(token_start_index - 1)\r\n while offsets[token_end_index][1] >= end_char:\r\n token_end_index -= 1\r\n tokenized_examples[\"end_positions\"].append(token_end_index + 1)\r\n\r\n return tokenized_examples\r\n``` ",
"## I have fixed it, @lhoestq \r\n### the first section change as you said and add [\"id\"]\r\n```python\r\ndef process_squad(examples):\r\n \"\"\"\r\n Process a dataset in the squad format with columns \"title\" and \"paragraphs\"\r\n to return the dataset with columns \"context\", \"question\" and \"answers\".\r\n \"\"\"\r\n # print(examples)\r\n out = {\"context\": [], \"question\": [], \"answers\":[],\"id\":[]} \r\n for paragraphs in examples[\"paragraphs\"]: \r\n for paragraph in paragraphs: \r\n for qa in paragraph[\"qas\"]: \r\n answers = [{\"answer_start\": answer[\"answer_start\"], \"text\": answer[\"text\"].strip()} for answer in qa[\"answers\"]] \r\n out[\"context\"].append(paragraph[\"context\"].strip()) \r\n out[\"question\"].append(qa[\"question\"].strip()) \r\n out[\"answers\"].append(answers) \r\n out[\"id\"].append(qa[\"id\"]) \r\n return out\r\ncolumn_names = datasets[\"train\"].column_names if training_args.do_train else datasets[\"validation\"].column_names\r\n# print(datasets[\"train\"].column_names)\r\nif set(column_names) == {\"title\", \"paragraphs\"}:\r\n datasets = datasets.map(process_squad, batched=True, remove_columns=column_names)\r\n# Preprocessing the datasets.\r\n# Preprocessing is slighlty different for training and evaluation.\r\nif training_args.do_train:\r\n column_names = datasets[\"train\"].column_names\r\nelse:\r\n column_names = datasets[\"validation\"].column_names\r\n# print(column_names)\r\nquestion_column_name = \"question\" if \"question\" in column_names else column_names[0]\r\ncontext_column_name = \"context\" if \"context\" in column_names else column_names[1]\r\nanswer_column_name = \"answers\" if \"answers\" in column_names else column_names[2]\r\n```\r\n### the second section\r\n```python\r\ndef prepare_train_features(examples):\r\n # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results\r\n # in one example possible giving several features when a context is long, each of those features having a\r\n # context that overlaps a bit the context of the previous feature.\r\n tokenized_examples = tokenizer(\r\n examples[question_column_name if pad_on_right else context_column_name],\r\n examples[context_column_name if pad_on_right else question_column_name],\r\n truncation=\"only_second\" if pad_on_right else \"only_first\",\r\n max_length=data_args.max_seq_length,\r\n stride=data_args.doc_stride,\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\" if data_args.pad_to_max_length else False,\r\n )\r\n\r\n # Since one example might give us several features if it has a long context, we need a map from a feature to\r\n # its corresponding example. This key gives us just that.\r\n sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\r\n # The offset mappings will give us a map from token to character position in the original context. 
This will\r\n # help us compute the start_positions and end_positions.\r\n offset_mapping = tokenized_examples.pop(\"offset_mapping\")\r\n\r\n # Let's label those examples!\r\n tokenized_examples[\"start_positions\"] = []\r\n tokenized_examples[\"end_positions\"] = []\r\n\r\n for i, offsets in enumerate(offset_mapping):\r\n # We will label impossible answers with the index of the CLS token.\r\n input_ids = tokenized_examples[\"input_ids\"][i]\r\n cls_index = input_ids.index(tokenizer.cls_token_id)\r\n\r\n # Grab the sequence corresponding to that example (to know what is the context and what is the question).\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n\r\n # One example can give several spans, this is the index of the example containing this span of text.\r\n sample_index = sample_mapping[i]\r\n answers = examples[answer_column_name][sample_index]\r\n # print(examples,answers,offset_mapping,tokenized_examples)\r\n # If no answers are given, set the cls_index as answer.\r\n if len(answers) == 0:#len(answers[\"answer_start\"]) == 0:\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Start/end character index of the answer in the text.\r\n start_char = answers[0][\"answer_start\"]\r\n end_char = start_char + len(answers[0][\"text\"])\r\n\r\n # Start token index of the current span in the text.\r\n token_start_index = 0\r\n while sequence_ids[token_start_index] != (1 if pad_on_right else 0):\r\n token_start_index += 1\r\n\r\n # End token index of the current span in the text.\r\n token_end_index = len(input_ids) - 1\r\n while sequence_ids[token_end_index] != (1 if pad_on_right else 0):\r\n token_end_index -= 1\r\n\r\n # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).\r\n if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Otherwise move the token_start_index and token_end_index to the two ends of the answer.\r\n # Note: we could go after the last offset if the answer is the last word (edge case).\r\n while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\r\n token_start_index += 1\r\n tokenized_examples[\"start_positions\"].append(token_start_index - 1)\r\n while offsets[token_end_index][1] >= end_char:\r\n token_end_index -= 1\r\n tokenized_examples[\"end_positions\"].append(token_end_index + 1)\r\n return tokenized_examples\r\n```",
"I'm glad you managed to fix run_qa.py for your case :)\r\n\r\nRegarding the checksum error, I'm not able to reproduce on my side.\r\nThis errors says that the downloaded file doesn't match the expected file.\r\n\r\nCould you try running this and let me know if you get the same output as me ?\r\n```python\r\nfrom datasets.utils.info_utils import get_size_checksum_dict\r\nfrom datasets import cached_path\r\n\r\nget_size_checksum_dict(cached_path(\"https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json\"))\r\n# {'num_bytes': 30288272, 'checksum': '3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955'}\r\n```",
"I run the code,and it show below:\r\n```\r\n>>> from datasets.utils.info_utils import get_size_checksum_dict\r\n>>> from datasets import cached_path\r\n>>> get_size_checksum_dict(cached_path(\"https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json\"))\r\nDownloading: 30.3MB [04:13, 120kB/s]\r\n{'num_bytes': 30288272, 'checksum': '3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955'}\r\n```",
"Alright ! So in this case redownloading the file with `download_mode=\"force_redownload\"` should fix it. Can you try using `download_mode=\"force_redownload\"` again ?\r\n\r\nNot sure why it didn't work for you the first time though :/"
] | "2021-03-01T08:41:31Z" | "2022-10-05T13:09:47Z" | "2022-10-05T13:09:47Z" | NONE | null | ### 1 When I try to train lxmert,and follow the code in README that --dataset name:
```shell
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /home2/zhenggo1/checkpoint/lxmert_squad
```
the bug is that:
```
Downloading and preparing dataset squad/plain_text (download: 33.51 MiB, generated: 85.75 MiB, post-processed: Unknown size, total: 119.27 MiB) to /home2/zhenggo1/.cache/huggingface/datasets/squad/plain_text/1.0.0/4c81550d83a2ac7c7ce23783bd8ff36642800e6633c1f18417fb58c3ff50cdd7...
Traceback (most recent call last):
File "examples/question-answering/run_qa.py", line 501, in <module>
main()
File "examples/question-answering/run_qa.py", line 217, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset
use_auth_token=use_auth_token,
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 633, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json']
```
And I tried to find the [checksum link](https://github.com/huggingface/datasets/blob/master/datasets/squad/dataset_infos.json); is the problem that plain_text does not have a checksum?
### 2 When I try to train lxmert and use a local dataset:
```
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --train_file $SQUAD_DIR/train-v1.1.json --validation_file $SQUAD_DIR/dev-v1.1.json --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /home2/zhenggo1/checkpoint/lxmert_squad
```
The bug is that
```
['title', 'paragraphs']
Traceback (most recent call last):
File "examples/question-answering/run_qa.py", line 501, in <module>
main()
File "examples/question-answering/run_qa.py", line 273, in main
answer_column_name = "answers" if "answers" in column_names else column_names[2]
IndexError: list index out of range
```
I printed the answer_column_name and found that the local squad dataset needs to be preprocessed with the datasets package so that the code below can work:
```
if training_args.do_train:
column_names = datasets["train"].column_names
else:
column_names = datasets["validation"].column_names
print(datasets["train"].column_names)
question_column_name = "question" if "question" in column_names else column_names[0]
context_column_name = "context" if "context" in column_names else column_names[1]
answer_column_name = "answers" if "answers" in column_names else column_names[2]
```
## Please tell me how to fix the bug, thanks a lot! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1964/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1964/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1963/comments | https://api.github.com/repos/huggingface/datasets/issues/1963/events | https://github.com/huggingface/datasets/issues/1963 | 818,289,967 | MDU6SXNzdWU4MTgyODk5Njc= | 1,963 | bug in SNLI dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | [
"Hi ! The labels -1 correspond to the examples without gold labels in the original snli dataset.\r\nFeel free to remove these examples if you don't need them by using\r\n```python\r\ndata = data.filter(lambda x: x[\"label\"] != -1)\r\n```"
] | "2021-02-28T19:36:20Z" | "2022-10-05T13:13:46Z" | "2022-10-05T13:13:46Z" | NONE | null | Hi
There is a label of -1 in the train set of the SNLI dataset; please find the code below:
```
import numpy as np
import datasets
data = datasets.load_dataset("snli")["train"]
labels = []
for d in data:
labels.append(d["label"])
print(np.unique(labels))
```
and results:
`[-1 0 1 2]`
version of datasets used:
`datasets 1.2.1 <pip>`
thanks for your help. @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1963/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1963/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1962/comments | https://api.github.com/repos/huggingface/datasets/issues/1962/events | https://github.com/huggingface/datasets/pull/1962 | 818,089,156 | MDExOlB1bGxSZXF1ZXN0NTgxNDQwNzM4 | 1,962 | Fix unused arguments | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"@lhoestq Re-added the arg. The ConnectionError in CI seems unrelated to this PR (the same test fails on master as well).",
"Thanks !\r\nI'm re-running the CI, maybe this was an issue with circleCI",
"Looks all good now, merged :)"
] | "2021-02-28T02:47:07Z" | "2021-03-11T02:18:17Z" | "2021-03-03T16:37:50Z" | CONTRIBUTOR | null | Noticed some args in the codebase are not used, so managed to find all such occurrences with Pylance and fix them. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1962/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1962/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1962.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1962",
"merged_at": "2021-03-03T16:37:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1962.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1962"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1961/comments | https://api.github.com/repos/huggingface/datasets/issues/1961/events | https://github.com/huggingface/datasets/pull/1961 | 818,077,947 | MDExOlB1bGxSZXF1ZXN0NTgxNDM3NDI0 | 1,961 | Add sst dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patpizio",
"id": 15801338,
"login": "patpizio",
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"repos_url": "https://api.github.com/users/patpizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patpizio"
} | [] | closed | false | null | [] | null | [] | "2021-02-28T02:08:29Z" | "2021-03-04T10:38:53Z" | "2021-03-04T10:38:53Z" | CONTRIBUTOR | null | Related to #1934—Add the Stanford Sentiment Treebank dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1961/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1961/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1961.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1961",
"merged_at": "2021-03-04T10:38:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1961.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1961"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1960/comments | https://api.github.com/repos/huggingface/datasets/issues/1960/events | https://github.com/huggingface/datasets/pull/1960 | 818,073,154 | MDExOlB1bGxSZXF1ZXN0NTgxNDMzOTY4 | 1,960 | Allow stateful function in dataset.map | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"@lhoestq Added a test. If you can come up with a better stateful callable, I'm all ears 😄. ",
"Sorry I said earlier that it was good to have it inside the loop, my mistake !",
"@lhoestq Okay, did some refactoring and now the \"cache\" part comes before the for loop. Thanks for the guidance.\r\n\r\nThink this is ready for the final review."
] | "2021-02-28T01:29:05Z" | "2021-03-23T15:26:49Z" | "2021-03-23T15:26:49Z" | CONTRIBUTOR | null | Removes the "test type" section in Dataset.map which would modify the state of the stateful function. Now, the return type of the map function is inferred after processing the first example.
Fixes #1940
@lhoestq Not very happy with the usage of `nonlocal`. Would like to hear your opinion on this. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1960/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1960/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1960.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1960",
"merged_at": "2021-03-23T15:26:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1960.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1960"
} | true |
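A minimal, hypothetical example of the kind of stateful callable that #1960 is about: before this fix, `Dataset.map` called the function once on a test example to infer its return type, which silently advanced state such as the counter below.

```python
from datasets import Dataset

class Enumerator:
    """Stateful callable that assigns an increasing id to each example it sees."""
    def __init__(self):
        self.counter = 0

    def __call__(self, example):
        example["example_id"] = self.counter
        self.counter += 1
        return example

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
ds = ds.map(Enumerator())
print(ds["example_id"])  # expected [0, 1, 2] once the extra "test type" call is removed
```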
https://api.github.com/repos/huggingface/datasets/issues/1959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1959/comments | https://api.github.com/repos/huggingface/datasets/issues/1959/events | https://github.com/huggingface/datasets/issues/1959 | 818,055,644 | MDU6SXNzdWU4MTgwNTU2NDQ= | 1,959 | Bug in skip_rows argument of load_dataset function ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/73159756?v=4",
"events_url": "https://api.github.com/users/LedaguenelArthur/events{/privacy}",
"followers_url": "https://api.github.com/users/LedaguenelArthur/followers",
"following_url": "https://api.github.com/users/LedaguenelArthur/following{/other_user}",
"gists_url": "https://api.github.com/users/LedaguenelArthur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LedaguenelArthur",
"id": 73159756,
"login": "LedaguenelArthur",
"node_id": "MDQ6VXNlcjczMTU5NzU2",
"organizations_url": "https://api.github.com/users/LedaguenelArthur/orgs",
"received_events_url": "https://api.github.com/users/LedaguenelArthur/received_events",
"repos_url": "https://api.github.com/users/LedaguenelArthur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LedaguenelArthur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LedaguenelArthur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LedaguenelArthur"
} | [] | closed | false | null | [] | null | [
"Hi,\r\n\r\ntry `skiprows` instead. This part is not properly documented in the docs it seems.\r\n\r\n@lhoestq I'll fix this as part of a bigger PR that fixes typos in the docs."
] | "2021-02-27T23:32:54Z" | "2021-03-09T10:21:32Z" | "2021-03-09T10:21:32Z" | NONE | null | Hello everyone,
I'm quite new to Git so sorry in advance if I'm breaking some ground rules of issues posting... :/
I tried to use the load_dataset function from the Huggingface datasets library on a csv file, using the skip_rows argument described on the Huggingface documentation page to skip the first row containing the column names:
`test_dataset = load_dataset('csv', data_files=['test_wLabel.tsv'], delimiter='\t', column_names=["id", "sentence", "label"], skip_rows=1)`
But I got the following error message
`__init__() got an unexpected keyword argument 'skip_rows'`
Have I used the wrong argument? Am I missing something, or is this a bug?
Thank you very much for your time,
Best regards,
Arthur | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1959/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1959/timeline | null | completed | null | null | false |
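A minimal sketch of the corrected call suggested in the reply to #1959: at the time, the csv loader forwarded `pandas.read_csv` keyword arguments, so the argument name is `skiprows`, not `skip_rows`. The file path is the user's own and must exist locally.

```python
from datasets import load_dataset

test_dataset = load_dataset(
    "csv",
    data_files=["test_wLabel.tsv"],            # user's local file
    delimiter="\t",
    column_names=["id", "sentence", "label"],
    skiprows=1,                                 # note: skiprows, not skip_rows
)
```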
https://api.github.com/repos/huggingface/datasets/issues/1958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1958/comments | https://api.github.com/repos/huggingface/datasets/issues/1958/events | https://github.com/huggingface/datasets/issues/1958 | 818,037,548 | MDU6SXNzdWU4MTgwMzc1NDg= | 1,958 | XSum dataset download link broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/1156974?v=4",
"events_url": "https://api.github.com/users/himat/events{/privacy}",
"followers_url": "https://api.github.com/users/himat/followers",
"following_url": "https://api.github.com/users/himat/following{/other_user}",
"gists_url": "https://api.github.com/users/himat/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/himat",
"id": 1156974,
"login": "himat",
"node_id": "MDQ6VXNlcjExNTY5NzQ=",
"organizations_url": "https://api.github.com/users/himat/orgs",
"received_events_url": "https://api.github.com/users/himat/received_events",
"repos_url": "https://api.github.com/users/himat/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/himat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/himat/subscriptions",
"type": "User",
"url": "https://api.github.com/users/himat"
} | [] | closed | false | null | [] | null | [
"Never mind, I ran it again and it worked this time. Strange."
] | "2021-02-27T21:47:56Z" | "2021-02-27T21:50:16Z" | "2021-02-27T21:50:16Z" | NONE | null | I did
```
from datasets import load_dataset
dataset = load_dataset("xsum")
```
This returns
`ConnectionError: Couldn't reach http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1958/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1958/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1956/comments | https://api.github.com/repos/huggingface/datasets/issues/1956/events | https://github.com/huggingface/datasets/issues/1956 | 818,013,741 | MDU6SXNzdWU4MTgwMTM3NDE= | 1,956 | [distributed env] potentially unsafe parallel execution | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | [
"You can pass the same `experiment_id` for all the metrics of the same group, and use another `experiment_id` for the other groups.\r\nMaybe we can add an environment variable that sets the default value for `experiment_id` ? What do you think ?",
"Ah, you're absolutely correct, @lhoestq - it's exactly the equivalent of the shared secret. Thank you!"
] | "2021-02-27T20:38:45Z" | "2021-03-01T17:24:42Z" | "2021-03-01T17:24:42Z" | MEMBER | null | ```
metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank)
```
presumes that there is only one set of parallel processes running - and will intermittently fail if you have multiple sets running as they will surely overwrite each other. Similar to https://github.com/huggingface/datasets/issues/1942 (but for a different reason).
That's why distributed environments use an identifier that is unique to each group, so that each group is dealt with separately.
e.g. the env-based way of pytorch distributed syncing is done with a `MASTER_ADDRESS+MASTER_PORT` pair that is unique to each set.
So ideally this interface should ask for a shared secret to do the right thing.
I'm not reporting an immediate need, but am only flagging that this will hit someone down the road.
This problem can be remedied by adding a new optional `shared_secret` option, which can then be used to differentiate different groups of processes, and this secret should be part of the file lock name and the experiment.
Thank you | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1956/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1956/timeline | null | completed | null | null | false |
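A minimal sketch of the workaround discussed in #1956: each group of processes passes its own `experiment_id` (an existing parameter of `load_metric`), which plays the role of the shared secret. The group name and ranks below are hypothetical.

```python
from datasets import load_metric

num_process = 2   # size of this particular group of processes
rank = 0          # this process' rank within the group

# All processes of the same experiment share one experiment_id; an independent,
# concurrently running experiment would pass a different one, so their metric
# cache files and lock files cannot collide.
metric = load_metric(
    "glue", "mrpc",
    num_process=num_process,
    process_id=rank,
    experiment_id="mrpc-eval-group-1",
)
```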
https://api.github.com/repos/huggingface/datasets/issues/1955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1955/comments | https://api.github.com/repos/huggingface/datasets/issues/1955/events | https://github.com/huggingface/datasets/pull/1955 | 818,010,664 | MDExOlB1bGxSZXF1ZXN0NTgxMzk2OTA5 | 1,955 | typos + grammar | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | [] | "2021-02-27T20:21:43Z" | "2021-03-01T17:20:38Z" | "2021-03-01T14:43:19Z" | MEMBER | null | This PR proposes a few typo + grammar fixes, and rewrites some sentences in an attempt to improve readability.
N.B. When referring to the library `datasets` in the docs it is typically used as a singular, and it definitely is a singular when written as "`datasets` library", that is "`datasets` library is ..." and not "are ...". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1955/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1955/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1955.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1955",
"merged_at": "2021-03-01T14:43:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1955.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1955"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1954/comments | https://api.github.com/repos/huggingface/datasets/issues/1954/events | https://github.com/huggingface/datasets/issues/1954 | 817,565,563 | MDU6SXNzdWU4MTc1NjU1NjM= | 1,954 | add a new column | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Hi\r\nnot sure how change the lable after creation, but this is an issue not dataset request. thanks ",
"Hi ! Currently you have to use `map` . You can see an example of how to do it in this comment: https://github.com/huggingface/datasets/issues/853#issuecomment-727872188\r\n\r\nIn the future we'll add support for a more native way of adding a new column ;)"
] | "2021-02-26T18:17:27Z" | "2021-04-29T14:50:43Z" | "2021-04-29T14:50:43Z" | NONE | null | Hi
I need to add a new column to the dataset; I was wondering how this can be done? Thanks
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1954/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1954/timeline | null | completed | null | null | false |
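A minimal sketch of the `map`-based approach pointed to in the reply on #1954; the column and its content are hypothetical.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["short", "a longer sentence"]})

# any new key returned by the mapped function becomes a new column
ds = ds.map(lambda example: {"n_chars": len(example["text"])})
print(ds.column_names)  # ['text', 'n_chars']
```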
https://api.github.com/repos/huggingface/datasets/issues/1953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1953/comments | https://api.github.com/repos/huggingface/datasets/issues/1953/events | https://github.com/huggingface/datasets/pull/1953 | 817,498,869 | MDExOlB1bGxSZXF1ZXN0NTgwOTgyMDMz | 1,953 | Documentation for to_csv, to_pandas and to_dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-26T16:35:49Z" | "2021-03-01T14:03:48Z" | "2021-03-01T14:03:47Z" | MEMBER | null | I added these methods to the documentation with a small paragraph.
I also fixed some formatting issues in the docstrings. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1953/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1953/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1953.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1953",
"merged_at": "2021-03-01T14:03:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1953.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1953"
} | true |
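The three export methods documented in #1953, shown in a minimal sketch (the toy data and output path are hypothetical):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": ["x", "y", "z"]})

ds.to_csv("my_dataset.csv")   # writes the dataset to a csv file
df = ds.to_pandas()           # pandas.DataFrame with the same columns
d = ds.to_dict()              # {'a': [1, 2, 3], 'b': ['x', 'y', 'z']}
```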
https://api.github.com/repos/huggingface/datasets/issues/1952 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1952/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1952/comments | https://api.github.com/repos/huggingface/datasets/issues/1952/events | https://github.com/huggingface/datasets/pull/1952 | 817,428,160 | MDExOlB1bGxSZXF1ZXN0NTgwOTIyNjQw | 1,952 | Handle timeouts | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"I never said the calls were hanging indefinitely, what we need is quite different - in the firewalled env with a network, there should be no network calls or they should fail instantly.\r\n\r\nTo make this work I suppose on top of this PR we need:\r\n1. `DATASETS_OFFLINE` env var to force set timeout to 0 globally (or to 0.0001 if 0 has a special meaning of no timeout)\r\n2. `DATASETS_OFFLINE` should guard against failing network calls and not fail the program if it has all the data it needs locally.\r\n\r\nBottom line - if the logic wants to check online if the local file matches online dataset name, let it go wild, but it should fail instantly, recover and use the local file - if one is specified explicitly or cache if there is one. And only if neither was found only then assert.\r\n\r\nI hope this makes sense and is doable.\r\n\r\nI have started on the same approach for transformers https://github.com/huggingface/transformers/pull/10407\r\n\r\nThank you, @lhoestq ",
"Yes that was the first step to add DATASETS_OFFLINE :)\r\n\r\nWith this PR, if a request times out (which couldn't happen before because no time out was set), it falls back on the local files with no error.\r\n\r\nAs you said, setting the timeout to something like 1e-16 makes the requests fail instantly, which is one step forward. One last thing left is to disable request retries and everything will be instant !",
"Ah, fantastic. Thank you for elucidating that this PR is part of a bigger master plan! ",
"Merging this one, then I'll open a new PR for the `DATASETS_OFFLINE` env var :)"
] | "2021-02-26T15:02:07Z" | "2021-03-01T14:29:24Z" | "2021-03-01T14:29:24Z" | MEMBER | null | As noticed in https://github.com/huggingface/datasets/issues/1939, timeouts were not properly handled when loading a dataset.
This caused the connection to hang indefinitely when working in a firewalled environment (cc @stas00).
I added a default timeout, and included an option to our offline environment for tests to be able to simulate both connection errors and timeout errors (previously it was simulating connection errors only).
Now network calls don't hang indefinitely.
The default timeout is set to 10sec (we might reduce it). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1952/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1952/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1952.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1952",
"merged_at": "2021-03-01T14:29:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1952.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1952"
} | true |
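A generic sketch of the timeout-then-fallback behaviour described in #1952, written directly with `requests`; the URL, timeout value and cache path are hypothetical and this is not the library's actual implementation.

```python
import requests

url = "https://huggingface.co/some/remote/file"   # hypothetical remote resource
local_cache = "/path/to/cached/file"              # hypothetical cached copy

try:
    response = requests.get(url, timeout=10.0)    # fail fast instead of hanging forever
    response.raise_for_status()
    content = response.content
except (requests.exceptions.Timeout, requests.exceptions.ConnectionError):
    # offline or firewalled environment: fall back on the locally cached copy
    with open(local_cache, "rb") as f:
        content = f.read()
```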
https://api.github.com/repos/huggingface/datasets/issues/1951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1951/comments | https://api.github.com/repos/huggingface/datasets/issues/1951/events | https://github.com/huggingface/datasets/pull/1951 | 817,423,573 | MDExOlB1bGxSZXF1ZXN0NTgwOTE4ODE2 | 1,951 | Add cross-platform support for datasets-cli | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"@mariosasko This is kinda cool! "
] | "2021-02-26T14:56:25Z" | "2021-03-11T02:18:26Z" | "2021-02-26T15:30:26Z" | CONTRIBUTOR | null | One thing I've noticed while going through the codebase is the usage of `scripts` in `setup.py`. This [answer](https://stackoverflow.com/a/28119736/14095927) on SO explains it nicely why it's better to use `entry_points` instead of `scripts`. To add cross-platform support to the CLI, this PR replaces `scripts` with `entry_points` in `setup.py` and moves datasets-cli to src/datasets/commands/datasets_cli.py. All *.md and *.rst files are updated accordingly. The same changes were made in the transformers repo to add cross-platform ([link to PR](https://github.com/huggingface/transformers/pull/4131)). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1951/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1951/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1951.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1951",
"merged_at": "2021-02-26T15:30:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1951.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1951"
} | true |
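For readers unfamiliar with the `scripts` vs `entry_points` distinction discussed in the PR above, the general pattern looks roughly like the following sketch; the package and function names are placeholders, not the actual `datasets` configuration.

```python
from setuptools import setup

setup(
    name="example-package",          # placeholder project name
    packages=["example_package"],    # placeholder package
    entry_points={
        # console_scripts generates a cross-platform launcher (an .exe shim on Windows),
        # unlike scripts=[...], which only copies a file into the bin directory.
        "console_scripts": [
            "example-cli=example_package.cli:main",
        ],
    },
)
```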
https://api.github.com/repos/huggingface/datasets/issues/1950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1950/comments | https://api.github.com/repos/huggingface/datasets/issues/1950/events | https://github.com/huggingface/datasets/pull/1950 | 817,295,235 | MDExOlB1bGxSZXF1ZXN0NTgwODExMjMz | 1,950 | updated multi_nli dataset with missing fields | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | [] | "2021-02-26T11:54:36Z" | "2021-03-01T11:08:30Z" | "2021-03-01T11:08:29Z" | CONTRIBUTOR | null | 1) updated fields which were missing earlier
2) added tags to README
3) updated a few fields of README
4) new dataset_infos.json and dummy files | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1950/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1950/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1950.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1950",
"merged_at": "2021-03-01T11:08:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1950.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1950"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1949/comments | https://api.github.com/repos/huggingface/datasets/issues/1949/events | https://github.com/huggingface/datasets/issues/1949 | 816,986,936 | MDU6SXNzdWU4MTY5ODY5MzY= | 1,949 | Enable Fast Filtering using Arrow Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | open | false | null | [] | null | [
"Hi @gchhablani :)\r\nThanks for proposing your help !\r\n\r\nI'll be doing a refactor of some parts related to filtering in the scope of https://github.com/huggingface/datasets/issues/1877\r\nSo I would first wait for this refactor to be done before working on the filtering. In particular because I plan to make things simpler to manipulate.\r\n\r\nYour feedback on this refactor would also be appreciated since it also aims at making the core code more accessible (basically my goal is that no one's ever \"having troubles getting started\" ^^)\r\n\r\nThis will be available in a few days, I will be able to give you more details at that time if you don't mind waiting a bit !",
"Sure! I don't mind waiting. I'll check the refactor and try to understand what you're trying to do :)"
] | "2021-02-26T02:53:37Z" | "2021-02-26T19:18:29Z" | null | CONTRIBUTOR | null | Hi @lhoestq,
As mentioned in Issue #1796, I would love to work on enabling fast filtering/mapping. Can you please share the expectations? It would be great if you could point me to the relevant methods/files involved, or the docs, or maybe an overview of `arrow_dataset.py`. I only ask this because I am having trouble getting started ;-;
Any help would be appreciated.
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1949/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1949/timeline | null | null | null | null | false |
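As background for the fast-filtering discussion in the issue above, filtering at the Arrow level is typically done with a boolean mask computed by `pyarrow.compute`. The snippet below is a rough sketch of that general idea, not the eventual `datasets` implementation.

```python
import pyarrow as pa
import pyarrow.compute as pc

table = pa.table({"label": [0, 1, 1, 0], "text": ["a", "b", "c", "d"]})

mask = pc.equal(table["label"], 1)   # vectorized predicate evaluated by Arrow
filtered = table.filter(mask)        # selects matching rows without a Python loop

print(filtered.to_pydict())          # {'label': [1, 1], 'text': ['b', 'c']}
```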
https://api.github.com/repos/huggingface/datasets/issues/1948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1948/comments | https://api.github.com/repos/huggingface/datasets/issues/1948/events | https://github.com/huggingface/datasets/issues/1948 | 816,689,329 | MDU6SXNzdWU4MTY2ODkzMjk= | 1,948 | dataset loading logger level | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | [
"These warnings are showed when there's a call to `.map` to say to the user that a dataset is reloaded from the cache instead of being recomputed.\r\nThey are warnings since we want to make sure the users know that it's not recomputed.",
"Thank you for explaining the intention, @lhoestq \r\n\r\n1. Could it be then made more human-friendly? Currently the hex gibberish tells me nothing of what's really going on. e.g. the following is instructive, IMHO:\r\n\r\n```\r\nWARNING: wmt16/ro-en/train dataset was loaded from cache instead of being recomputed\r\nWARNING: wmt16/ro-en/validation dataset was loaded from cache instead of being recomputed\r\nWARNING: wmt16/ro-en/test dataset was loaded from cache instead of being recomputed\r\n```\r\nnote that it removes the not so useful hex info and tells the user instead which split it's referring to - but probably no harm in keeping the path if it helps the debug. But the key is that now the warning is telling me what it is it's warning me about.\r\n```\r\nWarning:Loading cache path\r\n```\r\non the other hand isn't telling what it is warning about.\r\n\r\nAnd I still suggest this is INFO level, otherwise you need to turn all 'using cache' statements to WARNING to be consistent. The user is most likely well aware the cache is used for models, etc. So this feels very similar.\r\n\r\n2. Should there be a way for a user to void warranty by having a flag - `I know I'm expecting the cached version to load if it's available - please do not warn me about it=True`\r\n\r\nTo explain the need: Warnings are a problem, they constantly take attention away because they could be the harbinger of a problem. Therefore I prefer not to have any warnings in the log, and if I get any I usually try to deal with those so that my log is clean. \r\n\r\nIt's less of an issue for somebody doing long runs. It's a huge issue for someone who does a new run every few minutes and on the lookout for any potential problems which is what I have been doing a lot of integrating DeepSpeed and other things. And since there are already problems to deal with during the integration it's nice to have a clean log to start with. \r\n\r\nI hope my need is not unreasonable and I was able to explain it adequately. \r\n\r\nThank you.",
"Hey, any news about the issue? So many warnings when I'm really ok with the dataset not being recomputed :)"
] | "2021-02-25T18:33:37Z" | "2023-07-12T17:19:30Z" | "2023-07-12T17:19:30Z" | MEMBER | null | on master I get this with `--dataset_name wmt16 --dataset_config ro-en`:
```
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42e26.arrow
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-ac3bebaf4f91f776.arrow
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-810c3e61259d73a9.arrow
```
Why are those WARNINGs? Shouldn't they be INFO?
Warnings should only be used when a user needs to pay attention to something; this is just informative - I'd even say it should be DEBUG, but definitely not WARNING.
Thank you.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1948/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1948/timeline | null | completed | null | null | false |
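Until the level of these messages changes, a common way to silence them is to raise the logging threshold. The sketch below shows two options; the library helper location may vary across `datasets` versions, while the stdlib approach only assumes the logger name visible in the output quoted above.

```python
import logging

# Option 1 (stdlib only): target the logger that emits the cache-reuse messages.
logging.getLogger("datasets.arrow_dataset").setLevel(logging.ERROR)

# Option 2 (library helper, if available in your version of datasets):
from datasets.utils.logging import set_verbosity_error
set_verbosity_error()
```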
https://api.github.com/repos/huggingface/datasets/issues/1947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1947/comments | https://api.github.com/repos/huggingface/datasets/issues/1947/events | https://github.com/huggingface/datasets/pull/1947 | 816,590,299 | MDExOlB1bGxSZXF1ZXN0NTgwMjI2MDk5 | 1,947 | Update documentation with not in place transforms and update DatasetDict | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-25T16:23:18Z" | "2021-03-01T14:36:54Z" | "2021-03-01T14:36:53Z" | MEMBER | null | In #1883 were added the not in-place transforms `flatten`, `remove_columns`, `rename_column` and `cast`.
I added them to the documentation and added a paragraph on how to use them
You can preview the documentation [here](https://28862-250213286-gh.circle-artifacts.com/0/docs/_build/html/processing.html#renaming-removing-casting-and-flattening-columns)
I also added these methods to the DatasetDict class. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1947/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1947/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1947.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1947",
"merged_at": "2021-03-01T14:36:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1947.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1947"
} | true |
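A short usage sketch of the not-in-place transforms documented by the PR above; the dataset name is chosen arbitrarily and exact signatures may differ between versions.

```python
from datasets import load_dataset

ds = load_dataset("squad", split="validation")

# Each call returns a *new* Dataset; the original object is left untouched.
ds = ds.flatten()                             # un-nests nested columns such as "answers"
ds = ds.remove_columns(["id"])                # drop a column
ds = ds.rename_column("title", "doc_title")   # rename a column
# cast() works similarly but takes a full Features object describing the new types.

print(ds.column_names)
```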
https://api.github.com/repos/huggingface/datasets/issues/1946 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1946/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1946/comments | https://api.github.com/repos/huggingface/datasets/issues/1946/events | https://github.com/huggingface/datasets/pull/1946 | 816,526,294 | MDExOlB1bGxSZXF1ZXN0NTgwMTcyNzI2 | 1,946 | Implement Dataset from CSV | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"@lhoestq question about public API: `keep_in_memory` or just `in_memory`?",
"For consistence I'd say `keep_in_memory`, but no strong opinion.",
"@lhoestq done!"
] | "2021-02-25T15:10:13Z" | "2021-03-12T09:42:48Z" | "2021-03-12T09:42:48Z" | MEMBER | null | Implement `Dataset.from_csv`.
Analogous to #1943.
If, in the end, the scripts should be used instead, at least we can reuse the tests here. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1946/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1946/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1946.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1946",
"merged_at": "2021-03-12T09:42:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1946.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1946"
} | true |
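A hedged sketch of the API this PR introduces, using a placeholder file path; the `keep_in_memory` parameter name follows the naming discussion in the comments above.

```python
from datasets import Dataset

# "data/train.csv" is a placeholder path for illustration.
ds = Dataset.from_csv("data/train.csv", keep_in_memory=False)
print(ds.column_names)
```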
https://api.github.com/repos/huggingface/datasets/issues/1945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1945/comments | https://api.github.com/repos/huggingface/datasets/issues/1945/events | https://github.com/huggingface/datasets/issues/1945 | 816,421,966 | MDU6SXNzdWU4MTY0MjE5NjY= | 1,945 | AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets' | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | [
"sorry my mistake, datasets were overwritten closing now, thanks a lot"
] | "2021-02-25T13:09:45Z" | "2021-02-25T13:20:35Z" | "2021-02-25T13:20:26Z" | NONE | null | Hi
I am trying to concatenate a list of Hugging Face datasets as:
`train_dataset = datasets.concatenate_datasets(train_datasets)`
Here is the `train_datasets` when I print:
```
[Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 120361
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2670
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 6944
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 38140
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 173711
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 1655
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 4274
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2019
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2109
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 11963
})]
```
I am getting the following error:
`AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets'`
I was wondering if you could help me with this issue, thanks a lot | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1945/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1945/timeline | null | completed | null | null | false |
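For reference, the call the issue above was aiming for is the module-level `datasets.concatenate_datasets`, which takes a list of `Dataset` objects with matching features (not a `DatasetDict`). A minimal sketch with an arbitrarily chosen dataset:

```python
from datasets import load_dataset, concatenate_datasets

# Two splits of the same dataset share identical features, so they can be concatenated.
train = load_dataset("glue", "mrpc", split="train")
validation = load_dataset("glue", "mrpc", split="validation")

combined = concatenate_datasets([train, validation])
assert len(combined) == len(train) + len(validation)
```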
https://api.github.com/repos/huggingface/datasets/issues/1944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1944/comments | https://api.github.com/repos/huggingface/datasets/issues/1944/events | https://github.com/huggingface/datasets/pull/1944 | 816,267,216 | MDExOlB1bGxSZXF1ZXN0NTc5OTU2Nzc3 | 1,944 | Add Turkish News Category Dataset (270K - Lite Version) | {
"avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4",
"events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}",
"followers_url": "https://api.github.com/users/yavuzKomecoglu/followers",
"following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yavuzKomecoglu",
"id": 5150963,
"login": "yavuzKomecoglu",
"node_id": "MDQ6VXNlcjUxNTA5NjM=",
"organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs",
"received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events",
"repos_url": "https://api.github.com/users/yavuzKomecoglu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yavuzKomecoglu"
} | [] | closed | false | null | [] | null | [
"I updated your suggestions. Thank you very much for your support. @lhoestq ",
"> Thanks for changing to ClassLabel :)\r\n> This is all good now !\r\n> \r\n> However I can see changes in other files than the ones for interpress_news_category_tr_lite, can you please fix that ?\r\n> To do so you can create another branch and another PR to only include the interpress_news_category_tr_lite files.\r\n> \r\n> Maybe this happened because of a git rebase ? Once you've already pushed your code, please use git merge instead of rebase in order to avoid this.\r\n\r\nThanks for the feedback.\r\nNew PR https://github.com/huggingface/datasets/pull/1967"
] | "2021-02-25T09:45:22Z" | "2021-03-02T17:46:41Z" | "2021-03-01T18:23:21Z" | CONTRIBUTOR | null | This PR adds the Turkish News Categories Dataset (270K - Lite Version) dataset which is a text classification dataset by me, @basakbuluz and @serdarakyol.
This dataset contains the same news from the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr) but contains less information, OCR errors are reduced, can be easily separated, and can be divided into 10 classes ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem") were rearranged.
@SBrandeis @lhoestq, can you please review this PR?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1944/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1944/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1944.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1944",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1944.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1944"
} | true |
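The ClassLabel change mentioned in the review comments above would look roughly like the features definition below, using the 10 category names from the PR description; the field names are placeholders and the actual loading script may differ.

```python
from datasets import ClassLabel, Features, Value

features = Features({
    "content": Value("string"),          # placeholder field name for the article text
    "category": ClassLabel(names=[
        "kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya",
        "spor", "teknoloji", "magazin", "sağlık", "gündem",
    ]),
})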
https://api.github.com/repos/huggingface/datasets/issues/1943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1943/comments | https://api.github.com/repos/huggingface/datasets/issues/1943/events | https://github.com/huggingface/datasets/pull/1943 | 816,160,453 | MDExOlB1bGxSZXF1ZXN0NTc5ODY5NTk0 | 1,943 | Implement Dataset from JSON and JSON Lines | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Thanks @lhoestq. I was trying to follow @thomwolf suggestion about integrating that script but as `from_json` method...\r\n> Note that I don't think this is necessary a breaking change, we can still keep the old scripts around\r\n\r\nDo you think there is a better way of doing it?\r\n\r\nI was trying to implement more or less the same logic as in the script, but I confess I assumed the target was in-memory only...",
"Basically, I was trying to reimplement `Json(datasets.ArrowBasedBuilder)._generate_tables`, and no writing to arrow file (I assumed only in-memory usage). I started with the first \"else\" clause... \r\n\r\nI was planning to remove my `_cast_table_to_info_features` and use `paj.read_json(parse_options=...)` instead (like in the script).",
"@lhoestq I am wondering why `keep_in_memory` has no effect for JSON...",
"What's the issue exactly ? Apparently it's correctly passed to as_dataset so I don't find the issue",
"Nevermind @lhoestq, I found where the problem was in my code... I push!",
"<s>merging master into this branch should fix the CI issue :)</s>\r\n\r\nOops I didn't refresh the page sorry ^^'\r\n\r\nLooks all good !",
"Good job ! I think we can merge after the last changes regarding the error message and the docstring above :)",
"@lhoestq Done! And I have also added some tests for the `field` parameter.",
"Let me add some more tests for dict of lists JSON file, please.",
"@lhoestq done! ;)",
"We can merge. Additional work will be done in another PR. ;)"
] | "2021-02-25T07:17:33Z" | "2021-03-18T09:42:08Z" | "2021-03-18T09:42:08Z" | MEMBER | null | Implement `Dataset.from_jsonl`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1943/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1943/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1943.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1943",
"merged_at": "2021-03-18T09:42:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1943.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1943"
} | true |
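A hedged usage sketch of the method implemented in the PR above, including the `field` parameter discussed in the review thread; file paths are placeholders.

```python
from datasets import Dataset

# JSON Lines file: one record per line.
ds = Dataset.from_json("data/train.jsonl")

# Plain JSON where the records sit under a top-level key, e.g. {"data": [...]}.
ds = Dataset.from_json("data/train.json", field="data")
```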
https://api.github.com/repos/huggingface/datasets/issues/1942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1942/comments | https://api.github.com/repos/huggingface/datasets/issues/1942/events | https://github.com/huggingface/datasets/issues/1942 | 816,037,520 | MDU6SXNzdWU4MTYwMzc1MjA= | 1,942 | [experiment] missing default_experiment-1-0.arrow | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Hi !\r\n\r\nThe cache at `~/.cache/huggingface/metrics` stores the users data for metrics computations (hence the arrow files).\r\n\r\nHowever python modules (i.e. dataset scripts, metric scripts) are stored in `~/.cache/huggingface/modules/datasets_modules`.\r\n\r\nIn particular the metrics are cached in `~/.cache/huggingface/modules/datasets_modules/metrics/`\r\n\r\nFeel free to take a look at your cache and let me know if you find any issue that would help explaining why you had an issue with `rouge` with no connection. I'm doing some tests on my side to try to reproduce the issue you have\r\n",
"Thank you for clarifying that the metrics files are to be found elsewhere, @lhoestq \r\n\r\n> The cache at ~/.cache/huggingface/metrics stores the users data for metrics computations (hence the arrow files).\r\n\r\ncould it be renamed to reflect that? otherwise it misleadingly suggests that it's the metrics. Perhaps `~/.cache/huggingface/metrics-user-data`?\r\n\r\nAnd there are so many `.lock` files w/o corresponding files under `~/.cache/huggingface/metrics/`. Why are they there? \r\n\r\nfor example after I wipe out the dir completely and do one training I end up with:\r\n```\r\n~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock\r\n```\r\nwhat is that lock file locking when nothing is running?",
"The lock files come from an issue with filelock (see comment in the code [here](https://github.com/benediktschmitt/py-filelock/blob/master/filelock.py#L394-L398)). Basically on unix there're always .lock files left behind. I haven't dove into this issue",
"are you sure you need an external lock file? if it's a single purpose locking in the same scope you can lock the caller `__file__` instead, e.g. here is how one can `flock` the script file itself to ensure atomic printing:\r\n\r\n```\r\nimport fcntl\r\ndef printflock(*msgs):\r\n \"\"\" print in multiprocess env so that the outputs from different processes don't get interleaved \"\"\"\r\n with open(__file__, \"r\") as fh:\r\n fcntl.flock(fh, fcntl.LOCK_EX)\r\n try:\r\n print(*msgs)\r\n finally:\r\n fcntl.flock(fh, fcntl.LOCK_UN)\r\n```\r\n",
"OK, this issue is not about caching but some internal conflict/race condition it seems, I have just run into it on my normal env:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 356, in _finalize\r\n self.data = Dataset(**reader.read_files([{\"filename\": f} for f in file_paths]))\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 236, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 171, in _read_files\r\n pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 302, in _get_dataset_from_filename\r\n pa_table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 322, in read_table\r\n stream = stream_from(filename)\r\n File \"pyarrow/io.pxi\", line 782, in pyarrow.lib.memory_map\r\n File \"pyarrow/io.pxi\", line 743, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 97, in pyarrow.lib.check_status\r\nFileNotFoundError: [Errno 2] Failed to open local file '/home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"examples/seq2seq/run_seq2seq.py\", line 655, in <module>\r\n main()\r\n File \"examples/seq2seq/run_seq2seq.py\", line 619, in main\r\n test_results = trainer.predict(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer_seq2seq.py\", line 121, in predict\r\n return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py\", line 1706, in predict\r\n output = self.prediction_loop(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py\", line 1813, in prediction_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n File \"examples/seq2seq/run_seq2seq.py\", line 556, in compute_metrics\r\n result = metric.compute(predictions=decoded_preds, references=decoded_labels)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 388, in compute\r\n self._finalize()\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 358, in _finalize\r\n raise ValueError(\r\nValueError: Error in finalize: another metric instance is already using the local cache file. 
Please specify an experiment_id to avoid colision between distributed metric instances.\r\n```\r\n\r\nI'm just running `run_seq2seq.py` under DeepSpeed:\r\n\r\n```\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 deepspeed --num_gpus=2 examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --do_predict --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_train_samples 100 --max_val_samples 100 --max_test_samples 100 --dataset_name wmt16 --dataset_config ro-en --source_prefix \"translate English to Romanian: \" --deepspeed examples/tests/deepspeed/ds_config.json\r\n```\r\n\r\nIt finished the evaluation OK and crashed on the prediction part of the Trainer. But the eval / predict parts no longer run under Deepspeed, it's just plain ddp.\r\n\r\nIs this some kind of race condition? It happens intermittently - there is nothing else running at the same time.\r\n\r\nBut if 2 independent instances of the same script were to run at the same time it's clear to see that this problem would happen. Perhaps it'd help to create a unique hash which is shared between all processes in the group and use that as the default experiment id?\r\n",
"When you're using metrics in a distributed setup, there are two cases:\r\n1. you're doing two completely different experiments (two evaluations) and the 2 metrics jobs have nothing to do with each other\r\n2. you're doing one experiment (one evaluation) but use multiple processes to feed the data to the metric.\r\n\r\nIn case 1. you just need to provide two different `experiment_id` so that the metrics don't collide.\r\nIn case 2. they must have the same experiment_id (or use the default one), but in this case you also need to provide the `num_processes` and `process_id`\r\n\r\nIf understand correctly you're in situation 2.\r\n\r\nIf so, you make sure that you instantiate the metrics with both the right `num_processes` and `process_id` parameters ?\r\n\r\nIf they're not set, then the cache files of the two metrics collide it can cause issues. For example if one metric finishes before the other, then the cache file is deleted and the other metric gets a FileNotFoundError\r\nThere's more information in the [documentation](https://huggingface.co/docs/datasets/loading_metrics.html#distributed-setups) if you want\r\n\r\nHope that helps !",
"Thank you for explaining that in a great way, @lhoestq \r\n\r\nSo the bottom line is that the `transformers` examples are broken since they don't do any of that. At least `run_seq2seq.py` just does `metric = load_metric(metric_name)`\r\n\r\nWhat test would you recommend to reliably reproduce this bug in `examples/seq2seq/run_seq2seq.py`?",
"To give more context, we are just using the metrics for the `comput_metric` function and nothing else. Is there something else we can use that just applies the function to the full arrays of predictions and labels? Because that's all we need, all the gathering has already been done because the datasets Metric multiprocessing relies on file storage and thus does not work in a multi-node distributed setup (whereas the Trainer does).\r\n\r\nOtherwise, we'll have to switch to something else to compute the metrics :-(",
"OK, it definitely leads to a race condition in how it's used right now. Here is how you can reproduce it - by injecting a random sleep time different for each process before the locks are acquired. \r\n```\r\n--- a/src/datasets/metric.py\r\n+++ b/src/datasets/metric.py\r\n@@ -348,6 +348,16 @@ class Metric(MetricInfoMixin):\r\n\r\n elif self.process_id == 0:\r\n # Let's acquire a lock on each node files to be sure they are finished writing\r\n+\r\n+ import time\r\n+ import random\r\n+ import os\r\n+ pid = os.getpid()\r\n+ random.seed(pid)\r\n+ secs = random.randint(1, 15)\r\n+ time.sleep(secs)\r\n+ print(f\"sleeping {secs}\")\r\n+\r\n file_paths, filelocks = self._get_all_cache_files()\r\n\r\n # Read the predictions and references\r\n@@ -385,7 +395,10 @@ class Metric(MetricInfoMixin):\r\n\r\n if predictions is not None:\r\n self.add_batch(predictions=predictions, references=references)\r\n+ print(\"FINALIZE START\")\r\n+\r\n self._finalize()\r\n+ print(\"FINALIZE END\")\r\n\r\n self.cache_file_name = None\r\n self.filelock = None\r\n```\r\n\r\nthen run with 2 procs: `python -m torch.distributed.launch --nproc_per_node=2`\r\n```\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --do_predict --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_train_samples 10 --max_val_samples 10 --max_test_samples 10 --dataset_name wmt16 --dataset_config ro-en --source_prefix \"translate English to Romanian: \"\r\n```\r\n\r\n```\r\n***** Running Evaluation *****\r\n Num examples = 10\r\n Batch size = 16\r\n 0%| | 0/1 [00:00<?, ?it/s]FINALIZE START\r\nFINALIZE START\r\nsleeping 11\r\nFINALIZE END\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:11<00:00, 11.06s/it]\r\nsleeping 11\r\nTraceback (most recent call last):\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 368, in _finalize\r\n self.data = Dataset(**reader.read_files([{\"filename\": f} for f in file_paths]))\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 236, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 171, in _read_files\r\n pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 302, in _get_dataset_from_filename\r\n pa_table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_reader.py\", line 322, in read_table\r\n stream = stream_from(filename)\r\n File \"pyarrow/io.pxi\", line 782, in pyarrow.lib.memory_map\r\n File \"pyarrow/io.pxi\", line 743, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File 
\"pyarrow/error.pxi\", line 97, in pyarrow.lib.check_status\r\nFileNotFoundError: [Errno 2] Failed to open local file '/home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"examples/seq2seq/run_seq2seq.py\", line 645, in <module>\r\n main()\r\n File \"examples/seq2seq/run_seq2seq.py\", line 601, in main\r\n metrics = trainer.evaluate(\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer_seq2seq.py\", line 74, in evaluate\r\n return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer.py\", line 1703, in evaluate\r\n output = self.prediction_loop(\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer.py\", line 1876, in prediction_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n File \"examples/seq2seq/run_seq2seq.py\", line 556, in compute_metrics\r\n result = metric.compute(predictions=decoded_preds, references=decoded_labels)\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 402, in compute\r\n self._finalize()\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 370, in _finalize\r\n raise ValueError(\r\nValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.\r\n```",
"I tried to adjust `run_seq2seq.py` and trainer to use the suggested dist env:\r\n```\r\n import torch.distributed as dist\r\n metric = load_metric(metric_name, num_process=dist.get_world_size(), process_id=dist.get_rank())\r\n```\r\nand in `trainer.py` added to call just for rank 0:\r\n```\r\n if self.is_world_process_zero() and self.compute_metrics is not None and preds is not None and label_ids is not None:\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n```\r\nand then the process hangs in a deadlock. \r\n\r\nHere is the tb:\r\n```\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/filelock.py\", line 275 in acquire\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 306 in _check_all_processes_locks\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 501 in _init_writer\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 440 in add_batch\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py\", line 397 in compute\r\n File \"examples/seq2seq/run_seq2seq.py\", line 558 in compute_metrics\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer.py\", line 1876 in prediction_loop\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer.py\", line 1703 in evaluate\r\n File \"/mnt/nvme1/code/huggingface/transformers-mp-pp/src/transformers/trainer_seq2seq.py\", line 74 in evaluate\r\n File \"examples/seq2seq/run_seq2seq.py\", line 603 in main\r\n File \"examples/seq2seq/run_seq2seq.py\", line 651 in <module>\r\n```\r\n\r\nBut this sounds right, since in the above diff I set up a distributed metric and only called one process - so it's blocking on waiting for other processes to do the same.\r\n\r\nSo one working solution is to leave:\r\n\r\n```\r\n metric = load_metric(metric_name)\r\n```\r\nalone, and only call `compute_metrics` from rank 0\r\n```\r\n if self.is_world_process_zero() and self.compute_metrics is not None and preds is not None and label_ids is not None:\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n```\r\n\r\nso we now no longer use the distributed env as far as `datasets` is concerned, it's just a single process.\r\n\r\nAre there any repercussions/side-effects to this proposed change in Trainer? If it always gathers all inputs on rank 0 then this is how it should have been done in first place - i.e. only run for rank 0. It appears that currently it was re-calculating the metrics on all processes on the same data just to throw the results away other than for rank 0. Unless I missed something.\r\n",
"But no, since \r\n`\r\n metric = load_metric(metric_name)\r\n`\r\nis called for each process, the race condition is still there. So still getting:\r\n\r\n```\r\nValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.\r\n```\r\n\r\ni.e. the only way to fix this is to `load_metric` only for rank 0, but this requires huge changes in the code and all end users' code.\r\n",
"OK, here is a workaround that works. The onus here is absolutely on the user:\r\n\r\n```\r\ndiff --git a/examples/seq2seq/run_seq2seq.py b/examples/seq2seq/run_seq2seq.py\r\nindex 2a060dac5..c82fd83ea 100755\r\n--- a/examples/seq2seq/run_seq2seq.py\r\n+++ b/examples/seq2seq/run_seq2seq.py\r\n@@ -520,7 +520,11 @@ def main():\r\n\r\n # Metric\r\n metric_name = \"rouge\" if data_args.task.startswith(\"summarization\") else \"sacrebleu\"\r\n- metric = load_metric(metric_name)\r\n+ import torch.distributed as dist\r\n+ if dist.is_initialized():\r\n+ metric = load_metric(metric_name, num_process=dist.get_world_size(), process_id=dist.get_rank())\r\n+ else:\r\n+ metric = load_metric(metric_name)\r\n\r\n def postprocess_text(preds, labels):\r\n preds = [pred.strip() for pred in preds]\r\n@@ -548,12 +552,17 @@ def main():\r\n # Some simple post-processing\r\n decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)\r\n\r\n+ kwargs = dict(predictions=decoded_preds, references=decoded_labels)\r\n+ if metric_name == \"rouge\":\r\n+ kwargs.update(use_stemmer=True)\r\n+ result = metric.compute(**kwargs) # must call for all processes\r\n+ if result is None: # only process with rank-0 will return metrics, others None\r\n+ return {}\r\n+\r\n if metric_name == \"rouge\":\r\n- result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)\r\n # Extract a few results from ROUGE\r\n result = {key: value.mid.fmeasure * 100 for key, value in result.items()}\r\n else:\r\n- result = metric.compute(predictions=decoded_preds, references=decoded_labels)\r\n result = {\"bleu\": result[\"score\"]}\r\n\r\n prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]\r\n```\r\n\r\nThis is not user-friendly to say the least. And it's still wasteful as we don't need other processes to do anything.\r\n\r\nBut it solves the current race condition.\r\n\r\nClearly this calls for a design discussion as it's the responsibility of the Trainer to handle this and not user's. Perhaps in the `transformers` land?",
"I don't see how this could be the responsibility of `Trainer`, who hasn't the faintest idea of what a `datasets.Metric` is. The trainer takes a function `compute_metrics` that goes from predictions + labels to metric results, there is nothing there. That computation is done on all processes \r\n\r\nThe fact a `datasets.Metric` object cannot be used as a simple compute function in a multi-process environment is, in my opinion, a bug in `datasets`. Especially since, as I mentioned before, the multiprocessing part of `datasets.Metric` has a deep flaw since it can't work in a multinode environment. So you actually need to do the job of gather predictions and labels yourself.\r\n\r\nThe changes you are proposing Stas are making the code less readable and also concatenate all the predictions and labels `number_of_processes` times I believe, which is not going to make the metric computation any faster.\r\n\r\n",
"Right, to clarify, I meant it'd be good to have it sorted on the library side and not requiring the user to figure it out. This is too complex and error-prone and if not coded correctly the bug will be intermittent which is even worse.\r\n\r\nOh I guess I wasn't clear in my message - in no way I'm proposing that we use this workaround code - I was just showing what I had to do to make it work.\r\n\r\nWe are on the same page.\r\n\r\n> The changes you are proposing Stas are making the code less readable and also concatenate all the predictions and labels number_of_processes times I believe, which is not going to make the metric computation any faster.\r\n\r\nAnd yes, this is another problem that my workaround introduces. Thank you for pointing it out, @sgugger \r\n",
"> The fact a datasets.Metric object cannot be used as a simple compute function in a multi-process environment is, in my opinion, a bug in datasets\r\n\r\nYes totally, this use case is supposed to be supported by `datasets`. And in this case there shouldn't be any collision between the metrics. I'm looking into it :)\r\nMy guess is that at one point the metric isn't using the right file name. It's supposed to use one with a unique uuid in order to avoid the collisions.",
"I just opened #1966 to fix this :)\r\n@stas00 if have a chance feel free to try it !",
"Thank you, @lhoestq - I will experiment and report back. \r\n\r\nedit: It works! Thank you",
"Fixed in https://github.com/huggingface/datasets/pull/1966"
] | "2021-02-25T03:02:15Z" | "2022-10-05T13:08:45Z" | "2022-10-05T13:08:45Z" | MEMBER | null | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939, metrics don't get cached; looking at my local `~/.cache/huggingface/metrics`, there are many `*.arrow.lock` files but zero metric files.
Without the network I get:
```
FileNotFoundError: [Errno 2] No such file or directory: '~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow
```
there is just `~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock`
I did run the same `run_seq2seq.py` script on the instance with network and it worked just fine, but only the lock file was left behind.
this is with master.
Thank you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1942/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1942/timeline | null | completed | null | null | false |
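The working pattern that emerges from the thread above (and from the linked documentation on distributed setups) is roughly the following; it mirrors the workaround quoted in the comments rather than adding anything new.

```python
import torch.distributed as dist
from datasets import load_metric

if dist.is_available() and dist.is_initialized():
    # One shared experiment fed by several processes: every rank must call
    # compute(), but only process 0 receives the aggregated result (others get None).
    metric = load_metric("sacrebleu",
                         num_process=dist.get_world_size(),
                         process_id=dist.get_rank())
else:
    metric = load_metric("sacrebleu")
```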
https://api.github.com/repos/huggingface/datasets/issues/1941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1941/comments | https://api.github.com/repos/huggingface/datasets/issues/1941/events | https://github.com/huggingface/datasets/issues/1941 | 815,985,167 | MDU6SXNzdWU4MTU5ODUxNjc= | 1,941 | Loading of FAISS index fails for index_name = 'exact' | {
"avatar_url": "https://avatars.githubusercontent.com/u/2992022?v=4",
"events_url": "https://api.github.com/users/mkserge/events{/privacy}",
"followers_url": "https://api.github.com/users/mkserge/followers",
"following_url": "https://api.github.com/users/mkserge/following{/other_user}",
"gists_url": "https://api.github.com/users/mkserge/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mkserge",
"id": 2992022,
"login": "mkserge",
"node_id": "MDQ6VXNlcjI5OTIwMjI=",
"organizations_url": "https://api.github.com/users/mkserge/orgs",
"received_events_url": "https://api.github.com/users/mkserge/received_events",
"repos_url": "https://api.github.com/users/mkserge/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mkserge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mkserge/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mkserge"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Thanks for reporting ! I'm taking a look",
"Index training was missing, I fixed it here: https://github.com/huggingface/datasets/commit/f5986c46323583989f6ed1dabaf267854424a521\r\n\r\nCan you try again please ?",
"Works great 👍 I just put a minor comment on the commit, I think you meant to pass the `train_size` from the one obtained from the config.\r\n\r\nThanks for a quick response!"
] | "2021-02-25T01:30:54Z" | "2021-02-25T14:28:46Z" | "2021-02-25T14:28:46Z" | CONTRIBUTOR | null | Hi,
It looks like loading of the FAISS index now fails when using index_name = 'exact'.
For example, from the RAG [model card](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage).
Running `transformers==4.3.2` and datasets installed from source on latest `master` branch.
```bash
(venv) sergey_mkrtchyan datasets (master) $ python
Python 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39)
[Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
>>> tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
>>> retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
Using custom data configuration dummy.psgs_w100.nq.no_index-dummy=True,with_index=False
Reusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)
Using custom data configuration dummy.psgs_w100.nq.exact-50b6cda57ff32ab4
Reusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.exact-50b6cda57ff32ab4/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)
0%| | 0/10 [00:00<?, ?it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 425, in from_pretrained
return cls(
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 387, in __init__
self.init_retrieval()
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 458, in init_retrieval
self.index.init_index()
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 284, in init_index
self.dataset = load_dataset(
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/load.py", line 750, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py", line 734, in as_dataset
datasets = utils.map_nested(
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/utils/py_utils.py", line 195, in map_nested
return function(data_struct)
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py", line 769, in _build_single_dataset
post_processed = self._post_process(ds, resources_paths)
File "/Users/sergey_mkrtchyan/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb/wiki_dpr.py", line 205, in _post_process
dataset.add_faiss_index("embeddings", custom_index=index)
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/arrow_dataset.py", line 2516, in add_faiss_index
super().add_faiss_index(
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py", line 416, in add_faiss_index
faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=faiss_verbose)
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py", line 281, in add_vectors
self.faiss_index.add(vecs)
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/__init__.py", line 104, in replacement_add
self.add_c(n, swig_ptr(x))
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/swigfaiss.py", line 3263, in add
return _swigfaiss.IndexHNSW_add(self, n, x)
RuntimeError: Error in virtual void faiss::IndexHNSW::add(faiss::Index::idx_t, const float *) at /Users/runner/work/faiss-wheels/faiss-wheels/faiss/faiss/IndexHNSW.cpp:356: Error: 'is_trained' failed
>>>
```
The issue seems to be related to the scalar quantization in faiss added in this commit: 8c5220307c33f00e01c3bf7b8. Reverting it fixes the issue.
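For illustration, a minimal standalone sketch that reproduces the same `is_trained` assertion with a scalar-quantized HNSW index (the exact index type built for wiki_dpr may differ; this is just an assumption matching the error message):

```python
import numpy as np
import faiss

d = 768
vecs = np.random.rand(1000, d).astype(np.float32)

# An HNSW index over 8-bit scalar-quantized vectors is not trained at construction time
index = faiss.IndexHNSWSQ(d, faiss.ScalarQuantizer.QT_8bit, 32)
print(index.is_trained)  # False

# Calling index.add(vecs) here raises: Error: 'is_trained' failed
index.train(vecs)  # training the quantizer first is required
index.add(vecs)
```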
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1941/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1941/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1940/comments | https://api.github.com/repos/huggingface/datasets/issues/1940/events | https://github.com/huggingface/datasets/issues/1940 | 815,770,012 | MDU6SXNzdWU4MTU3NzAwMTI= | 1,940 | Side effect when filtering data due to `does_function_return_dict` call in `Dataset.map()` | {
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}",
"followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers",
"following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}",
"gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/francisco-perez-sorrosal",
"id": 918006,
"login": "francisco-perez-sorrosal",
"node_id": "MDQ6VXNlcjkxODAwNg==",
"organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs",
"received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events",
"repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/francisco-perez-sorrosal"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Thanks for the report !\r\n\r\nCurrently we don't have a way to let the user easily disable this behavior.\r\nHowever I agree that we should support stateful processing functions, ideally by removing `does_function_return_dict`.\r\n\r\nWe needed this function in order to know whether the `map` functions needs to write data or not. if `does_function_return_dict` returns False then we don't write anything.\r\n\r\nInstead of checking the output of the processing function outside of the for loop that iterates through the dataset to process it, we can check the output of the first processed example and at that point decide if we need to write data or not.\r\n\r\nTherefore it's definitely possible to fix this unwanted behavior, any contribution going into this direction is welcome :)",
"Thanks @mariosasko for the PR!"
] | "2021-02-24T19:18:56Z" | "2021-03-23T15:26:49Z" | "2021-03-23T15:26:49Z" | CONTRIBUTOR | null | Hi there!
In my codebase I have a function to filter rows in a dataset, selecting only a certain number of examples per class. The function takes an extra argument to maintain a counter of the number of dataset rows/examples already selected per class; these are the ones I want to keep in the end:
```python
def fill_train_examples_per_class(example, per_class_limit: int, counter: collections.Counter):
label = int(example['label'])
current_counter = counter.get(label, 0)
if current_counter < per_class_limit:
counter[label] = current_counter + 1
return True
return False
```
At some point I invoke it through the `Dataset.filter()` method in the `arrow_dataset.py` module like this:
```python
...
kwargs = {"per_class_limit": train_examples_per_class_limit, "counter": Counter()}
datasets['train'] = datasets['train'].filter(fill_train_examples_per_class, num_proc=1, fn_kwargs=kwargs)
...
```
The problem is that passing a stateful container (the counter) provokes a side effect in the new filtered dataset. This is due to the fact that at some point in `filter()`, the `map()` helper `does_function_return_dict` is invoked in line [1290](https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L1290).
When this occurs, the state of the counter is modified by the test call of the function on the 1 or 2 rows selected in lines 1288 and 1289 of the same file (marked as `test_inputs` & `test_indices` respectively). This happens outside the control of the user, who for example can't reset the state of the counter before continuing the execution, and it ends up provoking an undesired side effect in the results obtained.
In my case, the resulting dataset (even though the counter results are OK) lacks an instance of classes 0 and 1, which happen to be the classes of the first two examples of my dataset. The rest of the classes in my dataset contain the right number of examples, as they were not affected by the `does_function_return_dict` call.
I've debugged my code extensively and made a workaround myself by hardcoding the necessary stuff (basically putting `update_data=True` in line 1290), and then I obtain the results I expected, without the side effect.
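For comparison, a minimal sketch of a side-effect-free alternative (assuming the same `datasets` dict and limit variable as above) is to precompute the indices to keep and then use `Dataset.select()` instead of a stateful `filter()`:

```python
from collections import Counter

def per_class_indices(dataset, per_class_limit: int):
    # Keep the counter outside of filter()/map() so no library-side test call can touch it
    counter = Counter()
    keep = []
    for idx, label in enumerate(dataset["label"]):
        if counter[int(label)] < per_class_limit:
            counter[int(label)] += 1
            keep.append(idx)
    return keep

indices = per_class_indices(datasets["train"], train_examples_per_class_limit)
datasets["train"] = datasets["train"].select(indices)
```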
Is there a way to avoid that call to `does_function_return_dict` in map()'s line 1290 ? (e.g. extracting the required information that `does_function_return_dict` returns without making the testing calls to the user function on dataset rows 0 & 1)
Thanks in advance,
Francisco Perez-Sorrosal
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1940/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1940/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1939/comments | https://api.github.com/repos/huggingface/datasets/issues/1939/events | https://github.com/huggingface/datasets/issues/1939 | 815,680,510 | MDU6SXNzdWU4MTU2ODA1MTA= | 1,939 | [firewalled env] OFFLINE mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Thanks for reporting and for all the details and suggestions.\r\n\r\nI'm totally in favor of having a HF_DATASETS_OFFLINE env variable to disable manually all the connection checks, remove retries etc.\r\n\r\nMoreover you may know that the use case that you are mentioning is already supported from `datasets` 1.3.0, i.e. you already can:\r\n- first load datasets and metrics from an instance with internet connection\r\n- then be able to reload datasets and metrics from another instance without connection (as long as the filesystem is shared)\r\n\r\nThis is already implemented, but currently it only works if the requests return a `ConnectionError` (or any error actually). Not sure why it would hang instead of returning an error.\r\n\r\nMaybe this is just a issue with the timeout value being not set or too high ?\r\nIs there a way I can have access to one of the instances on which there's this issue (we can discuss this offline) ?\r\n",
"I'm on master, so using all the available bells and whistles already.\r\n\r\nIf you look at the common issues - it for example tries to look up files if they appear in `_PACKAGED_DATASETS_MODULES` which it shouldn't do.\r\n\r\n--------------\r\n\r\nYes, there is a nuance to it. As I mentioned it's firewalled - that is it has a network but making any calls outside - it just hangs in:\r\n\r\n```\r\nsin_addr=inet_addr(\"xx.xx.xx.xx\")}, [28->16]) = 0\r\nclose(5) = 0\r\nsocket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 5\r\nconnect(5, {sa_family=AF_INET, sin_port=htons(3128), sin_addr=inet_addr(\"yy.yy.yy.yy\")}, 16^C) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)\r\n```\r\nuntil it times out.\r\n\r\nThat's why we need to be able to tell the software that there is no network to rely on even if there is one (good for testing too).\r\n\r\nSo what I'm thinking is that this is a simple matter of pre-ambling any network call wrappers with:\r\n\r\n```\r\nif HF_DATASETS_OFFLINE:\r\n assert \"Attempting to make a network call under Offline mode\"\r\n```\r\n\r\nand then fixing up if there is anything else to fix to make it work.\r\n\r\n--------------\r\n\r\nOtherwise I think the only other problem I encountered is that we need to find a way to pre-cache metrics, for some reason it's not caching it and wanting to fetch it from online.\r\n\r\nWhich is extra strange since it already has those files in the `datasets` repo itself that is on the filesystem.\r\n\r\nThe workaround I had to do is to copy `rouge/rouge.py` (with the parent folder) from the datasets repo to the current dir - and then it proceeded.",
"Ok understand better the hanging issue.\r\nI guess catching connection errors is not enough, we should also avoid all the hangings.\r\nCurrently the offline mode tests are only done by simulating an instant connection fail that returns an error, let's have another connection mock that hangs instead.\r\n\r\nI'll also take a look at why you had to do this for `rouge`.\r\n",
"FWIW, I think instant failure on the behalf of a network call is the simplest solution to correctly represent the environment and having the caller to sort it out is the next thing to do, since here it is the case of having no functional network, it's just that the software doesn't know this is the case, because there is some network. So we just need to help it to bail out instantly rather than hang waiting for it to time out. And afterwards everything else you said.",
"Update on this: \r\n\r\nI managed to create a mock environment for tests that makes the connections hang until timeout.\r\nI managed to reproduce the issue you're having in this environment.\r\n\r\nI'll update the offline test cases to also test the robustness to connection hangings, and make sure we set proper timeouts where it's needed in the code. This should cover the _automatic_ section you mentioned.",
"Fabulous! I'm glad you were able to reproduce the issues, @lhoestq!",
"I lost access to the firewalled setup, but I emulated it with:\r\n\r\n```\r\nsudo ufw enable\r\nsudo ufw default deny outgoing\r\n```\r\n(thanks @mfuntowicz)\r\n\r\nI was able to test `HF_DATASETS_OFFLINE=1` and it worked great - i.e. didn't try to reach out with it and used the cached files instead.\r\n\r\nThank you!"
] | "2021-02-24T17:13:42Z" | "2021-03-05T05:09:54Z" | "2021-03-05T05:09:54Z" | MEMBER | null | This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 possible ways of going about it.
## 1. Manual
Manually prepare the data and metrics files, that is, transfer the dataset and the metrics to the firewalled instance and run:
```
DATASETS_OFFLINE=1 run_seq2seq.py --train_file xyz.csv --validation_file xyz.csv ...
```
`datasets` must not make any network calls, and if some logic does require the network because something is missing, it should assert that this particular action needs network access and therefore can't proceed.
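A minimal sketch of the kind of guard this implies (the helper name is hypothetical; the point is simply to fail fast instead of hanging until the firewall times the connection out):

```python
import os

DATASETS_OFFLINE = os.environ.get("DATASETS_OFFLINE", "0") == "1"

def guarded_request(url: str):
    # Hypothetical wrapper that every outgoing call would go through
    if DATASETS_OFFLINE:
        raise ConnectionError(
            f"DATASETS_OFFLINE=1 is set, refusing to reach {url}; use the local cache instead"
        )
    # ... perform the actual request here ...
```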
## 2. Automatic
In some clouds one can prepare the data storage ahead of time in a normal networked environment that doesn't have GPUs, and then switch to the GPU instance, which is firewalled but can access all the cached data. This is the ideal situation, since in this scenario we don't have to do anything manually, but simply run the same application twice:
1. on the non-firewalled instance:
```
run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ...
```
which should download and cache everything.
2. and then immediately after on the firewalled instance, which shares the same filesystem
```
DATASETS_OFFLINE=1 run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ...
```
and the metrics and datasets should already be cached by invocation number 1, so any network calls should be skipped; if the logic is missing data, it should assert and not try to fetch anything online.
## Common Issues
1. for example, currently `datasets` tries to look up online datasets if the files contain json or csv, despite the paths already being provided
```
if dataset and path in _PACKAGED_DATASETS_MODULES:
```
2. it has an issue with metrics. e.g. I had to manually copy `rouge/rouge.py` from the `datasets` repo to the current dir - or it was hanging.
I had to comment out `head_hf_s3(...)` calls to make things work. So all those `try: head_hf_s3(...)` shouldn't be tried with `DATASETS_OFFLINE=1`
Here is the corresponding issue for `transformers`: https://github.com/huggingface/transformers/issues/10379
Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1939/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1939/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1938/comments | https://api.github.com/repos/huggingface/datasets/issues/1938/events | https://github.com/huggingface/datasets/pull/1938 | 815,647,774 | MDExOlB1bGxSZXF1ZXN0NTc5NDQyNDkw | 1,938 | Disallow ClassLabel with no names | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-24T16:37:57Z" | "2021-02-25T11:27:29Z" | "2021-02-25T11:27:29Z" | MEMBER | null | It was possible to create a ClassLabel without specifying the names or the number of classes.
This was causing silent issues as in #1936 and breaking the conversion methods str2int and int2str.
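A small illustration of why the names matter (this is the standard `datasets` API, not new behavior; the label names are arbitrary):

```python
from datasets import ClassLabel

label = ClassLabel(names=["negative", "positive"])
label.str2int("positive")  # -> 1
label.int2str(0)           # -> "negative"

# Previously, ClassLabel() with neither names nor num_classes was accepted
# and only failed later, when str2int/int2str were called.
```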
cc @justin-yan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1938/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1938/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1938.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1938",
"merged_at": "2021-02-25T11:27:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1938.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1938"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1937/comments | https://api.github.com/repos/huggingface/datasets/issues/1937/events | https://github.com/huggingface/datasets/issues/1937 | 815,163,943 | MDU6SXNzdWU4MTUxNjM5NDM= | 1,937 | CommonGen dataset page shows an error OSError: [Errno 28] No space left on device | {
"avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4",
"events_url": "https://api.github.com/users/yuchenlin/events{/privacy}",
"followers_url": "https://api.github.com/users/yuchenlin/followers",
"following_url": "https://api.github.com/users/yuchenlin/following{/other_user}",
"gists_url": "https://api.github.com/users/yuchenlin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuchenlin",
"id": 10104354,
"login": "yuchenlin",
"node_id": "MDQ6VXNlcjEwMTA0MzU0",
"organizations_url": "https://api.github.com/users/yuchenlin/orgs",
"received_events_url": "https://api.github.com/users/yuchenlin/received_events",
"repos_url": "https://api.github.com/users/yuchenlin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuchenlin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuchenlin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuchenlin"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | [
"Facing the same issue for [Squad](https://huggingface.co/datasets/viewer/?dataset=squad) and [TriviaQA](https://huggingface.co/datasets/viewer/?dataset=trivia_qa) datasets as well.",
"We just fixed the issue, thanks for reporting !"
] | "2021-02-24T06:47:33Z" | "2021-02-26T11:10:06Z" | "2021-02-26T11:10:06Z" | CONTRIBUTOR | null | The page of the CommonGen data https://huggingface.co/datasets/viewer/?dataset=common_gen shows
![image](https://user-images.githubusercontent.com/10104354/108959311-1865e600-7629-11eb-868c-cf4cb27034ea.png)
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1937/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1937/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1936/comments | https://api.github.com/repos/huggingface/datasets/issues/1936/events | https://github.com/huggingface/datasets/pull/1936 | 814,726,512 | MDExOlB1bGxSZXF1ZXN0NTc4NjY3NTQ4 | 1,936 | [WIP] Adding Support for Reading Pandas Category | {
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/justin-yan",
"id": 7731709,
"login": "justin-yan",
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/justin-yan"
} | [] | closed | false | null | [] | null | [
"Thanks ! could you maybe add a few tests in test_arrow_dataset.py to make sure from_pandas works as expected with categorical types ?\r\n\r\nIn particular I'm pretty sure that if you now try to `cast` the dataset to the same features at its current features, it will break instead of just being a no-op.\r\nThis is because `features.type` returns an arrow int64 type for the classlabel column instead of the arrow dictionary type that you have in the arrow table. There are two issues in this case:\r\n- it will try to replace the arrow type from dictionary to int64 instead of being a no-op\r\n- it will crash because pyarrow is not able to cast a dictionary to int64 (even if it's actually possible do cast the column by hand by accessing the sub-array of the dictionary array containing the indices/integers)\r\n\r\nIt would be awesome to fix this case ! Ideally the arrow `pa_type` of classlabel ([here](https://github.com/huggingface/datasets/blob/7072e1becd69d421d863374b825e3da4c6551798/src/datasets/features.py#L558)) should be an arrow dictionary type. This should fix the issue. Then we can start working on backward compatibility.\r\n\r\nLet me know if you have questions or if I can help.\r\nIn particular if there is some glue-ing to do I can take care of that if you want ;)\r\n\r\n--------------\r\n\r\nAlso just a few information regarding the functions you mentioned\r\n\r\n`int2str` and `str2int` are used by users to transforms the labels if they want to. Here sine ClassLabel is instantiated without the class names, they would crash. I was about to make a PR to disallow the creation of an empty ClassLabel feature type.\r\nTherefore can you provide class_names= when creating the ClassLabel ?\r\n\r\n`encode_example` is mostly used with a dataset builder (e.g. squad.py) so it's not used when using .from_pandas.\r\n\r\n\r\n",
"Got it - that's super helpful, I was trying to figure out what would break!\r\n\r\nI think there are two issues we're discussing here:\r\n\r\n1. modifying the pa_type of ClassLabel: totally agree with you on that one if that's OK from a back-compat perspective. (i.e. are users of `datasets` not supposed to access or use the .pa_type attribute of ClassLabel?)\r\n2. creating a ClassLabel requires information that's not present on the pa.DictionaryType object: I think the crux of the problem is that at this line (https://github.com/huggingface/datasets/pull/1936/files#diff-54081ede051fd0a7ef65748c481cc06f90209f01bb89968747089d13a2ca052bR933) - you only have access to the `pa_type`, which is `DictionaryType[int8, string]`. I've unpacked it and looked at all of the available methods, and I don't believe that any of the actual values (\"names\") are present - those are stored on the `pyarrow.DictArray.dictionary` attribute (i.e. as data, not on the pyarrow.DataType) - so in order to actually be able to instantiate the ClassLabel with the names= parameter, we need to pass in more information to this method.\r\n\r\nWe *could* mostly accomplish this by modifying https://github.com/huggingface/datasets/pull/1936/files#diff-54081ede051fd0a7ef65748c481cc06f90209f01bb89968747089d13a2ca052bR909 to accept a pyarrow Table in addition to the type, and it's not too difficult to do, but it feels a little bit off to me:\r\n\r\n- It feels a bit off that a \"schema\" definition will change depending on what data gets added to the dataset. In particular, if someone adds rows or concatenates two datasets, the ClassLabel \"names\" will also need to change, right? I think maybe we're getting around this because a Dataset is immutable (I think?) and so any new dataset is freshly constructed, but for example - I think this check wouldn't work for `ClassLabel`s if we were to compare the `Dataset.features` instead of the underlying pyarrow type https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L2664\r\n- To that end I wonder if ClassLabel should actually just be the \"type\" akin to Category, and the \"names\" should be considered \"data\" and not part of the \"type\"? Similar to how pyarrow maintains two data objects - the array of indices and the array of string values.\r\n\r\nWith that in mind, I'm wondering if you *should* allow an empty ClassLabel (and`int2str`, etc. can be updated to have more descriptive error messages if labels aren't provided or inferred), and if the underlying data is a pa.DictionaryType, then the names can be inferred and applied at these points in the code:\r\n- https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L274\r\n- https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L686\r\n- https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L673\r\n\r\nI think perhaps the mismatch here is when the data is stored on disk as an int there should be a convenient way of saying \"this is a dictionary and here are some explicitly provided labels\", whereas when it's stored as a string, we'd ideally like to say \"this is a Category and please condense the representation and automatically infer the labels\".\r\n\r\nSorry for the long comment! Hopefully my thoughts make sense - thanks for taking the time to discuss!",
"Yes that makes sense. I completely forgot that the label names of an arrow Dictionary type were not stored in the type but in the DictionaryArray.\r\n\r\nThis is made me realize that it's actually pretty unpractical and I feel that handling this can add unnecessary complexity in the handling of dtypes.\r\nMore specifically:\r\n- it's not possible to create a DictionaryArray from a call to pyarrow.array with python objects, which is the function we use to convert python objects to pyarrow objects (or we would need to convert the python objects to pandas categorical series beforehand but it doesn't work for nested types)\r\n- casting nested types containing Dictionary types would require a lot of array manipulations since it's not compatible with pyarrow.array.cast\r\n\r\nI feel like the original feature request (support of pandas Categorical) should be addressable without adding so much complexity to the library.\r\n\r\nIf we admit that we don't want to deal with arrow Dictionary type, maybe we can simply convert the pandas categorical series to an int64 series and set the feature type to the right ClassLabel in `from_pandas`. We can have the reverse operation in `to_pandas`. This way we don't need to support the arrow DictionaryType and so we can keep simple/accessible code for conversion from python to arrow and also for type casting. Let me know what you think.\r\n\r\nIn the future depending on the usage of the ClassLabel types with pandas/pyarrow we might reconsider this but for now I believe this simple solution is enough.",
"I like that idea! Let me try working up a PR for this",
"OK! I just whipped up the `from_pandas()` portion of this PR, and it works, though I'm not *super* familiar with the available APIs so I'm not sure if there's a more \"vectorized\" way of doing all of these updates - so happy to get some feedback and iterate!\r\n\r\nApologies for multiple commits - I realized how to solve a few different problems right after I gave up and pushed with the intent to ask for help :-)\r\n\r\nI wanted to get some guidance on how to handle the reverse direction: I think there are two main areas to look at, `.to_pandas()` and also `.set_format('pandas')` and then pulling out a dataframe like so: `dataset[:]`. Is there a single place where I can handle both of these cases at once or do these need to be handled independently?",
"Thanks ! This is awesome :) \r\nCould you also add a test ? There is already `test_to_pandas` in test_arrow_dataset.py\r\nFeel free to complete this test to make sure it works for Categorical :)\r\n\r\nTo make it work with the \"pandas\" formating (when you do `set_format(\"pandas\")` and then query `dataset[0]`, `dataset[:]`, etc.), you can take a look and the `PandasFormatter` in formatting.py\r\nIt takes a pyarrow table as input of its formatting methods (one method for rows, one for columns and one for batches) and returns a pandas DataFrame (or a Series for the method for formatting a column). You can cast to Categorical in each one of the formatter methods and it should work directly when you use a pandas-formatted dataset.\r\n\r\nThis formatter can then also be used in `to_pandas` (currently it does `pa_table.to_pandas()` but `PandasFormatter().format_batch(pa_table)` can be used instead)."
] | "2021-02-23T18:32:54Z" | "2022-03-09T18:46:22Z" | "2022-03-09T18:46:22Z" | CONTRIBUTOR | null | @lhoestq - continuing our conversation from https://github.com/huggingface/datasets/issues/1906#issuecomment-784247014
The goal of this PR is to support `Dataset.from_pandas(df)` where the dataframe contains a Category.
Just the 4 line change below actually does seem to work:
```
>>> from datasets import Dataset
>>> import pandas as pd
>>> df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
>>> ds = Dataset.from_pandas(df)
>>> ds.to_pandas()
0
0 a
1 b
2 c
3 a
>>> ds.to_pandas().dtypes
0 category
dtype: object
```
save_to_disk, etc. all seem to work as well. The main things that are theoretically "incorrect" if we leave this are:
```
>>> ds.features.type
StructType(struct<0: int64>)
```
there are a decent number of references to this property in the library, but I can't find anything that seems to actually break as a result of this being int64 vs. dictionary? I think the gist of my question is: a) do we *need* to change the dtype of Classlabel and have get_nested_type return a pyarrow.DictionaryType instead of int64? and b) do you *want* it to change? The biggest challenge I see to implementing this correctly is that the data will need to be passed in along with the pyarrow schema when instantiating the Classlabel (I *think* this is unavoidable, since the type itself doesn't contain the actual label values) which could be a fairly intrusive change - e.g. `from_arrow_schema`'s interface would need to change to include optional arrow data? Once we start going down this path of modifying the public interfaces I am admittedly feeling a little bit outside of my comfort zone
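For what it's worth, here is a minimal sketch of the simpler direction discussed in the comments on this PR (convert the categorical codes to int64 and recover the names for a `ClassLabel`); the values are chosen purely for illustration:

```python
import pandas as pd
from datasets import ClassLabel

s = pd.Series(["a", "b", "c", "a"], dtype="category")

codes = s.cat.codes.astype("int64")                 # underlying integer codes: 0, 1, 2, 0
feature = ClassLabel(names=list(s.cat.categories))  # label names recovered from the categories

feature.int2str(int(codes[0]))  # -> "a"
```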
Additionally I think `int2str`, `str2int`, and `encode_example` probably won't work - but I can't find any usages of them in the library itself. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1936/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1936.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1936",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1936.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1936"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1935/comments | https://api.github.com/repos/huggingface/datasets/issues/1935/events | https://github.com/huggingface/datasets/pull/1935 | 814,623,827 | MDExOlB1bGxSZXF1ZXN0NTc4NTgyMzk1 | 1,935 | add CoVoST2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patil-suraj",
"id": 27137566,
"login": "patil-suraj",
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patil-suraj"
} | [] | closed | false | null | [] | null | [
"@patrickvonplaten \r\nI removed the mp3 files, dummy_data is much smaller now!"
] | "2021-02-23T16:28:16Z" | "2021-02-24T18:09:32Z" | "2021-02-24T18:05:09Z" | MEMBER | null | This PR adds the CoVoST2 dataset for speech translation and ASR.
https://github.com/facebookresearch/covost#covost-2
The dataset requires manual download as the download page requests an email address and the URLs are temporary.
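A hypothetical usage sketch once the data has been downloaded manually (the config name and expected directory layout here are assumptions, not taken from the script):

```python
from datasets import load_dataset

# data_dir points to the manually downloaded/extracted CoVoST2 + Common Voice files
covost = load_dataset("covost2", "en_de", data_dir="path/to/manually/downloaded/data")
```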
The dummy data is a bit bigger because of the mp3 files and 36 configs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1935/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1935/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1935.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1935",
"merged_at": "2021-02-24T18:05:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1935.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1935"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1934/comments | https://api.github.com/repos/huggingface/datasets/issues/1934/events | https://github.com/huggingface/datasets/issues/1934 | 814,437,190 | MDU6SXNzdWU4MTQ0MzcxOTA= | 1,934 | Add Stanford Sentiment Treebank (SST) | {
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patpizio",
"id": 15801338,
"login": "patpizio",
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"repos_url": "https://api.github.com/users/patpizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patpizio"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
"Dataset added in release [1.5.0](https://github.com/huggingface/datasets/releases/tag/1.5.0), I think I can close this."
] | "2021-02-23T12:53:16Z" | "2021-03-18T17:51:44Z" | "2021-03-18T17:51:44Z" | CONTRIBUTOR | null | I am going to add SST:
- **Name:** The Stanford Sentiment Treebank
- **Description:** The first corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language
- **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- **Data:** https://nlp.stanford.edu/sentiment/index.html
- **Motivation:** Already requested in #353, SST is a popular dataset for Sentiment Classification
What's the difference with the [_SST-2_](https://huggingface.co/datasets/viewer/?dataset=glue&config=sst2) dataset included in GLUE? Essentially, SST-2 is a version of SST where:
- the labels were mapped from real numbers in [0.0, 1.0] to a binary label: {0, 1}
- the labels of the *sub-sentences* were included only in the training set
- the labels in the test set are obfuscated
So there is a lot more information in the original SST. The tricky bit is, the data is scattered into many text files and, for one in particular, I couldn't find the original encoding ([*but I'm not the only one*](https://groups.google.com/g/word2vec-toolkit/c/QIUjLw6RqFk/m/_iEeyt428wkJ) 🎵). The only solution I found was to manually replace all the è, ë, ç and so on into an `utf-8` copy of the text file. I uploaded the result in my Dropbox and I am using that as the main repo for the dataset.
Also, the _sub-sentences_ are built at run-time from the information encoded in several text files, so generating the examples is a bit more cumbersome than usual. Luckily, the dataset is not enormous.
I plan to divide the dataset into 2 configs: one with just whole sentences and their labels, the other with sentences _and their sub-sentences_ and their labels. Each config will be split into train, validation and test. Hopefully this makes sense; we may discuss it in the PR I'm going to submit.
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1934/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1934/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1933/comments | https://api.github.com/repos/huggingface/datasets/issues/1933/events | https://github.com/huggingface/datasets/pull/1933 | 814,335,846 | MDExOlB1bGxSZXF1ZXN0NTc4MzQwMzk3 | 1,933 | Use arrow ipc file format | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | [] | "2021-02-23T10:38:24Z" | "2022-07-06T15:19:48Z" | null | MEMBER | null | According to the [documentation](https://arrow.apache.org/docs/format/Columnar.html?highlight=arrow1#ipc-file-format), it's identical to the streaming format except that it contains the memory offsets of each sample:
> We define a “file format” supporting random access that is build with the stream format. The file starts and ends with a magic string ARROW1 (plus padding). What follows in the file is identical to the stream format. At the end of the file, we write a footer containing a redundant copy of the schema (which is a part of the streaming format) plus memory offsets and sizes for each of the data blocks in the file. This enables random access any record batch in the file. See File.fbs for the precise details of the file footer.
Since it stores more metadata regarding the positions of the examples in the file, it should enable better example retrieval performance. However, from the discussion in https://github.com/huggingface/datasets/issues/1803 it looks like that's unfortunately not the case. Maybe in the future this will allow speed gains.
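For clarity, a small pyarrow example contrasting the two formats (file names are arbitrary):

```python
import pyarrow as pa

batch = pa.record_batch([pa.array([1, 2, 3])], names=["x"])

# Streaming format: record batches written back to back, no footer
with pa.ipc.new_stream("data.stream.arrow", batch.schema) as writer:
    writer.write_batch(batch)

# File format: the same stream plus a footer with the schema and per-batch offsets
with pa.ipc.new_file("data.file.arrow", batch.schema) as writer:
    writer.write_batch(batch)

reader = pa.ipc.open_file("data.file.arrow")
first = reader.get_batch(0)  # random access to any record batch
```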
I think it's still a good idea to start using it anyway for these reasons:
- in the future we may have speed gains
- it contains the arrow streaming format data
- it's compatible with the pyarrow Dataset implementation (it allows to load remote dataframes for example) if we want to use it in the future
- it's also the format used by arrow feather if we want to use it in the future
- it's roughly the same size as the streaming format
- it's easy to have backward compatibility with the streaming format
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1933/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1933/timeline | null | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1933.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1933",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1933.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1933"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1932/comments | https://api.github.com/repos/huggingface/datasets/issues/1932/events | https://github.com/huggingface/datasets/pull/1932 | 814,326,116 | MDExOlB1bGxSZXF1ZXN0NTc4MzMyMTQy | 1,932 | Fix builder config creation with data_dir | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-23T10:26:02Z" | "2021-02-23T10:45:28Z" | "2021-02-23T10:45:27Z" | MEMBER | null | The data_dir parameter wasn't taken into account to create the config_id, therefore the resulting builder config was considered not custom. However a builder config that is non-custom must not have a name that collides with the predefined builder config names. Therefore it resulted in a `ValueError("Cannot name a custom BuilderConfig the same as an available...")`
I fixed that by commenting out the line that used to ignore the data_dir when creating the config.
It was previously ignored before the introduction of config id because we didn't want to change the config name. Now it's fine to take it into account for the config id.
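As an illustration (the dataset name below is just a placeholder), the kind of call this fixes is:

```python
from datasets import load_dataset

# With the fix, passing data_dir no longer raises
# ValueError: Cannot name a custom BuilderConfig the same as an available ...
ds = load_dataset("some_dataset", data_dir="/path/to/local/data")
```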
Now creating a config with a data_dir works again @patrickvonplaten | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1932/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1932/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1932.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1932",
"merged_at": "2021-02-23T10:45:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1932.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1932"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1931/comments | https://api.github.com/repos/huggingface/datasets/issues/1931/events | https://github.com/huggingface/datasets/pull/1931 | 814,225,074 | MDExOlB1bGxSZXF1ZXN0NTc4MjQ4NTA5 | 1,931 | add m_lama (multilingual lama) dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13961899?v=4",
"events_url": "https://api.github.com/users/pdufter/events{/privacy}",
"followers_url": "https://api.github.com/users/pdufter/followers",
"following_url": "https://api.github.com/users/pdufter/following{/other_user}",
"gists_url": "https://api.github.com/users/pdufter/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pdufter",
"id": 13961899,
"login": "pdufter",
"node_id": "MDQ6VXNlcjEzOTYxODk5",
"organizations_url": "https://api.github.com/users/pdufter/orgs",
"received_events_url": "https://api.github.com/users/pdufter/received_events",
"repos_url": "https://api.github.com/users/pdufter/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pdufter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdufter/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pdufter"
} | [] | closed | false | null | [] | null | [
"Hi, it seems I am somewhat stuck here. The failed test `ci/circleci: run_dataset_script_tests_pyarrow_1_WIN` seems to be caused by some broken connection (`ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host`). Any help on this is appreciated. \r\n\r\nEdit: Seems to be resolved now.",
"I guess the `dummy_data.zip` is too large. I can reduce the languages that are contained there, but when testing it, it obviously throws an error, as not all files can be found. I guess I can either i) change the default value regarding which languages are loaded or ii) let the `_generate_examples` silently skip any language for which it cannot find files. Both solutions are not really pretty - is there another way around this?",
"Thanks for the review and the constructive comments :) ! I tried to address them, and reduced the number of lines in the dummy data to 1 to reduce its size. "
] | "2021-02-23T08:11:57Z" | "2021-03-01T10:01:03Z" | "2021-03-01T10:01:03Z" | CONTRIBUTOR | null | Add a multilingual (machine translated and automatically generated) version of the LAMA benchmark. For details see the paper https://arxiv.org/pdf/2102.00894.pdf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1931/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1931/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1931.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1931",
"merged_at": "2021-03-01T10:01:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1931.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1931"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1930/comments | https://api.github.com/repos/huggingface/datasets/issues/1930/events | https://github.com/huggingface/datasets/pull/1930 | 814,055,198 | MDExOlB1bGxSZXF1ZXN0NTc4MTAwNzI0 | 1,930 | updated the wino_bias dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}",
"followers_url": "https://api.github.com/users/JieyuZhao/followers",
"following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JieyuZhao",
"id": 22306304,
"login": "JieyuZhao",
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"organizations_url": "https://api.github.com/users/JieyuZhao/orgs",
"received_events_url": "https://api.github.com/users/JieyuZhao/received_events",
"repos_url": "https://api.github.com/users/JieyuZhao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JieyuZhao"
} | [] | closed | false | null | [] | null | [
"Hi @JieyuZhao ! Have you had a chance to add the different configurations ?\r\nThanks again for your help on this !",
"> Hi @JieyuZhao ! Have you had a chance to add the different configurations ?\r\n> Thanks again for your help on this !\r\n\r\nHi @lhoestq Yes, I've updated the code. Now the configuration will have dev/test splits.",
"> Cool thanks !\r\n> This looks perfect this way.\r\n> \r\n> Now we just need to update the dataset_infos.json (it contains the metadata of the dataset) and add dummy data to be able to test this script automatically.\r\n> \r\n> To update the dataset_infos.json you just need delete the current one at `./datasets/wino_biais/dataset_infos.json`, and then run this command:\r\n> \r\n> ```\r\n> datasets-cli test ./datasets/wino_biais --save_infos --all_configs --ignore_verifications\r\n> ```\r\n> \r\n> To add the dummy data there's also a tool to add them automatically.\r\n> First delete the folder at `./datasets/wino_biais/dummy` and then run\r\n> \r\n> ```\r\n> datasets-cli dummy_data ./datasets/wino_biais --auto_generate --match_text_files \"*conll\" --n_lines 15\r\n> ```\r\n> \r\n> Let me know if you have questions :)\r\n> Also don't forget to run `make style` to format the code properly.\r\n\r\nThanks for the instruction! I've updated the metadata and the dummy data and also do the formatting. Please let me know if more is needed. :)"
] | "2021-02-23T03:07:40Z" | "2021-04-07T15:24:56Z" | "2021-04-07T15:24:56Z" | CONTRIBUTOR | null | Updated the wino_bias.py script.
- updated the data_url
- added different configurations for different data splits
- added the coreference_cluster to the data features | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1930/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1930/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1930.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1930",
"merged_at": "2021-04-07T15:24:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1930.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1930"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1929/comments | https://api.github.com/repos/huggingface/datasets/issues/1929/events | https://github.com/huggingface/datasets/pull/1929 | 813,929,669 | MDExOlB1bGxSZXF1ZXN0NTc3OTk1MTE4 | 1,929 | Improve typing and style and fix some inconsistencies | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"@lhoestq Thanks for the quick review.",
"I merged master to this branch to re-run the CI before merging :)"
] | "2021-02-22T22:47:41Z" | "2021-02-24T16:16:14Z" | "2021-02-24T14:03:54Z" | CONTRIBUTOR | null | This PR:
* improves typing (mostly more consistent use of `typing.Optional`)
* `DatasetDict.cleanup_cache_files` now correctly returns a dict
* replaces `dict()` with the corresponding literal
* uses `dict_to_copy.copy()` instead of `dict(dict_to_copy)` for shallow copying | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1929/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1929/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1929.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1929",
"merged_at": "2021-02-24T14:03:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1929.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1929"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1928/comments | https://api.github.com/repos/huggingface/datasets/issues/1928/events | https://github.com/huggingface/datasets/pull/1928 | 813,793,434 | MDExOlB1bGxSZXF1ZXN0NTc3ODgyMDM4 | 1,928 | Updating old cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mcmillanmajora",
"id": 26722925,
"login": "mcmillanmajora",
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mcmillanmajora"
} | [] | closed | false | null | [] | null | [] | "2021-02-22T19:26:04Z" | "2021-02-23T18:19:25Z" | "2021-02-23T18:19:25Z" | CONTRIBUTOR | null | Updated the cards for [Allocine](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/allocine), [CNN/DailyMail](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/cnn_dailymail), and [SNLI](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/snli). For the most part, the information was just rearranged or rephrased, but the social impact statements are new. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1928/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1928/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1928.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1928",
"merged_at": "2021-02-23T18:19:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1928.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1928"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1927/comments | https://api.github.com/repos/huggingface/datasets/issues/1927/events | https://github.com/huggingface/datasets/pull/1927 | 813,768,935 | MDExOlB1bGxSZXF1ZXN0NTc3ODYxODM5 | 1,927 | Update dataset card of wino_bias | {
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}",
"followers_url": "https://api.github.com/users/JieyuZhao/followers",
"following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JieyuZhao",
"id": 22306304,
"login": "JieyuZhao",
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"organizations_url": "https://api.github.com/users/JieyuZhao/orgs",
"received_events_url": "https://api.github.com/users/JieyuZhao/received_events",
"repos_url": "https://api.github.com/users/JieyuZhao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JieyuZhao"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"Thanks @JieyuZhao.\r\n\r\nI think this PR was superseded by your other PRs:\r\n- #1930\r\n- #2152 \r\n\r\nI'm closing this."
] | "2021-02-22T18:51:34Z" | "2022-09-23T13:35:09Z" | "2022-09-23T13:35:08Z" | CONTRIBUTOR | null | Updated the info for the wino_bias dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1927/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1927/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1927.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1927",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1927.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1927"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1926/comments | https://api.github.com/repos/huggingface/datasets/issues/1926/events | https://github.com/huggingface/datasets/pull/1926 | 813,607,994 | MDExOlB1bGxSZXF1ZXN0NTc3NzI4Mjgy | 1,926 | Fix: Wiki_dpr - add missing scalar quantizer | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-22T15:32:05Z" | "2021-02-22T15:49:54Z" | "2021-02-22T15:49:53Z" | MEMBER | null | All the prebuilt wiki_dpr indexes already use SQ8, I forgot to update the wiki_dpr script after building them. Now it's finally done.
The scalar quantizer SQ8 doesn't reduce the performance of the index as shown in retrieval experiments on RAG.
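As an illustration of what attaching a scalar-quantized index looks like with the `datasets` API (a toy sketch; the dimension, factory string and sizes are placeholders, not the spec of the prebuilt wiki_dpr index):

```python
import numpy as np
import faiss
from datasets import Dataset

# Toy dataset with a 768-d "embeddings" column, indexed with an SQ8 FAISS index.
vectors = np.random.rand(1000, 768).astype("float32")
ds = Dataset.from_dict({"embeddings": vectors.tolist()})
index = faiss.index_factory(768, "SQ8", faiss.METRIC_INNER_PRODUCT)
ds.add_faiss_index(column="embeddings", custom_index=index, train_size=1000)
scores, retrieved = ds.get_nearest_examples("embeddings", vectors[0], k=5)
```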
The quantizer reduces the size of the index a lot but increases index building time. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1926/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1926/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1926.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1926",
"merged_at": "2021-02-22T15:49:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1926.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1926"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1925/comments | https://api.github.com/repos/huggingface/datasets/issues/1925/events | https://github.com/huggingface/datasets/pull/1925 | 813,600,902 | MDExOlB1bGxSZXF1ZXN0NTc3NzIyMzc3 | 1,925 | Fix: Wiki_dpr - fix when with_embeddings is False or index_name is "no_index" | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq ,\r\n\r\nI am running into an issue now when trying to run RAG. Running exactly as described [here](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage) I get the error below. Wondering if it's related to this.\r\n\r\nRunning Transformers 4.3.2 with datasets installed from source from `master` branch.\r\n\r\n```bash\r\n(venv) sergey_mkrtchyan datasets (master) $ python\r\nPython 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39)\r\n[Clang 6.0 (clang-600.0.57)] on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration\r\n>>> tokenizer = RagTokenizer.from_pretrained(\"facebook/rag-token-nq\")\r\n>>> retriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\nUsing custom data configuration dummy.psgs_w100.nq.no_index-dummy=True,with_index=False\r\nReusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)\r\nUsing custom data configuration dummy.psgs_w100.nq.exact-50b6cda57ff32ab4\r\nReusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.exact-50b6cda57ff32ab4/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)\r\n 0%| | 0/10 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 425, in from_pretrained\r\n return cls(\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 387, in __init__\r\n self.init_retrieval()\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 458, in init_retrieval\r\n self.index.init_index()\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 284, in init_index\r\n self.dataset = load_dataset(\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/load.py\", line 750, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py\", line 734, in as_dataset\r\n datasets = utils.map_nested(\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/utils/py_utils.py\", line 195, in map_nested\r\n return function(data_struct)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py\", line 769, in _build_single_dataset\r\n post_processed = self._post_process(ds, resources_paths)\r\n File \"/Users/sergey_mkrtchyan/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb/wiki_dpr.py\", line 205, in _post_process\r\n dataset.add_faiss_index(\"embeddings\", custom_index=index)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/arrow_dataset.py\", line 2516, in add_faiss_index\r\n super().add_faiss_index(\r\n File 
\"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py\", line 416, in add_faiss_index\r\n faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=faiss_verbose)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py\", line 281, in add_vectors\r\n self.faiss_index.add(vecs)\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/__init__.py\", line 104, in replacement_add\r\n self.add_c(n, swig_ptr(x))\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/swigfaiss.py\", line 3263, in add\r\n return _swigfaiss.IndexHNSW_add(self, n, x)\r\nRuntimeError: Error in virtual void faiss::IndexHNSW::add(faiss::Index::idx_t, const float *) at /Users/runner/work/faiss-wheels/faiss-wheels/faiss/faiss/IndexHNSW.cpp:356: Error: 'is_trained' failed\r\n>>>\r\n```\r\n\r\nThe error message is hinting that it could be related to this, but I might be wrong. Any ideas?\r\n\r\n\r\nEdit: Can confirm it's working fine with datasets==1.2.0\r\n\r\nDouble Edit: Did some further digging. The issue is related to this commit: 8c5220307c33f00e01c3bf7b8. I opened a separate issue #1941 for proper tracking."
] | "2021-02-22T15:23:46Z" | "2021-02-25T01:33:48Z" | "2021-02-22T15:36:08Z" | MEMBER | null | Fix the bugs noticed in #1915
There was a bug when `with_embeddings=False` where the configuration name was the same as if `with_embeddings=True`, which led the dataset builder to do bad verifications (for example it used to expect to download the embeddings for `with_embeddings=False`).
Another issue was that setting `index_name="no_index"` didn't set `with_index` to False.
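For context, this is the kind of call affected; after the fix it resolves to its own configuration, so no embedding files or index are expected (flags as reported in #1915):

```python
from datasets import load_dataset

ds = load_dataset(
    "wiki_dpr",
    embeddings_name="multiset",
    with_embeddings=False,
    index_name="no_index",
)
```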
I fixed both of them and added dummy data for those configurations for testing. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1925/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1925/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1925.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1925",
"merged_at": "2021-02-22T15:36:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1925.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1925"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1924/comments | https://api.github.com/repos/huggingface/datasets/issues/1924/events | https://github.com/huggingface/datasets/issues/1924 | 813,599,733 | MDU6SXNzdWU4MTM1OTk3MzM= | 1,924 | Anonymous Dataset Addition (i.e Anonymous PR?) | {
"avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4",
"events_url": "https://api.github.com/users/PierreColombo/events{/privacy}",
"followers_url": "https://api.github.com/users/PierreColombo/followers",
"following_url": "https://api.github.com/users/PierreColombo/following{/other_user}",
"gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PierreColombo",
"id": 22492839,
"login": "PierreColombo",
"node_id": "MDQ6VXNlcjIyNDkyODM5",
"organizations_url": "https://api.github.com/users/PierreColombo/orgs",
"received_events_url": "https://api.github.com/users/PierreColombo/received_events",
"repos_url": "https://api.github.com/users/PierreColombo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PierreColombo"
} | [] | closed | false | null | [] | null | [
"Hi !\r\nI guess you can add a dataset without the fields that must be kept anonymous, and then update those when the anonymity period is over.\r\nYou can also make the PR from an anonymous org.\r\nPinging @yjernite just to make sure it's ok",
"Hello,\r\nI would prefer to do the reverse: adding a link to an anonymous paper without the people names/institution in the PR. Would it be conceivable ?\r\nCheers\r\n",
"Sure, I think it's ok on our side",
"Yup, sounds good!"
] | "2021-02-22T15:22:30Z" | "2022-10-05T13:07:11Z" | "2022-10-05T13:07:11Z" | CONTRIBUTOR | null | Hello,
Thanks a lot for your library.
We plan to submit a paper on OpenReview using the Anonymous setting. Is it possible to add a new dataset without breaking the anonymity, with a link to the paper?
Cheers
@eusip | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1924/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1924/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1923/comments | https://api.github.com/repos/huggingface/datasets/issues/1923/events | https://github.com/huggingface/datasets/pull/1923 | 813,363,472 | MDExOlB1bGxSZXF1ZXN0NTc3NTI0MTU0 | 1,923 | Fix save_to_disk with relative path | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-22T10:27:19Z" | "2021-02-22T11:22:44Z" | "2021-02-22T11:22:43Z" | MEMBER | null | As noticed in #1919 and #1920 the target directory was not created using `makedirs` so saving to it raises `FileNotFoundError`. For absolute paths it works but not for the good reason. This is because the target path was the same as the temporary path where in-memory data are written as an intermediary step.
I added the `makedirs` call using `fs.makedirs` in order to support remote filesystems.
I also fixed the issue with the target path being the temporary path.
I also added a test case for `save_to_disk` with relative paths.
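For illustration, the kind of call this enables (a minimal sketch; the dataset and target path are arbitrary):

```python
from datasets import load_dataset

ds = load_dataset("squad", split="validation")
# The target directory (relative and not yet existing) is now created with
# the filesystem's makedirs before the data is written.
ds.save_to_disk("./saved/squad_validation")
```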
Thanks to @M-Salti for reporting and investigating | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1923/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1923/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1923.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1923",
"merged_at": "2021-02-22T11:22:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1923.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1923"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1922/comments | https://api.github.com/repos/huggingface/datasets/issues/1922/events | https://github.com/huggingface/datasets/issues/1922 | 813,140,806 | MDU6SXNzdWU4MTMxNDA4MDY= | 1,922 | How to update the "wino_bias" dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}",
"followers_url": "https://api.github.com/users/JieyuZhao/followers",
"following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JieyuZhao",
"id": 22306304,
"login": "JieyuZhao",
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"organizations_url": "https://api.github.com/users/JieyuZhao/orgs",
"received_events_url": "https://api.github.com/users/JieyuZhao/received_events",
"repos_url": "https://api.github.com/users/JieyuZhao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JieyuZhao"
} | [] | open | false | null | [] | null | [
"Hi @JieyuZhao !\r\n\r\nYou can edit the dataset card of wino_bias to update the URL via a Pull Request. This would be really appreciated :)\r\n\r\nThe dataset card is the README.md file you can find at https://github.com/huggingface/datasets/tree/master/datasets/wino_bias\r\nAlso the homepage url is also mentioned in the wino_bias.py so feel free to update it there as well.\r\n\r\nYou can create a Pull Request directly from the github interface by editing the files you want and submit a PR, or from a local clone of the repository.\r\n\r\nThanks for noticing !"
] | "2021-02-22T05:39:39Z" | "2021-02-22T10:35:59Z" | null | CONTRIBUTOR | null | Hi all,
Thanks for the efforts to collect all the datasets! But I think there is a problem with the wino_bias dataset. The current link is not correct. How can I update that?
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1922/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1922/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1921/comments | https://api.github.com/repos/huggingface/datasets/issues/1921/events | https://github.com/huggingface/datasets/pull/1921 | 812,716,042 | MDExOlB1bGxSZXF1ZXN0NTc3MDEzMDM4 | 1,921 | Standardizing datasets dtypes | {
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/justin-yan",
"id": 7731709,
"login": "justin-yan",
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/justin-yan"
} | [] | closed | false | null | [] | null | [
"@lhoestq - apologies for the multiple PRs, my previous one (#1905) got mangled due to some merge conflicts that I had trouble resolving so I just cherry-picked my changes onto a fresh branch here."
] | "2021-02-20T22:04:01Z" | "2021-02-22T09:44:10Z" | "2021-02-22T09:44:10Z" | CONTRIBUTOR | null | This PR follows up on discussion in #1900 to have an explicit set of basic dtypes for datasets.
This moves away from str(pyarrow.DataType) as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes.
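A small sketch of what this looks like on the user side (feature names and values are arbitrary):

```python
from datasets import Dataset, Features, Value

# "float64" is used as the dtype name, per the explicit list described above.
features = Features({"text": Value("string"), "score": Value("float64")})
ds = Dataset.from_dict({"text": ["a", "b"], "score": [0.1, 0.2]}, features=features)
```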
I believe in practice this should be backward compatible, since anyone previously using Value() could only have used dtypes whose names match a pyarrow factory function, and all of those are explicitly supported here. `float32` and `float64` act as the official datasets dtypes, which resolves the tension between `double` (the pyarrow dtype name) and `float64` (the pyarrow type factory function). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1921/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1921/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1921.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1921",
"merged_at": "2021-02-22T09:44:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1921.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1921"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1920/comments | https://api.github.com/repos/huggingface/datasets/issues/1920/events | https://github.com/huggingface/datasets/pull/1920 | 812,628,220 | MDExOlB1bGxSZXF1ZXN0NTc2OTQ5NzI2 | 1,920 | Fix save_to_disk issue | {
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/M-Salti",
"id": 9285264,
"login": "M-Salti",
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/M-Salti"
} | [] | closed | false | null | [] | null | [
"So I was curious why the issue reported at #1919 wasn't caught in [this test](https://github.com/huggingface/datasets/blob/248104c4bdb2e01c036b7578867199191fbff181/tests/test_arrow_dataset.py#L209), so I did some digging.\r\nI tried to save to a temporary directory (just like in the test), like this:\r\n```python\r\nwith tempfile.TemporaryDirectory() as requested_tempdir:\r\n squad.save_to_disk(requested_tempdir) # no error\r\n```\r\nand it executes succesfuly without problems.\r\nSo why does it work, but this doesn't?\r\n```python\r\nsquad.save_to_disk(\"./squad\") # error\r\n```\r\nIt's because `save_to_disk` also creates a temporary directory (let's call it `tempdir`), and since `tempdir` and `requested_tempdir` share the same parents, the `Path.joinpath` method [(here)](https://github.com/huggingface/datasets/blob/248104c4bdb2e01c036b7578867199191fbff181/src/datasets/arrow_dataset.py#L469) will keep `requested_tempdir` as it is and the *train* directory will be created under `requested_tempdir` and hence no errors will arise.\r\n\r\nBut in the second case (where we are saving to a local dir), the *train* directory is created under *squad* which in turn is created under `tempdir`, not under `.` (current dir).\r\n\r\nSo, all of this probably doesn't help solving the issue but it might help creating a better test, and it also makes me wonder why are we saving to a temporary dir in `save_to_disk` anyway? I mean, won't it be removed with all its contents upon execution completion? what's the point then? ",
"CLosing in favor of #1923"
] | "2021-02-20T14:22:39Z" | "2021-02-22T10:30:11Z" | "2021-02-22T10:30:11Z" | CONTRIBUTOR | null | Fixes #1919
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1920/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1920/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1920.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1920",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1920.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1920"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1919/comments | https://api.github.com/repos/huggingface/datasets/issues/1919/events | https://github.com/huggingface/datasets/issues/1919 | 812,626,872 | MDU6SXNzdWU4MTI2MjY4NzI= | 1,919 | Failure to save with save_to_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/M-Salti",
"id": 9285264,
"login": "M-Salti",
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/M-Salti"
} | [] | closed | false | null | [] | null | [
"Hi thanks for reporting and for proposing a fix :)\r\n\r\nI just merged a fix, feel free to try it from the master branch !",
"Closing since this has been fixed by #1923"
] | "2021-02-20T14:18:10Z" | "2021-03-03T17:40:27Z" | "2021-03-03T17:40:27Z" | CONTRIBUTOR | null | When I try to save a dataset locally using the `save_to_disk` method I get the error:
```bash
FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow'
```
To replicate:
1. Install `datasets` from master
2. Run this code:
```python
from datasets import load_dataset
squad = load_dataset("squad") # or any other dataset
squad.save_to_disk("squad") # error here
```
The problem is that the method is not creating a directory with the name `dataset_path` for saving the dataset in (i.e. it's not creating the *train* and *validation* directories in this case). After creating the directory the problem resolves.
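For reference, a sketch of the manual workaround (the directory names follow the error message above):

```python
import os
from datasets import load_dataset

squad = load_dataset("squad")
# Pre-create the per-split target directories that save_to_disk expects.
for split in squad:
    os.makedirs(os.path.join("squad", split), exist_ok=True)
squad.save_to_disk("squad")
```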
I'll open a PR soon doing that and linking this issue.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1919/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1919/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1918/comments | https://api.github.com/repos/huggingface/datasets/issues/1918/events | https://github.com/huggingface/datasets/pull/1918 | 812,541,510 | MDExOlB1bGxSZXF1ZXN0NTc2ODg2OTQ0 | 1,918 | Fix QA4MRE download URLs | {
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/M-Salti",
"id": 9285264,
"login": "M-Salti",
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/M-Salti"
} | [] | closed | false | null | [] | null | [] | "2021-02-20T07:32:17Z" | "2021-02-22T13:35:06Z" | "2021-02-22T13:35:06Z" | CONTRIBUTOR | null | The URLs in the `dataset_infos` and `README` are correct, only the ones in the download script needed updating. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1918/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1918/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1918.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1918",
"merged_at": "2021-02-22T13:35:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1918.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1918"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1917/comments | https://api.github.com/repos/huggingface/datasets/issues/1917/events | https://github.com/huggingface/datasets/issues/1917 | 812,390,178 | MDU6SXNzdWU4MTIzOTAxNzg= | 1,917 | UnicodeDecodeError: windows 10 machine | {
"avatar_url": "https://avatars.githubusercontent.com/u/900951?v=4",
"events_url": "https://api.github.com/users/yosiasz/events{/privacy}",
"followers_url": "https://api.github.com/users/yosiasz/followers",
"following_url": "https://api.github.com/users/yosiasz/following{/other_user}",
"gists_url": "https://api.github.com/users/yosiasz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yosiasz",
"id": 900951,
"login": "yosiasz",
"node_id": "MDQ6VXNlcjkwMDk1MQ==",
"organizations_url": "https://api.github.com/users/yosiasz/orgs",
"received_events_url": "https://api.github.com/users/yosiasz/received_events",
"repos_url": "https://api.github.com/users/yosiasz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yosiasz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yosiasz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yosiasz"
} | [] | closed | false | null | [] | null | [
"upgraded to php 3.9.2 and it works!"
] | "2021-02-19T22:13:05Z" | "2021-02-19T22:41:11Z" | "2021-02-19T22:40:28Z" | NONE | null | Windows 10
Python 3.6.8
when running
```python
import datasets
oscar_am = datasets.load_dataset("oscar", "unshuffled_deduplicated_am")
print(oscar_am["train"][0])
```
I get the following error
```
file "C:\PYTHON\3.6.8\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 58: character maps to <undefined>
``` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1917/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1917/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1916/comments | https://api.github.com/repos/huggingface/datasets/issues/1916/events | https://github.com/huggingface/datasets/pull/1916 | 812,291,984 | MDExOlB1bGxSZXF1ZXN0NTc2NjgwNjY5 | 1,916 | Remove unused py_utils objects | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Hmmm this one broke master. I'm fixing it.\r\n\r\nMaybe because your branch was outdated ?",
"Sorry @lhoestq, I forgot to update the imports... :/",
"It's fine, the CI should have caught this tbh. Not sure why it did't fail"
] | "2021-02-19T19:51:25Z" | "2021-02-22T14:56:56Z" | "2021-02-22T13:32:49Z" | MEMBER | null | Remove unused/unnecessary py_utils functions/classes. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1916/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1916/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1916.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1916",
"merged_at": "2021-02-22T13:32:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1916.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1916"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1915/comments | https://api.github.com/repos/huggingface/datasets/issues/1915/events | https://github.com/huggingface/datasets/issues/1915 | 812,229,654 | MDU6SXNzdWU4MTIyMjk2NTQ= | 1,915 | Unable to download `wiki_dpr` | {
"avatar_url": "https://avatars.githubusercontent.com/u/18504534?v=4",
"events_url": "https://api.github.com/users/nitarakad/events{/privacy}",
"followers_url": "https://api.github.com/users/nitarakad/followers",
"following_url": "https://api.github.com/users/nitarakad/following{/other_user}",
"gists_url": "https://api.github.com/users/nitarakad/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nitarakad",
"id": 18504534,
"login": "nitarakad",
"node_id": "MDQ6VXNlcjE4NTA0NTM0",
"organizations_url": "https://api.github.com/users/nitarakad/orgs",
"received_events_url": "https://api.github.com/users/nitarakad/received_events",
"repos_url": "https://api.github.com/users/nitarakad/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nitarakad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nitarakad/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nitarakad"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Thanks for reporting ! This is a bug. For now feel free to set `ignore_verifications=False` in `load_dataset`.\r\nI'm working on a fix",
"I just merged a fix :)\r\n\r\nWe'll do a patch release soon. In the meantime feel free to try it from the master branch\r\nThanks again for reporting !",
"Closing since this has been fixed by #1925"
] | "2021-02-19T18:11:32Z" | "2021-03-03T17:40:48Z" | "2021-03-03T17:40:48Z" | NONE | null | I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran:
`curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")`
However, I got the following error:
`datasets.utils.info_utils.UnexpectedDownloadedFile: {'embeddings_index'}`
I tried adding the flags `with_embeddings=False` and `with_index=False`:
`curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False, embeddings_name="multiset", index_name="no_index")`
But I got the following error:
`raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_5’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_15’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_30’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_36’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_18’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_41’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_13’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_48’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_10’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_23’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_14’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_34’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_43’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_40’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_47’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_3’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_24’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_7’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_33’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_46’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_42’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_27’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_29’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_26’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_22’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_4’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_20’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_39’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_6’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_16’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_8’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_35’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_49’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_17’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_25’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_0’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_38’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_12’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_44’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_1’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_32’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_19’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_31’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_37’, 
‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_9’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_11’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_21’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_28’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_45’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_2’}`
Is there anything else I need to set to download the dataset?
**UPDATE**: just running `curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False)` gives me the same error.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1915/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1915/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1914/comments | https://api.github.com/repos/huggingface/datasets/issues/1914/events | https://github.com/huggingface/datasets/pull/1914 | 812,149,201 | MDExOlB1bGxSZXF1ZXN0NTc2NTYyNTkz | 1,914 | Fix logging imports and make all datasets use library logger | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-02-19T16:12:34Z" | "2021-02-21T19:48:03Z" | "2021-02-21T19:48:03Z" | MEMBER | null | Fix library relative logging imports and make all datasets use library logger. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1914/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1914/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1914.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1914",
"merged_at": "2021-02-21T19:48:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1914.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1914"
} | true |
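For context on what "use library logger" means in practice, dataset scripts obtain their logger through the library's logging utilities rather than the standard `logging` module directly, so log output follows the library's verbosity settings. A minimal sketch of the pattern:

```python
import datasets

# Library logger pattern used inside dataset scripts: messages then respect
# the global verbosity controls (datasets.logging.set_verbosity_info(), etc.).
logger = datasets.logging.get_logger(__name__)
logger.info("Generating examples...")
```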
https://api.github.com/repos/huggingface/datasets/issues/1913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1913/comments | https://api.github.com/repos/huggingface/datasets/issues/1913/events | https://github.com/huggingface/datasets/pull/1913 | 812,127,307 | MDExOlB1bGxSZXF1ZXN0NTc2NTQ0NjQw | 1,913 | Add keep_linebreaks parameter to text loader | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Just so I understand how it can be used in practice, do you have an example showing how to load a text dataset with this option?",
"Sure ! Here is an example:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"text\", keep_linebreaks=True, data_files=...)\r\n```\r\n\r\nI'll update the documentation to explain this",
"Perfect!"
] | "2021-02-19T15:43:45Z" | "2021-02-19T18:36:12Z" | "2021-02-19T18:36:11Z" | MEMBER | null | As asked in #870 and https://github.com/huggingface/transformers/issues/10269 there should be a parameter to keep the linebreaks when loading a text dataset.
cc @sgugger @jncasey | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1913/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1913/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1913.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1913",
"merged_at": "2021-02-19T18:36:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1913.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1913"
} | true |
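A short usage sketch of the new parameter, with a hypothetical local file name standing in for real data files:

```python
from datasets import load_dataset

# "my_corpus.txt" is a placeholder path; keep_linebreaks=True keeps the trailing
# "\n" on each line instead of stripping it when building the "text" column.
dataset = load_dataset(
    "text",
    data_files={"train": "my_corpus.txt"},
    keep_linebreaks=True,
)
print(dataset["train"][0]["text"])
```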
https://api.github.com/repos/huggingface/datasets/issues/1912 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1912/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1912/comments | https://api.github.com/repos/huggingface/datasets/issues/1912/events | https://github.com/huggingface/datasets/pull/1912 | 812,034,140 | MDExOlB1bGxSZXF1ZXN0NTc2NDY2ODQx | 1,912 | Update: WMT - use mirror links | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"So much better - thank you for doing that, @lhoestq!",
"Also fixed the `uncorpus` urls for wmt19 ru-en and zh-en for https://github.com/huggingface/datasets/issues/1893",
"Thanks!\r\nCan this be merged sooner? \r\nI manually update it and it works well."
] | "2021-02-19T13:42:34Z" | "2021-02-24T13:44:53Z" | "2021-02-24T13:44:53Z" | MEMBER | null | As asked in #1892 I created mirrors of the data hosted on statmt.org and updated the wmt scripts.
Now downloading the wmt datasets is blazing fast :)
cc @stas00 @patrickvonplaten | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 4,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1912/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1912/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1912.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1912",
"merged_at": "2021-02-24T13:44:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1912.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1912"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1911 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1911/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1911/comments | https://api.github.com/repos/huggingface/datasets/issues/1911/events | https://github.com/huggingface/datasets/issues/1911 | 812,009,956 | MDU6SXNzdWU4MTIwMDk5NTY= | 1,911 | Saving processed dataset running infinitely | {
"avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4",
"events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}",
"followers_url": "https://api.github.com/users/ayubSubhaniya/followers",
"following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}",
"gists_url": "https://api.github.com/users/ayubSubhaniya/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ayubSubhaniya",
"id": 20911334,
"login": "ayubSubhaniya",
"node_id": "MDQ6VXNlcjIwOTExMzM0",
"organizations_url": "https://api.github.com/users/ayubSubhaniya/orgs",
"received_events_url": "https://api.github.com/users/ayubSubhaniya/received_events",
"repos_url": "https://api.github.com/users/ayubSubhaniya/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ayubSubhaniya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayubSubhaniya/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ayubSubhaniya"
} | [] | open | false | null | [] | null | [
"@thomwolf @lhoestq can you guys please take a look and recommend some solution.",
"am suspicious of this thing? what's the purpose of this? pickling and unplickling\r\n`self = pickle.loads(pickle.dumps(self))`\r\n\r\n```\r\n def save_to_disk(self, dataset_path: str, fs=None):\r\n \"\"\"\r\n Saves a dataset to a dataset directory, or in a filesystem using either :class:`datasets.filesystem.S3FileSystem` or any implementation of ``fsspec.spec.AbstractFileSystem``.\r\n\r\n Args:\r\n dataset_path (``str``): path (e.g. ``dataset/train``) or remote uri (e.g. ``s3://my-bucket/dataset/train``) of the dataset directory where the dataset will be saved to\r\n fs (Optional[:class:`datasets.filesystem.S3FileSystem`,``fsspec.spec.AbstractFileSystem``], `optional`, defaults ``None``): instance of :class:`datasets.filesystem.S3FileSystem` or ``fsspec.spec.AbstractFileSystem`` used to download the files from remote filesystem.\r\n \"\"\"\r\n assert (\r\n not self.list_indexes()\r\n ), \"please remove all the indexes using `dataset.drop_index` before saving a dataset\"\r\n self = pickle.loads(pickle.dumps(self))\r\n ```",
"It's been 24 hours and sadly it's still running. With not a single byte written",
"Tried finding the root cause but was unsuccessful.\r\nI am using lazy tokenization with `dataset.set_transform()`, it works like a charm with almost same performance as pre-compute.",
"Hi ! This very probably comes from the hack you used.\r\n\r\nThe pickling line was added an a sanity check because save_to_disk uses the same assumptions as pickling for a dataset object. The main assumption is that memory mapped pyarrow tables must be reloadable from the disk. In your case it's not possible since you altered the pyarrow table.\r\nI would suggest you to rebuild a valid Dataset object from your new pyarrow table. To do so you must first save your new table to a file, and then make a new Dataset object from that arrow file.\r\n\r\nYou can save the raw arrow table (without all the `datasets.Datasets` metadata) by calling `map` with `cache_file_name=\"path/to/outut.arrow\"` and `function=None`. Having `function=None` makes the `map` write your dataset on disk with no data transformation.\r\n\r\nOnce you have your new arrow file, load it with `datasets.Dataset.from_file` to have a brand new Dataset object :)\r\n\r\nIn the future we'll have a better support for the fast filtering method from pyarrow so you don't have to do this very unpractical workaround. Since it breaks somes assumptions regarding the core behavior of Dataset objects, this is very discouraged.",
"Thanks, @lhoestq for your response. Will try your solution and let you know."
] | "2021-02-19T13:09:19Z" | "2021-02-23T07:34:44Z" | null | NONE | null | I have a text dataset of size 220M.
For pre-processing, I need to tokenize it and filter out rows with overly long sequences.
My tokenization took roughly 3 hrs. I used map() with batch size 1024 and multiprocessing with 96 processes.
The filter() function was way too slow, so I used a hack based on the pyarrow table filter function, which is damn fast, as mentioned [here](https://github.com/huggingface/datasets/issues/1796)
```dataset._data = dataset._data.filter(...)```
It took 1 hr for the filter.
Then I used `save_to_disk()` on the processed dataset, and it has been running forever.
I have been waiting for 8 hrs and it has not written a single byte.
In fact, it has actually read more than 100 GB from disk; the screenshot below shows the stats from `iotop`.
The second process is the one in question.
<img width="1672" alt="Screenshot 2021-02-19 at 6 36 53 PM" src="https://user-images.githubusercontent.com/20911334/108508197-7325d780-72e1-11eb-8369-7c057d137d81.png">
I am not able to figure out whether this is an issue with the datasets library or whether it is due to my hack for the filter() function. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1911/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1911/timeline | null | null | null | null | false |
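A minimal sketch of the workaround described in the comments above: materialize the (hacked) table to a new arrow file with a no-op `map`, then rebuild a clean `Dataset` from that file so that `save_to_disk` operates on a consistent memory-mapped table. File paths and the tiny example dataset below are placeholders.

```python
from datasets import Dataset

# Stand-in for the processed dataset from the issue above (the one whose
# underlying pyarrow table was replaced via the filter hack).
dataset = Dataset.from_dict({"text": ["short example", "another row"]})

# 1) Write the current table to disk without transforming the data:
#    function=None makes map() a pass-through that just serializes the rows.
dataset.map(function=None, cache_file_name="filtered.arrow")

# 2) Rebuild a clean Dataset object backed by the freshly written arrow file,
#    then save_to_disk() works on a valid memory-mapped table.
clean_dataset = Dataset.from_file("filtered.arrow")
clean_dataset.save_to_disk("saved_dataset")
```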
https://api.github.com/repos/huggingface/datasets/issues/1910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1910/comments | https://api.github.com/repos/huggingface/datasets/issues/1910/events | https://github.com/huggingface/datasets/pull/1910 | 811,697,108 | MDExOlB1bGxSZXF1ZXN0NTc2MTg0MDQ3 | 1,910 | Adding CoNLLpp dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/21319243?v=4",
"events_url": "https://api.github.com/users/ZihanWangKi/events{/privacy}",
"followers_url": "https://api.github.com/users/ZihanWangKi/followers",
"following_url": "https://api.github.com/users/ZihanWangKi/following{/other_user}",
"gists_url": "https://api.github.com/users/ZihanWangKi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZihanWangKi",
"id": 21319243,
"login": "ZihanWangKi",
"node_id": "MDQ6VXNlcjIxMzE5MjQz",
"organizations_url": "https://api.github.com/users/ZihanWangKi/orgs",
"received_events_url": "https://api.github.com/users/ZihanWangKi/received_events",
"repos_url": "https://api.github.com/users/ZihanWangKi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZihanWangKi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZihanWangKi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZihanWangKi"
} | [] | closed | false | null | [] | null | [
"It looks like this PR now includes changes to many other files than the ones for CoNLLpp.\r\n\r\nTo fix that feel free to create another branch and another PR.\r\n\r\nThis was probably caused by a git rebase. You can avoid this issue by using git merge if you've already pushed your branch."
] | "2021-02-19T05:12:30Z" | "2021-03-04T22:02:47Z" | "2021-03-04T22:02:47Z" | CONTRIBUTOR | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1910/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1910/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1910.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1910",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1910.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1910"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1907/comments | https://api.github.com/repos/huggingface/datasets/issues/1907/events | https://github.com/huggingface/datasets/issues/1907 | 811,520,569 | MDU6SXNzdWU4MTE1MjA1Njk= | 1,907 | DBPedia14 Dataset Checksum bug? | {
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}",
"followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers",
"following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}",
"gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/francisco-perez-sorrosal",
"id": 918006,
"login": "francisco-perez-sorrosal",
"node_id": "MDQ6VXNlcjkxODAwNg==",
"organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs",
"received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events",
"repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/francisco-perez-sorrosal"
} | [] | closed | false | null | [] | null | [
"Hi ! :)\r\n\r\nThis looks like the same issue as https://github.com/huggingface/datasets/issues/1856 \r\nBasically google drive has quota issues that makes it inconvenient for downloading files.\r\n\r\nIf the quota of a file is exceeded, you have to wait 24h for the quota to reset (which is painful).\r\n\r\nThe error says that the checksum of the downloaded file doesn't match because google drive returns a text file with the \"Quota Exceeded\" error instead of the actual data file.",
"Thanks @lhoestq! Yes, it seems back to normal after a couple of days."
] | "2021-02-18T22:25:48Z" | "2021-02-22T23:22:05Z" | "2021-02-22T23:22:04Z" | CONTRIBUTOR | null | Hi there!!!
I've been successfully using the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase for the last couple of weeks, but in the last couple of days I have started getting this error:
```
Traceback (most recent call last):
File "./conditional_classification/basic_pipeline.py", line 178, in <module>
main()
File "./conditional_classification/basic_pipeline.py", line 128, in main
corpus.load_data(limit_train_examples_per_class=args.data_args.train_examples_per_class,
File "/home/fp/dev/conditional_classification/conditional_classification/datasets_base.py", line 83, in load_data
datasets = load_dataset(self.name, split=dataset_split)
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/load.py", line 609, in load_dataset
builder_instance.download_and_prepare(
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 526, in download_and_prepare
self._download_and_prepare(
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 586, in _download_and_prepare
verify_checksums(
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k']
```
I've seen this has happened before in other datasets as reported in #537.
I've tried clearing my cache and calling `load_dataset` again, but it is still not working. The same codebase is successfully downloading and using other datasets (e.g. AGNews) without any problem, so I guess something has happened specifically to the DBPedia dataset in the last few days.
Can you please check if there's a problem with the checksums?
Or is this related to something else? I've seen that the path in the cache for the dataset is `/home/fp/.cache/huggingface/datasets/d_bpedia14/dbpedia_14/2.0.0/a70413e39e7a716afd0e90c9e53cb053691f56f9ef5fe317bd07f2c368e8e897...` and includes `d_bpedia14` instead of `dbpedia_14`. Was this perhaps a bug introduced recently?
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1907/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1907/timeline | null | completed | null | null | false |
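Since downloads are cached by URL, the "Quota Exceeded" text file that Google Drive returns can linger in the cache even after the quota resets. A hedged sketch of forcing a fresh download once the file is reachable again (parameter spelling per datasets 1.x, where the string maps to `GenerateMode.FORCE_REDOWNLOAD`); the actual resolution in this thread was simply waiting for the quota reset.

```python
from datasets import load_dataset

# Re-download the source files instead of reusing a previously cached (corrupted) copy.
dataset = load_dataset("dbpedia_14", download_mode="force_redownload")
```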
https://api.github.com/repos/huggingface/datasets/issues/1906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1906/comments | https://api.github.com/repos/huggingface/datasets/issues/1906/events | https://github.com/huggingface/datasets/issues/1906 | 811,405,274 | MDU6SXNzdWU4MTE0MDUyNzQ= | 1,906 | Feature Request: Support for Pandas `Categorical` | {
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/justin-yan",
"id": 7731709,
"login": "justin-yan",
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/justin-yan"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | null | [] | null | [
"We already have a ClassLabel type that does this kind of mapping between the label ids (integers) and actual label values (strings).\r\n\r\nI wonder if actually we should use the DictionaryType from Arrow and the Categorical type from pandas for the `datasets` ClassLabel feature type.\r\nCurrently ClassLabel corresponds to `pa.int64()` in pyarrow and `dtype('int64')` in pandas (so the label names are lost during conversions).\r\n\r\nWhat do you think ?",
"Now that I've heard you explain ClassLabel, that makes a lot of sense! While DictionaryType for Arrow (I think) can have arbitrarily typed keys, so it won't cover all potential cases, pandas' Category is *probably* the most common use for that pyarrow type, and ClassLabel should match that perfectly?\r\n\r\nOther thoughts:\r\n\r\n- changing the resulting patype on ClassLabel might be backward-incompatible? I'm not totally sure if users of the `datasets` library tend to directly access the `patype` attribute (I don't think we really do, but we haven't been using it for very long yet).\r\n- would ClassLabel's dtype change to `dict[int64, string]`? It seems like in practice a ClassLabel (when not explicitly specified) would be constructed from the DictionaryType branch of `generate_from_arrow_type`, so it's not totally clear to me that anyone ever actually accesses/uses that dtype?\r\n- I don't quite know how `.int2str` and `.str2int` are used in practice - would those be kept? Perhaps the implementation might actually be substantially smaller if we can just delegate to pyarrow's dict methods?\r\n\r\nAnother idea that just occurred to me: add a branch in here to generate a ClassLabel if the dict key is int64 and the values are string: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L932 , and then don't touch anything else.\r\n\r\nIn practice, I don't think this would be backward-incompatible in a way anyone would care about since the current behavior just throws an exception, and this way, we could support *reading* a pandas Categorical into a `Dataset` as a ClassLabel. I *think* from there, while it would require some custom glue it wouldn't be too hard to convert the ClassLabel into a pandas Category if we want to go back - I think this would improve on the current behavior without risking changing the behavior of ClassLabel in a backward-incompat way.\r\n\r\nThoughts? I'm not sure if this is overly cautious. Whichever approach you think is better, I'd be happy to take it on!\r\n",
"I think we can first keep the int64 precision but with an arrow Dictionary for ClassLabel, and focus on the connection with arrow and pandas.\r\n\r\nIn this scope, I really like the idea of checking for the dictionary type:\r\n\r\n> Another idea that just occurred to me: add a branch in here to generate a ClassLabel if the dict key is int64 and the values are string: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L932 , and then don't touch anything else.\r\n\r\nThis looks like a great start.\r\n\r\nThen as you said we'd have to add the conversion from classlabel to the correct arrow dictionary type. Arrow is already able to convert from arrow Dictionary to pandas Categorical so it should be enough.\r\n\r\nI can see two things that we must take case of to make this change backward compatible:\r\n- first we must still be able to load an arrow file with arrow int64 dtype and `datasets` ClassLabel type without crashing. This can be fixed by casting the arrow int64 array to an arrow Dictionary array on-the-fly when loading the table in the ArrowReader.\r\n- then we still have to return integers when accessing examples from a ClassLabel column. Currently it would return the strings values since it's based on the pandas behavior for converting from pandas to python/numpy. To do so we just have to adapt the python/numpy extractors in formatting.py (it takes care of converting an arrow table to a dictionary of python objects by doing arrow table -> pandas dataframe -> python dictionary)\r\n\r\nAny help on this matter is very much welcome :)"
] | "2021-02-18T19:46:05Z" | "2021-02-23T14:38:50Z" | null | CONTRIBUTOR | null | ```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws NotImplementedError
# TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table
```
I'm curious if https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L796 could be built out in a way similar to `Sequence`?
e.g. a `Map` class (or whatever name the maintainers might prefer) that can accept:
```
index_type = generate_from_arrow_type(pa_type.index_type)
value_type = generate_from_arrow_type(pa_type.value_type)
```
and then additional code points to modify:
- FeatureType: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L694
- A branch to handle Map in get_nested_type: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L719
- I don't quite understand what `encode_nested_example` does but perhaps a branch there? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L755
- Similarly, I don't quite understand why `Sequence` is used this way in `generate_from_dict`, but perhaps a branch here? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L775
I couldn't find other usages of `Sequence` outside of defining specific datasets, so I'm not sure if that's a comprehensive set of touchpoints. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1906/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1906/timeline | null | null | null | null | false |
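Until a dedicated mapping for Arrow's DictionaryType exists, one hedged workaround for getting a pandas `Categorical` column into a `Dataset` is to convert it to integer codes and declare a `ClassLabel` feature by hand; the column and category names below are purely illustrative.

```python
import pandas as pd
from datasets import ClassLabel, Dataset, Features

df = pd.DataFrame({"label": pd.Series(["a", "b", "c", "a"], dtype="category")})

# Turn the categorical into int64 codes and remember the category names.
names = list(df["label"].cat.categories)
df["label"] = df["label"].cat.codes.astype("int64")

features = Features({"label": ClassLabel(names=names)})
ds = Dataset.from_pandas(df, features=features)

print(ds.features["label"].int2str(ds[0]["label"]))  # -> "a"
```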
https://api.github.com/repos/huggingface/datasets/issues/1905 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1905/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1905/comments | https://api.github.com/repos/huggingface/datasets/issues/1905/events | https://github.com/huggingface/datasets/pull/1905 | 811,384,174 | MDExOlB1bGxSZXF1ZXN0NTc1OTIxMDk1 | 1,905 | Standardizing datasets.dtypes | {
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/justin-yan",
"id": 7731709,
"login": "justin-yan",
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/justin-yan"
} | [] | closed | false | null | [] | null | [
"Also - I took a stab at updating the docs, but I'm not sure how to actually check the outputs to see if it's formatted properly."
] | "2021-02-18T19:15:31Z" | "2021-02-20T22:01:30Z" | "2021-02-20T22:01:30Z" | CONTRIBUTOR | null | This PR was further branched off of jdy-str-to-pyarrow-parsing, so it depends on https://github.com/huggingface/datasets/pull/1900 going first for the diff to be up-to-date (I'm not sure if there's a way for me to use jdy-str-to-pyarrow-parsing as a base branch while having it appear in the pull requests here).
This moves away from `str(pyarrow.DataType)` as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes.
I believe in practice this should be backward compatible, since anyone previously using Value() would only have been able to use dtypes that had an identically named pyarrow factory function, which are all explicitly supported here. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1905/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1905/timeline | null | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1905.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1905",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1905.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1905"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1904/comments | https://api.github.com/repos/huggingface/datasets/issues/1904/events | https://github.com/huggingface/datasets/pull/1904 | 811,260,904 | MDExOlB1bGxSZXF1ZXN0NTc1ODE4MjA0 | 1,904 | Fix to_pandas for boolean ArrayXD | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Thanks!"
] | "2021-02-18T16:30:46Z" | "2021-02-18T17:10:03Z" | "2021-02-18T17:10:01Z" | MEMBER | null | As noticed in #1887 the conversion of a dataset with a boolean ArrayXD feature types fails because of the underlying ListArray conversion to numpy requires `zero_copy_only=False`.
zero copy is available for all primitive types except booleans
see https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy
and https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22
cc @SBrandeis | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1904/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1904/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1904.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1904",
"merged_at": "2021-02-18T17:10:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1904.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1904"
} | true |
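A small illustration of the underlying Arrow behaviour: boolean arrays are bit-packed, so converting them to numpy requires a copy, which is exactly why `zero_copy_only=False` is needed for this type.

```python
import pyarrow as pa

arr = pa.array([True, False, True])

# arr.to_numpy(zero_copy_only=True) raises pyarrow.lib.ArrowInvalid here,
# because a bit-packed boolean buffer cannot be viewed directly as a numpy bool array.
print(arr.to_numpy(zero_copy_only=False))  # -> array([ True, False,  True])

# Primitive numeric types without nulls, by contrast, can be viewed without a copy.
print(pa.array([1, 2, 3]).to_numpy(zero_copy_only=True))  # -> array([1, 2, 3])
```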
https://api.github.com/repos/huggingface/datasets/issues/1903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1903/comments | https://api.github.com/repos/huggingface/datasets/issues/1903/events | https://github.com/huggingface/datasets/pull/1903 | 811,145,531 | MDExOlB1bGxSZXF1ZXN0NTc1NzIwOTk2 | 1,903 | Initial commit for the addition of TIMIT dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/16264631?v=4",
"events_url": "https://api.github.com/users/vrindaprabhu/events{/privacy}",
"followers_url": "https://api.github.com/users/vrindaprabhu/followers",
"following_url": "https://api.github.com/users/vrindaprabhu/following{/other_user}",
"gists_url": "https://api.github.com/users/vrindaprabhu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vrindaprabhu",
"id": 16264631,
"login": "vrindaprabhu",
"node_id": "MDQ6VXNlcjE2MjY0NjMx",
"organizations_url": "https://api.github.com/users/vrindaprabhu/orgs",
"received_events_url": "https://api.github.com/users/vrindaprabhu/received_events",
"repos_url": "https://api.github.com/users/vrindaprabhu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vrindaprabhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vrindaprabhu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vrindaprabhu"
} | [] | closed | false | null | [] | null | [
"@patrickvonplaten could you please review and help me close this PR?",
"@lhoestq Thank you so much for your comments and for patiently reviewing the code. Have _hopefully_ included all the suggested changes. Let me know if any more changes are required.\r\n\r\nSorry the code had lots of silly errors from my side!:' Will be more careful from next time! :)\r\n\r\n\r\n"
] | "2021-02-18T14:23:12Z" | "2021-03-01T09:39:12Z" | "2021-03-01T09:39:12Z" | CONTRIBUTOR | null | Below points needs to be addressed:
- Creation of dummy dataset is failing
- Need to check on the data representation
- License is not creative commons. Copyright: Portions © 1993 Trustees of the University of Pennsylvania
Also the links (_except the download_) point to the ami corpus! ;-)
@patrickvonplaten Requesting your comments, will be happy to address them! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1903/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1903/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1903.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1903",
"merged_at": "2021-03-01T09:39:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1903.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1903"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1902/comments | https://api.github.com/repos/huggingface/datasets/issues/1902/events | https://github.com/huggingface/datasets/pull/1902 | 810,931,171 | MDExOlB1bGxSZXF1ZXN0NTc1NTQwMDM1 | 1,902 | Fix setimes_2 wmt urls | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-18T09:42:26Z" | "2021-02-18T09:55:41Z" | "2021-02-18T09:55:41Z" | MEMBER | null | Continuation of #1901
Some other urls were missing https | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1902/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1902/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1902.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1902",
"merged_at": "2021-02-18T09:55:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1902.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1902"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1901/comments | https://api.github.com/repos/huggingface/datasets/issues/1901/events | https://github.com/huggingface/datasets/pull/1901 | 810,845,605 | MDExOlB1bGxSZXF1ZXN0NTc1NDY5MDUy | 1,901 | Fix OPUS dataset download errors | {
"avatar_url": "https://avatars.githubusercontent.com/u/3883941?v=4",
"events_url": "https://api.github.com/users/YangWang92/events{/privacy}",
"followers_url": "https://api.github.com/users/YangWang92/followers",
"following_url": "https://api.github.com/users/YangWang92/following{/other_user}",
"gists_url": "https://api.github.com/users/YangWang92/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/YangWang92",
"id": 3883941,
"login": "YangWang92",
"node_id": "MDQ6VXNlcjM4ODM5NDE=",
"organizations_url": "https://api.github.com/users/YangWang92/orgs",
"received_events_url": "https://api.github.com/users/YangWang92/received_events",
"repos_url": "https://api.github.com/users/YangWang92/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/YangWang92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YangWang92/subscriptions",
"type": "User",
"url": "https://api.github.com/users/YangWang92"
} | [] | closed | false | null | [] | null | [] | "2021-02-18T07:39:41Z" | "2021-02-18T15:07:20Z" | "2021-02-18T09:39:21Z" | CONTRIBUTOR | null | Replace http to https.
https://github.com/huggingface/datasets/issues/854
https://discuss.huggingface.co/t/cannot-download-wmt16/2081
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1901/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1901/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1901.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1901",
"merged_at": "2021-02-18T09:39:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1901.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1901"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1900 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1900/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1900/comments | https://api.github.com/repos/huggingface/datasets/issues/1900/events | https://github.com/huggingface/datasets/pull/1900 | 810,512,488 | MDExOlB1bGxSZXF1ZXN0NTc1MTkxNTc3 | 1,900 | Issue #1895: Bugfix for string_to_arrow timestamp[ns] support | {
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/justin-yan",
"id": 7731709,
"login": "justin-yan",
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/justin-yan"
} | [] | closed | false | null | [] | null | [
"OK! Thank you for the review - I will follow up with a separate PR for the comments here (https://github.com/huggingface/datasets/pull/1900#discussion_r578319725)!"
] | "2021-02-17T20:26:04Z" | "2021-02-19T18:27:11Z" | "2021-02-19T18:27:11Z" | CONTRIBUTOR | null | Should resolve https://github.com/huggingface/datasets/issues/1895
The main part of this PR adds additional parsing in `string_to_arrow` to convert the timestamp dtypes that result from `str(pa_type)` back into the pa.DataType TimestampType.
While adding unit-testing, I noticed that support for the double/float types also don't invert correctly, so I added them, which I believe would hypothetically make this section of `Value` redundant:
```
def __post_init__(self):
if self.dtype == "double": # fix inferred type
self.dtype = "float64"
if self.dtype == "float": # fix inferred type
self.dtype = "float32"
```
However, since I think Value.dtype is part of the public interface, removing that would result in a backward-incompatible change, so I didn't muck with that.
The rest of the PR consists of docstrings that I added while developing locally so I could keep track of which functions were supposed to be inverses of each other, and thought I'd include them initially in case you want to keep them around, but I'm happy to delete or remove any of them at your request! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1900/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1900/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1900.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1900",
"merged_at": "2021-02-19T18:27:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1900.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1900"
} | true |
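A rough sketch of the kind of parsing this PR adds (the helper name and regex are illustrative, not the actual implementation): recover a `pa.timestamp(...)` type from the string produced by `str(pa_type)`.

```python
import re
import pyarrow as pa

def timestamp_from_string(dtype_str):
    """Illustrative helper: invert str(pa.timestamp(...)) back into a pyarrow type."""
    match = re.match(r"^timestamp\[(s|ms|us|ns)(?:,\s*tz=(.+))?\]$", dtype_str)
    if match is None:
        raise ValueError(f"{dtype_str} is not a timestamp type string")
    unit, tz = match.groups()
    return pa.timestamp(unit, tz=tz)

assert timestamp_from_string("timestamp[ns]") == pa.timestamp("ns")
assert timestamp_from_string(str(pa.timestamp("s", tz="UTC"))) == pa.timestamp("s", tz="UTC")
```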
https://api.github.com/repos/huggingface/datasets/issues/1899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1899/comments | https://api.github.com/repos/huggingface/datasets/issues/1899/events | https://github.com/huggingface/datasets/pull/1899 | 810,308,332 | MDExOlB1bGxSZXF1ZXN0NTc1MDIxMjc4 | 1,899 | Fix: ALT - fix duplicated examples in alt-parallel | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-17T15:53:56Z" | "2021-02-17T17:20:49Z" | "2021-02-17T17:20:49Z" | MEMBER | null | As noticed in #1898 by @10-zin the examples of the `alt-paralel` configurations have all the same values for the `translation` field.
This was due to a bad copy of a python dict.
This PR fixes that. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1899/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1899/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1899.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1899",
"merged_at": "2021-02-17T17:20:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1899.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1899"
} | true |
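The "bad copy of a python dict" failure mode is easy to reproduce in isolation; a minimal sketch of the bug pattern and its fix (illustrative only, not the actual ALT script code):

```python
# Bug pattern: every example ends up holding a reference to the same dict,
# so they all show the values written by the last iteration.
translation = {}
buggy = []
for i in range(3):
    translation["en"] = f"sentence {i}"
    buggy.append({"translation": translation})
print([ex["translation"]["en"] for ex in buggy])   # ['sentence 2', 'sentence 2', 'sentence 2']

# Fix: build (or copy) a fresh dict for each example.
fixed = [{"translation": {"en": f"sentence {i}"}} for i in range(3)]
print([ex["translation"]["en"] for ex in fixed])   # ['sentence 0', 'sentence 1', 'sentence 2']
```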
https://api.github.com/repos/huggingface/datasets/issues/1898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1898/comments | https://api.github.com/repos/huggingface/datasets/issues/1898/events | https://github.com/huggingface/datasets/issues/1898 | 810,157,251 | MDU6SXNzdWU4MTAxNTcyNTE= | 1,898 | ALT dataset has repeating instances in all splits | {
"avatar_url": "https://avatars.githubusercontent.com/u/33179372?v=4",
"events_url": "https://api.github.com/users/10-zin/events{/privacy}",
"followers_url": "https://api.github.com/users/10-zin/followers",
"following_url": "https://api.github.com/users/10-zin/following{/other_user}",
"gists_url": "https://api.github.com/users/10-zin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/10-zin",
"id": 33179372,
"login": "10-zin",
"node_id": "MDQ6VXNlcjMzMTc5Mzcy",
"organizations_url": "https://api.github.com/users/10-zin/orgs",
"received_events_url": "https://api.github.com/users/10-zin/received_events",
"repos_url": "https://api.github.com/users/10-zin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/10-zin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/10-zin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/10-zin"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Thanks for reporting. This looks like a very bad issue. I'm looking into it",
"I just merged a fix, we'll do a patch release soon. Thanks again for reporting, and sorry for the inconvenience.\r\nIn the meantime you can load `ALT` using `datasets` from the master branch",
"Thanks!!! works perfectly in the bleading edge master version",
"Closed by #1899"
] | "2021-02-17T12:51:42Z" | "2021-02-19T06:18:46Z" | "2021-02-19T06:18:46Z" | NONE | null | The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/
It seemed like a great dataset for some experiments I wanted to carry out, especially since it's medium-sized and has all splits.
Would be great if this could be fixed :)
I've added a snapshot of the contents from the `explore-dataset` feature, for quick reference.
![image](https://user-images.githubusercontent.com/33179372/108206321-442a2d00-714c-11eb-882f-b4b6e708ef9c.png)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1898/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1898/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1897/comments | https://api.github.com/repos/huggingface/datasets/issues/1897/events | https://github.com/huggingface/datasets/pull/1897 | 810,113,263 | MDExOlB1bGxSZXF1ZXN0NTc0ODU3MTIy | 1,897 | Fix PandasArrayExtensionArray conversion to native type | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-17T11:48:24Z" | "2021-02-17T13:15:16Z" | "2021-02-17T13:15:15Z" | MEMBER | null | To make the conversion to csv work in #1887 , we need PandasArrayExtensionArray used for multidimensional numpy arrays to be converted to pandas native types.
However previously pandas.core.internals.ExtensionBlock.to_native_types would fail with an PandasExtensionArray because
1. the PandasExtensionArray.isna method was wrong
2. the conversion of a PandasExtensionArray to a numpy array with dtype=object was returning a multidimensional array while pandas excepts a 1D array in this case (more info [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.extensions.ExtensionArray.html#pandas.api.extensions.ExtensionArray))
I fixed these two issues and now the conversion to native types works, and so is the export to csv.
cc @SBrandeis | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1897/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1897/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1897.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1897",
"merged_at": "2021-02-17T13:15:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1897.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1897"
} | true |
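For point 2 above, the shape expectation can be illustrated without the internal extension array classes: pandas wants `dtype=object` conversions of an extension array to be one-dimensional, with one N-d entry per row. A small sketch with made-up shapes:

```python
import numpy as np

# A (num_rows, 2, 2) block of multidimensional values.
data = np.arange(12).reshape(3, 2, 2)

# What pandas expects from converting an extension array with dtype=object:
# a 1-D object array of length num_rows, each element being an inner (2, 2) array.
as_object = np.empty(len(data), dtype=object)
for i, row in enumerate(data):
    as_object[i] = row

print(as_object.shape)      # (3,)  -- 1-D, as pandas requires
print(as_object[0].shape)   # (2, 2)
```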
https://api.github.com/repos/huggingface/datasets/issues/1895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1895/comments | https://api.github.com/repos/huggingface/datasets/issues/1895/events | https://github.com/huggingface/datasets/issues/1895 | 809,630,271 | MDU6SXNzdWU4MDk2MzAyNzE= | 1,895 | Bug Report: timestamp[ns] not recognized | {
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/justin-yan",
"id": 7731709,
"login": "justin-yan",
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/justin-yan"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting !\r\n\r\nYou're right, `string_to_arrow` should be able to take `\"timestamp[ns]\"` as input and return the right pyarrow timestamp type.\r\nFeel free to suggest a fix for `string_to_arrow` and open a PR if you want to contribute ! This would be very appreciated :)\r\n\r\nTo give you more context:\r\n\r\nAs you may know we define the features types of a dataset using the `Features` object in combination with feature types like `Value`. For example\r\n```python\r\nfeatures = Features({\r\n \"age\": Value(\"int32\")\r\n})\r\n```\r\nHowever under the hood we are actually using pyarrow to store the data, and so we have a mapping between the feature types of `datasets` and the types of pyarrow.\r\n\r\nFor example, the `Value` feature types are created from a pyarrow type with `Value(str(pa_type))`.\r\nHowever it looks like the conversion back to a pyarrow type doesn't work with `\"timestamp[ns]\"`.\r\nThis is the `string_to_arrow` function you highlighted that does this conversion, so we should fix that.\r\n\r\n",
"Thanks for the clarification @lhoestq !\r\n\r\nThis may be a little bit of a stupid question, but I wanted to clarify one more thing before I took a stab at this:\r\n\r\nWhen the features get inferred, I believe they already have a pyarrow schema (https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L234).\r\n\r\nWe then convert it to a string (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L778) only to convert it back into the arrow type (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L143, and https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L35). Is there a reason for this round-trip?\r\n\r\nI'll open a PR later to add `timestamp` support to `string_to_arrow`, but I'd be curious to understand since it feels like there may be some opportunities to simplify!",
"The objective in terms of design is to make it easy to create Features in a pythonic way. So for example we use a string to define a Value type.\r\nThat's why when inferring the Features from an arrow schema we have to find the right string definitions for Value types. I guess we could also have a constructor `Value.from_arrow_type` to avoid recreating the arrow type, but this could create silent errors if the pyarrow type doesn't have a valid mapping with the string definition. The \"round-trip\" is used to enforce that the ground truth is the string definition, not the pyarrow type, and also as a sanity check.\r\n\r\nLet me know if that makes sense ",
"OK I think I understand now:\r\n\r\nFeatures are datasets' internal representation of a schema type, distinct from pyarrow's schema.\r\nValue() corresponds to pyarrow's \"primitive\" types (e.g. `int` or `string`, but not things like `list` or `dict`).\r\n`get_nested_type()` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L698) and `generate_from_arrow_type()` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L778) *should* be inverses of each other, and similarly, for the primitive values, `string_to_arrow()` and `Value.__call__` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L146) should be inverses of each other?\r\n\r\nThanks for taking the time to answer - I just wanted to make sure I understood before opening a PR so I'm not disrupting anything about how the codebase is expected to work!",
"Yes you're totally right :)"
] | "2021-02-16T20:38:04Z" | "2021-02-19T18:27:11Z" | "2021-02-19T18:27:11Z" | CONTRIBUTOR | null | Repro:
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type.
```
The factory function seems to be just "timestamp": https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp
It seems like https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L36-L43 could have a little bit of additional structure for handling these cases? I'd be happy to take a shot at opening a PR if I could receive some guidance on whether parsing something like `timestamp[ns]` and resolving it to timestamp('ns') is the goal of this method.
Alternatively, if I'm using this incorrectly (e.g. is the expectation that we always provide a schema when timestamps are involved?), that would be very helpful to know as well!
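In case it helps the discussion, here's roughly the kind of parsing I had in mind (a hypothetical helper, not the library's actual `string_to_arrow`):
```python
import re
import pyarrow as pa

def parse_timestamp_type(type_str):
    # Hypothetical sketch: map strings like "timestamp[ns]" or
    # "timestamp[us, tz=UTC]" onto the pa.timestamp(unit, tz) factory.
    match = re.fullmatch(r"timestamp\[(\w+)(?:,\s*tz=(.+))?\]", type_str)
    if match is None:
        raise ValueError(f"Not a timestamp type string: {type_str}")
    unit, tz = match.groups()
    return pa.timestamp(unit, tz=tz)

print(parse_timestamp_type("timestamp[ns]"))  # timestamp[ns]
```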
```
$ pip list # only the relevant libraries/versions
datasets 1.2.1
pandas 1.0.3
pyarrow 3.0.0
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1895/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1895/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1894/comments | https://api.github.com/repos/huggingface/datasets/issues/1894/events | https://github.com/huggingface/datasets/issues/1894 | 809,609,654 | MDU6SXNzdWU4MDk2MDk2NTQ= | 1,894 | benchmarking against MMapIndexedDataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [] | open | false | null | [] | null | [
"Hi sam !\r\nIndeed we can expect the performances to be very close since both MMapIndexedDataset and the `datasets` implem use memory mapping. With memory mapping what determines the I/O performance is the speed of your hard drive/SSD.\r\n\r\nIn terms of performance we're pretty close to the optimal speed for reading text, even though I found recently that we could still slightly improve speed for big datasets (see [here](https://github.com/huggingface/datasets/issues/1803)).\r\n\r\nIn terms of number of examples and example sizes, the only limit is the available disk space you have.\r\n\r\nI haven't used `psrecord` yet but it seems to be a very interesting tool for benchmarking. Currently for benchmarks we only have github actions to avoid regressions in terms of speed. But it would be cool to have benchmarks with comparisons with other dataset tools ! This would be useful to many people",
"Also I would be interested to know what data types `MMapIndexedDataset` supports. Is there some documentation somewhere ?",
"no docs haha, it's written to support integer numpy arrays.\r\n\r\nYou can build one in fairseq with, roughly:\r\n```bash\r\n\r\nwget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip\r\nunzip wikitext-103-raw-v1.zip\r\nexport dd=$HOME/fairseq-py/wikitext-103-raw\r\n\r\nexport mm_dir=$HOME/mmap_wikitext2\r\nmkdir -p gpt2_bpe\r\nwget -O gpt2_bpe/encoder.json https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json\r\nwget -O gpt2_bpe/vocab.bpe https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe\r\nwget -O gpt2_bpe/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt\r\nfor SPLIT in train valid; do \\\r\n python -m examples.roberta.multiprocessing_bpe_encoder \\\r\n --encoder-json gpt2_bpe/encoder.json \\\r\n --vocab-bpe gpt2_bpe/vocab.bpe \\\r\n --inputs /scratch/stories_small/${SPLIT}.txt \\\r\n --outputs /scratch/stories_small/${SPLIT}.bpe \\\r\n --keep-empty \\\r\n --workers 60; \\\r\ndone\r\n\r\nmkdir -p $mm_dir\r\nfairseq-preprocess \\\r\n --only-source \\\r\n --srcdict gpt2_bpe/dict.txt \\\r\n --trainpref $dd/wiki.train.bpe \\\r\n --validpref $dd/wiki.valid.bpe \\\r\n --destdir $mm_dir \\\r\n --workers 60 \\\r\n --dataset-impl mmap\r\n```\r\n\r\nI'm noticing in my benchmarking that it's much smaller on disk than arrow (200mb vs 900mb), and that both incur significant cost by increasing the number of data loader workers. \r\nThis somewhat old [post](https://ray-project.github.io/2017/10/15/fast-python-serialization-with-ray-and-arrow.html) suggests there are some gains to be had from using `pyarrow.serialize(array).tobuffer()`. I haven't yet figured out how much of this stuff `pa.Table` does under the hood.\r\n\r\nThe `MMapIndexedDataset` bottlenecks we are working on improving (by using arrow) are:\r\n1) `MMapIndexedDataset`'s index, which stores offsets, basically gets read in its entirety by each dataloading process.\r\n2) we have separate, identical, `MMapIndexedDatasets` on each dataloading worker, so there's redundancy there; we wonder if there is a way that arrow can somehow dedupe these in shared memory.\r\n\r\nIt will take me a few hours to get `MMapIndexedDataset` benchmarks out of `fairseq`/onto a branch in this repo, but I'm happy to invest the time if you're interested in collaborating on some performance hacking."
] | "2021-02-16T20:04:58Z" | "2021-02-17T18:52:28Z" | null | CONTRIBUTOR | null | I am trying to benchmark my datasets based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implem uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB of tokens).
Questions:
1) Is this (basically identical) performance expected?
2) Is there a scenario where this library will outperform `MMapIndexedDataset`? (maybe more examples/larger examples?)
3) Should I be using different benchmarking tools than `psrecord`/how do you guys do benchmarks?
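For context, the wall-clock side of my benchmark is roughly the following (a simplified sketch, not the exact script):
```python
import time
from datasets import load_dataset

# Simplified sketch of the read benchmark (an assumption about methodology,
# not the exact benchmark code): iterate once over wikitext-103 and report
# examples/sec; with memory mapping, disk speed dominates.
ds = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
start = time.perf_counter()
total_chars = sum(len(example["text"]) for example in ds)
elapsed = time.perf_counter() - start
print(f"{len(ds) / elapsed:.0f} examples/sec, {total_chars / 1e6:.0f}M chars read")
```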
Thanks in advance! Sam | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1894/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1894/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1893/comments | https://api.github.com/repos/huggingface/datasets/issues/1893/events | https://github.com/huggingface/datasets/issues/1893 | 809,556,503 | MDU6SXNzdWU4MDk1NTY1MDM= | 1,893 | wmt19 is broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"This was also mentioned in https://github.com/huggingface/datasets/issues/488 \r\n\r\nThe bucket where is data was stored seems to be unavailable now. Maybe we can change the URL to the ones in https://conferences.unite.un.org/uncorpus/en/downloadoverview ?",
"Closing since this has been fixed by #1912"
] | "2021-02-16T18:39:58Z" | "2021-03-03T17:42:02Z" | "2021-03-03T17:42:02Z" | MEMBER | null | 1. Check which lang pairs we have: `--dataset_name wmt19`:
Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']
2. OK, let's pick `ru-en`:
`--dataset_name wmt19 --dataset_config "ru-en"`
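(An equivalent minimal repro outside of the seq2seq script — assumed to hit the same download code path:)
```python
from datasets import load_dataset

# Assumed minimal repro, same download path as run_seq2seq.py triggers:
ds = load_dataset("wmt19", "ru-en")
```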
no cookies:
```
Traceback (most recent call last):
File "./run_seq2seq.py", line 661, in <module>
main()
File "./run_seq2seq.py", line 317, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 572, in download_and_prepare
self._download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 628, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt19/436092de5f3faaf0fc28bc84875475b384e90a5470fa6afaee11039ceddc5052/wmt_utils.py", line 755, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 276, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 191, in download
downloaded_path_or_paths = map_nested(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 233, in map_nested
mapped = [
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 234, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 172, in _single_map_nested
return function(data_struct)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 211, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1893/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1893/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1892/comments | https://api.github.com/repos/huggingface/datasets/issues/1892/events | https://github.com/huggingface/datasets/issues/1892 | 809,554,174 | MDU6SXNzdWU4MDk1NTQxNzQ= | 1,892 | request to mirror wmt datasets, as they are really slow to download | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Yes that would be awesome. Not only the download speeds are awful, but also some files are missing.\r\nWe list all the URLs in the datasets/wmt19/wmt_utils.py so we can make a script to download them all and host on S3.\r\nAlso I think most of the materials are under the CC BY-NC-SA 3.0 license (must double check) so it should be possible to redistribute the data with no issues.\r\n\r\ncc @patrickvonplaten who knows more about the wmt scripts",
"Yeah, the scripts are pretty ugly! A big refactor would make sense here...and I also remember that the datasets were veeery slow to download",
"I'm downloading them.\r\nI'm starting with the ones hosted on http://data.statmt.org which are the slowest ones",
"@lhoestq better to use our new git-based system than just raw S3, no? (that way we have built-in CDN etc.)",
"Closing since the urls were changed to mirror urls in #1912 ",
"Hi there! What about mirroring other datasets like [CCAligned](http://www.statmt.org/cc-aligned/) as well? All of them are really slow to download..."
] | "2021-02-16T18:36:11Z" | "2021-10-26T06:55:42Z" | "2021-03-25T11:53:23Z" | MEMBER | null | Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download.
Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1892/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1892/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1891/comments | https://api.github.com/repos/huggingface/datasets/issues/1891/events | https://github.com/huggingface/datasets/issues/1891 | 809,550,001 | MDU6SXNzdWU4MDk1NTAwMDE= | 1,891 | suggestion to improve a missing dataset error | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | [
"This is the current error thrown for missing datasets:\r\n```\r\nFileNotFoundError: Couldn't find a dataset script at C:\\Users\\Mario\\Desktop\\projects\\datasets\\missing_dataset\\missing_dataset.py or any data file in the same directory. Couldn't find 'missing_dataset' on the Hugging Face Hub either: FileNotFoundError: Dataset 'missing_dataset' doesn't exist on the Hub. If the repo is private, make sure you are authenticated with `use_auth_token=True` after logging in with `huggingface-cli login`.\r\n```\r\n\r\nSeems much more informative, so I think we can close this issue."
] | "2021-02-16T18:29:13Z" | "2022-10-05T12:48:38Z" | "2022-10-05T12:48:38Z" | MEMBER | null | I was using `--dataset_name wmt19` all was good. Then thought perhaps wmt20 is out, so I tried to use `--dataset_name wmt20`, got 3 different errors (1 repeated twice), none telling me the real issue - that `wmt20` isn't in the `datasets`:
```
True, predict_with_generate=True)
Traceback (most recent call last):
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 323, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 335, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./run_seq2seq.py", line 661, in <module>
main()
File "./run_seq2seq.py", line 317, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 706, in load_dataset
module_path, hash, resolved_file_path = prepare_module(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 343, in prepare_module
raise FileNotFoundError(
FileNotFoundError: Couldn't find file locally at wmt20/wmt20.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py.
The file is also not present on the master branch on github.
```
Suggestion: if it is not in a local path, check that there is an actual `https://github.com/huggingface/datasets/tree/master/datasets/wmt20` first and assert "dataset `wmt20` doesn't exist in datasets", rather than trying to find a load script - since the whole repo is not there.
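A rough sketch of that pre-check (hypothetical code; the endpoint and error wording are just an illustration):
```python
import requests

# Hypothetical sketch of the suggested check: verify the dataset folder
# exists in the GitHub repo before looking for a loading script.
name = "wmt20"
url = f"https://api.github.com/repos/huggingface/datasets/contents/datasets/{name}"
if requests.get(url).status_code == 404:
    raise FileNotFoundError(f"dataset `{name}` doesn't exist in datasets")
```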
The error occurred when running:
```
cd examples/seq2seq
export BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python ./run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_val_samples 500 --dataset_name wmt20 --dataset_config "ro-en" --source_prefix "translate English to Romanian: "
```
Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1891/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1891/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1890 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1890/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1890/comments | https://api.github.com/repos/huggingface/datasets/issues/1890/events | https://github.com/huggingface/datasets/pull/1890 | 809,395,586 | MDExOlB1bGxSZXF1ZXN0NTc0MjY0OTMx | 1,890 | Reformat dataset cards section titles | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-16T15:11:47Z" | "2021-02-16T15:12:34Z" | "2021-02-16T15:12:33Z" | MEMBER | null | Titles are formatted like [Foo](#foo) instead of just Foo | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1890/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1890/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1890.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1890",
"merged_at": "2021-02-16T15:12:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1890.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1890"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1889 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1889/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1889/comments | https://api.github.com/repos/huggingface/datasets/issues/1889/events | https://github.com/huggingface/datasets/pull/1889 | 809,276,015 | MDExOlB1bGxSZXF1ZXN0NTc0MTY1NDAz | 1,889 | Implement to_dict and to_pandas for Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
} | [] | closed | false | null | [] | null | [
"Next step is going to add these two in the documentation ^^"
] | "2021-02-16T12:38:19Z" | "2021-02-18T18:42:37Z" | "2021-02-18T18:42:34Z" | CONTRIBUTOR | null | With options to return a generator or the full dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1889/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1889/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1889.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1889",
"merged_at": "2021-02-18T18:42:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1889.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1889"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1888 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1888/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1888/comments | https://api.github.com/repos/huggingface/datasets/issues/1888/events | https://github.com/huggingface/datasets/pull/1888 | 809,241,123 | MDExOlB1bGxSZXF1ZXN0NTc0MTM2MDU4 | 1,888 | Docs for adding new column on formatted dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Close #1872"
] | "2021-02-16T11:45:00Z" | "2021-03-30T14:01:03Z" | "2021-02-16T11:58:57Z" | MEMBER | null | As mentioned in #1872 we should add in the documentation how the format gets updated when new columns are added
Close #1872 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1888/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1888/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1888.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1888",
"merged_at": "2021-02-16T11:58:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1888.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1888"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1887/comments | https://api.github.com/repos/huggingface/datasets/issues/1887/events | https://github.com/huggingface/datasets/pull/1887 | 809,229,809 | MDExOlB1bGxSZXF1ZXN0NTc0MTI2NTMy | 1,887 | Implement to_csv for Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
} | [] | closed | false | null | [] | null | [
"@lhoestq I stumbled upon an interesting failure when adding tests for CSV serialization of `ArrayXD` features (see the failing unit tests in the CI)\r\n\r\nIt's due to the fact that booleans cannot be converted from arrow format to numpy without copy: https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy",
"Good catch ! I must be able to fix that one by allowing copies for this kind of arrays.\r\nThis is the kind of surprise you get sometimes when playing with arrow x)",
"Raising this error for booleans was introduced in https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22 without much explanations unfortunately.\r\nSo \"no copy\" only works for primitive types - except booleans.\r\nThis is confirmed in the source code at https://github.com/wesm/arrow/blob/c07b9b48cf3e0bbbab493992a492ae47e5b04cad/python/pyarrow/array.pxi#L621\r\n\r\nI'm opening a PR to allow copies for booleans...",
"I just merged the fix for boolean ArrayXD, feel free to merge from master to see if it fixes the ci :)",
"@lhoestq unfirtunately, arrays of strings (or any other non-primitive type) require a copy too\r\n\r\nA list of primitive types can be found here: https://github.com/wesm/arrow/blob/c07b9b48cf3e0bbbab493992a492ae47e5b04cad/python/pyarrow/types.pxi#L821\r\n\r\npyarrow provides a `is_primitive` function to check whether a type is primitive , I used it to set `zero_copy_only`\r\n\r\nAlso, `PandasArrayExtensionArray.isna` was using `numpy.isnan` which fails for arrays of strings. I replaced it with `pandas.isna`. Let me know what you think! :) "
] | "2021-02-16T11:27:29Z" | "2021-02-19T09:41:59Z" | "2021-02-19T09:41:59Z" | CONTRIBUTOR | null | cc @thomwolf
`to_csv` supports passing either a file path or a *binary* file object
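Usage looks roughly like this (illustrative sketch, made-up data):
```python
from datasets import Dataset

# Illustrative sketch: both call styles supported by to_csv.
ds = Dataset.from_dict({"text": ["foo", "bar"], "label": [0, 1]})
ds.to_csv("dump.csv")               # 1) pass a file path
with open("dump.csv", "wb") as f:   # 2) or a binary file object
    ds.to_csv(f)
```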
The writing is batched to avoid loading the whole table in memory | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1887/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1887/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1887.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1887",
"merged_at": "2021-02-19T09:41:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1887.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1887"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1886/comments | https://api.github.com/repos/huggingface/datasets/issues/1886/events | https://github.com/huggingface/datasets/pull/1886 | 809,221,885 | MDExOlB1bGxSZXF1ZXN0NTc0MTE5ODcz | 1,886 | Common voice | {
"avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4",
"events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}",
"followers_url": "https://api.github.com/users/BirgerMoell/followers",
"following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}",
"gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BirgerMoell",
"id": 1704131,
"login": "BirgerMoell",
"node_id": "MDQ6VXNlcjE3MDQxMzE=",
"organizations_url": "https://api.github.com/users/BirgerMoell/orgs",
"received_events_url": "https://api.github.com/users/BirgerMoell/received_events",
"repos_url": "https://api.github.com/users/BirgerMoell/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BirgerMoell"
} | [] | closed | false | null | [] | null | [
"Does it make sense to make the domains as the different languages?\r\nA problem is that you need to download the datasets from the browser.\r\nOne idea would be to either contact Mozilla regarding API access to the dataset or make use of a headless browser for downloading the datasets (might be hard since we have to figure out how to host them). An even more creative idea would be to host the dataset inside a torrent and figure out a way to download specific datasets from within that torrent.\r\n\r\nHere is some information about the download authorization. They are hosting the data on S3.\r\n\r\nhttps://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html\r\n\r\nHere is an example of how a download link looks.\r\n\r\nhttps://mozilla-common-voice-datasets.s3.dualstack.us-west-2.amazonaws.com/cv-corpus-6.1-2020-12-11/nl.tar.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQ3GQRTO3ND4UAQXB%2F20210217%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20210217T080740Z&X-Amz-Expires=43200&X-Amz-Security-Token=FwoGZXIvYXdzEGIaDCC6ALh%2FwIK9ovvRdCKSBCs5WaSJNsZ2h0SnhpnWFv4yiAJHJTe%2BY6pBcCqadRMs0RABHeQ2n1QDACJ5V9WOqIHfMfT0AI%2Bfe6iFkTGLgRrJOMYpgV%2FmIBcXCjeb72r4ZvudMA8tprkSxZsEh53bJkIDQx1tXqfpz0yoefM0geD3461suEGhHnLIyiwffrUpRg%2BkNZN9%2FLZZXpF5F2pogieKKV533Jetkd1xlWOR%2Bem9R2bENu2RV563XX3JvbWxSYN9IHkVT1xwd4ZiOpUtX7%2F2RoluJUKw%2BUPpyml3J%2FOPPGdr7CyPLjqNxdq9ceRi8lRybty64XvNYZGt45VNTQ3pkTTz4VpUCJAGkgxq95Ve%2BOwW%2Fsc8JtblTFKrH11vej62NB7C0n7JPPS4SLKXHKW%2B7ZbybcNf3BnsAVouPdsGTMslcgkD81b9trnjyXJdOZkzdHUf2KcWVXVceEsZnMhcCZQ1cJpI7qXPEk8QrKCQcNByPLHmPIEdHpj9IrIBKDkl2qO7VX7CCB65WDt2eZRltOcNHXWVFXFktMdQOQztI1j0XSZz2iOX4jPKKaqz193VEytlAqmehNi8pePOnxkP9Z1SP7d3I6rayuBF3phmpHxw499tY3ECYYgoCnJ6QSFa3KxMjFmEpQlmjxuwEMHd4CDL2FJYGcCiIxbCcL1r8ZE3%2BbGdcu7PRsVCHX3Huh%2FqGIaF4h40FgteN6teyKCHKOebs4EGMipb9xmEMZ9ZbVopz4bkhLdMTrjKon9w624Xem0MTPqN7XY%2BB6lRgrW8rd4%3D&X-Amz-Signature=28eabdfce72a472a70b0f9e1e2c37fe1471b5ec8ed60614fbe900bfa97ae1ac8&X-Amz-SignedHeaders=host\r\n\r\nIt could be that we simply need to make a http-request with the right parameters and we can download the datasets.",
"> Wow, this looks great already! It's really a difficult dataset so thanks a lot for opening a PR.\r\n> I think the tagging tool is not too important for now and we can take a look at that later!\r\n> \r\n> At the moment, it would be very good to correctly generate some dummy data for all the possible languages. I think the structure of the `.tsv` file as you've noted in the PR is the one we want to use as the structure for `features = datasets.Features(`\r\n> \r\n> The splits `'Train\"`, `\"Test\"`, `\"Validation\"` look great to me! Because this is a special dataset that also has files called `\"Invalidated\"` I think the best option is to also add those as splits, _i.e._ `\"other\"`, `\"invalidated\"`, `\"reported\"`, `\"validated\"` . Those split names can be gives as shown here for example:\r\n> \r\n> https://github.com/huggingface/datasets/blob/28be129db862ec89a87ac9349c64df6b6118aff4/datasets/librispeech_asr/librispeech_asr.py#L124\r\n> \r\n> Also putting @lhoestq in cc here to hear his opinion on the different splits. @lhoestq Common Voicie is a crowd collected dataset where if a collected data sample did not receive enough \"up_votes\" from the community -> then it is (If I understood it correctly) marked as invalid -> hence the file `\"invalidated.tsv\"`. I think this is still useful data, so I would include it what do you think?\r\n> \r\n> @BirgerMoell let me know if you have any more questions :-)\r\n\r\nI think reporting is a separate feature. People can help annotate the data and then they can report things while annotating.\r\nhttps://commonvoice.mozilla.org/sv-SE/listen\r\n\r\nHere is the interface that shows reporting and the thumbs up and down which gives upvotes and downvotes.\r\n<img src=\"https://i.imgur.com/utWjszt.png\" height=\"800px\">\r\n",
"I added splits and features. I'm not sure how you want me to generate dummy data for all the languages?",
"Hey @BirgerMoell,\r\n\r\nI tweaked your dataset file a bit to have a first working version. To test this dataset downloading script, you can do the following:\r\n\r\n- 1) Download the Common Voice Georgian dataset from https://commonvoice.mozilla.org/en/datasets (It's pretty small which is why I chose it)\r\n- 2) Run the following command using this branch: \r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"./../datasets/datasets/common_voice\", \"Georgian\", data_dir=\"./cv-corpus-6.1-2020-12-11/ka/\", split=\"train\")\r\n```\r\n\r\nNote that I'm loading a local version of the dataset script (`\"./../datasets/datasets/common_voice/\"` points to the folder in your branch) and that I also insert the downloaded data with the `data_dir` arg.\r\n\r\n-> You'll see that the data is correctly loaded and that `ds` contains all the information we need.\r\n\r\nNow there are a lot of different datasets on Common Voice, so it probably takes too much time to test all of those, but maybe you can test whether the current script works as well *e.g.* for Swedish, 3,4 other languages.\r\n\r\nIt would be very nice if we can use the exact same structure for all languages, meaning that we don't have to change the `datasets.Features(...)` structure depending on the language, but can use the exact same one for every language.\r\n\r\nIf everything works as expected we can then go over to cleaning the script and seeing how to add dummy data tests for it."
] | "2021-02-16T11:16:10Z" | "2021-03-09T18:51:31Z" | "2021-03-09T18:51:31Z" | CONTRIBUTOR | null | Started filling out information about the dataset and a dataset card.
To do:
- Create tagging file
- Update the common_voice.py file with more information | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1886/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1886/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1886.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1886",
"merged_at": "2021-03-09T18:51:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1886.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1886"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1885 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1885/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1885/comments | https://api.github.com/repos/huggingface/datasets/issues/1885/events | https://github.com/huggingface/datasets/pull/1885 | 808,881,501 | MDExOlB1bGxSZXF1ZXN0NTczODQyNzcz | 1,885 | add missing info on how to add large files | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | [] | "2021-02-15T23:46:39Z" | "2021-02-16T16:22:19Z" | "2021-02-16T11:44:12Z" | MEMBER | null | Thanks to @lhoestq's instructions I was able to add data files to a custom dataset repo. This PR is attempting to tell others how to do the same if they need to.
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1885/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1885/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1885.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1885",
"merged_at": "2021-02-16T11:44:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1885.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1885"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1884/comments | https://api.github.com/repos/huggingface/datasets/issues/1884/events | https://github.com/huggingface/datasets/pull/1884 | 808,755,894 | MDExOlB1bGxSZXF1ZXN0NTczNzQwNzI5 | 1,884 | dtype fix when using numpy arrays | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | [] | "2021-02-15T18:55:25Z" | "2021-07-30T11:01:18Z" | "2021-07-30T11:01:18Z" | CONTRIBUTOR | null | As discussed in #625, this fix lets the user preserve the dtype of a numpy array when it is converted to a pyarrow array; the dtype was getting lost due to the numpy array -> list -> pyarrow array conversion | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1884/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1884/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1884.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1884",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1884.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1884"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1883/comments | https://api.github.com/repos/huggingface/datasets/issues/1883/events | https://github.com/huggingface/datasets/pull/1883 | 808,750,623 | MDExOlB1bGxSZXF1ZXN0NTczNzM2NTIz | 1,883 | Add not-in-place implementations for several dataset transforms | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
} | [] | closed | false | null | [] | null | [
"@lhoestq I am not sure how to test `dictionary_encode_column` (in-place version was not tested before)",
"I can take a look at dictionary_encode_column tomorrow.\r\nAlthough it's likely that it doesn't work then. It was added at the beginning of the lib and never tested nor used afaik.",
"Now let's update the documentation to use the new methods x)"
] | "2021-02-15T18:44:26Z" | "2021-02-24T14:54:49Z" | "2021-02-24T14:53:26Z" | CONTRIBUTOR | null | Should we deprecate in-place versions of such methods? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1883/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1883/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1883.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1883",
"merged_at": "2021-02-24T14:53:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1883.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1883"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1882/comments | https://api.github.com/repos/huggingface/datasets/issues/1882/events | https://github.com/huggingface/datasets/pull/1882 | 808,716,576 | MDExOlB1bGxSZXF1ZXN0NTczNzA4OTEw | 1,882 | Create Remote Manager | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | null | [] | null | [
"@lhoestq I have refactorized the logic. Instead of the previous hierarchy call (local temp file opening -> remote call -> use again temp local file logic but from within the remote caller scope), now it is flattened. Schematically:\r\n```python\r\nwith src.open() as src_file, dst.open() as dst_file:\r\n src_file.fetch(dst_file)\r\n```\r\n\r\nI have created `RemotePath` (analogue to Path) with method `.open()` that returns `FtpFile`/`HttpFile` (analogue to file-like).\r\n\r\nNow I am going to implement `RemotePath.exists()` method (analogue to the Path's method) to check if remote resource is accessible, using `Ftp/Http.head()`.",
"Quick update on this one:\r\nwe discussed offline with @albertvillanova on this PR and I think using `fsspec` can help a lot, since it already implements many parts of the abstraction we need to have nice download tools for both http and ftp (and others !)"
] | "2021-02-15T17:36:24Z" | "2022-07-06T15:19:47Z" | null | MEMBER | null | Refactoring to separate the concern of remote (HTTP/FTP requests) management. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1882/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1882/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1882.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1882",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1882.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1882"
} | true |
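To make the design described in the comments of PR #1882 above more concrete, here is a minimal, illustrative sketch of a `RemotePath` wrapper with `.open()` and `.exists()`. This is not the code from the PR: the `HttpFile` body, the chunk size, and the error handling are assumptions, and FTP support is omitted for brevity.

```python
# Minimal sketch (not the actual PR code) of the RemotePath / HttpFile interface
# described in the comment above. Only HTTP(S) is covered; FTP is omitted.
import urllib.request
from urllib.parse import urlparse


class HttpFile:
    """File-like wrapper over an HTTP(S) resource (illustrative only)."""

    def __init__(self, url):
        self.url = url
        self._response = None

    def __enter__(self):
        self._response = urllib.request.urlopen(self.url)
        return self

    def __exit__(self, *exc_info):
        self._response.close()

    def fetch(self, dst_file):
        # Stream the remote content into a local file object in 1 MiB chunks.
        for chunk in iter(lambda: self._response.read(1 << 20), b""):
            dst_file.write(chunk)


class RemotePath:
    """Path-like wrapper over a remote URL (illustrative only)."""

    def __init__(self, url):
        self.url = url
        self.scheme = urlparse(url).scheme

    def open(self):
        if self.scheme in ("http", "https"):
            return HttpFile(self.url)
        raise NotImplementedError(f"Scheme not covered by this sketch: {self.scheme}")

    def exists(self):
        # Analogue of Path.exists(): send a HEAD request and check the status code.
        request = urllib.request.Request(self.url, method="HEAD")
        try:
            with urllib.request.urlopen(request) as response:
                return response.status < 400
        except OSError:
            return False
```

With such a wrapper, the schematic from the comment reads naturally, e.g. `with src.open() as src_file, open("local.bin", "wb") as dst_file: src_file.fetch(dst_file)` (here the destination is a plain local file rather than a second `RemotePath`).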
https://api.github.com/repos/huggingface/datasets/issues/1881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1881/comments | https://api.github.com/repos/huggingface/datasets/issues/1881/events | https://github.com/huggingface/datasets/pull/1881 | 808,578,200 | MDExOlB1bGxSZXF1ZXN0NTczNTk1Nzkw | 1,881 | `list_datasets()` returns a list of strings, not objects | {
"avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4",
"events_url": "https://api.github.com/users/pminervini/events{/privacy}",
"followers_url": "https://api.github.com/users/pminervini/followers",
"following_url": "https://api.github.com/users/pminervini/following{/other_user}",
"gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pminervini",
"id": 227357,
"login": "pminervini",
"node_id": "MDQ6VXNlcjIyNzM1Nw==",
"organizations_url": "https://api.github.com/users/pminervini/orgs",
"received_events_url": "https://api.github.com/users/pminervini/received_events",
"repos_url": "https://api.github.com/users/pminervini/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pminervini/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pminervini"
} | [] | closed | false | null | [] | null | [] | "2021-02-15T14:20:15Z" | "2021-02-15T15:09:49Z" | "2021-02-15T15:09:48Z" | CONTRIBUTOR | null | Here and there in the docs there is still stuff like this:
```python
>>> datasets_list = list_datasets()
>>> print(', '.join(dataset.id for dataset in datasets_list))
```
However, my understanding is that `list_datasets()` returns a list of strings rather than a list of objects. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1881/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1881/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1881.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1881",
"merged_at": "2021-02-15T15:09:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1881.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1881"
} | true |
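Following up on PR #1881 above: the snippet quoted in the PR description iterates over objects with an `.id` attribute, while the fix points out that `list_datasets()` returned plain identifier strings at the time. A corrected version of that docs snippet might therefore look like this (an illustration based on the PR description, not text taken from the merged docs):

```python
from datasets import list_datasets

# list_datasets() returns plain dataset identifier strings (e.g. "squad"),
# not objects with an `.id` attribute.
datasets_list = list_datasets()
print(', '.join(datasets_list))
```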
https://api.github.com/repos/huggingface/datasets/issues/1880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1880/comments | https://api.github.com/repos/huggingface/datasets/issues/1880/events | https://github.com/huggingface/datasets/pull/1880 | 808,563,439 | MDExOlB1bGxSZXF1ZXN0NTczNTgzNjg0 | 1,880 | Update multi_woz_v22 checksums | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-15T14:00:18Z" | "2021-02-15T14:18:19Z" | "2021-02-15T14:18:18Z" | MEMBER | null | As noticed in #1876 the checksums of this dataset are outdated.
I updated them in this PR | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1880/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1880/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1880.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1880",
"merged_at": "2021-02-15T14:18:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1880.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1880"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1879/comments | https://api.github.com/repos/huggingface/datasets/issues/1879/events | https://github.com/huggingface/datasets/pull/1879 | 808,541,442 | MDExOlB1bGxSZXF1ZXN0NTczNTY1NDAx | 1,879 | Replace flatten_nested | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq. If you agree to merge this, I will start separating the logic for NestedDataStructure.map ;)"
] | "2021-02-15T13:29:40Z" | "2021-02-19T18:35:14Z" | "2021-02-19T18:35:14Z" | MEMBER | null | Replace `flatten_nested` with `NestedDataStructure.flatten`.
This is a first step towards having all the NestedDataStructure logic as a separate concern, independent of the caller/user of the data structure.
Eventually, all checks (whether the underlying data is a list, a dict, etc.) will live only inside this class.
I have also generalized the flattening, and now it handles multiple levels of nesting. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1879/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1879/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1879.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1879",
"merged_at": "2021-02-19T18:35:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1879.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1879"
} | true |
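PR #1879 above notes that the flattening was generalized to handle multiple levels of nesting. As a rough illustration of what multi-level flattening means (a generic sketch, not the library's actual `NestedDataStructure.flatten` implementation), a recursive flatten over nested dicts and lists could look like this:

```python
def flatten_leaves(data):
    """Recursively collect the leaf values of arbitrarily nested dicts/lists/tuples."""
    if isinstance(data, dict):
        return [leaf for value in data.values() for leaf in flatten_leaves(value)]
    if isinstance(data, (list, tuple)):
        return [leaf for item in data for leaf in flatten_leaves(item)]
    return [data]


# Two levels of nesting are flattened in a single call.
assert flatten_leaves({"a": [1, [2, 3]], "b": {"c": 4}}) == [1, 2, 3, 4]
```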