| Column | Dtype | Range / values |
| --- | --- | --- |
| id | int64 | 599M to 2.47B |
| url | stringlengths | 58 to 61 |
| repository_url | stringclasses | 1 value |
| events_url | stringlengths | 65 to 68 |
| labels | listlengths | 0 to 4 |
| active_lock_reason | null | |
| updated_at | stringlengths | 20 to 20 |
| assignees | listlengths | 0 to 4 |
| html_url | stringlengths | 46 to 51 |
| author_association | stringclasses | 4 values |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| milestone | dict | |
| comments | sequencelengths | 0 to 30 |
| title | stringlengths | 1 to 290 |
| reactions | dict | |
| node_id | stringlengths | 18 to 32 |
| pull_request | dict | |
| created_at | stringlengths | 20 to 20 |
| comments_url | stringlengths | 67 to 70 |
| body | stringlengths | 0 to 228k |
| user | dict | |
| labels_url | stringlengths | 72 to 75 |
| timeline_url | stringlengths | 67 to 70 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| number | int64 | 1 to 7.11k |
| performed_via_github_app | null | |
| closed_at | stringlengths | 20 to 20 |
| assignee | dict | |
| is_pull_request | bool | 2 classes |
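This schema can be checked programmatically once the dump is loaded. A minimal sketch, assuming the records are available locally as a JSON Lines file (`issues.jsonl` is a hypothetical placeholder name):

```python
from datasets import load_dataset

# Each line of the file is assumed to hold one issue record with the fields above.
ds = load_dataset("json", data_files="issues.jsonl", split="train")

# The inferred features should mirror the schema table.
print(ds.features)

# Spot-check a few of the observed statistics.
print(min(ds["id"]), max(ds["id"]))
print(sorted(set(ds["author_association"])))  # expected: 4 distinct values
```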
871,111,235
https://api.github.com/repos/huggingface/datasets/issues/2288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2288/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2021-06-15T13:49:26Z
[]
https://github.com/huggingface/datasets/issues/2288
NONE
completed
null
null
[ "Hi,\r\n\r\nthis is not a standard CSV file (requires additional preprocessing) so I wouldn't label this as s bug. You could parse the examples with the regex module or the string API to extract the data, but the following approach is probably the easiest (once you load the data):\r\n```python\r\nimport ast\r\n# load the dataset and copy the features\r\ndef process(ex):\r\n return {\"tokens\": ast.literal_eval(ex[\"tokens\"]), \"labels\": ast.literal_eval(ex[\"labels\"])}\r\ndataset = dataset.map(process, features=new_features)\r\n```\r\n", "Hi,\r\n\r\nThanks for the reply.\r\nI have already used ```ast.literal_eval``` to evaluate the string into list, but I was getting another error:\r\n```\r\nArrowInvalid: Could not convert X with type str: tried to convert to int\r\n```\r\nWhy this happens ? Should labels be mapped to their ids and use int instead of str ?", "Yes, just map the labels to their ids." ]
Load_dataset for local CSV files
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2288/reactions" }
MDU6SXNzdWU4NzExMTEyMzU=
null
2021-04-29T15:01:10Z
https://api.github.com/repos/huggingface/datasets/issues/2288/comments
The method load_dataset fails to correctly load a dataset from CSV. I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each holding a list of strings. Row example: ```tokens | labels ['I' , 'am', 'John'] | ['PRON', 'AUX', 'PROPN' ] ``` The method loads each list as a string (e.g. "['I' , 'am', 'John']"). To solve this issue, I copied the Datasets.Features, created Sequence types (instead of Value) and tried to cast the feature types ``` new_features['tokens'] = Sequence(feature=Value(dtype='string', id=None)) new_features['labels'] = Sequence(feature=ClassLabel(num_classes=len(tag2idx), names=list(unique_tags))) dataset = dataset.cast(new_features) ``` but I got the following error ``` ArrowNotImplementedError: Unsupported cast from string to list using function cast_list ``` I also tried to set the features parameter of the load_dataset method to my new_features, but this fails as well. How can this be solved?
{ "avatar_url": "https://avatars.githubusercontent.com/u/17052700?v=4", "events_url": "https://api.github.com/users/sstojanoska/events{/privacy}", "followers_url": "https://api.github.com/users/sstojanoska/followers", "following_url": "https://api.github.com/users/sstojanoska/following{/other_user}", "gists_url": "https://api.github.com/users/sstojanoska/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sstojanoska", "id": 17052700, "login": "sstojanoska", "node_id": "MDQ6VXNlcjE3MDUyNzAw", "organizations_url": "https://api.github.com/users/sstojanoska/orgs", "received_events_url": "https://api.github.com/users/sstojanoska/received_events", "repos_url": "https://api.github.com/users/sstojanoska/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sstojanoska/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sstojanoska/subscriptions", "type": "User", "url": "https://api.github.com/users/sstojanoska" }
https://api.github.com/repos/huggingface/datasets/issues/2288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2288/timeline
closed
false
2,288
null
2021-06-15T13:49:26Z
null
false
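The thread in this record ends with "just map the labels to their ids" without showing the code. A minimal sketch of the complete fix, assuming a hypothetical comma-separated file `pos_data.csv` with "tokens" and "labels" columns holding stringified lists, and a hypothetical `tag2idx` mapping built from the tag set in the example row:

```python
import ast
from datasets import load_dataset, Features, Sequence, Value, ClassLabel

tag2idx = {"PRON": 0, "AUX": 1, "PROPN": 2}  # assumption: the tag set from the example row

dataset = load_dataset("csv", data_files="pos_data.csv", split="train")

new_features = Features({
    "tokens": Sequence(Value("string")),
    "labels": Sequence(ClassLabel(num_classes=len(tag2idx), names=list(tag2idx))),
})

def process(ex):
    # Parse the stringified lists, then replace tag strings with integer ids
    # so they fit the ClassLabel feature (the step the thread leaves implicit).
    tokens = ast.literal_eval(ex["tokens"])
    labels = [tag2idx[t] for t in ast.literal_eval(ex["labels"])]
    return {"tokens": tokens, "labels": labels}

dataset = dataset.map(process, features=new_features)
```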
871,063,374
https://api.github.com/repos/huggingface/datasets/issues/2287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2287/events
[]
null
2021-04-29T16:34:23Z
[]
https://github.com/huggingface/datasets/pull/2287
COLLABORATOR
null
false
null
[ "Thanks for fixing it. I actually included a similar fix in #2291 along with some updates in tests\r\nI'm closing this one in favor of #2291 if you don't mind.\r\n\r\nThanks again !" ]
Avoid copying table's record batches
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2287/reactions" }
MDExOlB1bGxSZXF1ZXN0NjI2MTQ0MTQ3
{ "diff_url": "https://github.com/huggingface/datasets/pull/2287.diff", "html_url": "https://github.com/huggingface/datasets/pull/2287", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2287.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2287" }
2021-04-29T14:15:01Z
https://api.github.com/repos/huggingface/datasets/issues/2287/comments
Fixes #2276
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/2287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2287/timeline
closed
false
2,287
null
2021-04-29T16:34:22Z
null
true
871,032,393
https://api.github.com/repos/huggingface/datasets/issues/2286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2286/events
[]
null
2021-04-29T14:07:29Z
[]
https://github.com/huggingface/datasets/pull/2286
MEMBER
null
false
null
[]
Fix metadata validation with config names
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2286/reactions" }
MDExOlB1bGxSZXF1ZXN0NjI2MTE5MTE2
{ "diff_url": "https://github.com/huggingface/datasets/pull/2286.diff", "html_url": "https://github.com/huggingface/datasets/pull/2286", "merged_at": "2021-04-29T14:07:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/2286.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2286" }
2021-04-29T13:44:32Z
https://api.github.com/repos/huggingface/datasets/issues/2286/comments
I noticed in https://github.com/huggingface/datasets/pull/2280 that the metadata validator doesn't parse the tags in the readme properly when they contain the tags per config.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/2286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2286/timeline
closed
false
2,286
null
2021-04-29T14:07:28Z
null
true
871,005,236
https://api.github.com/repos/huggingface/datasets/issues/2285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2285/events
[]
null
2021-05-19T07:22:45Z
[]
https://github.com/huggingface/datasets/issues/2285
NONE
completed
null
null
[ "\r\nI received an answer for this question on the HuggingFace Datasets forum by @lhoestq\r\n\r\nHi !\r\n\r\nIf you want to tokenize line by line, you can use this:\r\n\r\n```\r\nmax_seq_length = 512\r\nnum_proc = 4\r\n\r\ndef tokenize_function(examples):\r\n# Remove empty lines\r\nexamples[\"text\"] = [line for line in examples[\"text\"] if len(line) > 0 and not line.isspace()]\r\nreturn tokenizer(\r\n examples[\"text\"],\r\n truncation=True,\r\n max_length=max_seq_length,\r\n)\r\n\r\ntokenized_dataset = dataset.map(\r\ntokenize_function,\r\nbatched=True,\r\nnum_proc=num_proc,\r\nremove_columns=[\"text\"],\r\n)\r\n```\r\n\r\nThough the TextDataset was doing a different processing by concatenating all the texts and building blocks of size 512. If you need this behavior, then you must apply an additional map function after the tokenization:\r\n\r\n```\r\n# Main data processing function that will concatenate all texts from\r\n# our dataset and generate chunks of max_seq_length.\r\ndef group_texts(examples):\r\n# Concatenate all texts.\r\nconcatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\ntotal_length = len(concatenated_examples[list(examples.keys())[0]])\r\n# We drop the small remainder, we could add padding if the model supported it instead of this drop,\r\n# you can customize this part to your needs.\r\ntotal_length = (total_length // max_seq_length) * max_seq_length\r\n# Split by chunks of max_len.\r\nresult = {\r\n k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]\r\n for k, t in concatenated_examples.items()\r\n}\r\nreturn result\r\n\r\n# Note that with `batched=True`, this map processes 1,000 texts together,\r\n# so group_texts throws away a remainder for each of those groups of 1,000 texts.\r\n# You can adjust that batch_size here but a higher value might be slower to preprocess.\r\n\r\ntokenized_dataset = tokenized_dataset.map(\r\ngroup_texts,\r\nbatched=True,\r\nnum_proc=num_proc,\r\n)\r\n```\r\n\r\nThis code comes from the processing of the run_mlm.py example script of transformers\r\n\r\n", "Resolved" ]
Help understanding how to build a dataset for language modeling as with the old TextDataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2285/reactions" }
MDU6SXNzdWU4NzEwMDUyMzY=
null
2021-04-29T13:16:45Z
https://api.github.com/repos/huggingface/datasets/issues/2285/comments
Hello, I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line exceeds the usual 512-token limit of most tokenizers. I would like to understand the process of building a text dataset that tokenizes each line, having previously split the documents in the dataset into lines of a "tokenizable" size, as the old TextDataset class would do, where you only had to do the following, and a tokenized dataset without text loss would be available to pass to a DataCollator: ``` model_checkpoint = 'distilbert-base-uncased' from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) from transformers import TextDataset dataset = TextDataset( tokenizer=tokenizer, file_path="path/to/text_file.txt", block_size=512, ) ``` For now, what I have is the following, which, of course, throws an error because each line is longer than the maximum block size in the tokenizer: ``` import datasets dataset = datasets.load_dataset('path/to/text_file.txt') model_checkpoint = 'distilbert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) def tokenize_function(examples): return tokenizer(examples["text"]) tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"]) tokenized_datasets ``` So what would be the "standard" way of creating a dataset in the way it was done before? Thank you very much for the help :))
{ "avatar_url": "https://avatars.githubusercontent.com/u/46021411?v=4", "events_url": "https://api.github.com/users/danieldiezmallo/events{/privacy}", "followers_url": "https://api.github.com/users/danieldiezmallo/followers", "following_url": "https://api.github.com/users/danieldiezmallo/following{/other_user}", "gists_url": "https://api.github.com/users/danieldiezmallo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/danieldiezmallo", "id": 46021411, "login": "danieldiezmallo", "node_id": "MDQ6VXNlcjQ2MDIxNDEx", "organizations_url": "https://api.github.com/users/danieldiezmallo/orgs", "received_events_url": "https://api.github.com/users/danieldiezmallo/received_events", "repos_url": "https://api.github.com/users/danieldiezmallo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/danieldiezmallo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danieldiezmallo/subscriptions", "type": "User", "url": "https://api.github.com/users/danieldiezmallo" }
https://api.github.com/repos/huggingface/datasets/issues/2285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2285/timeline
closed
false
2,285
null
2021-05-19T07:22:39Z
null
false
870,932,710
https://api.github.com/repos/huggingface/datasets/issues/2284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2284/events
[]
null
2021-04-29T12:54:34Z
[]
https://github.com/huggingface/datasets/pull/2284
NONE
null
false
null
[]
Initialize Imdb dataset as used in Don't Stop Pretraining Paper
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2284/reactions" }
MDExOlB1bGxSZXF1ZXN0NjI2MDM5MDc5
{ "diff_url": "https://github.com/huggingface/datasets/pull/2284.diff", "html_url": "https://github.com/huggingface/datasets/pull/2284", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2284.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2284" }
2021-04-29T11:52:38Z
https://api.github.com/repos/huggingface/datasets/issues/2284/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4", "events_url": "https://api.github.com/users/BobbyManion/events{/privacy}", "followers_url": "https://api.github.com/users/BobbyManion/followers", "following_url": "https://api.github.com/users/BobbyManion/following{/other_user}", "gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BobbyManion", "id": 52530809, "login": "BobbyManion", "node_id": "MDQ6VXNlcjUyNTMwODA5", "organizations_url": "https://api.github.com/users/BobbyManion/orgs", "received_events_url": "https://api.github.com/users/BobbyManion/received_events", "repos_url": "https://api.github.com/users/BobbyManion/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions", "type": "User", "url": "https://api.github.com/users/BobbyManion" }
https://api.github.com/repos/huggingface/datasets/issues/2284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2284/timeline
closed
false
2,284
null
2021-04-29T12:54:34Z
null
true
870,926,475
https://api.github.com/repos/huggingface/datasets/issues/2283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2283/events
[]
null
2021-04-29T11:50:24Z
[]
https://github.com/huggingface/datasets/pull/2283
NONE
null
false
null
[]
Initialize imdb dataset from don't stop pretraining paper
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2283/reactions" }
MDExOlB1bGxSZXF1ZXN0NjI2MDM0MDk5
{ "diff_url": "https://github.com/huggingface/datasets/pull/2283.diff", "html_url": "https://github.com/huggingface/datasets/pull/2283", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2283.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2283" }
2021-04-29T11:44:54Z
https://api.github.com/repos/huggingface/datasets/issues/2283/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4", "events_url": "https://api.github.com/users/BobbyManion/events{/privacy}", "followers_url": "https://api.github.com/users/BobbyManion/followers", "following_url": "https://api.github.com/users/BobbyManion/following{/other_user}", "gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BobbyManion", "id": 52530809, "login": "BobbyManion", "node_id": "MDQ6VXNlcjUyNTMwODA5", "organizations_url": "https://api.github.com/users/BobbyManion/orgs", "received_events_url": "https://api.github.com/users/BobbyManion/received_events", "repos_url": "https://api.github.com/users/BobbyManion/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions", "type": "User", "url": "https://api.github.com/users/BobbyManion" }
https://api.github.com/repos/huggingface/datasets/issues/2283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2283/timeline
closed
false
2,283
null
2021-04-29T11:50:24Z
null
true
870,900,332
https://api.github.com/repos/huggingface/datasets/issues/2282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2282/events
[]
null
2021-04-29T11:43:51Z
[]
https://github.com/huggingface/datasets/pull/2282
NONE
null
false
null
[]
Initialize imdb dataset from don't stop pretraining paper
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2282/reactions" }
MDExOlB1bGxSZXF1ZXN0NjI2MDEyMzM3
{ "diff_url": "https://github.com/huggingface/datasets/pull/2282.diff", "html_url": "https://github.com/huggingface/datasets/pull/2282", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2282.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2282" }
2021-04-29T11:17:56Z
https://api.github.com/repos/huggingface/datasets/issues/2282/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4", "events_url": "https://api.github.com/users/BobbyManion/events{/privacy}", "followers_url": "https://api.github.com/users/BobbyManion/followers", "following_url": "https://api.github.com/users/BobbyManion/following{/other_user}", "gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BobbyManion", "id": 52530809, "login": "BobbyManion", "node_id": "MDQ6VXNlcjUyNTMwODA5", "organizations_url": "https://api.github.com/users/BobbyManion/orgs", "received_events_url": "https://api.github.com/users/BobbyManion/received_events", "repos_url": "https://api.github.com/users/BobbyManion/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions", "type": "User", "url": "https://api.github.com/users/BobbyManion" }
https://api.github.com/repos/huggingface/datasets/issues/2282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2282/timeline
closed
false
2,282
null
2021-04-29T11:43:51Z
null
true
870,792,784
https://api.github.com/repos/huggingface/datasets/issues/2281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2281/events
[]
null
2021-04-29T13:41:35Z
[]
https://github.com/huggingface/datasets/pull/2281
MEMBER
null
false
null
[]
Update multi_woz_v22 checksum
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2281/reactions" }
MDExOlB1bGxSZXF1ZXN0NjI1OTI2MjAw
{ "diff_url": "https://github.com/huggingface/datasets/pull/2281.diff", "html_url": "https://github.com/huggingface/datasets/pull/2281", "merged_at": "2021-04-29T13:41:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/2281.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2281" }
2021-04-29T09:09:11Z
https://api.github.com/repos/huggingface/datasets/issues/2281/comments
Fix issue https://github.com/huggingface/datasets/issues/1876 The files were changed in https://github.com/budzianowski/multiwoz/pull/72
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/2281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2281/timeline
closed
false
2,281
null
2021-04-29T13:41:34Z
null
true
870,780,431
https://api.github.com/repos/huggingface/datasets/issues/2280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2280/events
[]
null
2021-04-29T16:41:22Z
[]
https://github.com/huggingface/datasets/pull/2280
CONTRIBUTOR
null
false
null
[ "Hi ! Thanks for the fix :)\r\nThe CI fail isn't related to your PR. I opened a PR #2286 to fix the CI.\r\nWe'll wait for #2286 to be merged to master first if you don't mind", "The PR has been merged ! Feel free to merge master into your branch to fix the CI" ]
Fixed typo seperate->separate
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2280/reactions" }
MDExOlB1bGxSZXF1ZXN0NjI1OTE2Mzcy
{ "diff_url": "https://github.com/huggingface/datasets/pull/2280.diff", "html_url": "https://github.com/huggingface/datasets/pull/2280", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2280.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2280" }
2021-04-29T08:55:46Z
https://api.github.com/repos/huggingface/datasets/issues/2280/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/32505743?v=4", "events_url": "https://api.github.com/users/laksh9950/events{/privacy}", "followers_url": "https://api.github.com/users/laksh9950/followers", "following_url": "https://api.github.com/users/laksh9950/following{/other_user}", "gists_url": "https://api.github.com/users/laksh9950/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/laksh9950", "id": 32505743, "login": "laksh9950", "node_id": "MDQ6VXNlcjMyNTA1NzQz", "organizations_url": "https://api.github.com/users/laksh9950/orgs", "received_events_url": "https://api.github.com/users/laksh9950/received_events", "repos_url": "https://api.github.com/users/laksh9950/repos", "site_admin": false, "starred_url": "https://api.github.com/users/laksh9950/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laksh9950/subscriptions", "type": "User", "url": "https://api.github.com/users/laksh9950" }
https://api.github.com/repos/huggingface/datasets/issues/2280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2280/timeline
closed
false
2,280
null
2021-04-29T16:41:16Z
null
true
870,431,662
https://api.github.com/repos/huggingface/datasets/issues/2279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2279/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2021-04-29T07:42:42Z
[]
https://github.com/huggingface/datasets/issues/2279
NONE
completed
null
null
[ "From the trace this seems like an error in the tokenizer library instead.\r\n\r\nDo you mind opening an issue at https://github.com/huggingface/tokenizers instead?", "Hi @tginart, thanks for reporting.\r\n\r\nI think this issue is already open at `tokenizers` library: https://github.com/huggingface/tokenizers/issues/685" ]
Compatibility with Ubuntu 18 and GLIBC 2.27?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2279/reactions" }
MDU6SXNzdWU4NzA0MzE2NjI=
null
2021-04-28T22:08:07Z
https://api.github.com/repos/huggingface/datasets/issues/2279/comments
## Describe the bug For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04). I'm not sure if there is anything that can be done about this, but I'd like to confirm that using huggingface/datasets requires either an upgrade to Ubuntu 19/20 or a hand-rolled install of a higher version of GLIBC. ## Steps to reproduce the bug 1. clone the transformers repo 2. move to examples/pytorch/language-modeling 3. run example command: ```python run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm``` ## Expected results As described in the transformers repo. ## Actual results ```Traceback (most recent call last): File "run_clm.py", line 34, in <module> from transformers import ( File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/__init__.py", line 2487, in __getattr__ return super().__getattr__(name) File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/file_utils.py", line 1699, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/__init__.py", line 2481, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/__init__.py", line 19, in <module> from . import ( File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/layoutlm/__init__.py", line 23, in <module> from .tokenization_layoutlm import LayoutLMTokenizer File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/layoutlm/tokenization_layoutlm.py", line 19, in <module> from ..bert.tokenization_bert import BertTokenizer File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/bert/tokenization_bert.py", line 23, in <module> from ...tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 26, in <module> from .tokenization_utils_base import ( File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 68, in <module> from tokenizers import AddedToken File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/tokenizers/__init__.py", line 79, in <module> from .tokenizers import ( ImportError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/tokenizers/tokenizers.cpython-37m-x86_64-linux-gnu.so) ``` ## Versions Paste the output of the following code: ``` - Datasets: 1.6.1 - Python: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0] - Platform: Linux-4.15.0-128-generic-x86_64-with-debian-buster-sid ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4", "events_url": "https://api.github.com/users/tginart/events{/privacy}", "followers_url": "https://api.github.com/users/tginart/followers", "following_url": "https://api.github.com/users/tginart/following{/other_user}", "gists_url": "https://api.github.com/users/tginart/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tginart", "id": 11379648, "login": "tginart", "node_id": "MDQ6VXNlcjExMzc5NjQ4", "organizations_url": "https://api.github.com/users/tginart/orgs", "received_events_url": "https://api.github.com/users/tginart/received_events", "repos_url": "https://api.github.com/users/tginart/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tginart/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tginart/subscriptions", "type": "User", "url": "https://api.github.com/users/tginart" }
https://api.github.com/repos/huggingface/datasets/issues/2279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2279/timeline
closed
false
2,279
null
2021-04-29T07:42:42Z
null
false
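For anyone checking whether their environment is affected by the issue in this record, the standard library can report the linked C library version; a small sketch using only the stdlib:

```python
import platform

lib, version = platform.libc_ver()
print(lib, version)  # e.g. ('glibc', '2.27') on Ubuntu 18.04

# Per the traceback above, the prebuilt tokenizers wheel needs GLIBC 2.29.
if lib == "glibc" and tuple(int(p) for p in version.split(".")) < (2, 29):
    print("GLIBC older than 2.29: the prebuilt tokenizers wheel may not load.")
```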
870,088,059
https://api.github.com/repos/huggingface/datasets/issues/2278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2278/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2021-05-06T16:14:23Z
[]
https://github.com/huggingface/datasets/issues/2278
NONE
completed
null
null
[ "Hi ! I think you might have to ask on the `transformers` repo on or the forum at https://discuss.huggingface.co/\r\n\r\nClosing since it's not related to this library" ]
Loss result in GptNeoForCausal
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2278/reactions" }
MDU6SXNzdWU4NzAwODgwNTk=
null
2021-04-28T15:39:52Z
https://api.github.com/repos/huggingface/datasets/issues/2278/comments
Is there any way to get the "loss" and "logits" results in the GPT Neo API?
{ "avatar_url": "https://avatars.githubusercontent.com/u/51174606?v=4", "events_url": "https://api.github.com/users/Yossillamm/events{/privacy}", "followers_url": "https://api.github.com/users/Yossillamm/followers", "following_url": "https://api.github.com/users/Yossillamm/following{/other_user}", "gists_url": "https://api.github.com/users/Yossillamm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Yossillamm", "id": 51174606, "login": "Yossillamm", "node_id": "MDQ6VXNlcjUxMTc0NjA2", "organizations_url": "https://api.github.com/users/Yossillamm/orgs", "received_events_url": "https://api.github.com/users/Yossillamm/received_events", "repos_url": "https://api.github.com/users/Yossillamm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Yossillamm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yossillamm/subscriptions", "type": "User", "url": "https://api.github.com/users/Yossillamm" }
https://api.github.com/repos/huggingface/datasets/issues/2278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2278/timeline
closed
false
2,278
null
2021-05-06T16:14:23Z
null
false
870,071,994
https://api.github.com/repos/huggingface/datasets/issues/2277
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2277/events
[ { "color": "B67A40", "default": false, "description": "Restructuring existing code without changing its external behavior", "id": 2851292821, "name": "refactoring", "node_id": "MDU6TGFiZWwyODUxMjkyODIx", "url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring" } ]
null
2022-07-06T15:19:48Z
[]
https://github.com/huggingface/datasets/pull/2277
MEMBER
null
false
{ "closed_at": null, "closed_issues": 2, "created_at": "2021-07-21T15:34:56Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-30T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/8", "id": 6968069, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels", "node_id": "MI_kwDODunzps4AalMF", "number": 8, "open_issues": 4, "state": "open", "title": "1.12", "updated_at": "2021-10-13T10:26:33Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/8" }
[]
Create CacheManager
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2277/reactions" }
MDExOlB1bGxSZXF1ZXN0NjI1MzI5NjIz
{ "diff_url": "https://github.com/huggingface/datasets/pull/2277.diff", "html_url": "https://github.com/huggingface/datasets/pull/2277", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2277.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2277" }
2021-04-28T15:23:42Z
https://api.github.com/repos/huggingface/datasets/issues/2277/comments
Perform refactoring to decouple cache functionality (method `as_dataset`).
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2277/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2277/timeline
open
false
2,277
null
null
null
true
870,010,511
https://api.github.com/repos/huggingface/datasets/issues/2276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2276/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2021-05-03T08:41:55Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/2276
NONE
completed
null
null
[ "Therefore, when I try to concatenate larger datasets (5x 35GB data sets) I also get an out of memory error, since over 90GB of swap space was used at the time of the crash:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nMemoryError Traceback (most recent call last)\r\n<ipython-input-6-9766d77530b9> in <module>\r\n 20 print(file_name)\r\n 21 cv_batch = load_from_disk(file_name)\r\n---> 22 cv_sampled_train = concatenate_datasets([cv_sampled_train, cv_batch])\r\n 23 \r\n 24 print(\"Saving to disk!\")\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\arrow_dataset.py in concatenate_datasets(dsets, info, split, axis)\r\n 2891 \r\n 2892 # Concatenate tables\r\n-> 2893 table = concat_tables([dset._data for dset in dsets if len(dset._data) > 0], axis=axis)\r\n 2894 table = update_metadata_with_features(table, None)\r\n 2895 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in concat_tables(tables, axis)\r\n 837 if len(tables) == 1:\r\n 838 return tables[0]\r\n--> 839 return ConcatenationTable.from_tables(tables, axis=axis)\r\n 840 \r\n 841 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in from_tables(cls, tables, axis)\r\n 697 return result\r\n 698 \r\n--> 699 blocks = to_blocks(tables[0])\r\n 700 for table in tables[1:]:\r\n 701 table_blocks = to_blocks(table)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in to_blocks(table)\r\n 669 return [[InMemoryTable(table)]]\r\n 670 elif isinstance(table, ConcatenationTable):\r\n--> 671 return copy.deepcopy(table.blocks)\r\n 672 else:\r\n 673 return [[table]]\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 151 copier = getattr(x, \"__deepcopy__\", None)\r\n 152 if copier is not None:\r\n--> 153 y = copier(memo)\r\n 154 else:\r\n 155 reductor = dispatch_table.get(cls)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in __deepcopy__(self, memo)\r\n 143 # by adding it to the memo, self.table won't be copied\r\n 144 memo[id(self.table)] = self.table\r\n--> 145 return _deepcopy(self, memo)\r\n 146 \r\n 147 def __getstate__(self):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in _deepcopy(x, memo)\r\n 62 memo[id(x)] = result\r\n 63 for k, v in x.__dict__.items():\r\n---> 64 setattr(result, k, copy.deepcopy(v, memo))\r\n 65 return result\r\n 66 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if 
issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 170 y = x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 173 \r\n 174 # If is its own copy, don't memoize.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 262 if deep and args:\r\n 263 args = (deepcopy(arg, memo) for arg in args)\r\n--> 264 y = func(*args)\r\n 265 if deep:\r\n 266 memo[id(x)] = y\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <genexpr>(.0)\r\n 261 deep = memo is not None\r\n 262 if deep and args:\r\n--> 263 args = (deepcopy(arg, memo) for arg in args)\r\n 264 y = func(*args)\r\n 265 if deep:\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 170 y = x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 173 \r\n 174 # If is its own copy, don't memoize.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 262 if deep and args:\r\n 263 args = (deepcopy(arg, memo) for arg in args)\r\n--> 264 y = func(*args)\r\n 265 if deep:\r\n 266 memo[id(x)] = y\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <genexpr>(.0)\r\n 261 deep = memo is not None\r\n 262 if deep and args:\r\n--> 263 args = (deepcopy(arg, memo) for arg in args)\r\n 264 y = func(*args)\r\n 265 if deep:\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_tuple(x, memo, deepcopy)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <listcomp>(.0)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = 
_deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_tuple(x, memo, deepcopy)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <listcomp>(.0)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 159 reductor = getattr(x, \"__reduce_ex__\", None)\r\n 160 if reductor is not None:\r\n--> 161 rv = reductor(4)\r\n 162 else:\r\n 163 reductor = getattr(x, \"__reduce__\", None)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\pyarrow\\io.pxi in pyarrow.lib.Buffer.__reduce_ex__()\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\pyarrow\\io.pxi in pyarrow.lib.Buffer.to_pybytes()\r\n\r\nMemoryError: \r\n\r\n```", "Hi ! this looks like an important issue. Let me try to reproduce this.\r\nCc @samsontmr this might be related to the memory issue you have in #2134 ", "@lhoestq Just went to open a similar issue.\r\n\r\nIt seems like deep copying (tested on master) the dataset object writes the table's record batches (`dset._data._batches`) into RAM.\r\n\r\nTo find the bug, I modified the `_deepcopy` function in `table.py` as follows:\r\n```python\r\ndef _deepcopy(x, memo: dict):\r\n \"\"\"deepcopy a regular class instance\"\"\"\r\n import psutil # pip install this package\r\n import time\r\n cls = x.__class__\r\n result = cls.__new__(cls)\r\n memo[id(x)] = result\r\n for k, v in x.__dict__.items():\r\n print(\"=\"* 50)\r\n print(\"Current memory:\", psutil.virtual_memory().percent)\r\n print(f\"Saving object {k} with value {v}\")\r\n setattr(result, k, copy.deepcopy(v, memo))\r\n time.sleep(5)\r\n print(\"Memory after copy:\", psutil.virtual_memory().percent)\r\n return result\r\n```\r\nTest script:\r\n```python\r\nimport copy\r\nfrom datasets import load_dataset\r\nbk = load_dataset(\"bookcorpus\", split=\"train\")\r\nbk_copy = copy.deepcopy(bk)\r\n```", "Thanks for the insights @mariosasko ! I'm working on a fix.\r\nSince this is a big issue I'll make a patch release as soon as this is fixed", "Hi @samsontmr @TaskManager91 the fix is on the master branch, feel free to install `datasets` from source and let us know if you still have issues", "We just released `datasets` 1.6.2 that includes the fix :)", "thanks it works like a charm! :)" ]
concatenate_datasets loads all the data into memory
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2276/reactions" }
MDU6SXNzdWU4NzAwMTA1MTE=
null
2021-04-28T14:27:21Z
https://api.github.com/repos/huggingface/datasets/issues/2276/comments
## Describe the bug When I try to concatenate 2 datasets (10GB each) , the entire data is loaded into memory instead of being written directly to disk. Interestingly, this happens when trying to save the new dataset to disk or concatenating it again. ![image](https://user-images.githubusercontent.com/7063207/116420321-2b21b480-a83e-11eb-9006-8f6ca729fb6f.png) ## Steps to reproduce the bug ```python from datasets import concatenate_datasets, load_from_disk test_sampled_pro = load_from_disk("test_sampled_pro") val_sampled_pro = load_from_disk("val_sampled_pro") big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro]) # Loaded to memory big_set.save_to_disk("big_set") # Loaded to memory big_set = concatenate_datasets([big_set, val_sampled_pro]) ``` ## Expected results The data should be loaded into memory in batches and then saved directly to disk. ## Actual results The entire data set is loaded into the memory and then saved to the hard disk. ## Versions Paste the output of the following code: ```python - Datasets: 1.6.1 - Python: 3.8.8 (default, Apr 13 2021, 19:58:26) [GCC 7.3.0] - Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/7063207?v=4", "events_url": "https://api.github.com/users/chbensch/events{/privacy}", "followers_url": "https://api.github.com/users/chbensch/followers", "following_url": "https://api.github.com/users/chbensch/following{/other_user}", "gists_url": "https://api.github.com/users/chbensch/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/chbensch", "id": 7063207, "login": "chbensch", "node_id": "MDQ6VXNlcjcwNjMyMDc=", "organizations_url": "https://api.github.com/users/chbensch/orgs", "received_events_url": "https://api.github.com/users/chbensch/received_events", "repos_url": "https://api.github.com/users/chbensch/repos", "site_admin": false, "starred_url": "https://api.github.com/users/chbensch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chbensch/subscriptions", "type": "User", "url": "https://api.github.com/users/chbensch" }
https://api.github.com/repos/huggingface/datasets/issues/2276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2276/timeline
closed
false
2,276
null
2021-05-03T08:41:55Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
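The `__deepcopy__` shown in the traceback above hints at the eventual fix: registering the immutable arrow table in the `memo` dict so `copy.deepcopy` reuses it instead of duplicating its buffers. A minimal sketch of that pattern on a toy class (not the actual `datasets.table` code):

```python
import copy

class Wrapper:
    def __init__(self, payload):
        self.payload = payload          # large, immutable data we never want to copy
        self.metadata = {"rows": len(payload)}

    def __deepcopy__(self, memo):
        # Pre-register the payload in the memo: deepcopy treats it as
        # "already copied" and shares it instead of duplicating it.
        memo[id(self.payload)] = self.payload
        result = Wrapper.__new__(Wrapper)
        memo[id(self)] = result
        for k, v in self.__dict__.items():
            setattr(result, k, copy.deepcopy(v, memo))
        return result

w = Wrapper(list(range(1_000_000)))
w2 = copy.deepcopy(w)
assert w2.payload is w.payload          # shared, no memory blow-up
assert w2.metadata is not w.metadata    # everything else is still copied
```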
869,378,311
https://api.github.com/repos/huggingface/datasets/issues/2275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2275/events
[]
null
2021-05-17T13:34:18Z
[]
https://github.com/huggingface/datasets/issues/2275
NONE
completed
null
null
[ "Hi @puzzler10, \r\nThose examples where `gold_label` field was empty, -1 label was alloted to it. In order to remove it you can filter the samples from train/val/test splits. Here's how you can drop those rows from the dataset:\r\n`dataset = load_dataset(\"snli\")`\r\n`dataset_test_filter = dataset['test'].filter(lambda example: example['label'] != -1)`\r\n\r\nI agree it should have been mentioned in the documentation. I'll raise a PR regarding the same. Thanks for pointing out!" ]
SNLI dataset has labels of -1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2275/reactions" }
MDU6SXNzdWU4NjkzNzgzMTE=
null
2021-04-28T00:32:25Z
https://api.github.com/repos/huggingface/datasets/issues/2275/comments
There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107 or 124 of the test set. It isn't clear what these labels mean. I found a [line of code](https://github.com/huggingface/datasets/blob/80e59ef178d3bb2090d091bc32315c655eb0633d/datasets/snli/snli.py#L94) that seems to put them in but it seems still unclear why they are there. The current workaround is to just drop the rows from any model being trained. Perhaps the documentation should be updated.
{ "avatar_url": "https://avatars.githubusercontent.com/u/17426779?v=4", "events_url": "https://api.github.com/users/puzzler10/events{/privacy}", "followers_url": "https://api.github.com/users/puzzler10/followers", "following_url": "https://api.github.com/users/puzzler10/following{/other_user}", "gists_url": "https://api.github.com/users/puzzler10/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/puzzler10", "id": 17426779, "login": "puzzler10", "node_id": "MDQ6VXNlcjE3NDI2Nzc5", "organizations_url": "https://api.github.com/users/puzzler10/orgs", "received_events_url": "https://api.github.com/users/puzzler10/received_events", "repos_url": "https://api.github.com/users/puzzler10/repos", "site_admin": false, "starred_url": "https://api.github.com/users/puzzler10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/puzzler10/subscriptions", "type": "User", "url": "https://api.github.com/users/puzzler10" }
https://api.github.com/repos/huggingface/datasets/issues/2275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2275/timeline
closed
false
2,275
null
2021-05-17T13:34:18Z
null
false
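The filter in the comment above is applied to the test split only; `filter` also works on the whole `DatasetDict`, which drops the -1 rows from every split at once. A sketch of that variant of the same workaround:

```python
from datasets import load_dataset

dataset = load_dataset("snli")

# -1 marks examples with no gold label; drop them from train/validation/test.
dataset = dataset.filter(lambda example: example["label"] != -1)
```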
869,186,276
https://api.github.com/repos/huggingface/datasets/issues/2274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2274/events
[]
null
2022-06-03T08:31:19Z
[]
https://github.com/huggingface/datasets/pull/2274
MEMBER
null
false
null
[]
Always update metadata in arrow schema
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2274/reactions" }
MDExOlB1bGxSZXF1ZXN0NjI0NTkyMjQx
{ "diff_url": "https://github.com/huggingface/datasets/pull/2274.diff", "html_url": "https://github.com/huggingface/datasets/pull/2274", "merged_at": "2021-04-29T09:57:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/2274.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2274" }
2021-04-27T19:21:57Z
https://api.github.com/repos/huggingface/datasets/issues/2274/comments
We store a redundant copy of the features in the metadata of the schema of the arrow table. This is used to recover the features when doing `Dataset.from_file`. These metadata are updated after each transform that changes the feature types. For each function that transforms the feature types of the dataset, I added a step in the tests to make sure the metadata in the arrow schema are up to date. I also added a line to update the metadata directly in the `Dataset.__init__` method. This way even a dataset instantiated with `__init__` will have a table with the right metadata. Fix #2271. cc @mariosasko
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/2274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2274/timeline
closed
false
2,274
null
2021-04-29T09:57:50Z
null
true
869,046,290
https://api.github.com/repos/huggingface/datasets/issues/2273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2273/events
[]
null
2021-04-29T13:59:47Z
[]
https://github.com/huggingface/datasets/pull/2273
CONTRIBUTOR
null
false
null
[]
Added CUAD metrics
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2273/reactions" }
MDExOlB1bGxSZXF1ZXN0NjI0NDcxODc1
{ "diff_url": "https://github.com/huggingface/datasets/pull/2273.diff", "html_url": "https://github.com/huggingface/datasets/pull/2273", "merged_at": "2021-04-29T13:59:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/2273.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2273" }
2021-04-27T16:49:12Z
https://api.github.com/repos/huggingface/datasets/issues/2273/comments
`EM`, `F1`, `AUPR`, `Precision@80%Recall`, and `Precision@90%Recall` metrics supported for CUAD
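A hypothetical usage sketch for the new metric; the metric name and the exact result keys are assumptions based on this PR's description, not confirmed from the source:

```python
from datasets import load_metric

metric = load_metric("cuad")  # assumed metric name
# SQuAD-style inputs: `prediction_text` is a list of candidate answer strings.
predictions = [{"prediction_text": ["The seller:"], "id": "contract-1__Parties"}]
references = [{"answers": {"answer_start": [143], "text": ["The seller:"]}, "id": "contract-1__Parties"}]
results = metric.compute(predictions=predictions, references=references)
# `results` should carry the scores listed above (exact match, F1, AUPR,
# precision at 80%/90% recall); the key names are assumptions.
print(results)
```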
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
https://api.github.com/repos/huggingface/datasets/issues/2273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2273/timeline
closed
false
2,273
null
2021-04-29T13:59:47Z
null
true
869,017,977
https://api.github.com/repos/huggingface/datasets/issues/2272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2272/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2021-04-30T12:54:27Z
[]
https://github.com/huggingface/datasets/issues/2272
MEMBER
completed
null
null
[ "This has been fixed in this commit: https://github.com/huggingface/datasets/pull/2254/commits/88676c930216cd4cc31741b99827b477d2b46cb6\r\n\r\nIt was introduced in #2246 : using map with `input_columns` doesn't return the other columns anymore" ]
Bug in Dataset.class_encode_column
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2272/reactions" }
MDU6SXNzdWU4NjkwMTc5Nzc=
null
2021-04-27T16:13:18Z
https://api.github.com/repos/huggingface/datasets/issues/2272/comments
## Describe the bug All the rest of the columns except the one passed to `Dataset.class_encode_column` are discarded. ## Expected results All the original columns should be kept. This needs regression tests.
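A minimal regression-style check for the behavior described here, using the public `Dataset` API (the toy data is illustrative):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["good movie", "bad movie"], "label": ["pos", "neg"]})
ds = ds.class_encode_column("label")
# The encoding should only convert `label` to a ClassLabel; every other
# column must survive. This assertion failed under the reported bug.
assert set(ds.column_names) == {"text", "label"}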
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2272/timeline
closed
false
2,272
null
2021-04-30T12:54:27Z
null
false
869,002,141
https://api.github.com/repos/huggingface/datasets/issues/2271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2271/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2022-06-01T17:13:21Z
[]
https://github.com/huggingface/datasets/issues/2271
MEMBER
completed
null
null
[ "See PR #2274 " ]
Synchronize table metadata with features
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2271/reactions" }
MDU6SXNzdWU4NjkwMDIxNDE=
null
2021-04-27T15:55:13Z
https://api.github.com/repos/huggingface/datasets/issues/2271/comments
**Is your feature request related to a problem? Please describe.** As pointed out in this [comment](https://github.com/huggingface/datasets/pull/2145#discussion_r621326767): > Metadata stored in the schema is just redundant information regarding the feature types. It is used when calling Dataset.from_file to know which feature types to use. These metadata are stored in the schema of the pyarrow table by using `update_metadata_with_features`. However, this is something that's almost never tested properly. **Describe the solution you'd like** We should find a way to always make sure that the metadata (in `self.data.schema.metadata`) are synced with the actual feature types (in `self.info.features`).
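An illustrative consistency check, using the attribute names mentioned in this issue (`self.data.schema` and `self.info.features`); a sketch rather than the library's actual test code:

```python
from datasets import Features

def metadata_in_sync(dataset) -> bool:
    # Re-derive the feature types from the arrow schema and compare them
    # with the types the dataset advertises in its info.
    inferred = Features.from_arrow_schema(dataset.data.schema)
    return inferred.type == dataset.info.features.type
```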
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2271/timeline
closed
false
2,271
null
2022-06-01T17:13:21Z
null
false
868,913,660
https://api.github.com/repos/huggingface/datasets/issues/2270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2270/events
[]
null
2021-04-28T17:39:27Z
[]
https://github.com/huggingface/datasets/pull/2270
MEMBER
null
false
null
[ "It's been fixed in this commit: https://github.com/huggingface/datasets/commit/549110e08238b3716a5904667095fb003acda54e\r\n\r\nBasically #2246 broke querying an index with a simple iterable.\r\nWith the fix, it's again possible to use iterables and we can keep RandIter as it is.\r\n\r\nClosing since the fix is already on master" ]
Fix iterable interface expected by numpy
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2270/reactions" }
MDExOlB1bGxSZXF1ZXN0NjI0MzU5Njky
{ "diff_url": "https://github.com/huggingface/datasets/pull/2270.diff", "html_url": "https://github.com/huggingface/datasets/pull/2270", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2270.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2270" }
2021-04-27T14:35:56Z
https://api.github.com/repos/huggingface/datasets/issues/2270/comments
Numpy expects the old iterable interface with `__getitem__` instead of `__iter__`.
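A small self-contained demonstration of the point: `np.array` materializes objects exposing the legacy `__getitem__`/`__len__` protocol, treats an `__iter__`-only object as a scalar, and `np.fromiter` is the right tool for the latter.

```python
import numpy as np

class GetItemIterable:
    """Legacy sequence protocol: numpy can index into this directly."""
    def __init__(self, values):
        self._values = values
    def __len__(self):
        return len(self._values)
    def __getitem__(self, i):
        return self._values[i]

class IterOnlyIterable:
    """Only __iter__: np.array wraps it as a 0-d object array."""
    def __init__(self, values):
        self._values = values
    def __iter__(self):
        return iter(self._values)

print(np.array(GetItemIterable([1, 2, 3])))                      # [1 2 3]
print(np.array(IterOnlyIterable([1, 2, 3])).shape)               # (), unusable as indices
print(np.fromiter(IterOnlyIterable([1, 2, 3]), dtype=np.int64))  # [1 2 3]
```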
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2270/timeline
closed
false
2,270
null
2021-04-28T17:39:27Z
null
true
868,878,468
https://api.github.com/repos/huggingface/datasets/issues/2269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2269/events
[]
null
2021-04-27T14:21:57Z
[]
https://github.com/huggingface/datasets/pull/2269
MEMBER
null
false
null
[]
Fix query table with iterable
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2269/reactions" }
MDExOlB1bGxSZXF1ZXN0NjI0MzMwNDA3
{ "diff_url": "https://github.com/huggingface/datasets/pull/2269.diff", "html_url": "https://github.com/huggingface/datasets/pull/2269", "merged_at": "2021-04-27T14:21:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/2269.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2269" }
2021-04-27T13:59:38Z
https://api.github.com/repos/huggingface/datasets/issues/2269/comments
The benchmark runs are failing on master because it tries to use an iterable to query the dataset. However there's currently an issue caused by the use of `np.array` instead of `np.fromiter` on the iterable. This PR fixes it
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/2269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2269/timeline
closed
false
2,269
null
2021-04-27T14:21:56Z
null
true
868,773,380
https://api.github.com/repos/huggingface/datasets/issues/2268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2268/events
[]
null
2021-06-12T12:44:49Z
[]
https://github.com/huggingface/datasets/pull/2268
MEMBER
null
false
null
[ "@lhoestq note that the segfault also occurs on Linux.", "Created the ticket at\r\nhttps://issues.apache.org/jira/browse/ARROW-12568", "@lhoestq the ticket you mentioned is now in state resolved. Pyarrow supports AArch64 after version 4.0.0. Because of this restriction `datasets` is not installing in AArch64 systems." ]
Don't use pyarrow 4.0.0 since it segfaults when casting a sliced ListArray of integers
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2268/reactions" }
MDExOlB1bGxSZXF1ZXN0NjI0MjQyODg1
{ "diff_url": "https://github.com/huggingface/datasets/pull/2268.diff", "html_url": "https://github.com/huggingface/datasets/pull/2268", "merged_at": "2021-04-27T13:43:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/2268.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2268" }
2021-04-27T11:58:28Z
https://api.github.com/repos/huggingface/datasets/issues/2268/comments
This test `tests/test_table.py::test_concatenation_table_cast` segfaults with the latest update of pyarrow 4.0.0. Setting `pyarrow<4.0.0` for now. I'll open an issue on JIRA once I know more about the origin of the issue
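A hypothetical minimal reproduction inferred from the failing test's name; the exact data that triggered the crash is an assumption:

```python
import pyarrow as pa

# Cast a sliced (non-zero offset) ListArray of integers to another integer
# list type; this pattern segfaulted under pyarrow 4.0.0 (assumed repro).
arr = pa.array([[0, 1], [2, 3], [4, 5]], type=pa.list_(pa.int64()))
sliced = arr.slice(1)
print(sliced.cast(pa.list_(pa.int32())))
```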
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/2268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2268/timeline
closed
false
2,268
null
2021-04-27T13:43:20Z
null
true
868,291,129
https://api.github.com/repos/huggingface/datasets/issues/2267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2267/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2021-05-28T15:27:34Z
[]
https://github.com/huggingface/datasets/issues/2267
NONE
null
null
null
[ "Thanks for reporting ! We're looking into it", "I'm not able to reproduce this, do you think you can provide a code that creates a DatasetDict that has this issue when saving and reloading ?", "Hi, I just ran into a similar error. Here is the minimal code to reproduce:\r\n```python\r\nfrom datasets import load_dataset, DatasetDict\r\nds = load_dataset('super_glue', 'multirc')\r\n\r\nds.save_to_disk('tempds')\r\n\r\nds = DatasetDict.load_from_disk('tempds')\r\n\r\n```\r\n\r\n```bash\r\nReusing dataset super_glue (/home/idahl/.cache/huggingface/datasets/super_glue/multirc/1.0.2/2fb163bca9085c1deb906aff20f00c242227ff704a4e8c9cfdfe820be3abfc83)\r\nTraceback (most recent call last):\r\n File \"/home/idahl/eval-util-expl/multirc/tmp.py\", line 7, in <module>\r\n ds = DatasetDict.load_from_disk('tempds')\r\n File \"/home/idahl/miniconda3/envs/eval-util-expl/lib/python3.9/site-packages/datasets/dataset_dict.py\", line 710, in load_from_disk\r\n dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)\r\n File \"/home/idahl/miniconda3/envs/eval-util-expl/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 687, in load_from_disk\r\n return Dataset(\r\n File \"/home/idahl/miniconda3/envs/eval-util-expl/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 274, in __init__\r\n raise ValueError(\r\nValueError: External features info don't match the dataset:\r\nGot\r\n{'answer': Value(dtype='string', id=None), 'idx': {'answer': Value(dtype='int32', id=None), 'paragraph': Value(dtype='int32', id=None), 'question': Value(dtype='int32', id=None)}, 'label': ClassLabel(num_classes=2, names=['False', 'True'], names_file=None, id=None), 'paragraph': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None)}\r\nwith type\r\nstruct<answer: string, idx: struct<answer: int32, paragraph: int32, question: int32>, label: int64, paragraph: string, question: string>\r\n\r\nbut expected something like\r\n{'answer': Value(dtype='string', id=None), 'idx': {'paragraph': Value(dtype='int32', id=None), 'question': Value(dtype='int32', id=None), 'answer': Value(dtype='int32', id=None)}, 'label': Value(dtype='int64', id=None), 'paragraph': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None)}\r\nwith type\r\nstruct<answer: string, idx: struct<paragraph: int32, question: int32, answer: int32>, label: int64, paragraph: string, question: string>\r\n\r\n```\r\n\r\nThe non-matching part seems to be\r\n`'label': ClassLabel(num_classes=2, names=['False', 'True'], names_file=None, id=None),`\r\nvs \r\n`'label': Value(dtype='int64', id=None),`\r\n\r\nAnd the order in the `<struct...` being different, which might cause the [features.type != inferred_features.type](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L274) condition to become true and raise this ValueError.\r\n\r\n\r\nI am using datasets version 1.6.2.\r\n\r\nEdit: can confirm, this works without error in version 1.5.0", "My current workaround is to remove the idx feature:\r\n\r\n```\r\n\r\nfrom datasets import load_dataset, DatasetDict, Value\r\nds = load_dataset('super_glue', 'multirc')\r\nds = ds.remove_columns('idx')\r\n\r\nds.save_to_disk('tempds')\r\n\r\nds = DatasetDict.load_from_disk('tempds')\r\n\r\n```\r\n\r\nworks.", "It looks like this issue comes from the order of the fields in the 'idx' struct that is different for some reason.\r\nI'm looking into it. 
Note that as a workaround you can also flatten the nested features with `ds = ds.flatten()`", "I just pushed a fix on `master`. We'll do a new release soon !\r\n\r\nThanks for reporting" ]
DatasetDict save load Failing test in 1.6 not in 1.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2267/reactions" }
MDU6SXNzdWU4NjgyOTExMjk=
null
2021-04-27T00:03:25Z
https://api.github.com/repos/huggingface/datasets/issues/2267/comments
## Describe the bug We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema. Downgrading to `>1.6` -- fixes the problem. ## Steps to reproduce the bug ```python ### Load a dataset dict from jsonl path = '/test/foo' ds_dict.save_to_disk(path) ds_from_disk = DatasetDict.load_from_disk(path). ## <-- this is where I see the error on 1.6 ``` ## Expected results Upgrading to 1.6 shouldn't break that test. We should be able to serialize to and from disk. ## Actual results ``` # Infer features if None inferred_features = Features.from_arrow_schema(arrow_table.schema) if self.info.features is None: self.info.features = inferred_features # Infer fingerprint if None if self._fingerprint is None: self._fingerprint = generate_fingerprint(self) # Sanity checks assert self.features is not None, "Features can't be None in a Dataset object" assert self._fingerprint is not None, "Fingerprint can't be None in a Dataset object" if self.info.features.type != inferred_features.type: > raise ValueError( "External features info don't match the dataset:\nGot\n{}\nwith type\n{}\n\nbut expected something like\n{}\nwith type\n{}".format( self.info.features, self.info.features.type, inferred_features, inferred_features.type ) ) E ValueError: External features info don't match the dataset: E Got E {'_input_hash': Value(dtype='int64', id=None), '_task_hash': Value(dtype='int64', id=None), '_view_id': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'encoding__ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'encoding__offsets': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'encoding__overflowing': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'encoding__tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'encoding__words': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'relations': [{'child': Value(dtype='int64', id=None), 'child_span': {'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None)}, 'color': Value(dtype='string', id=None), 'head': Value(dtype='int64', id=None), 'head_span': {'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None)}, 'label': Value(dtype='string', id=None)}], 'spans': [{'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None)}], 'text': Value(dtype='string', id=None), 'tokens': [{'disabled': Value(dtype='bool', id=None), 'end': Value(dtype='int64', id=None), 'id': Value(dtype='int64', id=None), 'start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'ws': Value(dtype='bool', id=None)}]} E with type E struct<_input_hash: int64, _task_hash: int64, _view_id: string, answer: string, encoding__ids: list<item: int64>, encoding__offsets: list<item: list<item: int64>>, encoding__overflowing: list<item: null>, encoding__tokens: list<item: string>, encoding__words: list<item: int64>, ner_ids: list<item: int64>, ner_labels: list<item: string>, relations: list<item: struct<child: int64, child_span: struct<end: int64, label: string, start: int64, token_end: int64, token_start: int64>, color: string, head: int64, head_span: struct<end: int64, label: string, start: int64, token_end: int64, token_start: int64>, label: string>>, spans: list<item: struct<end: int64, label: string, start: int64, text: string, token_end: int64, token_start: int64, type: string>>, text: string, tokens: list<item: struct<disabled: bool, end: int64, id: int64, start: int64, text: string, ws: bool>>> E E but expected something like E {'_input_hash': Value(dtype='int64', id=None), '_task_hash': Value(dtype='int64', id=None), '_view_id': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'encoding__ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'encoding__offsets': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'encoding__overflowing': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'encoding__tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'encoding__words': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'relations': [{'head': Value(dtype='int64', id=None), 'child': Value(dtype='int64', id=None), 'head_span': {'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}, 'child_span': {'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}, 'color': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)}], 'spans': [{'text': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)}], 'text': Value(dtype='string', id=None), 'tokens': [{'text': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'id': Value(dtype='int64', id=None), 'ws': Value(dtype='bool', id=None), 'disabled': Value(dtype='bool', id=None)}]} E with type E struct<_input_hash: int64, _task_hash: int64, _view_id: string, answer: string, encoding__ids: list<item: int64>, encoding__offsets: list<item: list<item: int64>>, encoding__overflowing: list<item: null>, encoding__tokens: list<item: string>, encoding__words: list<item: int64>, ner_ids: list<item: int64>, ner_labels: list<item: string>, relations: list<item: struct<head: int64, child: int64, head_span: struct<start: int64, end: int64, token_start: int64, token_end: int64, label: string>, child_span: struct<start: int64, end: int64, token_start: int64, token_end: int64, label: string>, color: string, label: string>>, spans: list<item: struct<text: string, start: int64, token_start: int64, token_end: int64, end: int64, type: string, label: string>>, text: string, tokens: list<item: struct<text: string, start: int64, end: int64, id: int64, ws: bool, disabled: bool>>> ../../../../../.virtualenvs/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:274: ValueError ``` ## Versions - Datasets: 1.6.1 - Python: 3.8.5 (default, Jan 26 2021, 10:01:04) [Clang 12.0.0 (clang-1200.0.32.2)] - Platform: macOS-10.15.7-x86_64-i386-64bit ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/timothyjlaurent", "id": 2000204, "login": "timothyjlaurent", "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "site_admin": false, "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "type": "User", "url": "https://api.github.com/users/timothyjlaurent" }
https://api.github.com/repos/huggingface/datasets/issues/2267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2267/timeline
open
false
2,267
null
null
null
false
867,864,353
https://api.github.com/repos/huggingface/datasets/issues/2266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2266/events
[]
null
2021-04-29T10:00:13Z
[]
https://github.com/huggingface/datasets/pull/2266
MEMBER
null
false
null
[ "LOL, I was also working on something similar 😅. I'm gonna have a look!!!", "Sorry I didn't know you were also working on it ^^'\r\nAnd yes I 100% agree with you on the points you mentioned. We should definitely improve the coverage. It would be nice to have a clearer separation to know which tests in the suite are unit tests and which ones are integration tests\r\n", "Never mind: we both noticed tests can be improved. More PRs to come... 😉 \r\n\r\nAccording to the literature, unit tests are those that test a behavior unit, isolated from the other components and must be very fast: for me, this last requirement implies that they must be performed completely _in memory_.\r\n\r\nAs opposed, integration tests are those which also test interactions with _external_ components, like web services, databases, file system, etc.\r\n\r\nThe problem I see is that our code is still too coupled and it is difficult to isolate components for testing. Therefore, I would suggest acting iteratively, by refactoring to decouple components and then implement unit tests for each component in isolation." ]
Make tests run faster
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2266/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIzNDY1OTI5
{ "diff_url": "https://github.com/huggingface/datasets/pull/2266.diff", "html_url": "https://github.com/huggingface/datasets/pull/2266", "merged_at": "2021-04-29T10:00:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/2266.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2266" }
2021-04-26T15:55:40Z
https://api.github.com/repos/huggingface/datasets/issues/2266/comments
From 7min to 2min to run pytest. Ideally we should keep the whole CI run time below 10min. In this PR I removed the remote tests that were never used. I also replaced nested parametrized tests with unit tests. This makes me think that we could still add more high-level tests to check a few combinations of parameters (but not all of them, since there are too many). Let me know what you think. Finally, in another PR we can also separate the CI into two circleci jobs: - the tests of the core code of the lib - the tests of all the dataset/metric scripts.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/2266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2266/timeline
closed
false
2,266
null
2021-04-29T10:00:04Z
null
true
867,490,646
https://api.github.com/repos/huggingface/datasets/issues/2265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2265/events
[]
null
2021-04-26T09:47:48Z
[]
https://github.com/huggingface/datasets/pull/2265
MEMBER
null
false
null
[]
Update black
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2265/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIzMTUyOTg5
{ "diff_url": "https://github.com/huggingface/datasets/pull/2265.diff", "html_url": "https://github.com/huggingface/datasets/pull/2265", "merged_at": "2021-04-26T09:47:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/2265.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2265" }
2021-04-26T09:35:09Z
https://api.github.com/repos/huggingface/datasets/issues/2265/comments
The latest black version 21.4b0 requires reformatting most dataset scripts and also the core code of the lib. This makes the CI currently fail on master.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/2265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2265/timeline
closed
false
2,265
null
2021-04-26T09:47:47Z
null
true
867,476,228
https://api.github.com/repos/huggingface/datasets/issues/2264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2264/events
[]
null
2021-04-26T10:30:28Z
[]
https://github.com/huggingface/datasets/pull/2264
MEMBER
null
false
null
[ "The code quality check is going to be fixed by #2265 ", "The memory issue didn't come from `self.__dict__.copy()` but from the fact that this dict contains `_batches` which has all the batches of the table in it.\r\nTherefore for a MemoryMappedTable all the data in `_batches` were copied in memory when pickling and this is the issue.", "I'm still investigating why we didn't catch this issue in the tests.\r\nThis test should have caught it but didn't:\r\n\r\nhttps://github.com/huggingface/datasets/blob/3db67f5ff6cbf807b129d2b4d1107af27623b608/tests/test_table.py#L350-L353", "I'll focus on the patch release and fix the test in another PR after the release", "Yes, I think it is better that way..." ]
Fix memory issue in multiprocessing: Don't pickle table index
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2264/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIzMTQwODA1
{ "diff_url": "https://github.com/huggingface/datasets/pull/2264.diff", "html_url": "https://github.com/huggingface/datasets/pull/2264", "merged_at": "2021-04-26T10:08:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/2264.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2264" }
2021-04-26T09:21:35Z
https://api.github.com/repos/huggingface/datasets/issues/2264/comments
The table index is currently being pickled when doing multiprocessing, which brings all the record batches of the dataset into memory. I fixed that by not pickling the index attributes. Therefore each process has to rebuild the index when unpickling the table. Fix issue #2256. We'll do a patch release asap!
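An illustrative sketch of the shape of this fix, using a hypothetical class (not the actual `datasets.table` code): exclude the heavy batch index from the pickled state and rebuild it on unpickling.

```python
import pickle

class MemoryMappedTableSketch:
    """Hypothetical stand-in for a table backed by a memory-mapped file."""
    def __init__(self, path):
        self.path = path
        self._batches = self._read_batches()  # index over on-disk record batches

    def _read_batches(self):
        return []  # placeholder: the real library reads batches from self.path

    def __getstate__(self):
        # Drop the index so pickling for multiprocessing stays cheap.
        state = self.__dict__.copy()
        state.pop("_batches", None)
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self._batches = self._read_batches()  # rebuilt in each worker process

table = MemoryMappedTableSketch("data.arrow")
clone = pickle.loads(pickle.dumps(table))  # payload carries no batches
```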
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/2264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2264/timeline
closed
false
2,264
null
2021-04-26T10:08:14Z
null
true
867,420,912
https://api.github.com/repos/huggingface/datasets/issues/2263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2263/events
[]
null
2021-04-29T09:30:21Z
[]
https://github.com/huggingface/datasets/pull/2263
CONTRIBUTOR
null
false
null
[]
test data added, dataset_infos updated
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2263/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIzMDk0NTcy
{ "diff_url": "https://github.com/huggingface/datasets/pull/2263.diff", "html_url": "https://github.com/huggingface/datasets/pull/2263", "merged_at": "2021-04-29T09:30:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/2263.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2263" }
2021-04-26T08:27:18Z
https://api.github.com/repos/huggingface/datasets/issues/2263/comments
Fixes #2262. Thanks for pointing out the issue with the dataset, @jinmang2!
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
https://api.github.com/repos/huggingface/datasets/issues/2263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2263/timeline
closed
false
2,263
null
2021-04-29T09:30:20Z
null
true
867,325,351
https://api.github.com/repos/huggingface/datasets/issues/2262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2262/events
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
null
2021-04-29T09:32:03Z
[]
https://github.com/huggingface/datasets/issues/2262
NONE
completed
null
null
[ "Thanks @bhavitvyamalik for the fix !\r\nThe fix will be available in the next release.\r\nIt's already available on the `master` branch. For now you can either install `datasets` from source or use `script_version=\"master\"` in `load_dataset` to use the fixed version of this dataset." ]
NewsPH NLI dataset script fails to access test data.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2262/reactions" }
MDU6SXNzdWU4NjczMjUzNTE=
null
2021-04-26T06:44:41Z
https://api.github.com/repos/huggingface/datasets/issues/2262/comments
In the NewsPH-NLI dataset (#1192), the script fails to access the test data. According to the script below, the download manager will download the train data when trying to download the test data. https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee0c7ed98b/datasets/newsph_nli/newsph_nli.py#L71 If you download it according to the script above, you can see that train and test receive the same data as shown below. ```python >>> from datasets import load_dataset >>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py") >>> newsph_nli DatasetDict({ train: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 420000 }) test: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 420000 }) validation: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 90000 }) }) >>> newsph_nli["train"][0] {'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).', 'label': 1, 'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'} >>> newsph_nli["test"][0] {'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).', 'label': 1, 'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'} ``` Locally, I modified the script as shown below and got the correct result. ```python 71 test_path = os.path.join(download_path, "test.csv") ``` ```python >>> from datasets import load_dataset >>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py") >>> newsph_nli DatasetDict({ train: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 420000 }) test: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 9000 }) validation: Dataset({ features: ['premise', 'hypothesis', 'label'], num_rows: 90000 }) }) >>> newsph_nli["train"][0] {'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).', 'label': 1, 'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'} >>> newsph_nli["test"][0] {'hypothesis': '-- JAI (@JaiPaller) September 13, 2019', 'label': 1, 'premise': 'Pinag-iingat ng Konsulado ng Pilipinas sa Dubai ang publiko, partikular ang mga donor, laban sa mga scam na gumagamit ng mga charitable organization.'} ``` I don't have experience with open-source pull requests, so I suggest that you apply this change to the script. Thank you for reading :)
{ "avatar_url": "https://avatars.githubusercontent.com/u/37775784?v=4", "events_url": "https://api.github.com/users/jinmang2/events{/privacy}", "followers_url": "https://api.github.com/users/jinmang2/followers", "following_url": "https://api.github.com/users/jinmang2/following{/other_user}", "gists_url": "https://api.github.com/users/jinmang2/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jinmang2", "id": 37775784, "login": "jinmang2", "node_id": "MDQ6VXNlcjM3Nzc1Nzg0", "organizations_url": "https://api.github.com/users/jinmang2/orgs", "received_events_url": "https://api.github.com/users/jinmang2/received_events", "repos_url": "https://api.github.com/users/jinmang2/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jinmang2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jinmang2/subscriptions", "type": "User", "url": "https://api.github.com/users/jinmang2" }
https://api.github.com/repos/huggingface/datasets/issues/2262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2262/timeline
closed
false
2,262
null
2021-04-29T09:30:20Z
null
false
867,088,818
https://api.github.com/repos/huggingface/datasets/issues/2261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2261/events
[]
null
2021-05-17T18:24:44Z
[]
https://github.com/huggingface/datasets/pull/2261
COLLABORATOR
null
false
null
[ "Ready for the final review" ]
Improve ReadInstruction logic and update docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2261/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIyODIxNzQw
{ "diff_url": "https://github.com/huggingface/datasets/pull/2261.diff", "html_url": "https://github.com/huggingface/datasets/pull/2261", "merged_at": "2021-05-17T16:48:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/2261.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2261" }
2021-04-25T19:07:26Z
https://api.github.com/repos/huggingface/datasets/issues/2261/comments
Improve ReadInstruction logic and docs.
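For context, a short example of the API this PR documents; both forms select the first 10% of the train split (the dataset name is just a placeholder):

```python
from datasets import load_dataset, ReadInstruction

ds_a = load_dataset("snli", split="train[:10%]")                              # string form
ds_b = load_dataset("snli", split=ReadInstruction("train", to=10, unit="%"))  # object form
assert ds_a.num_rows == ds_b.num_rows
```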
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/2261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2261/timeline
closed
false
2,261
null
2021-05-17T16:48:57Z
null
true
866,961,697
https://api.github.com/repos/huggingface/datasets/issues/2260
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2260/events
[]
null
2021-05-07T08:36:17Z
[]
https://github.com/huggingface/datasets/pull/2260
CONTRIBUTOR
null
false
null
[ "Thanks for adding this one !\r\nThe download manager does support downloading files on git lfs via their github url. No need for a manual download option ;)" ]
GooAQ dataset added
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2260/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIyNzMwODYx
{ "diff_url": "https://github.com/huggingface/datasets/pull/2260.diff", "html_url": "https://github.com/huggingface/datasets/pull/2260", "merged_at": "2021-05-07T08:36:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/2260.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2260" }
2021-04-25T09:26:48Z
https://api.github.com/repos/huggingface/datasets/issues/2260/comments
@lhoestq here the dataset is stored with Git LFS. Should I add an option for manually downloading the dataset using `git lfs pull` after cloning the repo, or can we accommodate this in the current `download_and_extract`?
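A sketch of the approach suggested in the reply above: point the download manager at the raw GitHub URL, which serves LFS-backed content directly. The URL, feature names, and file format here are hypothetical placeholders:

```python
import json
import datasets

_URL = "https://github.com/allenai/gooaq/raw/main/data/gooaq.jsonl"  # hypothetical path

class Gooaq(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        # Feature names are placeholders for illustration.
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"question": datasets.Value("string"), "answer": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        # GitHub's raw endpoint resolves Git LFS pointers to the real file,
        # so download_and_extract works with no `git lfs pull` step.
        path = dl_manager.download_and_extract(_URL)
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": path})]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for i, line in enumerate(f):
                row = json.loads(line)
                yield i, {"question": row.get("question", ""), "answer": row.get("answer", "")}
```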
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
https://api.github.com/repos/huggingface/datasets/issues/2260/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2260/timeline
closed
false
2,260
null
2021-05-07T08:36:17Z
null
true
866,880,092
https://api.github.com/repos/huggingface/datasets/issues/2259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2259/events
[]
null
2021-06-28T08:21:27Z
[]
https://github.com/huggingface/datasets/pull/2259
COLLABORATOR
null
false
null
[ "Honestly, I think we should fix some other issues in Split API before this change. E. g. currently the following will not work, even though it should:\r\n```python\r\nimport datasets\r\ndatasets.load_dataset(\"sst\", split=datasets.Split.TRAIN+datasets.Split.TEST) # AssertionError\r\n```\r\n\r\nEDIT:\r\nActually, think it's OK to merge this PR because the fix will not touch this PR's code." ]
Add support for Split.ALL
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2259/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIyNjc2ODA0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2259.diff", "html_url": "https://github.com/huggingface/datasets/pull/2259", "merged_at": "2021-06-28T08:21:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/2259.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2259" }
2021-04-25T01:45:42Z
https://api.github.com/repos/huggingface/datasets/issues/2259/comments
The title says it all.
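A sketch of the intended usage, assuming `Split.ALL` resolves to the concatenation of every available split:

```python
import datasets

# All splits (e.g. train/validation/test) loaded as one concatenated Dataset;
# the semantics here are assumed from the split's name.
full = datasets.load_dataset("snli", split=datasets.Split.ALL)
print(len(full))
```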
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/2259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2259/timeline
closed
false
2,259
null
2021-06-28T08:21:27Z
null
true
866,870,588
https://api.github.com/repos/huggingface/datasets/issues/2258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2258/events
[]
null
2021-04-26T17:16:30Z
[]
https://github.com/huggingface/datasets/pull/2258
COLLABORATOR
null
false
null
[ "@lhoestq Maybe a test that runs the functions that call `update_metadata_with_features` and checks if metadata was updated would be nice to prevent this from happening in the future." ]
Fix incorrect update_metadata_with_features calls in ArrowDataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2258/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIyNjcxNTQy
{ "diff_url": "https://github.com/huggingface/datasets/pull/2258.diff", "html_url": "https://github.com/huggingface/datasets/pull/2258", "merged_at": "2021-04-26T16:54:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/2258.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2258" }
2021-04-25T00:48:38Z
https://api.github.com/repos/huggingface/datasets/issues/2258/comments
Fixes bugs in the `update_metadata_with_features` calls (caused by changes in #2151)
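A minimal sketch of the regression test suggested in the comments above; it assumes the schema metadata is written under the `b"huggingface"` key of the Arrow schema, which is what `update_metadata_with_features` maintains.

```python
import json

from datasets import Dataset, Features, Value

ds = Dataset.from_dict({"a": [1, 2, 3]})
# cast() is one of the operations that should trigger a metadata update
ds = ds.cast(Features({"a": Value("float64")}))
meta = json.loads(ds.data.schema.metadata[b"huggingface"].decode("utf-8"))
assert meta["info"]["features"]["a"]["dtype"] == "float64"
```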
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/2258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2258/timeline
closed
false
2,258
null
2021-04-26T16:54:04Z
null
true
866,755,203
https://api.github.com/repos/huggingface/datasets/issues/2257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2257/events
[]
null
2021-04-29T09:53:38Z
[]
https://github.com/huggingface/datasets/pull/2257
CONTRIBUTOR
null
false
null
[ "> For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require `exact_match` metric too here\r\n\r\n@bhavitvyamalik I guess the mentioned metrics are enough but it would be better if exact match is also added since the standard SQUAD dataset also has it.", "I would like to quote it from the website that I am following to learn\nthese things.\nExact Match:\nThis metric is as simple as it sounds. For each question+answer pair, if\nthe characters of the model's prediction exactly match the characters of\n*(one\nof) the True Answer(s)*, EM = 1, otherwise EM = 0. This is a strict\nall-or-nothing metric; being off by a single character results in a score\nof 0. When assessing against a negative example, if the model predicts any\ntext at all, it automatically receives a 0 for that example.\n\nSo, I guess you need to ensure at least 1 predicted answer matches for EM\nto be 1.\nSource:\nhttps://qa.fastforwardlabs.com/no%20answer/null%20threshold/bert/distilbert/exact%20match/f1/robust%20predictions/2020/06/09/Evaluating_BERT_on_SQuAD.html\n\nYou can go to their homepage and read the other links. They have detailed\nexplanations on evaluation metrics. You can also have a look at the\nsquad_v2 metric file for further clarification.\n\nRegards,\nMohammed Rakib\n\nOn Sun, 25 Apr 2021 at 15:20, Bhavitvya Malik ***@***.***>\nwrote:\n\n> I'm a little confused when it comes to 2 ground truths which can be a\n> possible answer. Like here for eg.\n>\n> predictions = [{'prediction_text': ['The seller:', 'The buyer/End-User:\n> Shenzhen LOHAS Supply Chain Management Co., Ltd.'], 'id':\n> 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply\n> Agreement__Parties'}]\n>\n> references = [{'answers': {'answer_start': [143, 49], 'text': ['The\n> seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co.,\n> Ltd.']}, 'id':\n> 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply\n> Agreement__Parties'}]\n>\n> Should I ensure at least 1 predicted answer matches or both predicted\n> answers should match (like in this case) for EM to be 1?\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/2257#issuecomment-826289753>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AHMYZAZSAEZNFWEMVAPK6M3TKPNHLANCNFSM43QFZVPQ>\n> .\n>\n", "Updated the same @MohammedRakib! Even if a single answer matches I'm returning 1 in that case for EM (not traversing all predictions once we have one `exact_match` from prediction)" ]
added metrics for CUAD
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2257/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIyNTkwMDQw
{ "diff_url": "https://github.com/huggingface/datasets/pull/2257.diff", "html_url": "https://github.com/huggingface/datasets/pull/2257", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2257.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2257" }
2021-04-24T14:09:54Z
https://api.github.com/repos/huggingface/datasets/issues/2257/comments
For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90% recall. The last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require the `exact_match` metric here too
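A hedged sketch of how the metric might be invoked once merged; the metric name (`"cuad"`) and the exact result keys are assumptions, while the SQuAD-style prediction/reference format follows the thread above.

```python
from datasets import load_metric

cuad_metric = load_metric("cuad")  # assumed metric name
predictions = [{"prediction_text": ["The seller:"], "id": "contract_0__Parties"}]
references = [{
    "answers": {"text": ["The seller:"], "answer_start": [143]},
    "id": "contract_0__Parties",
}]
results = cuad_metric.compute(predictions=predictions, references=references)
# Assumed keys: exact_match, f1, aupr, prec_at_80_recall, prec_at_90_recall
print(results)
```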
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
https://api.github.com/repos/huggingface/datasets/issues/2257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2257/timeline
closed
false
2,257
null
2021-04-27T16:16:32Z
null
true
866,708,609
https://api.github.com/repos/huggingface/datasets/issues/2256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2256/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2021-04-26T17:12:15Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/2256
NONE
completed
null
null
[ "Thanks for reporting ! We are working on this and we'll do a patch release very soon.", "We did a patch release to fix this issue.\r\nIt should be fixed in the new version 1.6.1\r\n\r\nThanks again for reporting and for the details :)" ]
Running `dataset.map` with `num_proc > 1` uses a lot of memory
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2256/reactions" }
MDU6SXNzdWU4NjY3MDg2MDk=
null
2021-04-24T09:56:20Z
https://api.github.com/repos/huggingface/datasets/issues/2256/comments
## Describe the bug Running `dataset.map` with `num_proc > 1` leads to tremendous memory usage that requires swapping to disk, which makes it very slow. ## Steps to reproduce the bug ```python from datasets import load_dataset dstc8_dataset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False) def _prepare_sample(batch): return {"input_ids": list(), "attention_mask": list()} for split_name, dataset_split in list(dstc8_dataset.items()): print(f"Processing {split_name}") encoded_dataset_split = dataset_split.map( function=_prepare_sample, batched=True, num_proc=4, remove_columns=dataset_split.column_names, batch_size=10, writer_batch_size=10, keep_in_memory=False, ) print(encoded_dataset_split) path = f"./data/encoded_{split_name}" encoded_dataset_split.save_to_disk(path) ``` ## Expected results Memory usage should stay within reasonable boundaries. ## Actual results This is the htop output from running the provided script. ![image](https://user-images.githubusercontent.com/8143425/115954836-66954980-a4f3-11eb-8340-0153bdc3a475.png) ## Versions ``` - Datasets: 1.6.0 - Python: 3.8.8 (default, Apr 13 2021, 19:58:26) [GCC 7.3.0] - Platform: Linux-4.19.128-microsoft-standard-x86_64-with-glibc2.10 ``` Running on WSL2
{ "avatar_url": "https://avatars.githubusercontent.com/u/8143425?v=4", "events_url": "https://api.github.com/users/roskoN/events{/privacy}", "followers_url": "https://api.github.com/users/roskoN/followers", "following_url": "https://api.github.com/users/roskoN/following{/other_user}", "gists_url": "https://api.github.com/users/roskoN/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/roskoN", "id": 8143425, "login": "roskoN", "node_id": "MDQ6VXNlcjgxNDM0MjU=", "organizations_url": "https://api.github.com/users/roskoN/orgs", "received_events_url": "https://api.github.com/users/roskoN/received_events", "repos_url": "https://api.github.com/users/roskoN/repos", "site_admin": false, "starred_url": "https://api.github.com/users/roskoN/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/roskoN/subscriptions", "type": "User", "url": "https://api.github.com/users/roskoN" }
https://api.github.com/repos/huggingface/datasets/issues/2256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2256/timeline
closed
false
2,256
null
2021-04-26T17:12:15Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
866,242,892
https://api.github.com/repos/huggingface/datasets/issues/2255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2255/events
[]
null
2021-05-18T13:31:36Z
[]
https://github.com/huggingface/datasets/pull/2255
CONTRIBUTOR
null
false
null
[ "cc @abhi1thakur ", "Looks really nice so far, thanks !\r\nMaybe if a dataset doesn't have a template for a specific task we could try the default template of this task ?", "hey @SBrandeis @lhoestq,\r\n\r\ni now have a better idea about what you guys are trying to achieve with the task templates and have a few follow-up questions:\r\n\r\n1. how did you envision using `DatasetInfo` for running evaluation? my understanding is that all `dataset_infos.json` files are stored in the `datasets` repo (unlike `transformers` where each model's weights etc are stored in a dedicated repo). \r\nthis suggests the following workflow:\r\n\r\n```\r\n- git clone datasets\r\n- load target dataset to evaluate\r\n- load `dataset_infos.json` for target dataset\r\n- run eval for each task template in `task_templates`\r\n- store metrics as evaluation cards (similar to what is done in `autonlp`)\r\n```\r\n2. assuming the above workflow, i see that the current `TaskTemplate` attributes of `task`, `input_schema`, and `label_schema` still require some wrangling from `dataset_infos.json` to reproduce additional mappings like `label2id` that we'd need for e.g. text classification. an alternative would be to instantiate the task template class directly from the JSON with something like\r\n```python\r\nfrom datasets.tasks import TextClassification\r\nfrom transformers import AutoModelForSequenceClassification, AutoConfig\r\n\r\ntc = TextClassification.from_json(\"path/to/dataset_infos.json\")\r\n# load a model with the desired config\r\nmodel_ckpt = ...\r\nconfig = AutoConfig.from_pretrained(model_ckpt, label2id=tc.label2id, id2label=tc.id2label)\r\nmodel = AutoModelForSequenceClassification.from_pretrained(model_ckpt, config=config)\r\n# run eval ...\r\n```\r\nperhaps this is what @SBrandeis had in mind with the `TaskTemplate.from_dict` method?\r\n\r\n3. i personally prefer using `task_templates` over `supervised_keys` because it encourages the contributor to think in terms of 1 or more tasks. my question here is do we currently use `supervised_keys` for anything important in the `datasets` library?", "1. How do you envision using DatasetInfo for running evaluation?\r\n\r\nThe initial idea was to be able to do something like this:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"name\", task=\"binary_classification\")\r\n# OR\r\ndset = load_dataset(\"name\")\r\ndset = dset.prepare_for_task(\"binary_classification\")\r\n```\r\n\r\n2. I don't think that's needed if we proceed as mentioned above\r\n\r\n3. `supervised_keys` are mostly a legacy compatibility thing with TF datasets, not sure it's used for anything right now. I'll let @lhoestq give more details on that\r\n\r\n[Edit 1] Typo", "> The initial idea was to be able to do something like this:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> dset = load_dataset(\"name\", task=\"binary_classification\")\r\n> # OR\r\n> dset = load_dataset(\"name\")\r\n> dset = dset.prepare_for_task(\"binary_classification\")\r\n> ```\r\n\r\nah that's very elegant! just so i've completely understood, the result would be that the relevant column names of `dset` would be mapped to e.g. `text` and `label` and thus we'd have a uniform schema for the evaluation of all `binary_classification` tasks?", "That's correct! 
Also, the features need to be appropriately casted\r\nFor a classification task for example, we would need to cast the datasets features to something like this:\r\n```python\r\ndatasets.Features({\r\n \"text\": datasets.Value(\"string\"),\r\n \"label\": datasets.ClassLabel(names=[...]),\r\n})\r\n```\r\n", "3. We can ignore `supervised_keys` (it came from TFDS and we're not using it) and use `task_templates`", "great, thanks a lot for your answers! now it's much clearer what i need to do next 😃 ", "hey @lhoestq @SBrandeis, \r\n\r\ni've made some small tweaks to @SBrandeis's code so that `Dataset.prepare_for_task` is called in `DatasetBuilder`. using the `emotion` dataset as a test case, the following now works:\r\n\r\n ```python\r\n# DatasetDict with default columns\r\nds = load_dataset(\"./datasets/emotion/\")\r\n# DatasetDict({\r\n# train: Dataset({\r\n# features: ['tweet', 'emotion'],\r\n# num_rows: 16000\r\n# })\r\n# validation: Dataset({\r\n# features: ['tweet', 'emotion'],\r\n# num_rows: 2000\r\n# })\r\n# test: Dataset({\r\n# features: ['tweet', 'emotion'],\r\n# num_rows: 2000\r\n# })\r\n# })\r\n\r\n# DatasetDict with remapped columns\r\nds = load_dataset(\"./datasets/emotion/\", task=\"text_classification\")\r\nDatasetDict({\r\n# train: Dataset({\r\n# features: ['text', 'label'],\r\n# num_rows: 16000\r\n# })\r\n# validation: Dataset({\r\n# features: ['text', 'label'],\r\n# num_rows: 2000\r\n# })\r\n# test: Dataset({\r\n# features: ['text', 'label'],\r\n# num_rows: 2000\r\n# })\r\n# })\r\n\r\n# Dataset with default columns\r\nds = load_dataset(\"./datasets/emotion/\", split=\"train\")\r\n# Map/cast features\r\nds = ds.prepare_for_task(\"text_classification\")\r\n# Dataset({\r\n# features: ['text', 'label'],\r\n# num_rows: 16000\r\n# })\r\n```\r\n\r\ni have a few follow-up questions / remarks:\r\n\r\n1. i'm working under the assumption that contributors / users only provide a unique set of task types. in particular, the current implementation does not support something like:\r\n```python\r\ntask_templates=[TextClassification(labels=class_names, text_column=\"tweet\", label_column=\"emotion\"), TextClassification(labels=class_names, text_column=\"some_other_column\", label_column=\"some_other_column\")]\r\n```\r\nsince we use `TaskTemplate.task` and the filter for compatible templates in `Dataset.prepare_for_task`. should we support these scenarios? my hunch is that this is rare in practice, but please correct me if i'm wrong.\r\n\r\n2. when we eventually run evaluation for `transformers` models, i expect we'll be using the `Trainer` for which we can pass the standard label names to `TrainingArguments.label_names`. if that's the case, it might be prudent to heed the warning from the [docs](https://huggingface.co/transformers/main_classes/trainer.html?highlight=trainer#trainer) and use `labels` instead of `label` in the schema:\r\n> your model can accept multiple label arguments (use the label_names in your TrainingArguments to indicate their name to the Trainer) but none of them should be named \"label\".\r\n\r\n3. i plan to forge ahead on the rest of the pipeline taxonomy. please let me know if you'd prefer smaller, self-contained pull requests (e.g. one per task)", "hey @lhoestq @SBrandeis, i think this is ready for another review 😃 \r\n\r\nin addition to a few comments / questions i've left in the pr, here's a few remarks:\r\n\r\n1. after some experimentation, i decided against allowing the user to specify nested column names for question-answering. 
i couldn't find a simple solution with the current api and suspect that i'd have to touch many areas of `datasets` to \"unflatten\" columns in a generic fashion.\r\n2. in the current implementation, the user can specify the outer column name for question-answering, but is expected to follow the inner schema for e.g. `answers.text` and `answers.answer_start`. we can decide later how much flexibility we want to give users\r\n3. i added a few unit tests\r\n4. as discussed, let's keep this pr focused on text classification / question answering and i'll add the other tasks in separate prs\r\n5. i renamed the tasks e.g. `text_classification` -> `text-classification` for consistency with the `Trainer` model cards [here](https://github.com/huggingface/transformers/pull/11599#pullrequestreview-656371007).", "i'm not sure why the benchmarks are getting cancelled - is this expected?", "> i'm not sure why the benchmarks are getting cancelled - is this expected?\r\n\r\nHmm I don't know. It's certainly unrelated to this PR though. Maybe github has some issues", "Something is happening with actions: https://www.githubstatus.com/", "hey @lhoestq and @SBrandeis, i've: \r\n\r\n* extended the `prepare_for_task` API along the lines that @lhoestq suggested. i wasn't entirely sure what the `datasets` convention is for docstrings with mixed types, so please see if my proposal makes sense\r\n* added a few new tests to check that we trigger the value errors on incorrect input\r\n\r\ni think this is ready for another review :)", "> Looks all good thank you :)\r\n> \r\n> Can you also add `prepare_for_task` in the `main_classes.rst` file of the documentation ?\r\n\r\nDone! I also remembered that I needed to do the same for `DatasetDict`, so included this as well :)" ]
Task casting for text classification & question answering
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2255/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIyMTc0Njg4
{ "diff_url": "https://github.com/huggingface/datasets/pull/2255.diff", "html_url": "https://github.com/huggingface/datasets/pull/2255", "merged_at": "2021-05-18T13:31:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/2255.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2255" }
2021-04-23T16:00:41Z
https://api.github.com/repos/huggingface/datasets/issues/2255/comments
This PR implements task preparation for a given task, in continuation of #2143 Task taxonomy follows 🤗 Transformers' pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines Edit by @lewtun: This PR implements support for the following tasks: * `text-classification` * `question-answering` The intended usage is as follows: ```python # Load a dataset with default column names / features ds = load_dataset("dataset_name") # Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo` ds = ds.prepare_for_task(task="text-classification") # Casting can also be realised during load ds = load_dataset("dataset_name", task="text-classification") # We can also combine shared tasks across dataset concatenation ds1 = load_dataset("dataset_name_1", task="text-classification") ds2 = load_dataset("dataset_name_2", task="text-classification") # If the tasks have the same schema, so will `ds_concat` ds_concat = concatenate_datasets([ds1, ds2]) ``` Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function. As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g. ```python squad = load_dataset("./datasets/squad", split="train") qa = QuestionAnswering() schema = Features({**qa.input_schema, **qa.label_schema}) assert all(item in squad.features.items() for item in schema.items()) ```
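For contributors, a hypothetical `_info` override illustrating how a task template might be declared; the `TextClassification` signature follows the thread above and may differ in the merged version.

```python
from datasets import ClassLabel, DatasetInfo, Features, Value
from datasets.tasks import TextClassification

class_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def _info(self):
    # Inside a GeneratorBasedBuilder subclass such as the `emotion` script
    return DatasetInfo(
        features=Features({
            "tweet": Value("string"),
            "emotion": ClassLabel(names=class_names),
        }),
        task_templates=[
            TextClassification(
                labels=class_names, text_column="tweet", label_column="emotion"
            )
        ],
    )
```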
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
https://api.github.com/repos/huggingface/datasets/issues/2255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2255/timeline
closed
false
2,255
null
2021-05-18T13:31:35Z
null
true
866,169,312
https://api.github.com/repos/huggingface/datasets/issues/2254
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2254/events
[]
null
2021-04-27T16:30:49Z
[]
https://github.com/huggingface/datasets/pull/2254
MEMBER
null
false
null
[ "I renamed the variable, added a test for dataset._indices and fixed an issue with class_encode_column" ]
Update format, fingerprint and indices after add_item
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2254/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIyMTE1NDI0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2254.diff", "html_url": "https://github.com/huggingface/datasets/pull/2254", "merged_at": "2021-04-27T16:30:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/2254.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2254" }
2021-04-23T14:31:49Z
https://api.github.com/repos/huggingface/datasets/issues/2254/comments
Added fingerprint and format update wrappers + updated the indices by adding the index of the newly added item to the table.
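A short usage sketch, under the assumption that `add_item` returns a new dataset with the row appended and the fingerprint refreshed:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})
new_ds = ds.add_item({"text": "c"})  # new dataset with the appended row
assert len(new_ds) == 3
assert new_ds[2] == {"text": "c"}
assert new_ds._fingerprint != ds._fingerprint  # fingerprint was updated
```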
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/2254/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2254/timeline
closed
false
2,254
null
2021-04-27T16:30:48Z
null
true
866,034,321
https://api.github.com/repos/huggingface/datasets/issues/2253
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2253/events
[ { "color": "B67A40", "default": false, "description": "Restructuring existing code without changing its external behavior", "id": 2851292821, "name": "refactoring", "node_id": "MDU6TGFiZWwyODUxMjkyODIx", "url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring" } ]
null
2021-05-27T09:12:45Z
[]
https://github.com/huggingface/datasets/pull/2253
MEMBER
null
false
null
[ "@lhoestq is there a problem in the master branch? I got a segmentation fault...\r\n```\r\ntests/test_table.py::test_concatenation_table_cast[in_memory] Fatal Python error: Segmentation fault\r\n```", "Oh wow. Let me re-run the CI just to make sure", "Hmm interesting, the segfault is still there. I'm investigating this issue on my windows machine", "Feel free to merge master into this branch to fix the CI :)" ]
Perform minor refactoring: use config
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2253/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIyMDA2Njg3
{ "diff_url": "https://github.com/huggingface/datasets/pull/2253.diff", "html_url": "https://github.com/huggingface/datasets/pull/2253", "merged_at": "2021-04-27T15:02:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/2253.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2253" }
2021-04-23T11:45:47Z
https://api.github.com/repos/huggingface/datasets/issues/2253/comments
Perform minor refactoring related to `config`.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2253/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2253/timeline
closed
false
2,253
null
2021-04-27T15:02:59Z
null
true
865,870,710
https://api.github.com/repos/huggingface/datasets/issues/2252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2252/events
[]
null
2024-01-26T15:10:28Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/2252
NONE
completed
null
null
[ "Hi ! Sorry to hear that. This may come from another issue then.\r\n\r\nFirst can we check if this latency comes from the dataset itself ?\r\nYou can try to load your dataset and benchmark the speed of querying random examples inside it ?\r\n```python\r\nimport time\r\nimport numpy as np\r\n\r\nfrom datasets import load_from_disk\r\n\r\ndataset = load_from_disk(...) # or from load_dataset...\r\n\r\n_start = time.time()\r\nn = 100\r\nfor i in np.random.default_rng(42).integers(0, len(dataset), size=n):\r\n _ = dataset[i]\r\nprint(time.time() - _start)\r\n```\r\n\r\nIf we see a significant speed difference between your two datasets then it would mean that there's an issue somewhere", "Hi @lhoestq, here is the result. I additionally measured time to `load_from_disk`:\r\n* 60GB\r\n```\r\nloading took: 22.618776321411133\r\nramdom indexing 100 times took: 0.10214924812316895\r\n```\r\n\r\n* 600GB\r\n```\r\nloading took: 1176.1764674186707\r\nramdom indexing 100 times took: 2.853600025177002\r\n```\r\n\r\nHmm.. I double checked that it's version 1.6.0. The difference seems quite big, could it be related to the running environment? \r\n", "I'm surprised by the speed change. Can you give more details about your dataset ?\r\nThe speed depends on the number of batches in the arrow tables and the distribution of the lengths of the batches.\r\nYou can access the batches by doing `dataset.data.to_batches()` (use only for debugging) (it doesn't bring data in memory).\r\n\r\nAlso can you explain what parameters you used if you used `map` calls ?\r\nAlso if you have some code that reproduces the issue I'd be happy to investigate it.", "Also if you could give us more info about your env like your OS, version of pyarrow and if you're using an HDD or a SSD", "Here are some details of my 600GB dataset. This is a dataset AFTER the `map` function and once I load this dataset, I do not use `map` anymore in the training. Regarding the distribution of the lengths, it is almost uniform (90% is 512 tokens, and 10% is randomly shorter than that -- typical setting for language modeling).\r\n```\r\nlen(batches):\r\n492763\r\n\r\nbatches[0]: \r\npyarrow.RecordBatch\r\nattention_mask: list<item: uint8>\r\n child 0, item: uint8\r\ninput_ids: list<item: int16>\r\n child 0, item: int16\r\nspecial_tokens_mask: list<item: uint8>\r\n child 0, item: uint8\r\ntoken_type_ids: list<item: uint8>\r\n child 0, item: uint8\r\n```\r\n\r\nHere the some parameters to `map` function just in case it is relevant:\r\n```\r\nnum_proc=1 # as multi processing is slower in my case\r\nload_from_cache_file=False\r\n```\r\n", "Regarding the environment, I am running the code on a cloud server. Here are some info:\r\n```\r\nUbuntu 18.04.5 LTS # cat /etc/issue\r\npyarrow 3.0.0 # pip list | grep pyarrow\r\n```\r\nThe data is stored in SSD and it is mounted to the machine via Network File System.\r\n\r\nIf you could point me to some of the commands to check the details of the environment, I would be happy to provide relevant information @lhoestq !", "I am not sure how I could provide you with the reproducible code, since the problem only arises when the data is big. For the moment, I would share the part that I think is relevant. 
Feel free to ask me for more info.\r\n\r\n```python\r\nclass MyModel(pytorch_lightning.LightningModule)\r\n def setup(self, stage):\r\n self.dataset = datasets.load_from_disk(path)\r\n self.dataset.set_format(\"torch\")\r\n\r\n def train_dataloader(self):\r\n collate_fn = transformers.DataCollatorForLanguageModeling(\r\n tokenizer=transformers.ElectraTokenizerFast.from_pretrained(tok_path)\r\n )\r\n dataloader = torch.utils.DataLoader(\r\n self.dataset,\r\n batch_size=32,\r\n collate_fn=collate_fn,\r\n num_workers=8,\r\n pin_memory=True,\r\n )\r\n```", "Hi ! Sorry for the delay I haven't had a chance to take a look at this yet. Are you still experiencing this issue ?\r\nI'm asking because the latest patch release 1.6.2 fixed a few memory issues that could have lead to slow downs", "Hi! I just ran the same code with different datasets (one is 60 GB and another 600 GB), and the latter runs much slower. ETA differs by 10x.", "@lhoestq and @hwijeen\r\n\r\nDespite upgrading to datasets 1.6.2, still experiencing extremely slow (2h00) loading for a 300Gb local dataset shard size 1.1Gb on local HDD (40Mb/s read speed). This corresponds almost exactly to total data divided by reading speed implying that it reads the entire dataset at each load.\r\n\r\nStack details:\r\n=========\r\n\r\n> GCC version: Could not collect\r\n> Clang version: Could not collect\r\n> CMake version: Could not collect\r\n> \r\n> Python version: 3.7 (64-bit runtime)\r\n> Is CUDA available: True\r\n> CUDA runtime version: 10.2.89\r\n> GPU models and configuration: GPU 0: GeForce GTX 1050\r\n> Nvidia driver version: 457.63\r\n> cuDNN version: C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.2\\bin\\cudnn64_7.dll\r\n> HIP runtime version: N/A\r\n> MIOpen runtime version: N/A\r\n> \r\n> Versions of relevant libraries:\r\n> [pip3] datasets==1.6.2\r\n> [pip3] transformers==4.5.1\r\n> [pip3] numpy==1.19.1\r\n> [pip3] numpydoc==1.1.0\r\n> [pip3] pytorch-metric-learning==0.9.98\r\n> [pip3] torch==1.8.1\r\n> [pip3] torchaudio==0.8.1\r\n> [pip3] torchvision==0.2.2\r\n> [conda] blas 2.16 mkl conda-forge\r\n> [conda] cudatoolkit 10.2.89 hb195166_8 conda-forge\r\n> [conda] libblas 3.8.0 16_mkl conda-forge\r\n> [conda] libcblas 3.8.0 16_mkl conda-forge\r\n> [conda] liblapack 3.8.0 16_mkl conda-forge\r\n> [conda] liblapacke 3.8.0 16_mkl conda-forge\r\n> [conda] mkl 2020.1 216\r\n> [conda] numpy 1.19.1 py37hae9e721_0 conda-forge\r\n> [conda] numpydoc 1.1.0 py_1 conda-forge\r\n> [conda] pytorch 1.8.1 py3.7_cuda10.2_cudnn7_0 pytorch\r\n> [conda] pytorch-metric-learning 0.9.98 pyh39e3cac_0 metric-learning\r\n> [conda] torchaudio 0.8.1 py37 pytorch\r\n> [conda] torchvision 0.2.2 py_3 pytorch", "Hi @BenoitDalFerro how do your load your dataset ?", "Hi @lhoestq thanks for the quick turn-around, actually the plain vanilla way, without an particular knack or fashion, I tried to look into the documentation for some alternative but couldn't find any\r\n\r\n> dataset = load_from_disk(dataset_path=os.path.join(datasets_dir,dataset_dir))", "I’m facing the same issue when loading a 900GB dataset (stored via `save_to_disk`): `load_from_disk(path_to_dir)` takes 1.5 hours and htop consistently shows high IO rates > 120 M/s.", "@tsproisl same here, smells like ~~teen spirit~~ intended generator inadvertently ending up iterator\r\n\r\n@lhoestq perhaps solution to detect bug location in code is to track its signature via HD read usage monitoring, option is to add tracking decorator on top each function and sequentially close all hatches from top to 
bottom, suggest PySmart https://pypi.org/project/pySMART/ a Smartmontools implementation", "I wasn't able to reproduce this on a toy dataset of around 300GB:\r\n\r\n```python\r\nimport datasets as ds\r\n\r\ns = ds.load_dataset(\"squad\", split=\"train\")\r\ns4000 = ds.concatenate_datasets([s] * 4000)\r\nprint(ds.utils.size_str(s4000.data.nbytes)) # '295.48 GiB'\r\n\r\ns4000.save_to_disk(\"tmp/squad_4000\")\r\n```\r\n\r\n```python\r\nimport psutil\r\nimport time\r\nfrom datasets import load_from_disk\r\n\r\ndisk = \"disk0\" # You may have to change your disk here\r\niocnt1 = psutil.disk_io_counters(perdisk=True)[disk]\r\ntime1 = time.time()\r\n\r\ns4000_reloaded = load_from_disk(\"tmp/squad_4000\")\r\n\r\ntime2 = time.time()\r\niocnt2 = psutil.disk_io_counters(perdisk=True)[disk]\r\n\r\nprint(f\"Blocks read {iocnt2.read_count - iocnt1.read_count}\") # Blocks read 18\r\nprint(f\"Elapsed time: {time2 - time1:.02f}s\") # Elapsed time: 14.60s\r\n```\r\n\r\nCould you run this on your side and tell me if how much time it takes ? Please run this when your machine is idle so that other processes don't interfere.\r\n\r\nI got these results on my macbook pro on datasets 1.6.2", "@lhoestq thanks, test running as we speak, bear with me", "Just tried on google colab and got ~1min for a 15GB dataset (only 200 times SQuAD), while it should be instantaneous. The time is spent reading the Apache Arrow table from the memory mapped file. This might come a virtual disk management issue. I'm trying to see if I can still speed it up on colab.", "@lhoestq what is Google Colab's HD read speed, is it possible to introspect incl. make like SSD or HDD ?", "@lhoestq Thank you! The issue is getting more interesting. The second script is still running, but it's definitely taking much longer than 15 seconds.", "Okay, here’s the ouput:\r\nBlocks read 158396\r\nElapsed time: 529.10s\r\n\r\nAlso using datasets 1.6.2. Do you have any ideas, how to pinpoint the problem?", "@lhoestq, @tsproisl mmmh still writing on my side about 1h to go, thinking on it are your large datasets all monoblock unsharded ? mine is 335 times 1.18Gb shards.", "The 529.10s was a bit too optimistic. 
I cancelled the reading process once before running it completely, therefore the harddrive cache probably did its work.\r\n\r\nHere are three consecutive runs\r\nFirst run (freshly written to disk):\r\nBlocks read 309702\r\nElapsed time: 1267.74s\r\nSecond run (immediately after):\r\nBlocks read 113944\r\nElapsed time: 417.55s\r\nThird run (immediately after):\r\nBlocks read 42518\r\nElapsed time: 199.19s\r\n", "@lhoestq \r\nFirst test\r\n> elapsed time: 11219.05s\r\n\r\nSecond test running bear with me, for Windows users slight trick to modify original \"disk0\" string:\r\n\r\nFirst find physical unit relevant key in dictionnary\r\n```\r\nimport psutil\r\npsutil.disk_io_counters(perdisk=True)\r\n```\r\n\r\n> {'PhysicalDrive0': sdiskio(read_count=18453286, write_count=4075333, read_bytes=479546467840, write_bytes=161590275072, read_time=20659, write_time=2464),\r\n> 'PhysicalDrive1': sdiskio(read_count=1495778, write_count=388781, read_bytes=548628622336, write_bytes=318234849280, read_time=426066, write_time=19085)}\r\n\r\nIn my case it's _PhysicalDrive1_\r\n\r\nThen insert relevant key's string as _disk_ variable\r\n\r\n```\r\npsutil.disk_io_counters()\r\ndisk = 'PhysicalDrive1' # You may have to change your disk here\r\niocnt1 = psutil.disk_io_counters(perdisk=True)[disk]\r\ntime1 = time.time()\r\ns4000_reloaded = load_from_disk(\"your path here\")\r\ntime2 = time.time()\r\niocnt2 = psutil.disk_io_counters(perdisk=True)[disk]\r\nprint(f\"Blocks read {iocnt2.read_count - iocnt1.read_count}\") # Blocks read 18\r\nprint(f\"Elapsed time: {time2 - time1:.02f}s\") # Elapsed time: 14.60s\r\n```", "@lhoestq\r\nSecond test\r\n\r\n> Blocks read 1265609\r\n> Elapsed time: 11216.55s", "@lhoestq any luck ?", "Unfortunately no. Thanks for running the benchmark though, it shows that you machine does a lot of read operations. This is not expected: in other machines it does almost no read operations which enables a very fast loading.\r\n\r\nI did some tests on google colab and have the same issue. The first time the dataset arrow file is memory mapped takes always a lot of time (time seems linear with respect to the dataset size). Reloading the dataset is then instantaneous since the arrow file has already been memory mapped.\r\n\r\nI also tried using the Arrow IPC file format (see #1933) instead of the current streaming format that we use but it didn't help.\r\n\r\nMemory mapping is handled by the OS and depends on the disk you're using, so I'm not sure we can do much about it. I'll continue to investigate anyway, because I still don't know why in some cases it would go through the entire file (high `Blocks read ` as in your tests) and in other cases it would do almost no reading.", "@lhoestq thanks for the effort, let's stay in touch", "Just want to say that I am seeing the same issue. Dataset size if 268GB and it takes **3 hours** to load `load_from_disk`, using dataset version `1.9.0`. Filesystem underneath is `Lustre` ", "Hi @lhoestq, confirmed Windows issue, exact same code running on Linux OS total loading time about 3 minutes.", "Hmm that's different from what I got. I was on Ubuntu when reporting the initial issue." ]
Slow dataloading with big datasets issue persists
{ "+1": 5, "-1": 0, "confused": 0, "eyes": 4, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 9, "url": "https://api.github.com/repos/huggingface/datasets/issues/2252/reactions" }
MDU6SXNzdWU4NjU4NzA3MTA=
null
2021-04-23T08:18:20Z
https://api.github.com/repos/huggingface/datasets/issues/2252/comments
Hi, I reported very slow data fetching when the data is large (#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122). However, the problem seems to persist. Here are the profiled results: 1) Running with 60GB ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 517.96 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ model_backward | 0.26144 |100 | 26.144 | 5.0475 | model_forward | 0.11123 |100 | 11.123 | 2.1474 | get_train_batch | 0.097121 |100 | 9.7121 | 1.8751 | ``` 2) Running with 600GB, datasets==1.6.0 ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 4563.2 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ get_train_batch | 5.1279 |100 | 512.79 | 11.237 | model_backward | 4.8394 |100 | 483.94 | 10.605 | model_forward | 0.12162 |100 | 12.162 | 0.26652 | ``` I see that `get_train_batch` lags when the data is large. Could this be related to a different issue? I would be happy to provide the necessary information to investigate.
{ "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hwijeen", "id": 29157715, "login": "hwijeen", "node_id": "MDQ6VXNlcjI5MTU3NzE1", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "repos_url": "https://api.github.com/users/hwijeen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "type": "User", "url": "https://api.github.com/users/hwijeen" }
https://api.github.com/repos/huggingface/datasets/issues/2252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2252/timeline
closed
false
2,252
null
2024-01-26T15:10:28Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
865,848,705
https://api.github.com/repos/huggingface/datasets/issues/2251
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2251/events
[]
null
2021-04-23T07:51:03Z
[]
https://github.com/huggingface/datasets/issues/2251
NONE
null
null
null
[]
While running run_qa.py, I ran into a ValueError
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2251/reactions" }
MDU6SXNzdWU4NjU4NDg3MDU=
null
2021-04-23T07:51:03Z
https://api.github.com/repos/huggingface/datasets/issues/2251/comments
Command: python3 run_qa.py --model_name_or_path hyunwoongko/kobart --dataset_name squad_kor_v2 --do_train --do_eval --per_device_train_batch_size 8 --learning_rate 3e-5 --num_train_epochs 3 --max_seq_length 512 --doc_stride 128 --output_dir /tmp/debug_squad/ Error: ValueError: External features info don't match the dataset: Got {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answer': {'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None)}, 'url': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None)} with type struct<answer: struct<text: string, answer_start: int32, html_answer_start: int32>, context: string, id: string, question: string, raw_html: string, title: string, url: string> but expected something like {'answer': {'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None)}, 'context': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None)} with type struct<answer: struct<answer_start: int32, html_answer_start: int32, text: string>, context: string, id: string, question: string, raw_html: string, title: string, url: string> I didn't encounter this error 4 hours ago. Any solutions for this kind of issue? It looks like the obtained dataset format follows the 'Data Fields' ordering, while the expected one follows 'Data Instances'.
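One hedged way to rule out a stale cache, which is a common cause of this kind of mismatch after a dataset script update (the `download_mode` string value is an assumption about the `load_dataset` API):

```python
from datasets import load_dataset

# Force a fresh download/preparation, bypassing the cached Arrow files
ds = load_dataset("squad_kor_v2", split="train", download_mode="force_redownload")
print(ds.features)  # compare the field ordering against the expected schema above
```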
{ "avatar_url": "https://avatars.githubusercontent.com/u/44570724?v=4", "events_url": "https://api.github.com/users/nlee0212/events{/privacy}", "followers_url": "https://api.github.com/users/nlee0212/followers", "following_url": "https://api.github.com/users/nlee0212/following{/other_user}", "gists_url": "https://api.github.com/users/nlee0212/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nlee0212", "id": 44570724, "login": "nlee0212", "node_id": "MDQ6VXNlcjQ0NTcwNzI0", "organizations_url": "https://api.github.com/users/nlee0212/orgs", "received_events_url": "https://api.github.com/users/nlee0212/received_events", "repos_url": "https://api.github.com/users/nlee0212/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nlee0212/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nlee0212/subscriptions", "type": "User", "url": "https://api.github.com/users/nlee0212" }
https://api.github.com/repos/huggingface/datasets/issues/2251/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2251/timeline
open
false
2,251
null
null
null
false
865,402,449
https://api.github.com/repos/huggingface/datasets/issues/2250
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2250/events
[]
null
2022-03-30T08:29:47Z
[]
https://github.com/huggingface/datasets/issues/2250
NONE
completed
null
null
[ "Hi,\r\n\r\n1. try\r\n ```python\r\n dataset = load_dataset(\"text\", data_files={\"train\": [\"a1.txt\", \"b1.txt\"], \"test\": [\"c1.txt\"]})\r\n ```\r\n instead.\r\n\r\n Sadly, I can't reproduce the error on my machine. If the above code doesn't resolve the issue, try to update the library to the \r\n newest version (`pip install datasets --upgrade`).\r\n\r\n2. https://github.com/huggingface/transformers/blob/3ed5e97ba04ce9b24b4a7161ea74572598a4c480/examples/pytorch/language-modeling/run_mlm.py#L258-L259\r\nThis is the original code. You'll have to modify the example source to work with multiple train files. To make it easier, let's say \"|\" will act as a delimiter between files:\r\n ```python\r\n if data_args.train_file is not None:\r\n data_files[\"train\"] = data_args.train_file.split(\"|\") # + .split(\"|\")\r\n ```\r\n Then call the script as follows (**dataset_name must be None**):\r\n ```bash\r\n python run_mlm.py [... other args] --train_file a1.txt|b1.txt\r\n ```", "i meet the same error with datasets 1.11.0, is there any insight about this?" ]
Some issues loading local txt files as a Dataset for run_mlm.py
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2250/reactions" }
MDU6SXNzdWU4NjU0MDI0NDk=
null
2021-04-22T19:39:13Z
https://api.github.com/repos/huggingface/datasets/issues/2250/comments
![image](https://user-images.githubusercontent.com/14968123/115773877-18cef300-a3c6-11eb-8e58-a9cbfd1001ec.png) First of all, I tried to load 3 .txt files as a dataset (the directory and permissions are definitely OK), but I ran into the error below. > FileNotFoundError: [Errno 2] No such file or directory: 'c' Removing one of the training .txt files fixes it, and if I put all files as training it's also OK. ![image](https://user-images.githubusercontent.com/14968123/115774207-867b1f00-a3c6-11eb-953b-905cfb112d25.png) ![image](https://user-images.githubusercontent.com/14968123/115774264-9b57b280-a3c6-11eb-9f36-7b109f0e5a31.png) After this, my question is how I could use this defined Dataset with run_mlm.py for from-scratch pretraining. Using --train_file path_to_train_file, I can only use one .txt, .csv, or .json file. I tried to set my defined Dataset as --dataset_name, but the below issue occurs. > Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 336, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py", line 291, in cached_path use_auth_token=download_config.use_auth_token, File "/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py", line 621, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/dataset/dataset.py > During handling of the above exception, another exception occurred: > Traceback (most recent call last): File "run_mlm.py", line 486, in <module> main() File "run_mlm.py", line 242, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir) File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 719, in load_dataset use_auth_token=use_auth_token, File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 347, in prepare_module combined_path, github_file_path FileNotFoundError: Couldn't find file locally at dataset/dataset.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.6.0/datasets/dataset/dataset.py. The file is also not present on the master branch on github.
{ "avatar_url": "https://avatars.githubusercontent.com/u/14968123?v=4", "events_url": "https://api.github.com/users/alighofrani95/events{/privacy}", "followers_url": "https://api.github.com/users/alighofrani95/followers", "following_url": "https://api.github.com/users/alighofrani95/following{/other_user}", "gists_url": "https://api.github.com/users/alighofrani95/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alighofrani95", "id": 14968123, "login": "alighofrani95", "node_id": "MDQ6VXNlcjE0OTY4MTIz", "organizations_url": "https://api.github.com/users/alighofrani95/orgs", "received_events_url": "https://api.github.com/users/alighofrani95/received_events", "repos_url": "https://api.github.com/users/alighofrani95/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alighofrani95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alighofrani95/subscriptions", "type": "User", "url": "https://api.github.com/users/alighofrani95" }
https://api.github.com/repos/huggingface/datasets/issues/2250/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2250/timeline
closed
false
2,250
null
2022-03-30T08:29:47Z
null
false
865,257,826
https://api.github.com/repos/huggingface/datasets/issues/2249
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2249/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2022-07-06T15:19:48Z
[]
https://github.com/huggingface/datasets/pull/2249
MEMBER
null
false
{ "closed_at": null, "closed_issues": 2, "created_at": "2021-07-21T15:34:56Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-30T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/8", "id": 6968069, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels", "node_id": "MI_kwDODunzps4AalMF", "number": 8, "open_issues": 4, "state": "open", "title": "1.12", "updated_at": "2021-10-13T10:26:33Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/8" }
[ "> If you pass a dictionary like this:\r\n> \r\n> ```\r\n> {\"main_metadata\": url_to_main_data,\r\n> \"secondary_metadata\": url_to_sec_data,\r\n> \"train\": url_train_data,\r\n> \"test\": url_test_data}\r\n> ```\r\n> \r\n> then only the train or test keys will be kept, which I feel not intuitive.\r\n> \r\n> For example if the users asks to load the \"train\" split, then the main and secondary metadata won't be downloaded.\r\n> You can fix that by keeping all the keys except the splits to ignore\r\n\r\nHi @lhoestq, I have been thinking about this and I think it is worth that we discuss about it.\r\n\r\nWhen I created this PR, my first idea was to create a \"hack\" inside the download manager that will be able to filter some split(s) without touching any dataset script. Of course, the download manager does not know about splits logic, and thus this trick would only work for some very specific datasets: only the ones containing that pass a dict to the download manager containing only the keys \"train\", \"validation\", \"test\" (or the one passed by the user for advanced users knowing they can do it), e.g. the `natural_questions` dataset (which was one of the targets).\r\n\r\nThe big inconvenient of this approach is that it is not applicable to many datasets (or worse, it should be constantly tweaked to cope with exceptional cases). One exceptional case is the one you pointed out. But I see others:\r\n- the split keys can be different: train, test, dev, val, validation, eval,...\r\n- in `hope_edi` dataset, the split keys are: TRAIN_DOWNLOAD_URL, VALIDATION_DOWNLOAD_URL\r\n- in `few_rel` dataset, the split keys are: train_wiki, val_nyt, val_pubmed,..., pid2name\r\n- in `curiosity_dialogs`, the split keys are: train, val, test, test_zero; this means that for every split we pass, we will also get test_zero\r\n- in `deal_or_no_dialog`, each of the splits URL is passed separately to the download manager, so all splits would be always downloaded\r\n- etc.\r\n\r\nThen after discussing, another idea emerged: pass a `split` parameter to `_split_generators`, which know about the splits logic, so that it can handle which splits are passed to the download manager. This approach is more accurate and can be tweaked so that it works with all the datasets we want. The only inconvenient is that then for every target dataset, we must modify its corresponding `_split_generators` script method.\r\n\r\nMy point is that I don't think it is a good idea to implement both approaches. They could even interfere with each other! \r\n\r\nIf you agree, I would implement ONLY the second one, which is simpler, more consistent and stable and will avoid future problems.", "Hi @albertvillanova !\r\nYup I agree with you, implementing the 2nd approach seems to be the right solution" ]
Allow downloading/processing/caching only specific splits
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2249/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIxMzU1MzE3
{ "diff_url": "https://github.com/huggingface/datasets/pull/2249.diff", "html_url": "https://github.com/huggingface/datasets/pull/2249", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2249.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2249" }
2021-04-22T17:51:44Z
https://api.github.com/repos/huggingface/datasets/issues/2249/comments
Allow downloading/processing/caching only specific splits without downloading/processing/caching the other splits. This PR implements two steps to handle only specific splits: - it allows processing/caching only specific splits into Arrow files - for some simple cases, it allows downloading only specific splits (which is more intricate as it depends on the user-defined method `_split_generators`) This PR makes several assumptions: - `DownloadConfig` contains the configuration settings for downloading - the parameter `split` passed to `load_dataset` is just a parameter for loading (from cache), not for downloading
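To illustrate the second assumption, a small usage sketch (this is current behavior, not new API): `split` selects what is returned, while this PR would additionally let the download manager skip the other splits' files when possible.

```python
from datasets import load_dataset

# Returns only the train split; with this PR, downloading/processing of the
# other splits could be skipped as well (for datasets whose URLs allow it).
train = load_dataset("natural_questions", split="train")
```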
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2249/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2249/timeline
open
false
2,249
null
null
null
true
864,853,447
https://api.github.com/repos/huggingface/datasets/issues/2248
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2248/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2021-04-27T15:29:21Z
[]
https://github.com/huggingface/datasets/pull/2248
MEMBER
null
false
{ "closed_at": "2021-05-31T16:20:53Z", "closed_issues": 3, "created_at": "2021-04-09T13:16:31Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-05-14T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/3", "id": 6644287, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels", "node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==", "number": 3, "open_issues": 0, "state": "closed", "title": "1.7", "updated_at": "2021-05-31T16:20:53Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/3" }
[]
Implement Dataset to JSON
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2248/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIxMDEyNzg5
{ "diff_url": "https://github.com/huggingface/datasets/pull/2248.diff", "html_url": "https://github.com/huggingface/datasets/pull/2248", "merged_at": "2021-04-27T15:29:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/2248.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2248" }
2021-04-22T11:46:51Z
https://api.github.com/repos/huggingface/datasets/issues/2248/comments
Implement `Dataset.to_json`.
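A short usage sketch of the new method (the output path is a placeholder; by default the export uses the JSON Lines format, one object per line):

```python
from datasets import load_dataset

dataset = load_dataset("squad", split="validation")
# Serialize the Arrow-backed dataset to a JSON Lines file on disk.
dataset.to_json("squad_validation.jsonl")
```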
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2248/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2248/timeline
closed
false
2,248
null
2021-04-27T15:29:20Z
null
true
864,817,520
https://api.github.com/repos/huggingface/datasets/issues/2247
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2247/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2021-07-26T13:28:52Z
[]
https://github.com/huggingface/datasets/pull/2247
MEMBER
null
false
{ "closed_at": "2021-09-02T05:34:03Z", "closed_issues": 2, "created_at": "2021-07-09T05:49:00Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-07-30T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/7", "id": 6931350, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/7/labels", "node_id": "MDk6TWlsZXN0b25lNjkzMTM1MA==", "number": 7, "open_issues": 0, "state": "closed", "title": "1.11", "updated_at": "2021-09-02T05:34:03Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/7" }
[ "Hi @albertvillanova , I'll implement the parquet builder as an ArrowBasedBuilder if you don't mind", "closing in favor of #2537 that is already merged" ]
Implement Dataset from Parquet
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2247/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIwOTgzNzY3
{ "diff_url": "https://github.com/huggingface/datasets/pull/2247.diff", "html_url": "https://github.com/huggingface/datasets/pull/2247", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2247.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2247" }
2021-04-22T11:01:38Z
https://api.github.com/repos/huggingface/datasets/issues/2247/comments
Implement instantiation of `Dataset` from a Parquet file.
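Although this PR was closed in favor of #2537, a usage sketch of the resulting API (the file name is a placeholder):

```python
from datasets import Dataset

# Instantiate an Arrow-backed Dataset directly from a local Parquet file.
dataset = Dataset.from_parquet("data.parquet")
```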
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2247/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2247/timeline
closed
false
2,247
null
2021-07-26T13:28:51Z
null
true
864,220,031
https://api.github.com/repos/huggingface/datasets/issues/2246
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2246/events
[]
null
2021-04-26T16:13:59Z
[]
https://github.com/huggingface/datasets/pull/2246
CONTRIBUTOR
null
false
null
[ "@lhoestq Just fixed the code style issues— I think it should be good to merge now :)" ]
Faster map w/ input_columns & faster slicing w/ Iterable keys
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2246/reactions" }
MDExOlB1bGxSZXF1ZXN0NjIwNDg3OTUw
{ "diff_url": "https://github.com/huggingface/datasets/pull/2246.diff", "html_url": "https://github.com/huggingface/datasets/pull/2246", "merged_at": "2021-04-26T16:13:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/2246.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2246" }
2021-04-21T19:49:07Z
https://api.github.com/repos/huggingface/datasets/issues/2246/comments
@lhoestq Fixes #2193 - `map` now uses `with_format` to only load needed columns in memory when `input_columns` is set - Slicing datasets with Iterables of indices now uses a new `Table.fast_gather` method, implemented with `np.searchsorted`, to find the appropriate batch indices all at once. `pa.concat_tables` is no longer used for this; we just call `pa.Table.from_batches` with a list of all the batch slices. Together these changes have sped up batched `map()` calls over subsets of columns quite considerably in my initial testing.
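For intuition, a toy stand-in for the `np.searchsorted` lookup described above (not the actual implementation; names are illustrative):

```python
import numpy as np

# Pretend the table is stored as three record batches of these lengths.
batch_lengths = [3, 5, 2]
batch_offsets = np.cumsum(batch_lengths)            # [3, 8, 10]

# For each requested row index, binary-search which batch it falls in,
# instead of building and concatenating one sub-table per index.
indices = np.array([0, 4, 9])
batch_ids = np.searchsorted(batch_offsets, indices, side="right")
starts = np.concatenate(([0], batch_offsets))
rows_in_batch = indices - starts[batch_ids]
print(batch_ids, rows_in_batch)                     # [0 1 2] [0 1 1]
```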
{ "avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4", "events_url": "https://api.github.com/users/norabelrose/events{/privacy}", "followers_url": "https://api.github.com/users/norabelrose/followers", "following_url": "https://api.github.com/users/norabelrose/following{/other_user}", "gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/norabelrose", "id": 39116809, "login": "norabelrose", "node_id": "MDQ6VXNlcjM5MTE2ODA5", "organizations_url": "https://api.github.com/users/norabelrose/orgs", "received_events_url": "https://api.github.com/users/norabelrose/received_events", "repos_url": "https://api.github.com/users/norabelrose/repos", "site_admin": false, "starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions", "type": "User", "url": "https://api.github.com/users/norabelrose" }
https://api.github.com/repos/huggingface/datasets/issues/2246/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2246/timeline
closed
false
2,246
null
2021-04-26T16:13:59Z
null
true
863,191,655
https://api.github.com/repos/huggingface/datasets/issues/2245
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2245/events
[]
null
2021-05-10T18:04:37Z
[]
https://github.com/huggingface/datasets/pull/2245
CONTRIBUTOR
null
false
null
[ "@lhoestq The tests for key type and duplicate keys have been added and verified successfully.\r\nAfter generating with an intentionally faulty `mnist` script, when there is an incompatible key type, it shows:\r\n\r\n```\r\nDownloading and preparing dataset mnist/mnist (download: 11.06 MiB, generated: 60.62 MiB, post-processed: Unknown size, total: 71.67 MiB) to C:\\Users\\nikhil\\.cache\\huggingface\\datasets\\mnist\\mnist\\1.0.0\\5064c25e57a1678f700d2dc798ffe8a6d519405cca7d33670fffda477857a994...\r\n0 examples [00:00, ? examples/s]2021-04-26 02:50:03.703836: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll\r\n\r\nFAILURE TO GENERATE DATASET: Invalid key type detected\r\nFound Key [0, 0] of type <class 'list'>\r\nKeys should be either str, int or bytes type\r\n```\r\n\r\nIn the case of duplicate keys, it now gives:\r\n\r\n```\r\nDownloading and preparing dataset mnist/mnist (download: 11.06 MiB, generated: 60.62 MiB, post-processed: Unknown size, total: 71.67 MiB) to C:\\Users\\nikhil\\.cache\\huggingface\\datasets\\mnist\\mnist\\1.0.0\\5064c25e57a1678f700d2dc798ffe8a6d519405cca7d33670fffda477857a994...\r\n0 examples [00:00, ? examples/s]2021-04-26 02:53:13.498579: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\load.py\", line 746, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 587, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 665, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 1002, in _prepare_split\r\n writer.write(example, key)\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\arrow_writer.py\", line 321, in write\r\n self.check_duplicates()\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\arrow_writer.py\", line 331, in check_duplicates\r\n raise DuplicatedKeysError(key)\r\ndatasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 234467\r\nKeys should be unique and deterministic in nature\r\n```\r\nPlease let me know if this is what we wanted to implement. Thanks!", "This looks pretty cool !\r\nWe can make focus on the GeneratorBasedBuilder for now yes.\r\n\r\nDo you think we could make the ArrowWriter not look for duplicates by default ?\r\nThis way we can just enable duplicate detections when instantiating the writer in the GeneratorBasedBuilder for now.", "Thank you @lhoestq\r\n\r\n\r\n\r\n> Do you think we could make the ArrowWriter not look for duplicates by default ?\r\n\r\nWe can definitely do that by including a `check_duplicates` argument while instantiating `ArrowWriter()`. \r\n\r\nHowever, since only `GeneratorBasedBuilder` uses the `write()` function (which includes the detection code) and the others like `ArrowBasedBuilder` use `write_table()` which remains as it was (without duplicate detection). I don't think it would be necessary.\r\n\r\nNonetheless, doing this would require just some small changes. Please let me know your thoughts on this. 
Thanks!", "I like the idea of having the duplicate detection optional for other uses of the ArrowWriter.\r\nThis class is the main tool to write python data in arrow format so I'd expect it to be flexible.\r\nThat's why I think by default it shouldn't require users to provide keys or do any duplicates detection.\r\n\r\nAn alternative would be to subclass the writer to include duplicates detection in another class.\r\n\r\nBoth options are fine for me, let me know what you think !", "> This class is the main tool to write python data in arrow format so I'd expect it to be flexible.\r\n> That's why I think by default it shouldn't require users to provide keys or do any duplicates detection.\r\n\r\nWell, that makes sense as the writer can indeed be used for other purposes as well.\r\n\r\n> We can definitely do that by including a `check_duplicates` argument while instantiating `ArrowWriter()`.\r\n\r\nI think that this would be the simplest and the more efficient option for achieving this as subclassing the writer only for this would lead to unnecessary complexity and code duplication (in case of `writer()`). \r\n\r\nI will be adding the changes soon. Thanks for the feedback @lhoestq!", "@lhoestq I have pushed the final changes just now. \r\nNow, the keys and duplicate checking will be necessary only when the `ArrowWriter` is initialized with `check_duplicates=True` specifically (in this case, for `GeneratorBasedBuilders`)\r\n\r\nLet me know if this is what was required. Thanks!", "@lhoestq Thanks for the feedback! I will be adding the tests for the same very soon. \r\n\r\nHowever, I'm not sure as to what exactly is causing the `segmentation fault` in the failing CI tests. It seems to be something from `test_concatenation_table_cast` from `test_table.py`, but I'm not sure as to what exactly. Would be great if you could help. Thanks!", "You can merge master into your branch to fix this issue.\r\nBasically pyarrow 4.0.0 has a segfault issue (which has now been resolved on the master branch of pyarrow).\r\nSo until 4.0.1 comes out we changed to using `pyarrow<4.0.0` recently.", "@lhoestq Thanks for the help with the CI failures. Apologies for the multiple merge commits. My local repo got messy while merging which led to this.\r\nWill be pushing the commit for the tests soon!", "Hey @lhoestq, I've just added the required tests for checking key duplicates and invalid key data types.\r\nI think we have caught a nice little issue as 27 datasets are currently using non-unique keys (hence, the failing tests: All these datasets are giving `DuplicateKeysError` during testing). \r\nThese datasets were not detected earlier as there was no key checking when `num_examples < writer_batch_size` due to which they passed the dummy data generation test. This bug was fixed by adding the test to `writer.finalize()` method as well for checking any leftover examples from batches. \r\n\r\nI'd like to make changes to the faulty datasets' scripts. However, I was wondering if I should do that in this PR itself or open a new PR as this might get messy in the same PR. Let me know your thoughts on this. Thanks!", "Hi ! Once https://github.com/huggingface/datasets/pull/2333 is merged, feel free to merge master into your branch to fix the CI :)", "Thanks a lot for the help @lhoestq. Besides merging the new changes, I guess this PR is completed for now :)", "I just merged the PR, feel free to merge `master` into your branch. It should fix most most of the CI issues. 
If there are some left we can fix them in this PR :)", "@lhoestq Looks like the PR is completed now. Thanks for helping me out so much in this :)", "Hey @lhoestq, I've added the test and corrected the Cl errors as well. Do let me know if this requires any change. Thanks!", "Merging. I'll update the comment on the master branch (for some reason I can edit files on this branch)", "@lhoestq Thank you for the help and feedback. Feels great to contribute!" ]
Add `key` type and duplicates verification with hashing
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2245/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE5NjQzMjQ3
{ "diff_url": "https://github.com/huggingface/datasets/pull/2245.diff", "html_url": "https://github.com/huggingface/datasets/pull/2245", "merged_at": "2021-05-10T17:31:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/2245.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2245" }
2021-04-20T20:03:19Z
https://api.github.com/repos/huggingface/datasets/issues/2245/comments
Closes #2230 There is currently no verification of the data type or the uniqueness of the keys yielded by the `dataset_builder`. This PR is currently a work in progress with the following goals: - [x] Add `hash_salt` to `ArrowWriter` so that keys belonging to different splits have different hashes - [x] Add a `key` argument to `ArrowWriter.write()` for hashing - [x] Add a hashing class which takes an input key of a certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5` - [x] Create a function giving a custom error message when non-unique keys are found **[This will take care of type-checking for keys]** - [x] Check for duplicate keys in `writer.write()` for each batch [**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`] @lhoestq Thank you for the feedback. It would be great to have your guidance on this!
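As a reference point, a minimal sketch of the salted 128-bit MD5 hashing described in the checklist above (names and messages are illustrative, not the final implementation):

```python
from hashlib import md5

def hash_key(key, salt: str = "train") -> int:
    """Hash a str/int/bytes key into a 128-bit integer, salted per split."""
    if isinstance(key, int):
        key = str(key)
    if isinstance(key, str):
        key = key.encode("utf-8")
    if not isinstance(key, bytes):
        raise TypeError(f"Keys should be str, int or bytes, got {type(key)}")
    return int.from_bytes(md5(salt.encode("utf-8") + key).digest(), "big")

# Duplicate detection then reduces to a set-membership check on the hashes.
seen = set()
for key in ["a", 1, "a"]:
    h = hash_key(key)
    if h in seen:
        print(f"Found duplicate key: {key}")  # triggers for the second "a"
    seen.add(h)
```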
{ "avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4", "events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}", "followers_url": "https://api.github.com/users/NikhilBartwal/followers", "following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}", "gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NikhilBartwal", "id": 42388668, "login": "NikhilBartwal", "node_id": "MDQ6VXNlcjQyMzg4NjY4", "organizations_url": "https://api.github.com/users/NikhilBartwal/orgs", "received_events_url": "https://api.github.com/users/NikhilBartwal/received_events", "repos_url": "https://api.github.com/users/NikhilBartwal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions", "type": "User", "url": "https://api.github.com/users/NikhilBartwal" }
https://api.github.com/repos/huggingface/datasets/issues/2245/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2245/timeline
closed
false
2,245
null
2021-05-10T17:31:22Z
null
true
863,029,946
https://api.github.com/repos/huggingface/datasets/issues/2244
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2244/events
[]
null
2022-07-06T15:19:48Z
[]
https://github.com/huggingface/datasets/pull/2244
MEMBER
null
false
{ "closed_at": null, "closed_issues": 2, "created_at": "2021-07-21T15:34:56Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-30T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/8", "id": 6968069, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels", "node_id": "MI_kwDODunzps4AalMF", "number": 8, "open_issues": 4, "state": "open", "title": "1.12", "updated_at": "2021-10-13T10:26:33Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/8" }
[ "@lhoestq, I think this reaches some memory limit on Linux instances... (?)", "It looks like the `comet` metric test fails because it tries to load a model in memory.\r\nIn the tests I think we have `patch_comet` that mocks the model download + inference. Not sure why it didn't work though.\r\nI can take a look tomorrow (this afternoon is the pytorch ecosystem day)", "@lhoestq thanks for the hint: I'm going to have a look at that mock... ;)", "@lhoestq finally I did not find out why the mock is not used... If you can give me some other hint tomorrow..." ]
Set specific cache directories per test function call
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2244/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE5NTAyODc0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2244.diff", "html_url": "https://github.com/huggingface/datasets/pull/2244", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2244.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2244" }
2021-04-20T17:06:22Z
https://api.github.com/repos/huggingface/datasets/issues/2244/comments
Implement specific cache directories (datasets, metrics and modules) per test function call. Currently, the cache directories are set within the temporary test directory, but they are shared across all test function calls. This PR implements specific cache directories for each test function call, so that tests are atomic and there are no side effects.
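For context, a sketch of what per-test cache isolation can look like with pytest (this is an assumption for illustration, not the PR's code; note that `datasets` reads these environment variables at import time, so in practice the config values may need to be patched directly):

```python
import pytest

@pytest.fixture(autouse=True)
def isolated_hf_caches(tmp_path, monkeypatch):
    # Give every test function call its own datasets/metrics/modules caches
    # so that one test cannot see artifacts cached by another.
    monkeypatch.setenv("HF_DATASETS_CACHE", str(tmp_path / "datasets"))
    monkeypatch.setenv("HF_METRICS_CACHE", str(tmp_path / "metrics"))
    monkeypatch.setenv("HF_MODULES_CACHE", str(tmp_path / "modules"))
```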
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2244/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2244/timeline
open
false
2,244
null
null
null
true
862,909,389
https://api.github.com/repos/huggingface/datasets/issues/2243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2243/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2021-05-03T17:54:33Z
[]
https://github.com/huggingface/datasets/issues/2243
NONE
completed
null
null
[ "Hi @villmow, thanks for reporting.\r\n\r\nCould you please try with the Datasets version 1.6? We released it yesterday and it fixes some issues about the processing speed. You can see the fix implemented by @lhoestq here: #2122.\r\n\r\nOnce you update Datasets, please confirm if the problem persists.", "Hi @albertvillanova, thanks for the reply. I just tried the new version and the problem still persists. \r\n\r\nDo I need to rebuild the saved dataset (which I load from disk) with the 1.6.0 version of datasets? My script loads this dataset and creates new datasets from it. I tried it without rebuilding.\r\n\r\nSee this short video of what happens. It does not create all processes at the same time:\r\n\r\nhttps://user-images.githubusercontent.com/2743060/115720139-0da3a500-a37d-11eb-833a-9bbacc70868d.mp4\r\n\r\n", "There can be a bit of delay between the creations of the processes but this delay should be the same for both your `map` calls. We should look into this.\r\nAlso if you hav some code that reproduces this issue on google colab that'd be really useful !\r\n\r\nRegarding the speed differences:\r\nThis looks like a similar issue as https://github.com/huggingface/datasets/issues/1992 who is experiencing the same speed differences between processes.\r\nThis is a known bug that we are investigating. As of now I've never managed to reproduce it on my machine so it's pretty hard for me to find where this issue comes from.\r\n", "Upgrade to 1.6.1 solved my problem somehow. I did not change any of my code, but now it starts all processes around the same time.", "Nice ! I'm glad this works now.\r\nClosing for now, but feel free to re-open if you experience this issue again." ]
Map is slow and processes batches one after another
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2243/reactions" }
MDU6SXNzdWU4NjI5MDkzODk=
null
2021-04-20T14:58:20Z
https://api.github.com/repos/huggingface/datasets/issues/2243/comments
## Describe the bug This bug is somewhat unclear to me and I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't give exact steps to reproduce, I'm sorry. I process a large dataset in a two-step process. I first call map on a dataset I load from disk and create a new dataset from it. This works as expected and `map` uses all the workers I started it with. Then I process the dataset created by the first step, again with `map`, which is really slow and starts only one or two processes at a time. The number of processes is the same for both steps. Pseudo code (a runnable toy version follows this report): ```python ds = datasets.load_from_disk("path") new_dataset = ds.map(work, batched=True, ...) # fast, uses all processes final_dataset = new_dataset.map(work2, batched=True, ...) # slow, starts one process after another ``` ## Expected results The second stage should be as fast as the first stage. ## Versions - Datasets: 1.5.0 - Python: 3.8.8 (default, Feb 24 2021, 21:46:12) - Platform: Linux-5.4.0-60-generic-x86_64-with-glibc2.10 Do you guys have any idea? Thanks a lot!
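A runnable toy version of that two-stage pattern, in case it helps reproduce (functions, sizes, and `num_proc` are placeholders):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10_000))})

def work(batch):
    return {"y": [v * 2 for v in batch["x"]]}

def work2(batch):
    return {"z": [v + 1 for v in batch["y"]]}

new_dataset = ds.map(work, batched=True, num_proc=4)              # stage 1: fast
final_dataset = new_dataset.map(work2, batched=True, num_proc=4)  # stage 2: slow in the report
```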
{ "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/villmow", "id": 2743060, "login": "villmow", "node_id": "MDQ6VXNlcjI3NDMwNjA=", "organizations_url": "https://api.github.com/users/villmow/orgs", "received_events_url": "https://api.github.com/users/villmow/received_events", "repos_url": "https://api.github.com/users/villmow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "type": "User", "url": "https://api.github.com/users/villmow" }
https://api.github.com/repos/huggingface/datasets/issues/2243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2243/timeline
closed
false
2,243
null
2021-05-03T17:54:32Z
null
false
862,870,205
https://api.github.com/repos/huggingface/datasets/issues/2242
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2242/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2021-04-20T15:02:45Z
[]
https://github.com/huggingface/datasets/issues/2242
NONE
completed
null
null
[ "This should be fixed now!\r\n\r\ncc @srush " ]
Link to datasets viwer on Quick Tour page returns "502 Bad Gateway"
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2242/reactions" }
MDU6SXNzdWU4NjI4NzAyMDU=
null
2021-04-20T14:19:51Z
https://api.github.com/repos/huggingface/datasets/issues/2242/comments
Link to datasets viwer (https://huggingface.co/datasets/viewer/) on Quick Tour page (https://huggingface.co/docs/datasets/quicktour.html) returns "502 Bad Gateway" The same error with https://huggingface.co/datasets/viewer/?dataset=glue&config=mrpc
{ "avatar_url": "https://avatars.githubusercontent.com/u/6735707?v=4", "events_url": "https://api.github.com/users/martavillegas/events{/privacy}", "followers_url": "https://api.github.com/users/martavillegas/followers", "following_url": "https://api.github.com/users/martavillegas/following{/other_user}", "gists_url": "https://api.github.com/users/martavillegas/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/martavillegas", "id": 6735707, "login": "martavillegas", "node_id": "MDQ6VXNlcjY3MzU3MDc=", "organizations_url": "https://api.github.com/users/martavillegas/orgs", "received_events_url": "https://api.github.com/users/martavillegas/received_events", "repos_url": "https://api.github.com/users/martavillegas/repos", "site_admin": false, "starred_url": "https://api.github.com/users/martavillegas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/martavillegas/subscriptions", "type": "User", "url": "https://api.github.com/users/martavillegas" }
https://api.github.com/repos/huggingface/datasets/issues/2242/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2242/timeline
closed
false
2,242
null
2021-04-20T15:02:45Z
null
false
862,696,460
https://api.github.com/repos/huggingface/datasets/issues/2241
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2241/events
[]
null
2021-04-23T16:21:24Z
[]
https://github.com/huggingface/datasets/pull/2241
CONTRIBUTOR
null
false
null
[ "> And yet another one ! Thanks a lot :)\r\n\r\nI just hope you don’t get fed up with openslr PR 😊 there are still few other datasets created by google in openslr that is not in hf dataset yet\r\n" ]
Add SLR32 to OpenSLR
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2241/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE5MjI0MzIw
{ "diff_url": "https://github.com/huggingface/datasets/pull/2241.diff", "html_url": "https://github.com/huggingface/datasets/pull/2241", "merged_at": "2021-04-23T15:36:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/2241.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2241" }
2021-04-20T11:02:45Z
https://api.github.com/repos/huggingface/datasets/issues/2241/comments
I would like to add SLR32 to OpenSLR. It contains four South African languages: Afrikaans, Sesotho, Setswana and isiXhosa
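Once merged, the new configuration should load like the existing OpenSLR subsets (the config name follows this PR; OpenSLR configs expose a single `train` split):

```python
from datasets import load_dataset

# SLR32: South African corpora (Afrikaans, Sesotho, Setswana, isiXhosa).
slr32 = load_dataset("openslr", "SLR32", split="train")
print(slr32[0]["sentence"])
```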
{ "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cahya-wirawan", "id": 7669893, "login": "cahya-wirawan", "node_id": "MDQ6VXNlcjc2Njk4OTM=", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "type": "User", "url": "https://api.github.com/users/cahya-wirawan" }
https://api.github.com/repos/huggingface/datasets/issues/2241/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2241/timeline
closed
false
2,241
null
2021-04-23T15:36:15Z
null
true
862,537,856
https://api.github.com/repos/huggingface/datasets/issues/2240
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2240/events
[]
null
2021-04-21T09:54:57Z
[]
https://github.com/huggingface/datasets/pull/2240
MEMBER
null
false
null
[]
Clarify how to load wikihow
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2240/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE5MDkyODc5
{ "diff_url": "https://github.com/huggingface/datasets/pull/2240.diff", "html_url": "https://github.com/huggingface/datasets/pull/2240", "merged_at": "2021-04-21T09:54:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/2240.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2240" }
2021-04-20T08:02:58Z
https://api.github.com/repos/huggingface/datasets/issues/2240/comments
Explain more clearly how to load the dataset in the manual download instructions. Related to #2239.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2240/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2240/timeline
closed
false
2,240
null
2021-04-21T09:54:57Z
null
true
861,904,306
https://api.github.com/repos/huggingface/datasets/issues/2239
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2239/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2021-04-20T16:33:11Z
[]
https://github.com/huggingface/datasets/issues/2239
CONTRIBUTOR
completed
null
null
[ "Hi @odellus, thanks for reporting.\r\n\r\nThe `wikihow` dataset has 2 versions:\r\n- `all`: Consisting of the concatenation of all paragraphs as the articles and the bold lines as the reference summaries.\r\n- `sep`: Consisting of each paragraph and its summary.\r\n\r\nTherefore, in order to load it, you have to specify which version you would like, for example:\r\n```python\r\ndataset = load_dataset('wikihow', 'all')\r\n```\r\n\r\nPlease, tell me if this solves your problem.", "Good call out. I did try that and that's when it told me to download the\ndataset. Don't believe I have tried it with local files. Will try first\nthing in the morning and get back to you.\n\nOn Mon, Apr 19, 2021, 11:17 PM Albert Villanova del Moral <\n***@***.***> wrote:\n\n> Hi @odellus <https://github.com/odellus>, thanks for reporting.\n>\n> The wikihow dataset has 2 versions:\n>\n> - all: Consisting of the concatenation of all paragraphs as the\n> articles and the bold lines as the reference summaries.\n> - sep: Consisting of each paragraph and its summary.\n>\n> Therefore, in order to load it, you have to specify which version you\n> would like, for example:\n>\n> dataset = load_dataset('wikihow', 'all')\n>\n> Please, tell me if this solves your problem.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2239#issuecomment-823004146>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABDYI3HVRTBI2QT3BOG262DTJUL57ANCNFSM43GV5BZQ>\n> .\n>\n", "Hi @odellus, yes you are right.\r\n\r\nDue to the server where the `wikihow` dataset is hosted, the dataset can't be downloaded automatically by `huggingface` and you have to download it manually as you did.\r\n\r\nNevertheless, you have to specify which dataset version you would like to load anyway:\r\n```python\r\ndataset = load_dataset('wikihow', 'all', data_dir='./wikihow')\r\n```\r\nor\r\n```python\r\ndataset = load_dataset('wikihow', 'sep', data_dir='./wikihow')\r\n```\r\nI find that the instructions given by `huggingface` are not clear enough: I am going to fix this.\r\nPlease tell me if this eventually works for you.", "That was it. Thank you Albert!" ]
Error loading wikihow dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2239/reactions" }
MDU6SXNzdWU4NjE5MDQzMDY=
null
2021-04-19T21:02:31Z
https://api.github.com/repos/huggingface/datasets/issues/2239/comments
## Describe the bug When attempting to load wikihow into a dataset with ```python from datasets import load_dataset dataset = load_dataset('wikihow', data_dir='./wikihow') ``` I get the message: ``` AttributeError: 'BuilderConfig' object has no attribute 'filename' ``` at the end of a [full stack trace](https://gist.github.com/odellus/602c3b2de52f541d353b1022f320ffc2). ## Steps to reproduce the bug I have followed the instructions for creating a wikihow dataset. The [wikihow dataset site](https://huggingface.co/datasets/wikihow) says to use ```python from datasets import load_dataset dataset = load_dataset('wikihow') ``` to load the dataset. I do so and I get the message ``` AssertionError: The dataset wikihow with config all requires manual data. Please follow the manual download instructions: You need to manually download two wikihow files. An overview of which files to download can be seen at https://github.com/mahnazkoupaee/WikiHow-Dataset. You need to download the following two files manually: 1) https://ucsb.app.box.com/s/ap23l8gafpezf4tq3wapr6u8241zz358 and save the file under <path/to/folder>/wikihowAll.csv 2) https://ucsb.app.box.com/s/7yq601ijl1lzvlfu4rjdbbxforzd2oag and save the file under <path/to/folder>/wikihowSep.csv The <path/to/folder> can e.g. be "~/manual_wikihow_data". Wikihow can then be loaded using the following command `datasets.load_dataset("wikihow", data_dir="<path/to/folder>")`. . Manual data can be loaded with `datasets.load_dataset(wikihow, data_dir='<path/to/manual/data>') ``` So I create a directory `./wikihow` and download `wikihowAll.csv` and `wikihowSep.csv` into the new directory. Then I run ```python from datasets import load_dataset dataset = load_dataset('wikihow', data_dir='./wikihow') ``` that's when I get the [stack trace](https://gist.github.com/odellus/602c3b2de52f541d353b1022f320ffc2) ## Expected results I expected it to load the downloaded files into a dataset. ## Actual results ```python Using custom data configuration default-data_dir=.%2Fwikihow Downloading and preparing dataset wikihow/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/azureuser/.cache/huggingface/datasets/wikihow/default-data_dir=.%2Fwikihow/0.0.0/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2... --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-9-5e4d40142f30> in <module> ----> 1 dataset = load_dataset('wikihow',data_dir='./wikihow') ~/.local/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs) 745 try_from_hf_gcs=try_from_hf_gcs, 746 base_path=base_path,--> 747 use_auth_token=use_auth_token, 748 ) 749 ~/.local/lib/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 577 if not downloaded_from_gcs: 578 self._download_and_prepare( --> 579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 580 ) 581 # Sync info ~/.local/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 632 split_dict = SplitDict(dataset_name=self.name) 633 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 634 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 635 636 # Checksums verification ~/.cache/huggingface/modules/datasets_modules/datasets/wikihow/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2/wikihow.py in _split_generators(self, dl_manager) 132 133 path_to_manual_file = os.path.join( --> 134 os.path.abspath(os.path.expanduser(dl_manager.manual_dir)), self.config.filename 135 ) 136 AttributeError: 'BuilderConfig' object has no attribute 'filename' ``` ## Versions Paste the output of the following code: ```python import datasets import sys import platform print(f""" - Datasets: {datasets.__version__} - Python: {sys.version} - Platform: {platform.platform()} """) ``` ``` - Datasets: 1.5.0 - Python: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0] - Platform: Linux-5.4.0-1046-azure-x86_64-with-Ubuntu-18.04-bionic ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4", "events_url": "https://api.github.com/users/odellus/events{/privacy}", "followers_url": "https://api.github.com/users/odellus/followers", "following_url": "https://api.github.com/users/odellus/following{/other_user}", "gists_url": "https://api.github.com/users/odellus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/odellus", "id": 4686956, "login": "odellus", "node_id": "MDQ6VXNlcjQ2ODY5NTY=", "organizations_url": "https://api.github.com/users/odellus/orgs", "received_events_url": "https://api.github.com/users/odellus/received_events", "repos_url": "https://api.github.com/users/odellus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/odellus/subscriptions", "type": "User", "url": "https://api.github.com/users/odellus" }
https://api.github.com/repos/huggingface/datasets/issues/2239/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2239/timeline
closed
false
2,239
null
2021-04-20T16:33:11Z
null
false
861,518,291
https://api.github.com/repos/huggingface/datasets/issues/2238
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2238/events
[]
null
2021-04-23T15:32:05Z
[]
https://github.com/huggingface/datasets/pull/2238
CONTRIBUTOR
null
false
null
[]
NLU evaluation data
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2238/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE4MTY5NzM5
{ "diff_url": "https://github.com/huggingface/datasets/pull/2238.diff", "html_url": "https://github.com/huggingface/datasets/pull/2238", "merged_at": "2021-04-23T15:32:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/2238.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2238" }
2021-04-19T16:47:20Z
https://api.github.com/repos/huggingface/datasets/issues/2238/comments
New intent classification dataset from https://github.com/xliuhw/NLU-Evaluation-Data
{ "avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4", "events_url": "https://api.github.com/users/dkajtoch/events{/privacy}", "followers_url": "https://api.github.com/users/dkajtoch/followers", "following_url": "https://api.github.com/users/dkajtoch/following{/other_user}", "gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dkajtoch", "id": 32985207, "login": "dkajtoch", "node_id": "MDQ6VXNlcjMyOTg1MjA3", "organizations_url": "https://api.github.com/users/dkajtoch/orgs", "received_events_url": "https://api.github.com/users/dkajtoch/received_events", "repos_url": "https://api.github.com/users/dkajtoch/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions", "type": "User", "url": "https://api.github.com/users/dkajtoch" }
https://api.github.com/repos/huggingface/datasets/issues/2238/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2238/timeline
closed
false
2,238
null
2021-04-23T15:32:05Z
null
true
861,427,439
https://api.github.com/repos/huggingface/datasets/issues/2237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2237/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2021-04-20T14:22:05Z
[]
https://github.com/huggingface/datasets/issues/2237
MEMBER
null
null
null
[ "@albertvillanova I would like to take this up. It would be great if you could point me as to how the dataset size is calculated in HF. Thanks!" ]
Update Dataset.dataset_size after transformed with map
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2237/reactions" }
MDU6SXNzdWU4NjE0Mjc0Mzk=
null
2021-04-19T15:19:38Z
https://api.github.com/repos/huggingface/datasets/issues/2237/comments
After loading a dataset, if we transform it by using `.map`, its `dataset_size` attribute is not updated (a short reproduction sketch follows).
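A short reproduction sketch (the dataset choice is arbitrary; `dataset_size` is read through the public `info` attribute):

```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train")
print(ds.info.dataset_size)   # size recorded when the dataset was generated

ds2 = ds.map(lambda ex: {"sentence1": ex["sentence1"].lower()})
print(ds2.info.dataset_size)  # unchanged, even though new Arrow data was written
```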
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2237/timeline
open
false
2,237
null
null
null
false
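A minimal sketch reproducing the behavior reported in issue #2237 above, assuming the `dataset_size` property exposed through the dataset's info; the dataset name and mapped column are only examples, not part of the report.

```python
from datasets import load_dataset

# `dataset_size` is recorded when the dataset is generated and, per the
# report above, is not recomputed after `.map` transforms the data.
ds = load_dataset("sst", split="train")
print(ds.dataset_size)  # size recorded at generation time

ds2 = ds.map(lambda x: {"sentence_copy": x["sentence"]})  # grows the table
print(ds2.dataset_size)  # still reports the original size, showing the issue
```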
861,388,145
https://api.github.com/repos/huggingface/datasets/issues/2236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2236/events
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
null
2021-04-19T14:46:26Z
[]
https://github.com/huggingface/datasets/issues/2236
NONE
null
null
null
[]
Request to add StrategyQA dataset
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2236/reactions" }
MDU6SXNzdWU4NjEzODgxNDU=
null
2021-04-19T14:46:26Z
https://api.github.com/repos/huggingface/datasets/issues/2236/comments
## Request to add StrategyQA dataset - **Name:** StrategyQA - **Description:** open-domain QA [(project page)](https://allenai.org/data/strategyqa) - **Paper:** [url](https://arxiv.org/pdf/2101.02235.pdf) - **Data:** [here](https://allenai.org/data/strategyqa) - **Motivation:** uniquely-formulated dataset that also includes a question-decomposition breakdown and associated Wikipedia annotations for each step. Good for multi-hop reasoning modeling.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sarahwie", "id": 8027676, "login": "sarahwie", "node_id": "MDQ6VXNlcjgwMjc2NzY=", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "repos_url": "https://api.github.com/users/sarahwie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "type": "User", "url": "https://api.github.com/users/sarahwie" }
https://api.github.com/repos/huggingface/datasets/issues/2236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2236/timeline
open
false
2,236
null
null
null
false
861,040,716
https://api.github.com/repos/huggingface/datasets/issues/2235
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2235/events
[]
null
2021-04-19T12:49:19Z
[]
https://github.com/huggingface/datasets/pull/2235
CONTRIBUTOR
null
false
null
[]
Update README.md
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2235/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE3Nzc0NDUw
{ "diff_url": "https://github.com/huggingface/datasets/pull/2235.diff", "html_url": "https://github.com/huggingface/datasets/pull/2235", "merged_at": "2021-04-19T12:49:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/2235.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2235" }
2021-04-19T08:21:02Z
https://api.github.com/repos/huggingface/datasets/issues/2235/comments
Adding relevant citations (paper accepted at AAAI 2020 & EMNLP 2020) to the benchmark
{ "avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4", "events_url": "https://api.github.com/users/PierreColombo/events{/privacy}", "followers_url": "https://api.github.com/users/PierreColombo/followers", "following_url": "https://api.github.com/users/PierreColombo/following{/other_user}", "gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PierreColombo", "id": 22492839, "login": "PierreColombo", "node_id": "MDQ6VXNlcjIyNDkyODM5", "organizations_url": "https://api.github.com/users/PierreColombo/orgs", "received_events_url": "https://api.github.com/users/PierreColombo/received_events", "repos_url": "https://api.github.com/users/PierreColombo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions", "type": "User", "url": "https://api.github.com/users/PierreColombo" }
https://api.github.com/repos/huggingface/datasets/issues/2235/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2235/timeline
closed
false
2,235
null
2021-04-19T12:49:19Z
null
true
860,442,246
https://api.github.com/repos/huggingface/datasets/issues/2234
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2234/events
[]
null
2021-04-19T10:57:31Z
[]
https://github.com/huggingface/datasets/pull/2234
COLLABORATOR
null
false
null
[]
Fix bash snippet formatting in ADD_NEW_DATASET.md
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2234/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE3MzI4NDU3
{ "diff_url": "https://github.com/huggingface/datasets/pull/2234.diff", "html_url": "https://github.com/huggingface/datasets/pull/2234", "merged_at": "2021-04-19T07:51:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/2234.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2234" }
2021-04-17T16:01:08Z
https://api.github.com/repos/huggingface/datasets/issues/2234/comments
This PR indents the paragraphs around the bash snippets in ADD_NEW_DATASET.md to fix formatting.
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/2234/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2234/timeline
closed
false
2,234
null
2021-04-19T07:51:36Z
null
true
860,097,084
https://api.github.com/repos/huggingface/datasets/issues/2233
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2233/events
[]
null
2021-04-19T08:56:42Z
[]
https://github.com/huggingface/datasets/pull/2233
CONTRIBUTOR
null
false
null
[]
Fix `xnli` dataset tuple key
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2233/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE3MDYwMTkw
{ "diff_url": "https://github.com/huggingface/datasets/pull/2233.diff", "html_url": "https://github.com/huggingface/datasets/pull/2233", "merged_at": "2021-04-19T08:56:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/2233.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2233" }
2021-04-16T19:12:42Z
https://api.github.com/repos/huggingface/datasets/issues/2233/comments
Closes #2229. The `xnli` dataset yields a tuple key in the case of `ar`, which is inconsistent with the acceptable key types (str/int). The key was thus converted to `str`, keeping the original information intact.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4", "events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}", "followers_url": "https://api.github.com/users/NikhilBartwal/followers", "following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}", "gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NikhilBartwal", "id": 42388668, "login": "NikhilBartwal", "node_id": "MDQ6VXNlcjQyMzg4NjY4", "organizations_url": "https://api.github.com/users/NikhilBartwal/orgs", "received_events_url": "https://api.github.com/users/NikhilBartwal/received_events", "repos_url": "https://api.github.com/users/NikhilBartwal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions", "type": "User", "url": "https://api.github.com/users/NikhilBartwal" }
https://api.github.com/repos/huggingface/datasets/issues/2233/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2233/timeline
closed
false
2,233
null
2021-04-19T08:56:42Z
null
true
860,075,931
https://api.github.com/repos/huggingface/datasets/issues/2232
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2232/events
[]
null
2021-04-21T09:33:09Z
[]
https://github.com/huggingface/datasets/pull/2232
MEMBER
null
false
null
[ "I replaced all the \"we\" and applied your suggestion", "Merging this for now, we can continue improving this card in other PRs :)" ]
Start filling GLUE dataset card
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2232/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE3MDQyNTI4
{ "diff_url": "https://github.com/huggingface/datasets/pull/2232.diff", "html_url": "https://github.com/huggingface/datasets/pull/2232", "merged_at": "2021-04-21T09:33:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/2232.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2232" }
2021-04-16T18:37:37Z
https://api.github.com/repos/huggingface/datasets/issues/2232/comments
The dataset card was pretty much empty. I added the descriptions (mainly from TFDS since the script is the same), and I also added the tasks tags as well as examples for a subset of the tasks. cc @sgugger
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/2232/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2232/timeline
closed
false
2,232
null
2021-04-21T09:33:08Z
null
true
859,850,488
https://api.github.com/repos/huggingface/datasets/issues/2231
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2231/events
[]
null
2021-04-16T15:10:05Z
[]
https://github.com/huggingface/datasets/pull/2231
MEMBER
null
false
null
[]
Fix map when removing columns on a formatted dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2231/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE2ODYyNTEx
{ "diff_url": "https://github.com/huggingface/datasets/pull/2231.diff", "html_url": "https://github.com/huggingface/datasets/pull/2231", "merged_at": "2021-04-16T15:10:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/2231.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2231" }
2021-04-16T14:08:55Z
https://api.github.com/repos/huggingface/datasets/issues/2231/comments
This should fix issue #2226. The `remove_columns` argument was ignored on formatted datasets.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/2231/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2231/timeline
closed
false
2,231
null
2021-04-16T15:10:04Z
null
true
859,817,159
https://api.github.com/repos/huggingface/datasets/issues/2230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2230/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2021-05-10T17:31:21Z
[]
https://github.com/huggingface/datasets/issues/2230
CONTRIBUTOR
completed
null
null
[ "Hi ! Indeed there's no verification on the uniqueness nor the types of the keys.\r\nDo you already have some ideas of what you would like to implement and how ?", "Hey @lhoestq, thank you so much for the opportunity.\r\nAlthough I haven't had much experience with the HF Datasets code, after a careful look at how the `ArrowWriter` functions, I think we can implement this as follows:\r\n\r\n1. First, we would have to update the `ArrowWriter.write()` function here:\r\nhttps://github.com/huggingface/datasets/blob/fcd3c3c8e3b1d9a2f3686a496082e21f06591380/src/datasets/arrow_writer.py#L296\r\nso that it accepts an additional argument `key` which would be appended along with the example here after hashing.\r\n\r\n2. Then, we would need to create a `Hasher` class which will take the key as its input and return a hash for it (We might need to use some hash salt which can be passed to the ArrowWriter.writer() with value equal to the `split_name` for differentiating between same keys of different splits)\r\n\r\n We can use the `hashlib.md5` function for hashing which will conert each key to its byte code before hashing (depending on the data type of the key) **Thus, the `key` type will be verified here**.\r\n\r\n3. Now, we would have to edit this\r\nhttps://github.com/huggingface/datasets/blob/fcd3c3c8e3b1d9a2f3686a496082e21f06591380/src/datasets/arrow_writer.py#L257\r\n so that it iterates over each `(hash, example)` pair (sorted according to hash). We can then simply **check whether each hash is different from the previous hash** (since they will be sorted)\r\n\r\nHowever, since I'm not very familiar with how the data is being written on disk in the form of a table, I might need some guidance for Step 3. \r\nPlease let me know your thought on this. Thanks!", "Interesting !\r\nWe keep the dataset sorted in the order examples are generated by the builder (we expect the dataset builders to generate examples in deterministic order). Therefore I don't think we should shuffle the examples with the hashing. Let me know what you think.\r\nOther that that, I really like the idea of checking for keys duplicates in `write_examples_on_file` :)\r\n\r\nThis looks like a great plan ! Feel free to open a PR and ping me if you have questions or if I can help\r\n", "@lhoestq I'm glad you liked the idea!\r\nI think that since the keys will be unique and deterministic in the nature themselves, so even if we shuffle the examples according to the hash, a deterministic order would still be maintained (as the keys will always have the same hash, whenever the dataset is generated). \r\nAnd since, we are not dealing with time series data (which would require the data to be in original order), I don't think the order of examples would matter much, as long as the order is deterministic and constant for all users.\r\n\r\nI think that this is also what was originally envisioned as mentioned in the documentation here:\r\nhttps://github.com/huggingface/datasets/blob/6775661b19d2ec339784f3d84553a3996a1d86c3/src/datasets/builder.py#L973\r\n\r\nAlso, if we avoid this, we would need to keep track of all the hashed keys in some place and compare each individual key with all others. This can cause some major overhead as each dataset consists of tens of thousands of examples.\r\nLet me know your thoughts in it! I would be opening a PR soon :)", "When users load their own data, they expect the order to stay the same. 
I think that shuffling the data can make things inconvenient.\r\n\r\n> I think that this is also what was originally envisioned as mentioned in the documentation here:\r\n\r\nThis part was originally developed by tensorflow datasets, and tensorflow datasets indeed does the shuffling. However in this library this is probably not what we want in the general case. But if @albertvillanova and @thomwolf you have opinions on this please let us know.\r\n\r\n> Also, if we avoid this, we would need to keep track of all the hashed keys in some place and compare each individual key with all others. This can cause some major overhead as each dataset consists of tens of thousands of examples.\r\n\r\nMaybe we cam simply keep track of the hashes of of each batch being written ? The size of the batch when the data are save in arrow is 10 000 examples. This would only ensure that we don't have duplicates in each batch, but there might still be duplicates across batches. For 10 000 examples the hashes can just be stored as a python `set`.\r\n\r\nOtherwise if we want full deduplication, we need an extra tool that allows to temporarily save and query hashes that may need to use disk space rather than memory.", "Yes I think we want to keep the original order by default and only shuffle when the user ask for it (for instance by calling `dataset.shuffle()`). That’s how I had it in mind originally.", "Hey @lhoestq, I just had a more in-depth look at the original TFDS code about why the keys and hash were used in the first place.\r\n\r\nIn my opinion, the only use that the `hash(key)` serves is that it allows us to shuffle the examples in a deterministic order (as each example will always yield the same key and thus, the same hash on every system) so that the same dataset is generated for each user, irrespective of the order the examples are yielded by the dataset builder on different user systems.\r\n\r\nOtherwise, if we are not shuffling, then while yielding and writing the data, after getting the key and hashing it for an example, I can't quite see the use of the hash or the key. The hash will simply be generated for each example but not actually used anywhere?\r\n\r\n@lhoestq @thomwolf It would be great if you could explain a bit more about the usage of keys. Thanks!\r\n", "In `datasets` the keys are currently ignored.\r\nFor shuffling we don't use the keys. Instead we shuffle an array of indices. Since both the original order of the dataset and the indices shuffling are deterministic, then `dataset.shuffle` is deterministic as well.\r\nWe can use it to:\r\n1. detect duplicates\r\n2. verify that the generation order is indeed deterministic\r\n3. maybe more ?", "Thanks a lot @lhoestq. I think I understand what we need to do now. The keys can indeed be used for detecting duplicates in generated examples as well as ensuring the order.\r\n\r\n> Maybe we cam simply keep track of the hashes of of each batch being written ? The size of the batch when the data are save in arrow is 10 000 examples. This would only ensure that we don't have duplicates in each batch,\r\n\r\nI think that checking for duplicates in every batch independently would be sufficient as the probability of collisions using something like `MD5` is very low. I would be opening a draft PR soon. It would be great to have your guidance. Thanks!" ]
Keys yielded while generating dataset are not being checked
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2230/reactions" }
MDU6SXNzdWU4NTk4MTcxNTk=
null
2021-04-16T13:29:47Z
https://api.github.com/repos/huggingface/datasets/issues/2230/comments
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e. either `str` or `int`) as well as whether they are unique or not. Currently, the keys are not being checked for any of these, as evident from `xnli` dataset generation: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196 Even after having a tuple as key, the dataset is generated without any warning. Also, as tested in the case of `anli` dataset (I tweaked the dataset script to use `1` as a key for every example): ``` >>> import datasets >>> nik = datasets.load_dataset('anli') Downloading and preparing dataset anli/plain_text (download: 17.76 MiB, generated: 73.55 MiB, post-processed: Unknown size, total: 91.31 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\anli\plain_text\0.1.0\43fa2c99c10bf8478f1fa0860f7b122c6b277c4c41306255b7641257cf4e3299... 0 examples [00:00, ? examples/s]1 {'uid': '0fd0abfb-659e-4453-b196-c3a64d2d8267', 'premise': 'The Parma trolleybus system (Italian: "Rete filoviaria di Parma" ) forms part of the public transport network of the city and "comune" of Parma, in the region of Emilia-Romagna, northern Italy. In operation since 1953, the system presently comprises four urban routes.', 'hypothesis': 'The trolleybus system has over 2 urban routes', 'label': 'entailment', 'reason': ''} 2021-04-16 12:38:14.483968: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll 1 examples [00:01, 1.87s/ examples]1 {'uid': '7ed72ff4-40b7-4f8a-b1b9-6c612aa62c84', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Sharron Macready was a popular character through the 1980's.", 'label': 'neutral', 'reason': ''} 1 {'uid': '5d2930a3-62ac-485d-94d7-4e36cbbcd7b5', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Bastedo didn't keep any pets because of her views on animal rights.", 'label': 'neutral', 'reason': ''} 1 {'uid': '324db753-ddc9-4a85-a825-f09e2e5aebdd', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Alexandra Bastedo was named by her mother.', 'label': 'neutral', 'reason': ''} 1 {'uid': '4874f429-da0e-406a-90c7-22240ff3ddf8', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Bastedo cared for all the animals that inhabit the earth.', 'label': 'neutral', 'reason': ''} ``` Here also, the dataset was generated successfully even though it had the same keys, without any warning. The reason appears to stem from here: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L988 Here, although it has access to every key, it is not being checked and the example is written directly: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L992 I would like to take this issue if you allow me. Thank You!
{ "avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4", "events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}", "followers_url": "https://api.github.com/users/NikhilBartwal/followers", "following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}", "gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NikhilBartwal", "id": 42388668, "login": "NikhilBartwal", "node_id": "MDQ6VXNlcjQyMzg4NjY4", "organizations_url": "https://api.github.com/users/NikhilBartwal/orgs", "received_events_url": "https://api.github.com/users/NikhilBartwal/received_events", "repos_url": "https://api.github.com/users/NikhilBartwal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions", "type": "User", "url": "https://api.github.com/users/NikhilBartwal" }
https://api.github.com/repos/huggingface/datasets/issues/2230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2230/timeline
closed
false
2,230
null
2021-05-10T17:31:21Z
null
false
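A minimal sketch of the per-batch duplicate-key check discussed in issue #2230 above, combining the md5 hashing and split-name salt floated in the thread with the per-batch `set` suggested by the maintainer; the function name and signature are illustrative, not the library's actual API.

```python
import hashlib

def check_batch_keys(keys, split_name):
    """Sketch: verify key types and reject duplicate keys within one batch."""
    seen = set()  # a plain set suffices for the ~10,000-example Arrow batches
    for key in keys:
        if not isinstance(key, (str, int)):
            raise TypeError(f"key {key!r} must be str or int, got {type(key).__name__}")
        # Salt with the split name so equal keys in different splits don't collide.
        digest = hashlib.md5(f"{split_name}-{key}".encode("utf-8")).hexdigest()
        if digest in seen:
            raise ValueError(f"duplicate key in batch: {key!r}")
        seen.add(digest)

check_batch_keys(["0_0", "0_1", 2], split_name="train")  # passes
```

As the thread notes, checking each batch independently bounds memory use but cannot catch duplicates across batches; full deduplication would need a disk-backed store.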
859,810,602
https://api.github.com/repos/huggingface/datasets/issues/2229
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2229/events
[]
null
2021-04-19T08:56:42Z
[]
https://github.com/huggingface/datasets/issues/2229
CONTRIBUTOR
completed
null
null
[ "Hi ! Sure sounds good. Also if you find other datasets that use tuples instead of str/int, you can also fix them !\r\nthanks :)", "@lhoestq I have sent a PR for fixing the issue. Would be great if you could have a look! Thanks!" ]
`xnli` dataset creating a tuple key while yielding instead of `str` or `int`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2229/reactions" }
MDU6SXNzdWU4NTk4MTA2MDI=
null
2021-04-16T13:21:53Z
https://api.github.com/repos/huggingface/datasets/issues/2229/comments
When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code, which yields a tuple key instead of the specified `str` or `int` key: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196 Since community datasets in Tensorflow Datasets also use HF datasets, this causes a Tuple key error while loading HF's `xnli` dataset. I'm up for sending a fix for this; I think we can simply use `file_idx + "_" + row_idx` as a unique key instead of a tuple.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4", "events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}", "followers_url": "https://api.github.com/users/NikhilBartwal/followers", "following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}", "gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NikhilBartwal", "id": 42388668, "login": "NikhilBartwal", "node_id": "MDQ6VXNlcjQyMzg4NjY4", "organizations_url": "https://api.github.com/users/NikhilBartwal/orgs", "received_events_url": "https://api.github.com/users/NikhilBartwal/received_events", "repos_url": "https://api.github.com/users/NikhilBartwal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions", "type": "User", "url": "https://api.github.com/users/NikhilBartwal" }
https://api.github.com/repos/huggingface/datasets/issues/2229/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2229/timeline
closed
false
2,229
null
2021-04-19T08:56:42Z
null
false
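A tiny sketch of the str-key fix proposed in issue #2229 above. One caveat on the proposal as written: `file_idx + "_" + row_idx` would raise a TypeError on int indices, so an f-string is the safer spelling; the index values here are illustrative.

```python
file_idx, row_idx = 0, 196  # illustrative loop indices from the generator

bad_key = (file_idx, row_idx)        # tuple key, as currently yielded
good_key = f"{file_idx}_{row_idx}"   # same information, but a plain str

print(good_key)  # "0_196": unique, deterministic, and an acceptable key type
```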
859,795,563
https://api.github.com/repos/huggingface/datasets/issues/2228
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2228/events
[]
null
2022-07-06T15:19:48Z
[]
https://github.com/huggingface/datasets/pull/2228
NONE
null
false
null
[ "Awesome thanks ! To fix the CI you just need to merge master into your branch.\r\nThe error is unrelated to your PR" ]
[WIP] Add ArrayXD support for fixed size list.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2228/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE2ODE2MTQz
{ "diff_url": "https://github.com/huggingface/datasets/pull/2228.diff", "html_url": "https://github.com/huggingface/datasets/pull/2228", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2228.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2228" }
2021-04-16T13:04:08Z
https://api.github.com/repos/huggingface/datasets/issues/2228/comments
Add support for fixed size list for ArrayXD when the shape is known. See https://github.com/huggingface/datasets/issues/2146 Since offsets are not stored anymore, the file size is now roughly equal to the actual data size.
{ "avatar_url": "https://avatars.githubusercontent.com/u/22685854?v=4", "events_url": "https://api.github.com/users/jblemoine/events{/privacy}", "followers_url": "https://api.github.com/users/jblemoine/followers", "following_url": "https://api.github.com/users/jblemoine/following{/other_user}", "gists_url": "https://api.github.com/users/jblemoine/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jblemoine", "id": 22685854, "login": "jblemoine", "node_id": "MDQ6VXNlcjIyNjg1ODU0", "organizations_url": "https://api.github.com/users/jblemoine/orgs", "received_events_url": "https://api.github.com/users/jblemoine/received_events", "repos_url": "https://api.github.com/users/jblemoine/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jblemoine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jblemoine/subscriptions", "type": "User", "url": "https://api.github.com/users/jblemoine" }
https://api.github.com/repos/huggingface/datasets/issues/2228/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2228/timeline
open
false
2,228
null
null
null
true
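A small sketch of the storage point made in PR #2228 above, using plain pyarrow: variable-length lists carry a per-row offsets buffer, while fixed-size lists encode the length in the type, which is why the file size tracks the actual data size. This illustrates the Arrow types only, not the PR's ArrayXD changes.

```python
import pyarrow as pa

data = [[1, 2], [3, 4], [5, 6]]

# Variable-length list: stores an int32 offsets buffer alongside the values.
variable = pa.array(data, type=pa.list_(pa.int32()))

# Fixed-size list: the length (2) lives in the type, so no offsets are stored.
fixed = pa.array(data, type=pa.list_(pa.int32(), 2))

print(variable.type)  # list<item: int32>
print(fixed.type)     # fixed_size_list<item: int32>[2]
print(variable.nbytes > fixed.nbytes)  # True: the offsets buffer is the difference
```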
859,771,526
https://api.github.com/repos/huggingface/datasets/issues/2227
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2227/events
[]
null
2021-04-16T13:49:40Z
[]
https://github.com/huggingface/datasets/pull/2227
CONTRIBUTOR
null
false
null
[]
Use update_metadata_with_features decorator in class_encode_column method
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2227/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE2Nzk1NjMx
{ "diff_url": "https://github.com/huggingface/datasets/pull/2227.diff", "html_url": "https://github.com/huggingface/datasets/pull/2227", "merged_at": "2021-04-16T13:49:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/2227.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2227" }
2021-04-16T12:31:41Z
https://api.github.com/repos/huggingface/datasets/issues/2227/comments
Following @mariosasko 's comment
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
https://api.github.com/repos/huggingface/datasets/issues/2227/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2227/timeline
closed
false
2,227
null
2021-04-16T13:49:39Z
null
true
859,720,302
https://api.github.com/repos/huggingface/datasets/issues/2226
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2226/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-10-05T17:32:15Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/2226
NONE
completed
null
null
[ "I found the problem. I called `set_format` on some columns before. This makes it crash. Here is a complete example to reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nsst = load_dataset(\"sst\")\r\nsst.set_format(\"torch\", columns=[\"label\"], output_all_columns=True)\r\nds = sst[\"train\"]\r\n\r\n# crashes\r\nds.map(\r\n lambda x: {\"a\": list(range(20))},\r\n remove_columns=ds.column_names,\r\n load_from_cache_file=False,\r\n num_proc=1,\r\n batched=True,\r\n)\r\n```", "Thanks for reporting and for providing this code to reproduce the issue, this is really helpful !", "I merged a fix, it should work on `master` now :)\r\nWe'll do a new release soon !" ]
Batched map fails when removing all columns
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2226/reactions" }
MDU6SXNzdWU4NTk3MjAzMDI=
null
2021-04-16T11:17:01Z
https://api.github.com/repos/huggingface/datasets/issues/2226/comments
Hi @lhoestq , I'm hijacking this issue, because I'm currently trying to do the approach you recommend: > Currently the optimal setup for single-column computations is probably to do something like > > ```python > result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names) > ``` Here is my code: (see edit, in which I added a simplified version.) This is the error: ```bash pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000 ``` I wonder why this error occurs when I delete every column? Can you give me a hint? ### Edit: I preprocessed my dataset before (using map with the features argument) and saved it to disk. May this be part of the error? I can iterate over the complete dataset and print every sample before calling map. There seems to be no other problem with the dataset. I tried to simplify the code that crashes: ```python # works log.debug(dataset.column_names) log.debug(dataset) for i, sample in enumerate(dataset): log.debug(i, sample) # crashes counted_dataset = dataset.map( lambda x: {"a": list(range(20))}, input_columns=column, remove_columns=dataset.column_names, load_from_cache_file=False, num_proc=num_workers, batched=True, ) ``` ``` pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000 ``` Edit2: May this be a problem with a schema I set when preprocessing the dataset before? I tried to add the `features` argument to the function and then I get a new error: ```python # crashes counted_dataset = dataset.map( lambda x: {"a": list(range(20))}, input_columns=column, remove_columns=dataset.column_names, load_from_cache_file=False, num_proc=num_workers, batched=True, features=datasets.Features( { "a": datasets.Sequence(datasets.Value("int32")) } ) ) ``` ``` File "env/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1704, in _map_single writer.write_batch(batch) File "env/lib/python3.8/site-packages/datasets/arrow_writer.py", line 312, in write_batch col_type = schema.field(col).type if schema is not None else None File "pyarrow/types.pxi", line 1341, in pyarrow.lib.Schema.field KeyError: 'Column tokens does not exist in schema' ``` _Originally posted by @villmow in https://github.com/huggingface/datasets/issues/2193#issuecomment-820230874_
{ "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/villmow", "id": 2743060, "login": "villmow", "node_id": "MDQ6VXNlcjI3NDMwNjA=", "organizations_url": "https://api.github.com/users/villmow/orgs", "received_events_url": "https://api.github.com/users/villmow/received_events", "repos_url": "https://api.github.com/users/villmow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "type": "User", "url": "https://api.github.com/users/villmow" }
https://api.github.com/repos/huggingface/datasets/issues/2226/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2226/timeline
closed
false
2,226
null
2022-10-05T17:32:15Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
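A hedged workaround sketch for the crash reproduced in issue #2226 above, relevant only to versions predating the fix merged in PR #2231. It follows the thread's diagnosis that the prior `set_format` call triggers the bug: clearing the format before the batched map lets `remove_columns` take effect, then the format is restored. This is an assumption drawn from the discussion, not a documented recipe.

```python
from datasets import load_dataset

sst = load_dataset("sst")
sst.set_format("torch", columns=["label"], output_all_columns=True)
ds = sst["train"]

ds.reset_format()  # drop the torch format so `remove_columns` is honored
mapped = ds.map(
    lambda x: {"a": list(range(20))},
    remove_columns=ds.column_names,
    batched=True,
)
ds.set_format("torch", columns=["label"], output_all_columns=True)  # restore
```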
858,469,561
https://api.github.com/repos/huggingface/datasets/issues/2225
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2225/events
[]
null
2021-04-15T22:09:50Z
[]
https://github.com/huggingface/datasets/pull/2225
CONTRIBUTOR
null
false
null
[ "Thanks ! good catch\r\n\r\nCould you also update the metadata of this dataset ?\r\nYou can do so by running\r\n```\r\ndatasets-cli test ./datasets/newsgroup --all_configs --save_infos --ignore_verifications\r\n```\r\nThis should update the dataset_infos.json file that contains the size of all the splits for example.", "Hi,\r\n`dataset_infos.json` should be updated now.\r\n" ]
fixed one instance of 'train' to 'test'
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2225/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE1NzAzMTY4
{ "diff_url": "https://github.com/huggingface/datasets/pull/2225.diff", "html_url": "https://github.com/huggingface/datasets/pull/2225", "merged_at": "2021-04-15T21:19:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/2225.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2225" }
2021-04-15T04:26:40Z
https://api.github.com/repos/huggingface/datasets/issues/2225/comments
I believe this should be 'test' instead of 'train'
{ "avatar_url": "https://avatars.githubusercontent.com/u/46733535?v=4", "events_url": "https://api.github.com/users/alexwdong/events{/privacy}", "followers_url": "https://api.github.com/users/alexwdong/followers", "following_url": "https://api.github.com/users/alexwdong/following{/other_user}", "gists_url": "https://api.github.com/users/alexwdong/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alexwdong", "id": 46733535, "login": "alexwdong", "node_id": "MDQ6VXNlcjQ2NzMzNTM1", "organizations_url": "https://api.github.com/users/alexwdong/orgs", "received_events_url": "https://api.github.com/users/alexwdong/received_events", "repos_url": "https://api.github.com/users/alexwdong/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alexwdong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexwdong/subscriptions", "type": "User", "url": "https://api.github.com/users/alexwdong" }
https://api.github.com/repos/huggingface/datasets/issues/2225/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2225/timeline
closed
false
2,225
null
2021-04-15T21:19:09Z
null
true
857,983,361
https://api.github.com/repos/huggingface/datasets/issues/2224
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2224/events
[]
null
2021-04-14T14:59:13Z
[]
https://github.com/huggingface/datasets/issues/2224
MEMBER
null
null
null
[]
Raise error if Windows max path length is not disabled
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2224/reactions" }
MDU6SXNzdWU4NTc5ODMzNjE=
null
2021-04-14T14:57:20Z
https://api.github.com/repos/huggingface/datasets/issues/2224/comments
On startup, raise an error if Windows max path length is not disabled; ask the user to disable it. Linked to discussion in #2220.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2224/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2224/timeline
open
false
2,224
null
null
null
false
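An illustrative sketch of the startup check proposed in issue #2224 above; the probe strategy, function name, and error text are all assumptions, and the registry key is only a pointer for the user.

```python
import os
import sys
import tempfile

def check_windows_max_path() -> None:
    """Probe whether paths over 260 characters work; raise if they don't (sketch)."""
    if sys.platform != "win32":
        return
    probe = os.path.join(tempfile.gettempdir(), "a" * 300)  # exceeds MAX_PATH
    try:
        open(probe, "w").close()
        os.remove(probe)
    except OSError as err:  # FileNotFoundError on over-long paths is an OSError
        raise OSError(
            "Windows max path length (260 chars) appears to be enabled; please "
            "disable it (LongPathsEnabled registry key) before using the library."
        ) from err
```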
857,870,800
https://api.github.com/repos/huggingface/datasets/issues/2223
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2223/events
[]
null
2021-04-15T19:11:25Z
[]
https://github.com/huggingface/datasets/pull/2223
MEMBER
null
false
null
[ "> why a cache dir per test function does not work?\r\n\r\nProbably because we end up with multiple `datasets_module` in the python path. This breaks the import of all the datasets/metrics modules.\r\nIf you want to use one modules cache per test, you may need remove the `datasets_module` that was added to the python path during the test.\r\nIndeed if the module cache hasn't been initialized, then it's added to the python path by calling `init_dynamic_modules`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/ba76012a19193a35053b9e20243ff40c2b4204ab/src/datasets/load.py#L291-L291", "@lhoestq, for the moment, this PR avoids populating the `~/.cache` dir during training, which is already an improvement, isn't it?", "Yes we can merge it this way if you're fine with it !\r\nThis is a good improvement", "I will eventually try to implement a `cache_dir` per test function in another PR, but I think I should first fix some side effects in tests: each test function should be atomic and able to have its own `cache_dir` without being affected by the `cache_dir` set in other test functions.", "Yes this would be ideal !" ]
Set test cache config
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2223/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE1MjE4MDIz
{ "diff_url": "https://github.com/huggingface/datasets/pull/2223.diff", "html_url": "https://github.com/huggingface/datasets/pull/2223", "merged_at": "2021-04-15T19:11:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/2223.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2223" }
2021-04-14T12:55:24Z
https://api.github.com/repos/huggingface/datasets/issues/2223/comments
Currently, running the tests populates the default cache directory `"~/.cache"`. This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2223/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2223/timeline
closed
false
2,223
null
2021-04-15T19:11:25Z
null
true
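A sketch of the monkey-patching approach described in PR #2223 above, written as an autouse pytest fixture; the exact `datasets.config` attribute names are assumptions based on the library's conventions at the time, not a copy of the PR's diff.

```python
import pytest

@pytest.fixture(autouse=True)
def set_test_cache_config(tmp_path_factory, monkeypatch):
    # Redirect the library's caches into a throwaway directory so that
    # running the test suite never populates ~/.cache.
    test_cache = tmp_path_factory.getbasetemp() / "cache"
    monkeypatch.setattr("datasets.config.HF_DATASETS_CACHE", str(test_cache / "datasets"))
    monkeypatch.setattr("datasets.config.HF_METRICS_CACHE", str(test_cache / "metrics"))
    monkeypatch.setattr("datasets.config.HF_MODULES_CACHE", str(test_cache / "modules"))
```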
857,847,231
https://api.github.com/repos/huggingface/datasets/issues/2222
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2222/events
[ { "color": "ffffff", "default": true, "description": "This will not be worked on", "id": 1935892913, "name": "wontfix", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix" } ]
null
2021-04-14T15:00:25Z
[]
https://github.com/huggingface/datasets/pull/2222
MEMBER
null
false
null
[ "Windows users should disable the max path length limit. It's a nightmare to handle it.\r\nAlso the lock path must not be changed in a random way. Otherwise from another process the lock path might not be the same and the locking mechanism won't work.", "Do you agree with handling the case where MAX_PATH is not disabled? If not, we can close this PR.\r\n\r\nIf so, would it work a deterministic lock path instead of random?", "I'd rather not handle this at all, since there will be other places in the code where the limit will break things" ]
Fix too long WindowsFileLock name
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2222/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE1MTk5MTM5
{ "diff_url": "https://github.com/huggingface/datasets/pull/2222.diff", "html_url": "https://github.com/huggingface/datasets/pull/2222", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2222.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2222" }
2021-04-14T12:26:52Z
https://api.github.com/repos/huggingface/datasets/issues/2222/comments
Fix WindowsFileLock name longer than allowed MAX_PATH by shortening the basename.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2222/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2222/timeline
closed
false
2,222
null
2021-04-14T14:46:19Z
null
true
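A sketch of the deterministic shortening discussed in PR #2222 above: hashing the basename keeps the lock path under the limit while every process computes the same path, which addresses the thread's concern that a random path would break cross-process locking. The 255-character limit and function name are illustrative.

```python
import hashlib
import os

def shorten_lock_path(lock_path: str, max_len: int = 255) -> str:
    """Deterministically shorten an over-long lock file path (sketch)."""
    if len(lock_path) <= max_len:
        return lock_path
    head, tail = os.path.split(lock_path)
    # The same input always yields the same digest, so concurrent processes
    # agree on the shortened path and the lock keeps working across them.
    digest = hashlib.sha256(tail.encode("utf-8")).hexdigest()[:16]
    return os.path.join(head, digest + ".lock")

print(shorten_lock_path("/tmp/" + "x" * 300 + ".lock"))  # /tmp/<digest>.lock
```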
857,833,770
https://api.github.com/repos/huggingface/datasets/issues/2221
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2221/events
[]
null
2021-04-14T13:50:19Z
[]
https://github.com/huggingface/datasets/pull/2221
CONTRIBUTOR
null
false
null
[]
Add SLR70 - SLR80 and SLR86 to OpenSLR dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2221/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE1MTg4MTE5
{ "diff_url": "https://github.com/huggingface/datasets/pull/2221.diff", "html_url": "https://github.com/huggingface/datasets/pull/2221", "merged_at": "2021-04-14T13:50:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/2221.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2221" }
2021-04-14T12:09:18Z
https://api.github.com/repos/huggingface/datasets/issues/2221/comments
I would like to add SLR70, SLR71, SLR72, SLR73, SLR74, SLR75, SLR76, SLR77, SLR78, SLR79, SLR80 and SLR86 to the OpenSLR dataset. The languages are: Nigerian English, Chilean Spanish, Colombian Spanish, Peruvian Spanish, Puerto Rican Spanish, Venezuelan Spanish, Basque, Galician, Gujarati and Kannada.
{ "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cahya-wirawan", "id": 7669893, "login": "cahya-wirawan", "node_id": "MDQ6VXNlcjc2Njk4OTM=", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "type": "User", "url": "https://api.github.com/users/cahya-wirawan" }
https://api.github.com/repos/huggingface/datasets/issues/2221/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2221/timeline
closed
false
2,221
null
2021-04-14T13:50:19Z
null
true
857,774,626
https://api.github.com/repos/huggingface/datasets/issues/2220
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2220/events
[ { "color": "ffffff", "default": true, "description": "This will not be worked on", "id": 1935892913, "name": "wontfix", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix" } ]
null
2021-04-14T14:59:50Z
[]
https://github.com/huggingface/datasets/pull/2220
MEMBER
null
false
null
[ "How is it possible to get an infinite loop ? Can you add more details ?", "Yes, in Windows, if the filename is too long, a `FileNotFoundError` is raised. The exception should be raised in this case. Otherwise, we get into an infinite loop.\r\n\r\nIf other process has the file locked, then `PermissionError` is raised. In this case, `pass` is OK.", "Note that the filelock module comes from this project that hasn't changed in years - while still being used by ten of thousands of projects:\r\nhttps://github.com/benediktschmitt/py-filelock\r\n\r\nUnless we have proper tests for this, I wouldn't recommend to change it", "I'm pretty sure many things from the library could break for windows users that haven't disabled the max path length limit.\r\nMaybe it would be simpler to simply raise an error on startup. For exampe, for windows users the error could ask them to disable the limit if it's not been disabled yet ?" ]
Fix infinite loop in WindowsFileLock
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2220/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE1MTM4NDQz
{ "diff_url": "https://github.com/huggingface/datasets/pull/2220.diff", "html_url": "https://github.com/huggingface/datasets/pull/2220", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2220.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2220" }
2021-04-14T10:49:58Z
https://api.github.com/repos/huggingface/datasets/issues/2220/comments
Raise exception to avoid infinite loop.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2220/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2220/timeline
closed
false
2,220
null
2021-04-14T14:59:34Z
null
true
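The fix discussed in this record boils down to distinguishing two `OSError` subclasses in the Windows lock's `_acquire` step. A minimal sketch, assuming a simplified stand-in for `py-filelock`'s `WindowsFileLock` (the real class also calls `msvcrt.locking` on the opened descriptor, omitted here; this is not the exact patch):

```python
import os

class WindowsFileLock:  # simplified stand-in for filelock.WindowsFileLock
    def __init__(self, lock_file):
        self._lock_file = lock_file
        self._lock_file_fd = None

    def _acquire(self):
        open_mode = os.O_RDWR | os.O_CREAT | os.O_TRUNC
        try:
            fd = os.open(self._lock_file, open_mode)
        except PermissionError:
            # Another process holds the lock: the surrounding acquire
            # loop can safely retry.
            pass
        except FileNotFoundError:
            # On Windows, an over-long filename raises FileNotFoundError;
            # retrying can never succeed, so re-raise instead of looping
            # forever.
            raise
        else:
            self._lock_file_fd = fd
```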
857,321,242
https://api.github.com/repos/huggingface/datasets/issues/2219
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2219/events
[]
null
2021-04-24T14:25:51Z
[]
https://github.com/huggingface/datasets/pull/2219
CONTRIBUTOR
null
false
null
[ "1) Changed the language in a few places apart from those you mentioned in README\r\n2) Reduced the size of dummy data folder by removing all other entries except the first\r\n3) Updated YAML tags by using to the past version of `datasets-tagging` app. Will update the quick fix on that repository too in a while", "@bhavitvyamalik Thanks for adding the dataset on huggingface! Can you please add a metric also for the dataset using the squad_v2 metric file? ", "@MohammedRakib you can check [#2257](https://github.com/huggingface/datasets/pull/2257)" ]
Added CUAD dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2219/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE0NzYxMzA3
{ "diff_url": "https://github.com/huggingface/datasets/pull/2219.diff", "html_url": "https://github.com/huggingface/datasets/pull/2219", "merged_at": "2021-04-16T08:50:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/2219.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2219" }
2021-04-13T21:05:03Z
https://api.github.com/repos/huggingface/datasets/issues/2219/comments
Dataset link : https://github.com/TheAtticusProject/cuad/ Working on README.md currently. Closes #2084 and [#1](https://github.com/TheAtticusProject/cuad/issues/1).
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
https://api.github.com/repos/huggingface/datasets/issues/2219/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2219/timeline
closed
false
2,219
null
2021-04-16T08:50:44Z
null
true
857,238,435
https://api.github.com/repos/huggingface/datasets/issues/2218
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2218/events
[]
null
2021-04-14T21:42:27Z
[]
https://github.com/huggingface/datasets/issues/2218
NONE
null
null
null
[ "Hi,\r\n\r\ncurrently the datasets API doesn't have a dedicated function to remove duplicate rows, but since the LAMA dataset is not too big (it fits in RAM), we can leverage pandas to help us remove duplicates:\r\n```python\r\n>>> from datasets import load_dataset, Dataset\r\n>>> dataset = load_dataset('lama', split='train')\r\n>>> dataset = Dataset.from_pandas(dataset.to_pandas().drop_duplicates(subset=...)) # specify a subset of the columns to consider in a list or use all of the columns if None\r\n```\r\n\r\nNote that the same can be achieved with the `Dataset.filter` method but this would requrie some extra work (filter function, speed?).", "Oh, seems like my question wasn't specified well. I'm _not_ asking how to remove duplicates, but whether duplicates should be removed if I want to do the evaluation on the LAMA dataset as it was proposed in the original paper/repository? In other words, will I get the same result if evaluate on the de-duplicated dataset loaded from HF's `datasets` as the results I'd get if I use the original data format and data processing script in https://github.com/facebookresearch/LAMA? ", "So it looks like the person who added LAMA to the library chose to have one item per piece of evidence rather than one per relation - and in this case, there are duplicate pieces of evidence for the target relation\r\n\r\nIf I understand correctly, to reproduce reported results, you would have to aggregate predictions for the several pieces of evidence provided for each relation (each unique `uuid`), but the original authors will know better \r\n\r\ncc @fabiopetroni " ]
Duplicates in the LAMA dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2218/reactions" }
MDU6SXNzdWU4NTcyMzg0MzU=
null
2021-04-13T18:59:49Z
https://api.github.com/repos/huggingface/datasets/issues/2218/comments
I observed duplicates in the LAMA probing dataset, see the minimal example below. ``` >>> import datasets >>> dataset = datasets.load_dataset('lama') No config specified, defaulting to: lama/trex Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c1fe1ec0d6b5eece7bddc) >>> train_dataset = dataset['train'] >>> train_dataset[0] {'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'} >>> train_dataset[1] {'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'} ``` I checked the original data available at https://dl.fbaipublicfiles.com/LAMA/data.zip. This particular duplicate comes from: ``` {"uuid": "40b2ed1c-0961-482e-844e-32596b6117c8", "obj_uri": "Q150", "obj_label": "French", "sub_uri": "Q441235", "sub_label": "Louis Jules Trochu", "predicate_id": "P103", "evidences": [{"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}, {"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}]} ``` What is the best way to deal with these duplicates if I want to use `datasets` to probe with LAMA?
{ "avatar_url": "https://avatars.githubusercontent.com/u/7276193?v=4", "events_url": "https://api.github.com/users/amarasovic/events{/privacy}", "followers_url": "https://api.github.com/users/amarasovic/followers", "following_url": "https://api.github.com/users/amarasovic/following{/other_user}", "gists_url": "https://api.github.com/users/amarasovic/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amarasovic", "id": 7276193, "login": "amarasovic", "node_id": "MDQ6VXNlcjcyNzYxOTM=", "organizations_url": "https://api.github.com/users/amarasovic/orgs", "received_events_url": "https://api.github.com/users/amarasovic/received_events", "repos_url": "https://api.github.com/users/amarasovic/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amarasovic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amarasovic/subscriptions", "type": "User", "url": "https://api.github.com/users/amarasovic" }
https://api.github.com/repos/huggingface/datasets/issues/2218/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2218/timeline
open
false
2,218
null
null
null
false
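Complementing the pandas snippet quoted in the comments above, the `Dataset.filter` route can be sketched as follows; `first_occurrence` is a hypothetical helper, the stateful set only works single-process (the default), and note that keeping one row per `uuid` also drops distinct evidence sentences, which may not match the original LAMA evaluation protocol:

```python
from datasets import load_dataset

dataset = load_dataset("lama", "trex", split="train")

seen_uuids = set()

def first_occurrence(example):
    # Keep the first row for each uuid and drop later duplicates.
    if example["uuid"] in seen_uuids:
        return False
    seen_uuids.add(example["uuid"])
    return True

deduplicated = dataset.filter(first_occurrence)
```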
857,011,314
https://api.github.com/repos/huggingface/datasets/issues/2217
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2217/events
[]
null
2021-04-14T14:24:24Z
[]
https://github.com/huggingface/datasets/pull/2217
MEMBER
null
false
null
[]
Revert breaking change in cache_files property
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2217/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE0NTAxNjIz
{ "diff_url": "https://github.com/huggingface/datasets/pull/2217.diff", "html_url": "https://github.com/huggingface/datasets/pull/2217", "merged_at": "2021-04-14T14:24:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/2217.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2217" }
2021-04-13T14:20:04Z
https://api.github.com/repos/huggingface/datasets/issues/2217/comments
#2025 changed the format of `Dataset.cache_files`. Before, it was formatted like ```python [{"filename": "path/to/file.arrow", "start": 0, "end": 1337}] ``` and it was changed to ```python ["path/to/file.arrow"] ``` since there are no start/end offsets available anymore. To make this less breaking, I'm setting the format back to a list of dicts: ```python [{"filename": "path/to/file.arrow"}] ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/2217/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2217/timeline
closed
false
2,217
null
2021-04-14T14:24:23Z
null
true
856,955,534
https://api.github.com/repos/huggingface/datasets/issues/2216
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2216/events
[]
null
2021-04-13T13:53:20Z
[]
https://github.com/huggingface/datasets/pull/2216
MEMBER
null
false
null
[]
added real label for glue/mrpc to test set
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2216/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE0NDU0MjE1
{ "diff_url": "https://github.com/huggingface/datasets/pull/2216.diff", "html_url": "https://github.com/huggingface/datasets/pull/2216", "merged_at": "2021-04-13T13:53:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/2216.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2216" }
2021-04-13T13:20:20Z
https://api.github.com/repos/huggingface/datasets/issues/2216/comments
Added real label to `glue.py` `mrpc` task for test split.
{ "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/philschmid", "id": 32632186, "login": "philschmid", "node_id": "MDQ6VXNlcjMyNjMyMTg2", "organizations_url": "https://api.github.com/users/philschmid/orgs", "received_events_url": "https://api.github.com/users/philschmid/received_events", "repos_url": "https://api.github.com/users/philschmid/repos", "site_admin": false, "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "type": "User", "url": "https://api.github.com/users/philschmid" }
https://api.github.com/repos/huggingface/datasets/issues/2216/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2216/timeline
closed
false
2,216
null
2021-04-13T13:53:19Z
null
true
856,716,791
https://api.github.com/repos/huggingface/datasets/issues/2215
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2215/events
[]
null
2021-04-13T14:05:14Z
[]
https://github.com/huggingface/datasets/pull/2215
CONTRIBUTOR
null
false
null
[ "Hi @lhoestq,\r\nCould you please help me, I got this error message in all \"ci/circleci: run_dataset_script_tests_pyarrow*\" tests:\r\n```\r\n...\r\n \"\"\"Wrapper classes for various types of tokenization.\"\"\"\r\n \r\n from bleurt.lib import bert_tokenization\r\n import tensorflow.compat.v1 as tf\r\n> import sentencepiece as spm\r\nE ModuleNotFoundError: No module named 'sentencepiece'\r\n...\r\n```\r\nI am not sure why I do get it. Thanks.\r\n", "Hi ! This issue appeared on master since the last update of `BLEURT`.\r\nI'm working on a fix. You can ignore this issue for this PR", "> Hi ! This issue appeared on master since the last update of `BLEURT`.\r\n> I'm working on a fix. You can ignore this issue for this PR\r\n\r\nThanks for the info", "Merging since the CI is fixed on master" ]
Add datasets SLR35 and SLR36 to OpenSLR
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2215/reactions" }
MDExOlB1bGxSZXF1ZXN0NjE0MjUyNTEy
{ "diff_url": "https://github.com/huggingface/datasets/pull/2215.diff", "html_url": "https://github.com/huggingface/datasets/pull/2215", "merged_at": "2021-04-13T14:05:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/2215.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2215" }
2021-04-13T08:24:07Z
https://api.github.com/repos/huggingface/datasets/issues/2215/comments
I would like to add [SLR35](https://openslr.org/35/) (18GB) and [SLR36](https://openslr.org/36/) (22GB), which are large Javanese and Sundanese ASR training datasets collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia.
{ "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cahya-wirawan", "id": 7669893, "login": "cahya-wirawan", "node_id": "MDQ6VXNlcjc2Njk4OTM=", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "type": "User", "url": "https://api.github.com/users/cahya-wirawan" }
https://api.github.com/repos/huggingface/datasets/issues/2215/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2215/timeline
closed
false
2,215
null
2021-04-13T14:05:14Z
null
true
856,333,657
https://api.github.com/repos/huggingface/datasets/issues/2214
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2214/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2021-04-23T15:20:02Z
[]
https://github.com/huggingface/datasets/issues/2214
NONE
completed
null
null
[ "Hi @nsaphra, thanks for reporting.\r\n\r\nThis issue was fixed in `datasets` version 1.3.0. Could you please update `datasets` and tell me if the problem persists?\r\n```shell\r\npip install -U datasets\r\n```", "There might be a bug in the conda version of `datasets` 1.2.1 where the datasets/metric scripts are downloaded from `master` instead of the `1.2.1` repo.\r\n\r\nYou can try setting the env var `HF_SCRIPTS_VERSION=\"1.2.1\"` as a workaround. Let me know if that helps.", "I just faced the same issue. I was using 1.2.1 from conda and received the same AttributeError complaining about 'add_start_docstrings'. Uninstalling the conda installed datasets and then installing the latest datasets (version 1.5.0) using pip install solved the issue for me. I don't like mixing up conda and pip installs in the same environments but this will have to do for now, until 1.5.0 is made available through conda.", "Yep, seems to have fixed things! The conda package could really do with an update. Thanks!" ]
load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2214/reactions" }
MDU6SXNzdWU4NTYzMzM2NTc=
null
2021-04-12T20:26:01Z
https://api.github.com/repos/huggingface/datasets/issues/2214/comments
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package. ```python >>> from datasets import load_metric >>> metric = load_metric("glue", "sst2") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 502, in load_metric File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 66, in import_main_class File "/ext3/miniconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/ns4008/.cache/huggingface/modules/datasets_modules/metrics/glue/e4606ab9804a36bcd5a9cebb2cb65bb14b6ac78ee9e6d5981fa679a495dd55de/glue.py", line 105, in <module> @datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) AttributeError: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/414788?v=4", "events_url": "https://api.github.com/users/nsaphra/events{/privacy}", "followers_url": "https://api.github.com/users/nsaphra/followers", "following_url": "https://api.github.com/users/nsaphra/following{/other_user}", "gists_url": "https://api.github.com/users/nsaphra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nsaphra", "id": 414788, "login": "nsaphra", "node_id": "MDQ6VXNlcjQxNDc4OA==", "organizations_url": "https://api.github.com/users/nsaphra/orgs", "received_events_url": "https://api.github.com/users/nsaphra/received_events", "repos_url": "https://api.github.com/users/nsaphra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nsaphra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nsaphra/subscriptions", "type": "User", "url": "https://api.github.com/users/nsaphra" }
https://api.github.com/repos/huggingface/datasets/issues/2214/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2214/timeline
closed
false
2,214
null
2021-04-23T15:20:02Z
null
false
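A sketch of the `HF_SCRIPTS_VERSION` workaround from the comments; whether the variable is read at import time or at call time is not stated in the thread, so setting it before importing `datasets` is the safe ordering assumed here:

```python
import os

# Pin the remote script version so the conda-installed datasets 1.2.1
# fetches 1.2.1-compatible metric scripts instead of the ones on master.
os.environ["HF_SCRIPTS_VERSION"] = "1.2.1"

from datasets import load_metric

metric = load_metric("glue", "sst2")
```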
856,025,320
https://api.github.com/repos/huggingface/datasets/issues/2213
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2213/events
[]
null
2021-04-14T22:04:54Z
[]
https://github.com/huggingface/datasets/pull/2213
COLLABORATOR
null
false
null
[]
Fix lc_quad download checksum
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2213/reactions" }
MDExOlB1bGxSZXF1ZXN0NjEzNjcwODk2
{ "diff_url": "https://github.com/huggingface/datasets/pull/2213.diff", "html_url": "https://github.com/huggingface/datasets/pull/2213", "merged_at": "2021-04-14T13:42:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/2213.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2213" }
2021-04-12T14:16:59Z
https://api.github.com/repos/huggingface/datasets/issues/2213/comments
Fixes #2211
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/2213/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2213/timeline
closed
false
2,213
null
2021-04-14T13:42:25Z
null
true
855,999,133
https://api.github.com/repos/huggingface/datasets/issues/2212
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2212/events
[]
null
2023-10-03T16:09:19Z
[]
https://github.com/huggingface/datasets/issues/2212
NONE
completed
null
null
[ "Hi ! Apparently the data are not available from this url anymore. We'll replace it with the new url when it's available", "I saw this on their website when we request to download the dataset:\r\n![image](https://user-images.githubusercontent.com/19718818/114879600-fa458680-9e1e-11eb-9e05-f0963d68ff0f.png)\r\n\r\nCan we still request them link for the dataset and make a PR? @lhoestq @yjernite ", "I've contacted Martin (first author of the fquad paper) regarding a possible new url. Hopefully we can get one soon !", "They now made a website to force people who want to use the dataset for commercial purposes to seek a commercial license from them ...", "The script has been adopted to support manual download from the website, so I'm closing this issue." ]
Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2212/reactions" }
MDU6SXNzdWU4NTU5OTkxMzM=
null
2021-04-12T13:49:56Z
https://api.github.com/repos/huggingface/datasets/issues/2212/comments
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running: ```Python fquad = load_dataset("fquad") ``` which produces the following error: ``` Using custom data configuration default Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, post-processed: Unknown size, total: 9.76 MiB) to /root/.cache/huggingface/datasets/fquad/default/0.1.0/778dc2c85813d05ddd0c17087294d5f8f24820752340958070876b677af9f061... --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) <ipython-input-48-a2721797e23b> in <module>() ----> 1 fquad = load_dataset("fquad") 11 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 614 raise FileNotFoundError("Couldn't find file at {}".format(url)) 615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") --> 616 raise ConnectionError("Couldn't reach {}".format(url)) 617 618 # Try a second time ConnectionError: Couldn't reach https://storage.googleapis.com/illuin/fquad/train.json.zip ``` Does anyone know why that is and how to fix it?
{ "avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4", "events_url": "https://api.github.com/users/hanss0n/events{/privacy}", "followers_url": "https://api.github.com/users/hanss0n/followers", "following_url": "https://api.github.com/users/hanss0n/following{/other_user}", "gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hanss0n", "id": 21348833, "login": "hanss0n", "node_id": "MDQ6VXNlcjIxMzQ4ODMz", "organizations_url": "https://api.github.com/users/hanss0n/orgs", "received_events_url": "https://api.github.com/users/hanss0n/received_events", "repos_url": "https://api.github.com/users/hanss0n/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions", "type": "User", "url": "https://api.github.com/users/hanss0n" }
https://api.github.com/repos/huggingface/datasets/issues/2212/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2212/timeline
closed
false
2,212
null
2023-10-03T16:09:18Z
null
false
855,988,410
https://api.github.com/repos/huggingface/datasets/issues/2211
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2211/events
[]
null
2021-04-14T13:42:25Z
[]
https://github.com/huggingface/datasets/issues/2211
NONE
completed
null
null
[ "Hi,\r\n\r\nI've already opened a PR with the fix. If you are in a hurry, just build the project from source and run:\r\n```bash\r\ndatasets-cli test datasets/lc_quad --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n", "Ah sorry, I tried searching but couldn't find any related PR. \r\n\r\nThank you! " ]
Getting checksum error when trying to load lc_quad dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2211/reactions" }
MDU6SXNzdWU4NTU5ODg0MTA=
null
2021-04-12T13:38:58Z
https://api.github.com/repos/huggingface/datasets/issues/2211/comments
I'm having issues loading the [lc_quad](https://huggingface.co/datasets/lc_quad) dataset by running: ```Python lc_quad = load_dataset("lc_quad") ``` which is giving me the following error: ``` Using custom data configuration default Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, generated: 19.77 MiB, post-processed: Unknown size, total: 23.46 MiB) to /root/.cache/huggingface/datasets/lc_quad/default/2.0.0/5a98fe174603f5dec6df07edf1c2b4d2317210d2ad61f5a393839bca4d64e5a7... --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-42-404ace83f73c> in <module>() ----> 1 lc_quad = load_dataset("lc_quad") 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 37 if len(bad_urls) > 0: 38 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 40 logger.info("All the checksums matched successfully" + for_verification_name) 41 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/AskNowQA/LC-QuAD2.0/archive/master.zip'] ``` Does anyone know why this could be and how I can fix it?
{ "avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4", "events_url": "https://api.github.com/users/hanss0n/events{/privacy}", "followers_url": "https://api.github.com/users/hanss0n/followers", "following_url": "https://api.github.com/users/hanss0n/following{/other_user}", "gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hanss0n", "id": 21348833, "login": "hanss0n", "node_id": "MDQ6VXNlcjIxMzQ4ODMz", "organizations_url": "https://api.github.com/users/hanss0n/orgs", "received_events_url": "https://api.github.com/users/hanss0n/received_events", "repos_url": "https://api.github.com/users/hanss0n/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions", "type": "User", "url": "https://api.github.com/users/hanss0n" }
https://api.github.com/repos/huggingface/datasets/issues/2211/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2211/timeline
closed
false
2,211
null
2021-04-14T13:42:25Z
null
false
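Besides regenerating the dataset infos as described in the comments, a user-side sketch using the `ignore_verifications` flag of `load_dataset` from that era, which skips checksum/size verification (so a corrupted or changed download would go unnoticed):

```python
from datasets import load_dataset

# Skip the verification step that raises NonMatchingChecksumError
# for the outdated recorded checksum.
lc_quad = load_dataset("lc_quad", ignore_verifications=True)
```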
855,709,400
https://api.github.com/repos/huggingface/datasets/issues/2210
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2210/events
[]
null
2021-04-13T02:03:05Z
[]
https://github.com/huggingface/datasets/issues/2210
NONE
completed
null
null
[ "Hi ! Yes this is an issue with `datasets<=1.5.0`\r\nThis issue has been fixed by #2122 , we'll do a new release soon :)\r\nFor now you can test it on the `master` branch.", "Hi, thank you for your answer. I did not realize that my issue stems from the same problem. " ]
dataloading slow when using HUGE dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2210/reactions" }
MDU6SXNzdWU4NTU3MDk0MDA=
null
2021-04-12T08:33:02Z
https://api.github.com/repos/huggingface/datasets/issues/2210/comments
Hi, When I use datasets with 600GB data, the dataloading time increases significantly. I am experimenting with two datasets, and one is about 60GB and the other 600GB. Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle ddp training. When looking at the pytorch-lightning supported profile of two different runs, I see that fetching a batch (`get_train_batch`) consumes an unreasonable amount of time when data is large. What could be the cause? * 60GB data ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 200.33 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ run_training_epoch | 71.994 |1 | 71.994 | 35.937 | run_training_batch | 0.64373 |100 | 64.373 | 32.133 | optimizer_step_and_closure_0 | 0.64322 |100 | 64.322 | 32.108 | training_step_and_backward | 0.61004 |100 | 61.004 | 30.452 | model_backward | 0.37552 |100 | 37.552 | 18.745 | model_forward | 0.22813 |100 | 22.813 | 11.387 | training_step | 0.22759 |100 | 22.759 | 11.361 | get_train_batch | 0.066385 |100 | 6.6385 | 3.3138 | ``` * 600GB data ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 3285.6 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ run_training_epoch | 1397.9 |1 | 1397.9 | 42.546 | run_training_batch | 7.2596 |100 | 725.96 | 22.095 | optimizer_step_and_closure_0 | 7.2589 |100 | 725.89 | 22.093 | training_step_and_backward | 7.223 |100 | 722.3 | 21.984 | model_backward | 6.9662 |100 | 696.62 | 21.202 | get_train_batch | 6.322 |100 | 632.2 | 19.241 | model_forward | 0.24902 |100 | 24.902 | 0.75789 | training_step | 0.2485 |100 | 24.85 | 0.75633 | ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hwijeen", "id": 29157715, "login": "hwijeen", "node_id": "MDQ6VXNlcjI5MTU3NzE1", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "repos_url": "https://api.github.com/users/hwijeen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "type": "User", "url": "https://api.github.com/users/hwijeen" }
https://api.github.com/repos/huggingface/datasets/issues/2210/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2210/timeline
closed
false
2,210
null
2021-04-13T02:03:05Z
null
false
855,638,232
https://api.github.com/repos/huggingface/datasets/issues/2209
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2209/events
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
null
2021-04-12T17:55:52Z
[]
https://github.com/huggingface/datasets/pull/2209
MEMBER
null
false
null
[]
Add code of conduct to the project
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2209/reactions" }
MDExOlB1bGxSZXF1ZXN0NjEzMzQwMTI2
{ "diff_url": "https://github.com/huggingface/datasets/pull/2209.diff", "html_url": "https://github.com/huggingface/datasets/pull/2209", "merged_at": "2021-04-12T17:55:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/2209.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2209" }
2021-04-12T07:16:14Z
https://api.github.com/repos/huggingface/datasets/issues/2209/comments
Add code of conduct to the project and link it from README and CONTRIBUTING. This was already done in `transformers`.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2209/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2209/timeline
closed
false
2,209
null
2021-04-12T17:55:52Z
null
true
855,343,835
https://api.github.com/repos/huggingface/datasets/issues/2208
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2208/events
[]
null
2021-04-14T22:05:36Z
[]
https://github.com/huggingface/datasets/pull/2208
COLLABORATOR
null
false
null
[ "merging since the CI is fixed on master" ]
Remove Python2 leftovers
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2208/reactions" }
MDExOlB1bGxSZXF1ZXN0NjEzMTAxMzMw
{ "diff_url": "https://github.com/huggingface/datasets/pull/2208.diff", "html_url": "https://github.com/huggingface/datasets/pull/2208", "merged_at": "2021-04-14T13:40:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/2208.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2208" }
2021-04-11T16:08:03Z
https://api.github.com/repos/huggingface/datasets/issues/2208/comments
This PR removes Python 2 leftovers since this project targets Python 3.6+ (and as of 2020 Python 2 is no longer officially supported).
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/2208/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2208/timeline
closed
false
2,208
null
2021-04-14T13:40:51Z
null
true
855,267,383
https://api.github.com/repos/huggingface/datasets/issues/2207
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2207/events
[]
null
2022-06-01T16:23:08Z
[]
https://github.com/huggingface/datasets/issues/2207
NONE
completed
null
null
[ "Hi ! The ClassLabel feature type encodes the labels as integers.\r\nThe integer corresponds to the index of the label name in the `names` list of the ClassLabel.\r\nHere that means that the labels are 'entailment' (0), 'neutral' (1), 'contradiction' (2).\r\n\r\nYou can get the label names back by using `a.features['label'].int2str(i)`.\r\n", "Hi! You can also easily reorder the label with the [`Dataset.align_labels_with_mapping`](https://huggingface.co/docs/datasets/master/en/process#align) method." ]
making labels consistent across the datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2207/reactions" }
MDU6SXNzdWU4NTUyNjczODM=
null
2021-04-11T10:03:56Z
https://api.github.com/repos/huggingface/datasets/issues/2207/comments
Hi, for accessing the labels one can type ``` >>> a.features['label'] ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None) ``` The labels, however, are sometimes not consistent with the actual labels; for instance, in the case of XNLI the actual labels are 0, 1, 2, but if one tries to access them as above, they are entailment, neutral, contradiction. It would be great to have the labels consistent. Thanks
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
https://api.github.com/repos/huggingface/datasets/issues/2207/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2207/timeline
closed
false
2,207
null
2022-06-01T16:21:10Z
null
false
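A short sketch of the `ClassLabel` mapping described in the replies; the XNLI `en` config is assumed here for illustration, and the label order is the one quoted in the issue body:

```python
from datasets import load_dataset

dataset = load_dataset("xnli", "en", split="test")
label_feature = dataset.features["label"]

print(label_feature.names)       # ['entailment', 'neutral', 'contradiction']
print(label_feature.int2str(0))  # 'entailment'
print(label_feature.str2int("contradiction"))  # 2
```

On recent versions of the library, `dataset.align_labels_with_mapping({"entailment": 0, "neutral": 1, "contradiction": 2}, "label")` can then reorder the integer ids to match an external convention.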
855,252,415
https://api.github.com/repos/huggingface/datasets/issues/2206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2206/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2021-11-10T12:18:30Z
[]
https://github.com/huggingface/datasets/issues/2206
NONE
completed
null
null
[ "Hi,\r\n\r\nthe output of the tokenizers is treated specially in the lib to optimize the dataset size (see the code [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L138-L141)). It looks like that one of the values in a dictionary returned by the tokenizer is out of the assumed range.\r\nCan you please provide a minimal reproducible example for more help?", "Hi @yana-xuyan, thanks for reporting.\r\n\r\nAs clearly @mariosasko explained, `datasets` performs some optimizations in order to reduce the size of the dataset cache files. And one of them is storing the field `special_tokens_mask` as `int8`, which means that this field can only contain integers between `-128` to `127`. As your message error states, one of the values of this field is `50259`, and therefore it cannot be stored as an `int8`.\r\n\r\nMaybe we could implement a way to disable this optimization and allow using any integer value; although the size of the cache files would be much larger.", "I'm facing same issue @mariosasko @albertvillanova \r\n\r\n```\r\nArrowInvalid: Integer value 50260 not in range: -128 to 127\r\n```\r\n\r\nTo reproduce:\r\n```python\r\nSPECIAL_TOKENS = ['<bos>','<eos>','<speaker1>','<speaker2>','<pad>']\r\nATTR_TO_SPECIAL_TOKEN = {\r\n 'bos_token': '<bos>', \r\n 'eos_token': '<eos>', \r\n 'pad_token': '<pad>',\r\n 'additional_special_tokens': ['<speaker1>', '<speaker2>']\r\n }\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\", use_fast=False)\r\nnum_added_tokens =tokenizer.add_special_tokens(ATTR_TO_SPECIAL_TOKEN)\r\nvocab_size = len(self.tokenizer.encoder) + num_added_tokens\r\nvocab =tokenizer.get_vocab()\r\n\r\npad_index = tokenizer.pad_token_id\r\neos_index = tokenizer.eos_token_id\r\nbos_index = tokenizer.bos_token_id\r\nspeaker1_index = vocab[\"<speaker1>\"]\r\nspeaker2_index = vocab[\"<speaker2>\"]\r\n```\r\n\r\n```python\r\ntokenizer.decode(['50260'])\r\n'<speaker1>'\r\n```", "@mariosasko \r\nI am hitting this bug in the Bert tokenizer too. I see that @albertvillanova labeled this as a bug back in April. Has there been a fix released yet?\r\nWhat I did for now is to just disable the optimization in the HF library. @yana-xuyan and @thomas-happify, is that what you did and did that work for you?\r\n\r\n", "Hi @gregg-ADP, \r\n\r\nThis is still a bug.\r\n\r\nAs @albertvillanova has suggested, maybe it's indeed worth adding a variable to `config.py` to have a way to disable this behavior.\r\n\r\nIn the meantime, this forced optimization can be disabled by specifying `features` (of the returned examples) in the `map` call:\r\n```python\r\nfrom datasets import *\r\n... # dataset init\r\nds.map(process_example, features=Features({\"special_tokens_mask\": Sequence(Value(\"int32\")), ... rest of the features}) \r\n```\r\n\r\ncc @lhoestq so he is also aware of this issue", "Thanks for the quick reply @mariosasko. What I did was to changed the optimizer to use int32 instead of int8. \r\nWhat you're suggesting specifies the type for each feature explicitly without changing the HF code. This is definitely a better option. However, we are hitting a new error later:\r\n```\r\n File \"/Users/ccccc/PycharmProjects/aaaa-ml/venv-source/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'pos'\r\n\r\n```\r\nWhere 'pos' is the name of a new feature we added. 
Do you agree that your way of fixing the optimizer issue will not fix our new issue? If not, I will continue with this optimizer fix until we resolve our other issue.\r\n", "Hi @gwc4github,\r\n\r\nthe fix was merged a few minutes ago, and it doesn't require any changes on the user side (e.g. no need for specifying `features`). If you find time, feel free to install `datasets` from master with:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```\r\nand let us know if it works for your use case! " ]
Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2206/reactions" }
MDU6SXNzdWU4NTUyNTI0MTU=
null
2021-04-11T08:40:09Z
https://api.github.com/repos/huggingface/datasets/issues/2206/comments
I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below: Traceback (most recent call last): File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_single writer.write(example) File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 296, in write self.write_on_file() File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 270, in write_on_file pa_array = pa.array(typed_sequence) File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 108, in __arrow_array__ out = out.cast(pa.list_(self.optimized_int_type)) File "pyarrow/array.pxi", line 810, in pyarrow.lib.Array.cast File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/pyarrow/compute.py", line 281, in cast return call_function("cast", [arr], options) File "pyarrow/_compute.pyx", line 465, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 294, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Integer value 50259 not in range: -128 to 127 Do you have any idea about it?
{ "avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4", "events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}", "followers_url": "https://api.github.com/users/yana-xuyan/followers", "following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}", "gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yana-xuyan", "id": 38536635, "login": "yana-xuyan", "node_id": "MDQ6VXNlcjM4NTM2NjM1", "organizations_url": "https://api.github.com/users/yana-xuyan/orgs", "received_events_url": "https://api.github.com/users/yana-xuyan/received_events", "repos_url": "https://api.github.com/users/yana-xuyan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions", "type": "User", "url": "https://api.github.com/users/yana-xuyan" }
https://api.github.com/repos/huggingface/datasets/issues/2206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2206/timeline
closed
false
2,206
null
2021-11-10T12:04:28Z
null
false
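To make the quoted `features=` workaround concrete, a self-contained sketch; the column set is an assumption (it must describe every column of the mapped dataset), and the idea is to widen whichever tokenizer-style column actually carries values outside the int8 range:

```python
from datasets import Dataset, Features, Sequence, Value
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<speaker1>", "<speaker2>"]}
)

dataset = Dataset.from_dict({"text": ["<speaker1> hello", "<speaker2> hi"]})

def tokenize_batch(batch):
    return tokenizer(batch["text"], return_special_tokens_mask=True)

# Pin explicit integer widths so the writer's size optimization cannot
# downcast a column holding new special-token ids (> 127) to int8.
features = Features(
    {
        "text": Value("string"),
        "input_ids": Sequence(Value("int32")),
        "attention_mask": Sequence(Value("int8")),
        "special_tokens_mask": Sequence(Value("int32")),
    }
)

encoded = dataset.map(tokenize_batch, batched=True, features=features)
```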
855,207,605
https://api.github.com/repos/huggingface/datasets/issues/2205
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2205/events
[]
null
2021-04-12T17:53:34Z
[]
https://github.com/huggingface/datasets/pull/2205
CONTRIBUTOR
null
false
null
[]
Updating citation information on LinCE readme
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2205/reactions" }
MDExOlB1bGxSZXF1ZXN0NjEzMDAwMzYw
{ "diff_url": "https://github.com/huggingface/datasets/pull/2205.diff", "html_url": "https://github.com/huggingface/datasets/pull/2205", "merged_at": "2021-04-12T17:53:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/2205.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2205" }
2021-04-11T03:18:05Z
https://api.github.com/repos/huggingface/datasets/issues/2205/comments
Hi! I just updated the citation information in this PR. It had an additional BibTeX entry from one of the datasets used in LinCE followed by the LinCE BibTeX entry. I removed the former and added a link that shows the full list of citations for each dataset. Thanks!
{ "avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4", "events_url": "https://api.github.com/users/gaguilar/events{/privacy}", "followers_url": "https://api.github.com/users/gaguilar/followers", "following_url": "https://api.github.com/users/gaguilar/following{/other_user}", "gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gaguilar", "id": 5833357, "login": "gaguilar", "node_id": "MDQ6VXNlcjU4MzMzNTc=", "organizations_url": "https://api.github.com/users/gaguilar/orgs", "received_events_url": "https://api.github.com/users/gaguilar/received_events", "repos_url": "https://api.github.com/users/gaguilar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions", "type": "User", "url": "https://api.github.com/users/gaguilar" }
https://api.github.com/repos/huggingface/datasets/issues/2205/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2205/timeline
closed
false
2,205
null
2021-04-12T17:53:34Z
null
true
855,144,431
https://api.github.com/repos/huggingface/datasets/issues/2204
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2204/events
[]
null
2021-04-15T13:49:46Z
[]
https://github.com/huggingface/datasets/pull/2204
CONTRIBUTOR
null
false
null
[]
Add configurable options to `seqeval` metric
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2204/reactions" }
MDExOlB1bGxSZXF1ZXN0NjEyOTU1MzM2
{ "diff_url": "https://github.com/huggingface/datasets/pull/2204.diff", "html_url": "https://github.com/huggingface/datasets/pull/2204", "merged_at": "2021-04-15T13:49:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/2204.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2204" }
2021-04-10T19:58:19Z
https://api.github.com/repos/huggingface/datasets/issues/2204/comments
Fixes #2148 Adds options to use strict mode, different evaluation schemes and sample weights, and to adjust `zero_division` behavior when it is encountered. `seqeval` provides schemes as objects, hence the dynamic import from a string, to avoid making the user do the import themselves (thanks to @albertvillanova for the `importlib` idea).
{ "avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4", "events_url": "https://api.github.com/users/marrodion/events{/privacy}", "followers_url": "https://api.github.com/users/marrodion/followers", "following_url": "https://api.github.com/users/marrodion/following{/other_user}", "gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marrodion", "id": 44571847, "login": "marrodion", "node_id": "MDQ6VXNlcjQ0NTcxODQ3", "organizations_url": "https://api.github.com/users/marrodion/orgs", "received_events_url": "https://api.github.com/users/marrodion/received_events", "repos_url": "https://api.github.com/users/marrodion/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marrodion/subscriptions", "type": "User", "url": "https://api.github.com/users/marrodion" }
https://api.github.com/repos/huggingface/datasets/issues/2204/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2204/timeline
closed
false
2,204
null
2021-04-15T13:49:46Z
null
true
855,053,595
https://api.github.com/repos/huggingface/datasets/issues/2203
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2203/events
[]
null
2021-04-23T14:33:39Z
[]
https://github.com/huggingface/datasets/pull/2203
NONE
null
false
null
[ "Hi ! Can you add a description regarding this PR ? Why do you think we need to update the dummy data used to test the `banking77` dataset loading script ?", "Closing for inactivity. Feel free to re-open if you want to push this change" ]
updated banking77 train and test data
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2203/reactions" }
MDExOlB1bGxSZXF1ZXN0NjEyODg4MzA5
{ "diff_url": "https://github.com/huggingface/datasets/pull/2203.diff", "html_url": "https://github.com/huggingface/datasets/pull/2203", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2203.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2203" }
2021-04-10T12:10:10Z
https://api.github.com/repos/huggingface/datasets/issues/2203/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/6765330?v=4", "events_url": "https://api.github.com/users/hsali/events{/privacy}", "followers_url": "https://api.github.com/users/hsali/followers", "following_url": "https://api.github.com/users/hsali/following{/other_user}", "gists_url": "https://api.github.com/users/hsali/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hsali", "id": 6765330, "login": "hsali", "node_id": "MDQ6VXNlcjY3NjUzMzA=", "organizations_url": "https://api.github.com/users/hsali/orgs", "received_events_url": "https://api.github.com/users/hsali/received_events", "repos_url": "https://api.github.com/users/hsali/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hsali/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hsali/subscriptions", "type": "User", "url": "https://api.github.com/users/hsali" }
https://api.github.com/repos/huggingface/datasets/issues/2203/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2203/timeline
closed
false
2,203
null
2021-04-23T14:33:39Z
null
true
854,501,109
https://api.github.com/repos/huggingface/datasets/issues/2202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2202/events
[]
null
2021-04-12T17:58:00Z
[]
https://github.com/huggingface/datasets/pull/2202
MEMBER
null
false
null
[]
Add classes GenerateMode, DownloadConfig and Version to the documentation
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2202/reactions" }
MDExOlB1bGxSZXF1ZXN0NjEyNDM2ODMx
{ "diff_url": "https://github.com/huggingface/datasets/pull/2202.diff", "html_url": "https://github.com/huggingface/datasets/pull/2202", "merged_at": "2021-04-12T17:57:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/2202.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2202" }
2021-04-09T12:58:19Z
https://api.github.com/repos/huggingface/datasets/issues/2202/comments
Add documentation for classes `GenerateMode`, `DownloadConfig` and `Version`. Update the docstring of `load_dataset` to create cross-reference links to the classes. Related to #2187.
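Editor's note: a hedged sketch of how the newly documented classes fit together in a `load_dataset` call; the dataset name and cache path are placeholders, and the exact import paths may differ by library version.

```python
from datasets import load_dataset
from datasets.utils import DownloadConfig, GenerateMode

# DownloadConfig controls where and how raw files are downloaded
dl_config = DownloadConfig(cache_dir="/tmp/hf_downloads", force_download=False)  # placeholder path

ds = load_dataset(
    "squad",  # placeholder dataset
    download_config=dl_config,
    download_mode=GenerateMode.REUSE_DATASET_IF_EXISTS,
)
```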
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2202/timeline
closed
false
2,202
null
2021-04-12T17:57:59Z
null
true
854,499,563
https://api.github.com/repos/huggingface/datasets/issues/2201
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2201/events
[]
null
2021-04-12T13:32:17Z
[]
https://github.com/huggingface/datasets/pull/2201
MEMBER
null
false
null
[]
Fix ArrowWriter overwriting features in ArrowBasedBuilder
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2201/reactions" }
MDExOlB1bGxSZXF1ZXN0NjEyNDM1NTE3
{ "diff_url": "https://github.com/huggingface/datasets/pull/2201.diff", "html_url": "https://github.com/huggingface/datasets/pull/2201", "merged_at": "2021-04-12T13:32:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/2201.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2201" }
2021-04-09T12:56:19Z
https://api.github.com/repos/huggingface/datasets/issues/2201/comments
This should fix the issues with CSV loading experienced in #2153 and #2200. The CSV builder is an ArrowBasedBuilder that had an issue with the ArrowWriter used to write the arrow file from the CSV data. The writer wasn't initialized with the features passed by the user; therefore the writer was inferring the features from the arrow data, discarding the features passed by the user. I fixed that and updated the tests.
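Editor's note: a hedged sketch of the user-facing behavior this fix restores — explicit features passed to the CSV loader are kept instead of being replaced by inferred ones; file names are placeholders.

```python
from datasets import load_dataset, Features, Value, ClassLabel

features = Features({
    "sentence": Value("string"),
    "label": ClassLabel(names=["unacceptable", "acceptable"]),
})
ds = load_dataset("csv", data_files={"train": "train.csv"}, features=features)

# with the fix, the writer is initialized with these features,
# so the ClassLabel is no longer overwritten by an inferred int64 at write time
print(ds["train"].features["label"])
```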
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/2201/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2201/timeline
closed
false
2,201
null
2021-04-12T13:32:16Z
null
true
854,449,656
https://api.github.com/repos/huggingface/datasets/issues/2200
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2200/events
[]
null
2021-06-04T10:37:35Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/2200
NONE
completed
null
null
[ "Hi ! This might be related to #2153 \r\n\r\nYou're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch\r\nI'm opening a PR to fix this and also to figure out how it was not caught in the tests\r\n\r\nEDIT: opened #2201", "> Hi ! This might be related to #2153\r\n> \r\n> You're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch\r\n> I'm opening a PR to fix this and also to figure out how it was not caught in the tests\r\n> \r\n> EDIT: opened #2201\r\n\r\nGlad to hear that! Thank you for your fix, I'm new to huggingface, it's a fantastic project 😁" ]
_prepare_split will overwrite DatasetBuilder.info.features
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2200/reactions" }
MDU6SXNzdWU4NTQ0NDk2NTY=
null
2021-04-09T11:47:13Z
https://api.github.com/repos/huggingface/datasets/issues/2200/comments
Hi, here is my issue: I initialized a Csv dataset builder with specific features: ``` def get_dataset_features(data_args): features = {} if data_args.text_features: features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")}) if data_args.num_features: features.update({text_feature: hf_features.Value("float32") for text_feature in data_args.num_features.strip().split(",")}) if data_args.label_classes: features["label"] = hf_features.ClassLabel(names=data_args.label_classes.strip().split(",")) else: features["label"] = hf_features.Value("float32") return hf_features.Features(features) datasets = load_dataset(extension, data_files=data_files, sep=data_args.delimiter, header=data_args.header, column_names=data_args.column_names.split(",") if data_args.column_names else None, features=get_dataset_features(data_args=data_args)) ``` The `features` are printed as below before `builder_instance.as_dataset` is called: ``` {'label': ClassLabel(num_classes=2, names=['unacceptable', 'acceptable'], names_file=None, id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)} ``` But after `builder_instance.as_dataset` is called for the Csv dataset builder, the `features` are changed to: ``` {'label': Value(dtype='int64', id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)} ``` After digging into the code, I realized that in `ArrowBasedBuilder._prepare_split`, the DatasetBuilder's info features are overwritten by the `ArrowWriter`'s `_features`. But the `ArrowWriter` is initialized without passing `features`. So my concern is: must this overwrite happen, or should there be an option to pass `features` to the `_prepare_split` function?
{ "avatar_url": "https://avatars.githubusercontent.com/u/4157614?v=4", "events_url": "https://api.github.com/users/Gforky/events{/privacy}", "followers_url": "https://api.github.com/users/Gforky/followers", "following_url": "https://api.github.com/users/Gforky/following{/other_user}", "gists_url": "https://api.github.com/users/Gforky/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Gforky", "id": 4157614, "login": "Gforky", "node_id": "MDQ6VXNlcjQxNTc2MTQ=", "organizations_url": "https://api.github.com/users/Gforky/orgs", "received_events_url": "https://api.github.com/users/Gforky/received_events", "repos_url": "https://api.github.com/users/Gforky/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Gforky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gforky/subscriptions", "type": "User", "url": "https://api.github.com/users/Gforky" }
https://api.github.com/repos/huggingface/datasets/issues/2200/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2200/timeline
closed
false
2,200
null
2021-06-04T10:37:35Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
854,417,318
https://api.github.com/repos/huggingface/datasets/issues/2199
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2199/events
[]
null
2021-04-09T15:57:05Z
[]
https://github.com/huggingface/datasets/pull/2199
MEMBER
null
false
null
[ "Hi @lhoestq, could you please check if this makes sense? Thanks.", "What about using `_indices_data_files` field in save_to_disk instead of `_indices_files` ?\r\nThis way future datasets can also be reloaded from older versions of the lib\r\n\r\n`_indices_files` was introduced in a recent PR and was not released", "Yes, I have seen it is not released yet...\r\n\r\nYou are right! It was your awesome PR on Tables which renamed this. If there is no particular reason for this renaming, yes, we could switch it back to the previous `_indices_data_files`. ;)" ]
Fix backward compatibility in Dataset.load_from_disk
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2199/reactions" }
MDExOlB1bGxSZXF1ZXN0NjEyMzY0ODU3
{ "diff_url": "https://github.com/huggingface/datasets/pull/2199.diff", "html_url": "https://github.com/huggingface/datasets/pull/2199", "merged_at": "2021-04-09T15:57:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/2199.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2199" }
2021-04-09T11:01:10Z
https://api.github.com/repos/huggingface/datasets/issues/2199/comments
Fix backward compatibility when loading from disk an old dataset saved to disk with indices using key "_indices_data_files". Related to #2195.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2199/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2199/timeline
closed
false
2,199
null
2021-04-09T15:57:05Z
null
true
854,357,481
https://api.github.com/repos/huggingface/datasets/issues/2198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2198/events
[]
null
2021-04-16T14:11:46Z
[]
https://github.com/huggingface/datasets/pull/2198
CONTRIBUTOR
null
false
null
[ "From offline discussions: we want to make the permissions handling consistent with `transformers`. However from discussion in https://github.com/huggingface/transformers/pull/11119 it looks like it might not be a good solution to provide this argument. Users should use umask for now, and we'll see how things evolve.\r\n\r\n@bhavitvyamalik I'm closing the PR for now if you don't mind" ]
added file_permission in load_dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2198/reactions" }
MDExOlB1bGxSZXF1ZXN0NjEyMzE0MTIz
{ "diff_url": "https://github.com/huggingface/datasets/pull/2198.diff", "html_url": "https://github.com/huggingface/datasets/pull/2198", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2198.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2198" }
2021-04-09T09:39:06Z
https://api.github.com/repos/huggingface/datasets/issues/2198/comments
As discussed in #2065 I've added a `file_permission` argument in `load_dataset`. Added mainly 2 things here: 1) The permissions of downloaded datasets, when converted to .arrow files, can be changed with the `file_permission` argument in `load_dataset` (default is 0o644 only) 2) In case the user uses `map` later on to generate another cache file of the dataset, it ensures the permissions of the newly generated file are similar to those of the `*-train.arrow` file inside the cache_dir for that dataset.
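Editor's note: since this PR was ultimately closed in favor of umask (see the comment above), here is a hedged sketch of the recommended workaround rather than the proposed `file_permission` argument; the data file is a placeholder.

```python
import os
from datasets import load_dataset

# 0o077 masks the group/other permission bits, so newly written
# cache files end up owner-only (0o600) instead of the default 0o644
os.umask(0o077)
ds = load_dataset("csv", data_files="data.csv")  # placeholder file
```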
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
https://api.github.com/repos/huggingface/datasets/issues/2198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2198/timeline
closed
false
2,198
null
2021-04-16T14:11:46Z
null
true
854,356,559
https://api.github.com/repos/huggingface/datasets/issues/2197
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2197/events
[]
null
2021-04-09T09:54:40Z
[]
https://github.com/huggingface/datasets/pull/2197
MEMBER
null
false
null
[]
fix missing indices_files in load_from_disk
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2197/reactions" }
MDExOlB1bGxSZXF1ZXN0NjEyMzEzMzQw
{ "diff_url": "https://github.com/huggingface/datasets/pull/2197.diff", "html_url": "https://github.com/huggingface/datasets/pull/2197", "merged_at": "2021-04-09T09:54:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/2197.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2197" }
2021-04-09T09:37:57Z
https://api.github.com/repos/huggingface/datasets/issues/2197/comments
This should fix #2195. `load_from_disk` was failing if there was no "_indices_files" field in state.json. This can happen if the dataset has no indices mapping.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/2197/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2197/timeline
closed
false
2,197
null
2021-04-09T09:54:39Z
null
true
854,126,114
https://api.github.com/repos/huggingface/datasets/issues/2196
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2196/events
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
null
2021-04-12T05:25:29Z
[]
https://github.com/huggingface/datasets/issues/2196
NONE
completed
null
null
[ "Hi ! Files that starts with `cache-*` are cached computation files, i.e. they are the cached results of map/filter/cast/etc. operations. For example if you used `map` on your dataset to transform it, then the resulting dataset is going to be stored and cached in a `cache-*` file. These files are used to avoid having to load the dataset in RAM, even after many transforms", "Thanks @lhoestq! Hmm.. that's strange because I specifically turned off auto caching, and saved mapped result, using `save_to_disk`, to another location. At this location, the following file is created:`355G\tcache-ed205e500a7dc44c.arrow`\r\n\r\nTo my observation, both `load_dataset` and `map` creates `cache-*` files, and I wonder what the `cache-*` file from `load_dataset` is for (as I believe the same information is stored in `json-train.arrow`.", "This is a wrong report -- `cache-*` files are created only my `map`, not by `load_dataset`. " ]
`load_dataset` caches two arrow files?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2196/reactions" }
MDU6SXNzdWU4NTQxMjYxMTQ=
null
2021-04-09T03:49:19Z
https://api.github.com/repos/huggingface/datasets/issues/2196/comments
Hi, I am using datasets to load a large json file of 587G. I checked the cache folder and found that there are two arrow files created: * `cache-ed205e500a7dc44c.arrow` - 355G * `json-train.arrow` - 582G Why is the first file created? If I delete it, would I still be able to `load_from_disk`?
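Editor's note: a hedged sketch of the distinction clarified in the comments above — `json-train.arrow` is written by `load_dataset`, while `cache-*` files hold cached `map` results and can be dropped with `cleanup_cache_files()`; the data file is a placeholder.

```python
from datasets import load_dataset

ds = load_dataset("json", data_files="data.json", split="train")  # writes json-train.arrow
mapped = ds.map(lambda x: x)  # writes a cache-<fingerprint>.arrow next to it

print(mapped.cache_files)               # lists the arrow files backing the dataset
removed = mapped.cleanup_cache_files()  # deletes cache-* files not currently in use
print(f"removed {removed} cache file(s)")
```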
{ "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hwijeen", "id": 29157715, "login": "hwijeen", "node_id": "MDQ6VXNlcjI5MTU3NzE1", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "repos_url": "https://api.github.com/users/hwijeen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "type": "User", "url": "https://api.github.com/users/hwijeen" }
https://api.github.com/repos/huggingface/datasets/issues/2196/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2196/timeline
closed
false
2,196
null
2021-04-12T05:25:29Z
null
false
854,070,194
https://api.github.com/repos/huggingface/datasets/issues/2195
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2195/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2021-04-09T09:55:09Z
[]
https://github.com/huggingface/datasets/issues/2195
NONE
completed
null
null
[ "Thanks for reporting @samsontmr.\r\n\r\nIt seems a backward compatibility issue...", "Thanks @samsontmr this should be fixed on master now\r\n\r\nFeel free to reopen if you're still having issues" ]
KeyError: '_indices_files' in `arrow_dataset.py`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2195/reactions" }
MDU6SXNzdWU4NTQwNzAxOTQ=
null
2021-04-09T01:37:12Z
https://api.github.com/repos/huggingface/datasets/issues/2195/comments
After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset. Trace: ``` Traceback (most recent call last): File "load_data.py", line 11, in <module> dataset = load_from_disk(SRC) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line 784, in load_from_disk return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 692, in load_from_disk dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 634, in load_from_disk if state["_indices_files"]: KeyError: '_indices_files' ``` I believe this is the line causing the error since there may not be a `_indices_files` key in the older versions: https://github.com/huggingface/datasets/blob/b70141e3c5149430951773aaa0155555c5fb3e76/src/datasets/arrow_dataset.py#L634 May I suggest using `state.get()` instead of directly indexing the dictionary? @lhoestq
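Editor's note: a hedged sketch of the suggested defensive read, also covering the older "_indices_data_files" key that #2199 restores; the state.json path is hypothetical.

```python
import json

with open("my_dataset/state.json") as f:  # hypothetical save_to_disk output path
    state = json.load(f)

# dict.get avoids the KeyError on files written by older versions of the library,
# whichever key name the saving version used
indices_files = state.get("_indices_files") or state.get("_indices_data_files") or []
print(indices_files)
```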
{ "avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4", "events_url": "https://api.github.com/users/samsontmr/events{/privacy}", "followers_url": "https://api.github.com/users/samsontmr/followers", "following_url": "https://api.github.com/users/samsontmr/following{/other_user}", "gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/samsontmr", "id": 15007950, "login": "samsontmr", "node_id": "MDQ6VXNlcjE1MDA3OTUw", "organizations_url": "https://api.github.com/users/samsontmr/orgs", "received_events_url": "https://api.github.com/users/samsontmr/received_events", "repos_url": "https://api.github.com/users/samsontmr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions", "type": "User", "url": "https://api.github.com/users/samsontmr" }
https://api.github.com/repos/huggingface/datasets/issues/2195/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2195/timeline
closed
false
2,195
null
2021-04-09T09:54:39Z
null
false
853,909,452
https://api.github.com/repos/huggingface/datasets/issues/2194
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2194/events
[]
null
2021-04-09T16:56:50Z
[]
https://github.com/huggingface/datasets/issues/2194
CONTRIBUTOR
completed
null
null
[ "\r\nThis wasn't a `datasets` problem, but `transformers`' and it was solved here https://github.com/huggingface/transformers/pull/11168\r\n" ]
py3.7: TypeError: can't pickle _LazyModule objects
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2194/reactions" }
MDU6SXNzdWU4NTM5MDk0NTI=
null
2021-04-08T21:02:48Z
https://api.github.com/repos/huggingface/datasets/issues/2194/comments
While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install: ``` git clone https://github.com/huggingface/transformers cd transformers pip install -e .[testing] export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \ examples/language-modeling/run_clm.py --model_name_or_path distilgpt2 --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 --do_train --max_train_samples 1 \ --per_device_train_batch_size $BS --output_dir /tmp/test-clm --block_size 128 --logging_steps 1 \ --fp16 ``` ``` Traceback (most recent call last): File "examples/language-modeling/run_clm.py", line 453, in <module> main() File "examples/language-modeling/run_clm.py", line 336, in main load_from_cache_file=not data_args.overwrite_cache, File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in map for k, dataset in self.items() File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp> for k, dataset in self.items() File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1259, in map update_data=update_data, File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 157, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 158, in wrapper self._fingerprint, transform, kwargs_for_fingerprint File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint hasher.update(transform_args[key]) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 57, in update self.m.update(self.hash(value).encode("utf-8")) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 53, in hash return cls.hash_default(value) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 46, in hash_default return cls.hash_bytes(dumps(value)) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 389, in dumps dump(obj, file) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 361, in dump Pickler(file, recurse=True).dump(obj) File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 556, in save_function obj=obj, File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 524, in save rv = reduce(self.proto) TypeError: can't pickle _LazyModule objects ``` ``` $ python --version Python 3.7.4 $ python -m torch.utils.collect_env Collecting environment information... PyTorch version: 1.8.0.dev20210110+cu110 Is debug build: False CUDA used to build PyTorch: 11.0 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.2 LTS (x86_64) GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.16.3 ``` Thanks.
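Editor's note: for context, a hedged sketch of the entry point the traceback goes through — `map` fingerprints its arguments by dill-pickling them with `recurse=True`, which also serializes the function's global references, where the unpicklable `_LazyModule` from `transformers` was sitting under py3.7. The names follow the modules shown in the traceback.

```python
from datasets.fingerprint import Hasher

def preprocess(examples):
    return examples

# Hasher.hash dill-dumps the function (and the globals it references) to bytes,
# then hashes them; anything unpicklable in those globals raises at this point
print(Hasher.hash(preprocess))
```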
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
https://api.github.com/repos/huggingface/datasets/issues/2194/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2194/timeline
closed
false
2,194
null
2021-04-09T01:52:57Z
null
false
853,725,707
https://api.github.com/repos/huggingface/datasets/issues/2193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2193/events
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
null
2021-04-26T16:13:59Z
[]
https://github.com/huggingface/datasets/issues/2193
CONTRIBUTOR
completed
null
null
[ "Hi ! Yes we are working on making `filter` significantly faster. You can look at related PRs here: #2060 #2178 \r\n\r\nI think you can expect to have the fast version of `filter` available next week.\r\n\r\nWe'll make it only select one column, and we'll also make the overall filtering operation way faster by avoiding many arrow<->python conversions especially during writing.\r\n\r\nI'll let you know how it goes !", "@lhoestq Thanks for the response— it's great to hear that we'll be getting a much faster `filter` method soon. However, my use case does also involve using `map` over a single column in order to pre-compute roughly uniformly sized batches, and right now that is also very slow. Is there any plan to make `map` faster for single column operations?\r\n\r\nIf that's not a priority for the maintainers right now, I could try my hand at adding the feature, but I can't guarantee I would do a good job given my lack of familiarity with pyarrow.", "Currently the optimal setup for single-column computations is probably to do something like\r\n```python\r\nresult = dataset.map(f, input_columns=\"my_col\", remove_columns=dataset.column_names)\r\n```\r\nThis has two advantages:\r\n- input_columns=\"my_col\" allows to only read the column \"my_col\"\r\n- remove_columns=dataset.column_names makes `map` only keep the output of your function `f`, and it drops the other columns of the dataset instead of keeping them.\r\n\r\nLet me know if it improves speed on your side.\r\n\r\nYou can also get more speed by using `batched=True` and setting `num_proc=` for multiprocessing", "Hi @lhoestq ,\r\n\r\nI'm hijacking this issue, because I'm currently trying to do the approach you recommend:\r\n\r\n> Currently the optimal setup for single-column computations is probably to do something like\r\n> \r\n> ```python\r\n> result = dataset.map(f, input_columns=\"my_col\", remove_columns=dataset.column_names)\r\n> ```\r\n\r\nHere is my code: (see edit, in which I added a simplified version\r\n\r\n```\r\nThis is the error:\r\n```bash\r\npyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000\r\n```\r\nI wonder why this error occurs, when I delete every column? Can you give me a hint?\r\n\r\n### Edit:\r\nI preprocessed my dataset before (using map with the features argument) and saved it to disk. May this be part of the error? I can iterate over the\r\ncomplete dataset and print every sample before calling map. There seems to be no other problem with the dataset.\r\n\r\nI tried to simplify the code that crashes:\r\n\r\n```python\r\n# works\r\nlog.debug(dataset.column_names)\r\nlog.debug(dataset)\r\nfor i, sample in enumerate(dataset):\r\n log.debug(i, sample)\r\n\r\n# crashes\r\ncounted_dataset = dataset.map(\r\n lambda x: {\"a\": list(range(20))},\r\n input_columns=column,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=False,\r\n num_proc=num_workers,\r\n batched=True,\r\n)\r\n```\r\n\r\n```\r\npyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000\r\n```\r\n\r\nEdit2: \r\n\r\nMay this be a problem with a schema I set when preprocessing the dataset before? 
I tried to add the `features` argument to the function and then I get a new error:\r\n\r\n```python\r\n# crashes\r\ncounted_dataset = dataset.map(\r\n lambda x: {\"a\": list(range(20))},\r\n input_columns=column,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=False,\r\n num_proc=num_workers,\r\n batched=True,\r\n features=datasets.Features(\r\n {\r\n \"a\": datasets.Sequence(datasets.Value(\"int32\"))\r\n }\r\n )\r\n)\r\n```\r\n\r\n```\r\n File \"env/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1704, in _map_single\r\n writer.write_batch(batch)\r\n File \"env/lib/python3.8/site-packages/datasets/arrow_writer.py\", line 312, in write_batch\r\n col_type = schema.field(col).type if schema is not None else None\r\n File \"pyarrow/types.pxi\", line 1341, in pyarrow.lib.Schema.field\r\nKeyError: 'Column tokens does not exist in schema'\r\n```", "Hi ! Can you open a separate issue for that ?\r\nAlso if you could provide a google colab or a sample code to reproduce this issue that would be helpful.\r\nOn my side I was not able to reproduce this error.", "@lhoestq Sorry I'm just responding now. I'm currently using your recommendation for the map on a single column, and I've gotten it to be fast enough to sort of work for my use case by just setting `num_proc=10`, although it's still quite slow. It's clear that it is still loading the entirety of each row into memory and then discarding everything except the selected column, instead of exploiting the columnar data format to only load the selected column.\r\n\r\nMy code is like this:\r\n```\r\n self.dataset = self.dataset.sort('num_tokens')\r\n batch_dataset = self.dataset.map(\r\n\tcompute_uniform_sized_batches,\r\n\tbatched=True, batch_size=10_000, num_proc=10, input_columns=['num_tokens'],\r\n\tremove_columns=get_columns_all_equal(self.dataset),\r\n\twith_indices=True,\r\n\tfn_kwargs=dict(max_size=tokens_per_batch)\r\n)\r\nself.batches = {\r\n\tname: list(zip(split['start'], split['length']))\r\n\tfor name, split in batch_dataset.items()\r\n}\r\n```\r\nI find that the processes with higher IDs take significantly longer to complete, presumably because the dataset is sorted by article length and they're loading the entire article text into memory, instead of just the 'num_tokens' column.\r\n\r\nI should note that my batching procedure would work best if I just used `batch_size=None` and loaded the whole column into memory at once, but I found that this was intolerably slow and gave me no progress information, so I'm using the less than ideal `batch_size=10_000`.", "Hi @norabelrose ! I'm glad you managed to make this work on your side.\r\nRegarding memory usage, you can try to drop the columns that you don't want to use for your `map` for now.\r\n\r\nIn the future we'll try to find a way to not load unnecessary columns in memory in `map`. Currently the way it works is that it gets the batch as a python dict, then it updates it using the output of your mapping function, and finally it removes columns from `remove_columns`. 
Therefore for a moment some columns are loaded in memory even if you remove them or don't use them for your mapping function.\r\n\r\nIt would be nice to have a way to optimize memory for cases such as yours !", "@lhoestq After looking through the source code, it looks like the following solution has at least some chance of working:\r\n- refactor `Dataset.map()` so that the `input_columns` parameter is implemented by using the `self.formatted_as()` context manager with `columns=input_columns`\r\n- change `Dataset._getitem()` so that it passes `self._data.drop(drop_columns)` to the `query_table()` function whenever `format_columns` is non-None and `output_all_columns` is False, instead of `self._data` itself", "Looks like a great direction :)\r\nNote that `query_table` doesn't bring data into memory. Only `format_table` does.\r\nAlso the dataset may already have a format with `columns=` already defined so we would need to define the formatted `input_dataset` like:\r\n```python\r\n# before the `map` main for loop\r\ninput_columns = input_columns if input_columns is not None else self.column_names\r\nif not self._output_all_columns:\r\n columns = [col for col in input_columns if self._format_columns is None or col in self._format_columns]\r\n input_dataset = self.with_format(\r\n type=self._format_type,\r\n columns=columns\r\n )\r\nelse:\r\n # in this case we could find a way to filter both format_columns and unformatted columns eventually\r\n input_dataset = self\r\n# then input_dataset can be used in the main for loop of `map`\r\n```\r\n\r\nEDIT: oh and regarding streaming format versus file format for arrow, we plan to start using the file format #1933 at one point (though I'm not sure if it would improve performance)", "Good to know about `query_table` not bringing anything into memory. I was under the impression that it did because a while back I looked at my `map` operation in pdb and it looked like it was spending forever in line 93 of formatting.py, `return pa.concat_tables(....)`, although that was before the `fast_slice` interpolation search was implemented, so it may have had more to do with the slow ChunkedArray slice implementation than anything else.\r\n\r\nIf `query_table` is I/O free then the fix may be as simple as just adding this to line 1779 of arrow_dataset.py:\r\n```python\r\n# Only load the columns we actually need\r\nif input_columns:\r\n stack.enter_context(self.formatted_as(\r\n self._format_type,\r\n columns=input_columns,\r\n output_all_columns=False,\r\n **self._format_kwargs\r\n ))\r\n```\r\nIt's not clear to me why the `[col for col in input_columns if self._format_columns is None or col in self._format_columns]` check would be necessary— it seems like either `input_columns` should simply temporarily override the `_format_columns` within the `map` operation, or we should throw an error if there are any conflicts. Currently it doesn't look like this case is checked for at all within `map`, but maybe I'm just missing it.", "`query_table` simply slices/concatenates parts of the table. 
The actual data inside the table is not brought in memory.\r\nAlso I'm more in favor of declaring `input_dataset = self.with_format(...)` since `formatted_as` may update the dataset fingerprint of `self`, which is not expected when someone runs `map`.\r\n\r\n> It's not clear to me why the [col for col in input_columns if self._format_columns is None or col in self._format_columns] check would be necessary— it seems like either input_columns should simply temporarily override the _format_columns within the map operation, or we should throw an error if there are any conflicts. Currently it doesn't look like this case is checked for at all within map, but maybe I'm just missing it.\r\n\r\nActually yes we can just use input_columns. And we do need to add a check to make sure there are not conflicts or this could lead to confusing errors.", "That sounds good to me! I just submitted a PR (#2246) implementing your approach. I also changed how `_query_table` handles Iterable keys since it still seemed like `pa.concat_tables` was taking a long time to create the table for each batch. Now my whole `map()` operation takes 1 min 46 seconds where it used to take somewhere on the order of 10 minutes." ]
Filtering/mapping on one column is very slow
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2193/reactions" }
MDU6SXNzdWU4NTM3MjU3MDc=
null
2021-04-08T18:16:14Z
https://api.github.com/repos/huggingface/datasets/issues/2193/comments
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation. I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should. Indeed, `filter` currently just calls `map`, and I found that in `_map_single` on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function. It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that— I'm not very familiar with the pyarrow API. I know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset. PS: This is definitely not a "dataset request." I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible.
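Editor's note: a hedged, self-contained sketch of the single-column pattern the maintainers recommend in the comments above; `double` is a hypothetical stand-in for the issue's batching function.

```python
from datasets import Dataset

dataset = Dataset.from_dict({"text": ["a", "bb", "ccc"], "num_tokens": [1, 2, 3]})

def double(num_tokens):  # hypothetical single-column function
    return {"doubled": [n * 2 for n in num_tokens]}

# input_columns feeds only "num_tokens" to the function as a positional argument;
# remove_columns drops everything else so only the function's output is kept
result = dataset.map(
    double,
    input_columns="num_tokens",
    remove_columns=dataset.column_names,
    batched=True,
)
print(result.column_names)  # ['doubled']
```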
{ "avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4", "events_url": "https://api.github.com/users/norabelrose/events{/privacy}", "followers_url": "https://api.github.com/users/norabelrose/followers", "following_url": "https://api.github.com/users/norabelrose/following{/other_user}", "gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/norabelrose", "id": 39116809, "login": "norabelrose", "node_id": "MDQ6VXNlcjM5MTE2ODA5", "organizations_url": "https://api.github.com/users/norabelrose/orgs", "received_events_url": "https://api.github.com/users/norabelrose/received_events", "repos_url": "https://api.github.com/users/norabelrose/repos", "site_admin": false, "starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions", "type": "User", "url": "https://api.github.com/users/norabelrose" }
https://api.github.com/repos/huggingface/datasets/issues/2193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2193/timeline
closed
false
2,193
null
2021-04-26T16:13:59Z
null
false
853,547,910
https://api.github.com/repos/huggingface/datasets/issues/2192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2192/events
[]
null
2021-04-08T15:47:41Z
[]
https://github.com/huggingface/datasets/pull/2192
MEMBER
null
false
null
[]
Fix typo in huggingface hub
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2192/reactions" }
MDExOlB1bGxSZXF1ZXN0NjExNjE5NTY0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2192.diff", "html_url": "https://github.com/huggingface/datasets/pull/2192", "merged_at": "2021-04-08T15:47:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/2192.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2192" }
2021-04-08T14:42:24Z
https://api.github.com/repos/huggingface/datasets/issues/2192/comments
pip knows how to resolve to `huggingface_hub`, but conda doesn't! The `packaging` dependency is also required for the build to complete.
{ "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LysandreJik", "id": 30755778, "login": "LysandreJik", "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "repos_url": "https://api.github.com/users/LysandreJik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "type": "User", "url": "https://api.github.com/users/LysandreJik" }
https://api.github.com/repos/huggingface/datasets/issues/2192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2192/timeline
closed
false
2,192
null
2021-04-08T15:47:40Z
null
true
853,364,204
https://api.github.com/repos/huggingface/datasets/issues/2191
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2191/events
[ { "color": "B67A40", "default": false, "description": "Restructuring existing code without changing its external behavior", "id": 2851292821, "name": "refactoring", "node_id": "MDU6TGFiZWwyODUxMjkyODIx", "url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring" } ]
null
2021-04-19T07:53:11Z
[]
https://github.com/huggingface/datasets/pull/2191
MEMBER
null
false
{ "closed_at": "2021-04-20T16:50:46Z", "closed_issues": 4, "created_at": "2021-04-09T13:07:51Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-04-16T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/1", "id": 6644198, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels", "node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==", "number": 1, "open_issues": 0, "state": "closed", "title": "1.6", "updated_at": "2021-04-20T16:50:46Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/1" }
[ "I find very interesting that idea of using a fixture instead!\r\n\r\nLet me rework a little bit this PR, @lhoestq.", "@lhoestq, as this is a big refactoring, I had many problems to solve the conflicts with the master branch...\r\n\r\nTherefore, I think it is better to merge this as it is, and then to make other PRs with additional refactorings, before I get conflicts again with the master branch...", "There are still some conflicts that prevent merging.\r\nMoreover I noticed that you added one fixture per method of the Dataset object to be mocked. The code of all these fixtures is pretty much the same, feel free to factorize them into one fixture.\r\n\r\nAlso feel free to create another branch from `master` if you don't want to fix the conflicts of this branch.\r\nLet me know if I can help you on this", "@lhoestq, yes, the new conflicts appeared after today merge commits on master...\r\n\r\nI am definitely going to split this PR into smaller ones in order to avoid having to resolve many conflicts after each commit on master. There are lots of conflicts and these are painful to resolve." ]
Refactorize tests to use Dataset as context manager
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2191/reactions" }
MDExOlB1bGxSZXF1ZXN0NjExNDY1Nzc0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2191.diff", "html_url": "https://github.com/huggingface/datasets/pull/2191", "merged_at": "2021-04-19T07:53:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/2191.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2191" }
2021-04-08T11:21:04Z
https://api.github.com/repos/huggingface/datasets/issues/2191/comments
Refactorize Dataset tests to use Dataset as context manager.
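Editor's note: a hedged sketch of what the refactoring looks like in test code, assuming `Dataset` supports the context-manager protocol as the PR title states; the fixture shape follows the review discussion about factorizing per-method fixtures into one.

```python
import pytest
from datasets import Dataset

@pytest.fixture
def in_memory_dataset():
    # the context manager guarantees the dataset's resources are released after the test
    with Dataset.from_dict({"col_1": [0, 1], "col_2": ["a", "b"]}) as dset:
        yield dset

def test_num_rows(in_memory_dataset):
    assert in_memory_dataset.num_rows == 2
```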
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/2191/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2191/timeline
closed
false
2,191
null
2021-04-19T07:53:10Z
null
true
853,181,564
https://api.github.com/repos/huggingface/datasets/issues/2190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2190/events
[]
null
2021-05-24T10:03:55Z
[]
https://github.com/huggingface/datasets/issues/2190
NONE
completed
null
null
[ "Hi @anassalamah,\r\n\r\nCould you please try with this:\r\n```python\r\ntrain_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[:98%]')\r\nval_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[98%:]')\r\n```", "Hello @albertvillanova, \r\n\r\nThanks for the suggestion. I didn't know you could do that. however, it didn't resolve the issue\r\n\r\n![image](https://user-images.githubusercontent.com/8571003/114169966-ec819400-993a-11eb-8a67-930f9a9b2290.png)\r\n" ]
News_commentary dataset returns translation pairs that don't match the specified language pair
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2190/reactions" }
MDU6SXNzdWU4NTMxODE1NjQ=
null
2021-04-08T07:53:43Z
https://api.github.com/repos/huggingface/datasets/issues/2190/comments
I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs, but found that some examples are Arabic-to-Hindi translations instead. ``` from itertools import chain from datasets import load_dataset train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]') val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]') # filtering out examples that are not ar-en translations but ar-hi val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312, 1327), range(1384, 1399), range(1030, 1042)), with_indices=True) ``` * I'm fairly new to using datasets, so I might be doing something wrong.
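For reference, a script-based heuristic could replace the hard-coded index ranges above — a rough sketch, assuming the standard `translation` dict structure of news_commentary examples; the Unicode-range check is a heuristic, not part of the datasets API:

```python
# Drop rows whose "ar" side is actually written in Devanagari
# (the script used for Hindi). Heuristic only.
def looks_devanagari(text):
    # Devanagari code points live in U+0900-U+097F.
    return any("\u0900" <= ch <= "\u097f" for ch in text)

val_ds = val_ds.filter(
    lambda ex: not looks_devanagari(ex["translation"]["ar"])
)
```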
{ "avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4", "events_url": "https://api.github.com/users/anassalamah/events{/privacy}", "followers_url": "https://api.github.com/users/anassalamah/followers", "following_url": "https://api.github.com/users/anassalamah/following{/other_user}", "gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anassalamah", "id": 8571003, "login": "anassalamah", "node_id": "MDQ6VXNlcjg1NzEwMDM=", "organizations_url": "https://api.github.com/users/anassalamah/orgs", "received_events_url": "https://api.github.com/users/anassalamah/received_events", "repos_url": "https://api.github.com/users/anassalamah/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions", "type": "User", "url": "https://api.github.com/users/anassalamah" }
https://api.github.com/repos/huggingface/datasets/issues/2190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2190/timeline
closed
false
2,190
null
2021-05-24T10:03:55Z
null
false
853,052,891
https://api.github.com/repos/huggingface/datasets/issues/2189
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2189/events
[]
null
2022-06-01T16:32:15Z
[]
https://github.com/huggingface/datasets/issues/2189
NONE
completed
null
null
[ "Hi ! We refactored save_to_disk in #2025 so this doesn't happen.\r\nFeel free to try it on master for now\r\nWe'll do a new release soon" ]
save_to_disk doesn't work when we use the concatenate_datasets function before creating the final dataset object.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2189/reactions" }
MDU6SXNzdWU4NTMwNTI4OTE=
null
2021-04-08T04:42:53Z
https://api.github.com/repos/huggingface/datasets/issues/2189/comments
As you can see, it saves the entire original dataset instead of only the two concatenated shards. @lhoestq You can check by going through the following example: ``` from datasets import load_from_disk, concatenate_datasets loaded_data = load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset') n = 20 kb_list = [loaded_data.shard(n, i, contiguous=True) for i in range(n)] final_dataset = concatenate_datasets([kb_list[1], kb_list[2]]) final_dataset.save_to_disk('/home/gsir059/haha/k.arrow') ```
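One way to confirm the behavior either way — a minimal sketch, reusing the paths from the report and assuming it runs right after the snippet above:

```python
# Reload what was written and compare row counts: with the bug, the
# reloaded dataset matches the full original instead of the 2 shards.
from datasets import load_from_disk

reloaded = load_from_disk('/home/gsir059/haha/k.arrow')
expected_rows = len(kb_list[1]) + len(kb_list[2])
print(reloaded.num_rows, expected_rows, len(loaded_data))
```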
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez" }
https://api.github.com/repos/huggingface/datasets/issues/2189/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2189/timeline
closed
false
2,189
null
2022-06-01T16:32:15Z
null
false