url stringlengths 61-61 | repository_url stringclasses 1 value | labels_url stringlengths 75-75 | comments_url stringlengths 70-70 | events_url stringlengths 68-68 | html_url stringlengths 49-51 | id int64 1.14B-1.87B | node_id stringlengths 18-19 | number int64 3.74k-6.19k | title stringlengths 1-290 | user dict | labels list | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees list | milestone dict | comments sequence | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 3 values | active_lock_reason null | body stringlengths 2-33.9k ⌀ | reactions dict | timeline_url stringlengths 70-70 | performed_via_github_app null | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict | is_pull_request bool 2 classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4138/comments | https://api.github.com/repos/huggingface/datasets/issues/4138/events | https://github.com/huggingface/datasets/issues/4138 | 1,199,291,730 | I_kwDODunzps5He71S | 4,138 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract() | {
"login": "iluvvatar",
"id": 55381086,
"node_id": "MDQ6VXNlcjU1MzgxMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/55381086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iluvvatar",
"html_url": "https://github.com/iluvvatar",
"followers_url": "https://api.github.com/users/iluvvatar/followers",
"following_url": "https://api.github.com/users/iluvvatar/following{/other_user}",
"gists_url": "https://api.github.com/users/iluvvatar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iluvvatar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iluvvatar/subscriptions",
"organizations_url": "https://api.github.com/users/iluvvatar/orgs",
"repos_url": "https://api.github.com/users/iluvvatar/repos",
"events_url": "https://api.github.com/users/iluvvatar/events{/privacy}",
"received_events_url": "https://api.github.com/users/iluvvatar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"To reproduce:\r\n\r\n```python\r\n>>> import datasets\r\n>>> datasets.get_dataset_split_names('MalakhovIlya/RuREBus', config_name='raw_txt')\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 280, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/MalakhovIlya--RuREBus/21046f5f1a0cf91187d68c30918d78d934ec7113ec435e146776d4f28f12c4ed/RuREBus.py\", line 101, in _split_generators\r\n decode_file_names(folder)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/MalakhovIlya--RuREBus/21046f5f1a0cf91187d68c30918d78d934ec7113ec435e146776d4f28f12c4ed/RuREBus.py\", line 26, in decode_file_names\r\n for root, dirs, files in os.walk(folder, topdown=False):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/streaming.py\", line 66, in wrapper\r\n return function(*args, use_auth_token=use_auth_token, **kwargs)\r\nTypeError: xwalk() got an unexpected keyword argument 'topdown'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 323, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 285, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\nIt's not related to the dataset viewer. Maybe @albertvillanova or @lhoestq could help more on this issue.",
"Hi! This issue stems from the fact that `xwalk`, which is a streamable version of `os.walk`, doesn't support the `topdown` param due to `fsspec`'s `walk` also not supporting it, so fixing this issue could be tricky. \r\n\r\n@MalakhovIlyaPavlovich You can avoid the error by tweaking your data processing and not using this param. (and `Path.rename`, which also cannot be streamed) ",
"@mariosasko thank you for your reply. I couldn't reproduce error showed by @severo either on Ubuntu 20.04.3 LTS, Windows 10 and Google Colab environments. But trying to avoid using os.walk(topdown=False) and Path.rename(), In _split_generators I replaced\r\n```\r\ndef decode_file_names(folder):\r\n for root, dirs, files in os.walk(folder, topdown=False):\r\n root = Path(root)\r\n for file in files:\r\n old_name = root / Path(file)\r\n new_name = root / Path(\r\n file.encode('cp437').decode('cp866'))\r\n old_name.rename(new_name)\r\n for dir in dirs:\r\n old_name = root / Path(dir)\r\n new_name = root / Path(dir.encode('cp437').decode('cp866'))\r\n old_name.rename(new_name)\r\n\r\nfolder = dl_manager.download_and_extract(self._RAW_TXT_URLS)['raw_txt']\r\ndecode_file_names(folder)\r\n```\r\nby\r\n```\r\ndef extract(zip_file_path):\r\n p = Path(zip_file_path)\r\n dest_dir = str(p.parent / 'extracted' / p.stem)\r\n os.makedirs(dest_dir, exist_ok=True)\r\n with zipfile.ZipFile(zip_file_path) as archive:\r\n for file_info in tqdm(archive.infolist(), desc='Extracting'):\r\n filename = file_info.filename.encode('cp437').decode('cp866')\r\n target = os.path.join(dest_dir, *filename.split('/'))\r\n os.makedirs(os.path.dirname(target), exist_ok=True)\r\n if not file_info.is_dir():\r\n with archive.open(file_info) as source, open(target, 'wb') as dest:\r\n shutil.copyfileobj(source, dest)\r\n return dest_dir\r\n\r\nzip_file = dl_manager.download(self._RAW_TXT_URLS)['raw_txt']\r\nif not is_url(zip_file):\r\n folder = extract(zip_file)\r\nelse:\r\n folder = None\r\n```\r\nand now everything works well except data viewer for \"raw_txt\" subset: dataset preview on hub shows \"No data.\". As far as I understand dl_manager.download returns original URL when we are calling datasets.get_dataset_split_names and my suspicions are that dataset viewer can do smth similar. I couldn't find information about how it works. I would be very grateful, if you could tell me how to fix this)",
"This is what I get when I try to stream the `raw_txt` subset:\r\n```python\r\n>>> dset = load_dataset(\"MalakhovIlya/RuREBus\", \"raw_txt\", split=\"raw_txt\", streaming=True)\r\n>>> next(iter(dset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nStopIteration\r\n```\r\nSo there is a bug in your script.",
"streaming=True helped me to find solution. I fixed\r\n```\r\ndef extract(zip_file_path):\r\n p = Path(zip_file_path)\r\n dest_dir = str(p.parent / 'extracted' / p.stem)\r\n os.makedirs(dest_dir, exist_ok=True)\r\n with zipfile.ZipFile(zip_file_path) as archive:\r\n for file_info in tqdm(archive.infolist(), desc='Extracting'):\r\n filename = file_info.filename.encode('cp437').decode('cp866')\r\n target = os.path.join(dest_dir, *filename.split('/'))\r\n os.makedirs(os.path.dirname(target), exist_ok=True)\r\n if not file_info.is_dir():\r\n with archive.open(file_info) as source, open(target, 'wb') as dest:\r\n shutil.copyfileobj(source, dest)\r\n return dest_dir\r\n\r\nzip_file = dl_manager.download(self._RAW_TXT_URLS)['raw_txt']\r\nfolder = extract(zip_file)\r\n```\r\nby \r\n```\r\nfolder = dl_manager.download_and_extract(self._RAW_TXT_URLS)['raw_txt']\r\npath = os.path.join(folder, 'MED_txt/unparsed_txt')\r\nfor root, dirs, files in os.walk(path):\r\n decoded_root_name = Path(root).name.encode('cp437').decode('cp866')\r\n```\r\n@mariosasko thank you for your help :)"
] | 2022-04-11T02:07:13 | 2022-04-19T03:15:46 | 2022-04-16T15:46:29 | NONE | null | ## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes the following error:
Status code: 400
Exception: TypeError
Message: xwalk() got an unexpected keyword argument 'topdown'
Couldn't find where "xwalk" comes from. How can I fix this?
Am I the one who added this dataset? Yes
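For reference, here is a standalone sketch of the filename re-decoding fix discussed in the comments above. The extraction folder path is illustrative, and deepest paths are renamed first so that `os.walk(topdown=False)` (which streaming's `xwalk` rejects) is not needed:
```python
# Illustrative sketch (not the dataset's actual script): repair file and
# folder names whose cp866 (Russian) bytes were mislabeled as cp437 when
# the zip archive was extracted.
from pathlib import Path

def fix_encoding(name: str) -> str:
    # Re-encode the mojibake characters, then decode with the right codec.
    return name.encode("cp437").decode("cp866")

folder = Path("extracted/raw_txt")  # hypothetical extraction directory
# Rename deepest paths first so parent renames don't invalidate child paths.
for path in sorted(folder.rglob("*"), key=lambda p: len(p.parts), reverse=True):
    fixed = fix_encoding(path.name)
    if fixed != path.name:
        path.rename(path.with_name(fixed))
```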
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4138/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4137/comments | https://api.github.com/repos/huggingface/datasets/issues/4137/events | https://github.com/huggingface/datasets/pull/4137 | 1,199,000,453 | PR_kwDODunzps419D6A | 4,137 | Add single dataset citations for TweetEval | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The `test_dataset_cards` method is failing with the error:\r\n\r\n```\r\nif error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE The following typing errors are found: {'annotations_creators': \"(Expected `typing.List` with length > 0. Found value of type: `<class 'list'>`, with length: 0.\\n)\\nOR\\n(Expected `typing.Dict` with length > 0. Found value of type: `<class 'list'>`, with length: 0.\\n)\"}\r\n```\r\n\r\nAdding `found` as annotation creators."
] | 2022-04-10T11:51:54 | 2022-04-12T07:57:22 | 2022-04-12T07:51:15 | CONTRIBUTOR | null | This PR adds single-dataset citations as per the request of the original creators of the TweetEval dataset.
This is a recent email from the creator:
> Could I ask you a favor? Would you be able to add at the end of the README the citations of the single datasets as well? You can just copy our readme maybe? https://github.com/cardiffnlp/tweeteval#citing-tweeteval
(just to be sure that the creator of the single datasets also get credits when tweeteval is used)
Please let me know if this looks okay or if any changes are needed.
Thanks,
Gunjan
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4137/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4137",
"html_url": "https://github.com/huggingface/datasets/pull/4137",
"diff_url": "https://github.com/huggingface/datasets/pull/4137.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4137.patch",
"merged_at": "2022-04-12T07:51:15"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4135/comments | https://api.github.com/repos/huggingface/datasets/issues/4135/events | https://github.com/huggingface/datasets/pull/4135 | 1,198,307,610 | PR_kwDODunzps416-Rn | 4,135 | Support streaming xtreme dataset for PAN-X config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-09T06:19:48 | 2022-05-06T08:39:40 | 2022-04-11T06:54:14 | MEMBER | null | Support streaming xtreme dataset for PAN-X config. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4135/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4135",
"html_url": "https://github.com/huggingface/datasets/pull/4135",
"diff_url": "https://github.com/huggingface/datasets/pull/4135.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4135.patch",
"merged_at": "2022-04-11T06:54:14"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4134/comments | https://api.github.com/repos/huggingface/datasets/issues/4134/events | https://github.com/huggingface/datasets/issues/4134 | 1,197,937,146 | I_kwDODunzps5HZxH6 | 4,134 | ELI5 supporting documents | {
"login": "saurabh-0077",
"id": 69015896,
"node_id": "MDQ6VXNlcjY5MDE1ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/69015896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saurabh-0077",
"html_url": "https://github.com/saurabh-0077",
"followers_url": "https://api.github.com/users/saurabh-0077/followers",
"following_url": "https://api.github.com/users/saurabh-0077/following{/other_user}",
"gists_url": "https://api.github.com/users/saurabh-0077/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saurabh-0077/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saurabh-0077/subscriptions",
"organizations_url": "https://api.github.com/users/saurabh-0077/orgs",
"repos_url": "https://api.github.com/users/saurabh-0077/repos",
"events_url": "https://api.github.com/users/saurabh-0077/events{/privacy}",
"received_events_url": "https://api.github.com/users/saurabh-0077/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | open | false | null | [] | null | [
"Hi ! Please post your question on the [forum](https://discuss.huggingface.co/), more people will be able to help you there ;)"
] | 2022-04-08T23:36:27 | 2022-04-13T13:52:46 | null | NONE | null | If I am using dense search to create supporting documents for ELI5, how much time will it take? I read somewhere that it takes about 18 hours. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4134/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4133/comments | https://api.github.com/repos/huggingface/datasets/issues/4133/events | https://github.com/huggingface/datasets/issues/4133 | 1,197,830,623 | I_kwDODunzps5HZXHf | 4,133 | HANS dataset preview broken | {
"login": "pietrolesci",
"id": 61748653,
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pietrolesci",
"html_url": "https://github.com/pietrolesci",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"The dataset cannot be loaded, be it in normal or streaming mode.\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/hans/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac/hans.py\", line 121, in _generate_examples\r\n for idx, line in enumerate(open(filepath, \"rb\")):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1595, in __next__\r\n out = self.readline()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1592, in readline\r\n return self.readuntil(b\"\\n\")\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 1581, in readuntil\r\n self.seek(start + found + len(char))\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 676, in seek\r\n raise ValueError(\"Cannot seek streaming HTTP file\")\r\nValueError: Cannot seek streaming HTTP file\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=False)\r\nDownloading and preparing dataset hans/plain_text (download: 29.51 MiB, generated: 30.34 MiB, post-processed: Unknown size, total: 59.85 MiB) to /home/slesage/.cache/huggingface/datasets/hans/plain_text/1.0.0/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1687, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1087, in _prepare_split\r\n for key, record in logging.tqdm(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/tqdm/std.py\", line 1180, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/hans/1bbcb735c482acd54f2e118074b59cfd2bf5f7a5a285d4d540d1e632216672ac/hans.py\", line 121, in 
_generate_examples\r\n for idx, line in enumerate(open(filepath, \"rb\")):\r\nValueError: readline of closed file\r\n```\r\n\r\n",
"Hi! I've opened a PR that should make this dataset stremable. You can test it as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"hans\", split=\"train\", streaming=True, revision=\"49decd29839c792ecc24ac88f861cbdec30c1c40\")\r\n```\r\n\r\n@severo The current script doesn't throw an error in normal mode (only in streaming mode) on my local machine or in Colab. Can you update your installation of `datasets` and see if that fixes the issue?",
"Thanks for this. It works well, thanks! The dataset viewer is using https://github.com/huggingface/datasets/releases/tag/2.0.0, I'm eager to upgrade to 2.0.1 😉"
] | 2022-04-08T21:06:15 | 2022-04-13T11:57:34 | 2022-04-13T11:57:34 | NONE | null | ## Dataset viewer issue for '*hans*'
**Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans)
The HANS dataset preview is broken with error 400.
Am I the one who added this dataset? No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4133/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4132/comments | https://api.github.com/repos/huggingface/datasets/issues/4132/events | https://github.com/huggingface/datasets/pull/4132 | 1,197,661,720 | PR_kwDODunzps41460R | 4,132 | Support streaming xtreme dataset for PAWS-X config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-08T18:25:32 | 2022-05-06T08:39:42 | 2022-04-08T21:02:44 | MEMBER | null | Support streaming xtreme dataset for PAWS-X config. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4132/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4132",
"html_url": "https://github.com/huggingface/datasets/pull/4132",
"diff_url": "https://github.com/huggingface/datasets/pull/4132.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4132.patch",
"merged_at": "2022-04-08T21:02:44"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4131/comments | https://api.github.com/repos/huggingface/datasets/issues/4131/events | https://github.com/huggingface/datasets/pull/4131 | 1,197,472,249 | PR_kwDODunzps414Zt1 | 4,131 | Support streaming xtreme dataset for udpos config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-08T15:30:49 | 2022-05-06T08:39:46 | 2022-04-08T16:28:07 | MEMBER | null | Support streaming xtreme dataset for udpos config. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4131/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4131",
"html_url": "https://github.com/huggingface/datasets/pull/4131",
"diff_url": "https://github.com/huggingface/datasets/pull/4131.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4131.patch",
"merged_at": "2022-04-08T16:28:07"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4130/comments | https://api.github.com/repos/huggingface/datasets/issues/4130/events | https://github.com/huggingface/datasets/pull/4130 | 1,197,456,857 | PR_kwDODunzps414Wqx | 4,130 | Add SBU Captions Photo Dataset | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-08T15:17:39 | 2022-04-12T10:47:31 | 2022-04-12T10:41:29 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4130/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4130",
"html_url": "https://github.com/huggingface/datasets/pull/4130",
"diff_url": "https://github.com/huggingface/datasets/pull/4130.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4130.patch",
"merged_at": "2022-04-12T10:41:29"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4129/comments | https://api.github.com/repos/huggingface/datasets/issues/4129/events | https://github.com/huggingface/datasets/issues/4129 | 1,197,376,796 | I_kwDODunzps5HXoUc | 4,129 | dataset metadata for reproducibility | {
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2022-04-08T14:17:28 | 2022-04-08T14:17:28 | null | NONE | null | When pulling a dataset from the hub, it would be useful to have some metadata about the specific dataset and version that is used. The metadata could then be passed to the `Trainer`, which could save it to a model card. This is useful for people who run many experiments on different versions (commits/branches) of the same dataset.
The dataset could carry a list of “source datasets” in its metadata, ignoring whatever happens to them (mapping, filtering, etc.) before they arrive in the `Trainer`.
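Until such an API exists, the relevant fields can be recorded by hand. A minimal sketch, where the dataset name, config, and revision are placeholders:
```python
# Hedged sketch: manually tracking source-dataset metadata today, since the
# `.sources` attribute shown below is a proposal, not an existing API.
from datasets import load_dataset

revision = "main"  # pin a commit SHA here for real reproducibility
dataset = load_dataset("glue", "mrpc", split="train", revision=revision)
source_info = {"repo_id": "glue", "config": "mrpc", "revision": revision}
print(source_info)  # e.g. write this into the model card alongside results
```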
Here is a basic representation of the proposed API (made by @lhoestq):
```python
>>> from datasets import load_dataset
>>>
>>> my_dataset = load_dataset(...)["train"]
>>> my_dataset = my_dataset.map(...)
>>>
>>> my_dataset.sources
[HFHubDataset(repo_id=..., revision=..., arguments={...})]
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4129/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4129/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4128/comments | https://api.github.com/repos/huggingface/datasets/issues/4128/events | https://github.com/huggingface/datasets/pull/4128 | 1,197,326,311 | PR_kwDODunzps4138I6 | 4,128 | More robust `cast_to_python_objects` in `TypedSequence` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-08T13:33:35 | 2022-04-13T14:07:41 | 2022-04-13T14:01:16 | CONTRIBUTOR | null | Adds a fallback to run an expensive version of `cast_to_python_objects` that exhaustively checks entire lists, to avoid the `ArrowInvalid: Could not convert` error in `TypedSequence`. Currently, this error can happen when only some images are decoded in `map`: `cast_to_python_objects` fails to recognize that it needs to cast `PIL.Image` objects if they are not at the beginning of the sequence, and it stops after the first image dictionary (e.g., if `data` is `[{"bytes": None, "path": "some path"}, PIL.Image(), ...]`).
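To illustrate the idea, here is a simplified sketch of such an exhaustive cast; the helper names are illustrative, not the actual `datasets` internals:
```python
# Simplified sketch of the fallback: unlike a cheap cast that only inspects
# the first element, recurse through every element so a PIL.Image anywhere
# in the list gets converted to a plain Python dict.
import io

from PIL import Image

def encode_pil(img: Image.Image) -> dict:
    # Mirror the {"bytes", "path"} dict shape used for encoded images.
    buffer = io.BytesIO()
    img.save(buffer, format="PNG")
    return {"bytes": buffer.getvalue(), "path": None}

def cast_exhaustively(obj):
    if isinstance(obj, Image.Image):
        return encode_pil(obj)
    if isinstance(obj, list):
        return [cast_exhaustively(item) for item in obj]
    return obj

mixed = [{"bytes": None, "path": "some path"}, Image.new("RGB", (2, 2))]
print(cast_exhaustively(mixed))  # both elements are now plain dicts
```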
Fix #4124 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4128/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4128",
"html_url": "https://github.com/huggingface/datasets/pull/4128",
"diff_url": "https://github.com/huggingface/datasets/pull/4128.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4128.patch",
"merged_at": "2022-04-13T14:01:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4127/comments | https://api.github.com/repos/huggingface/datasets/issues/4127/events | https://github.com/huggingface/datasets/pull/4127 | 1,197,297,756 | PR_kwDODunzps4132EN | 4,127 | Add configs with processed data in medical_dialog dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-08T13:08:16 | 2022-05-06T08:39:50 | 2022-04-08T16:20:51 | MEMBER | null | There exist processed data files that do not require parsing the raw data files (which can take a long time).
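As a rough illustration, a config of this kind might be declared as follows; the class and config names are hypothetical, not the actual `medical_dialog` script:
```python
# Hypothetical sketch: a BuilderConfig variant that points at pre-processed
# files so the builder can skip parsing the raw dialog dumps.
import datasets

class MedicalDialogConfig(datasets.BuilderConfig):
    def __init__(self, processed: bool = False, **kwargs):
        super().__init__(**kwargs)
        # Processed configs load ready-made files instead of raw data.
        self.processed = processed

BUILDER_CONFIGS = [
    MedicalDialogConfig(name="en", processed=False),
    MedicalDialogConfig(name="processed.en", processed=True),
]
```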
Fix #4122. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4127/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4127",
"html_url": "https://github.com/huggingface/datasets/pull/4127",
"diff_url": "https://github.com/huggingface/datasets/pull/4127.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4127.patch",
"merged_at": "2022-04-08T16:20:51"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4126/comments | https://api.github.com/repos/huggingface/datasets/issues/4126/events | https://github.com/huggingface/datasets/issues/4126 | 1,196,665,194 | I_kwDODunzps5HU6lq | 4,126 | dataset viewer issue for common_voice | {
"login": "laphang",
"id": 24724502,
"node_id": "MDQ6VXNlcjI0NzI0NTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/24724502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laphang",
"html_url": "https://github.com/laphang",
"followers_url": "https://api.github.com/users/laphang/followers",
"following_url": "https://api.github.com/users/laphang/following{/other_user}",
"gists_url": "https://api.github.com/users/laphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laphang/subscriptions",
"organizations_url": "https://api.github.com/users/laphang/orgs",
"repos_url": "https://api.github.com/users/laphang/repos",
"events_url": "https://api.github.com/users/laphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/laphang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
},
{
"id": 4027368468,
"node_id": "LA_kwDODunzps7wDMQU",
"url": "https://api.github.com/repos/huggingface/datasets/labels/audio_column",
"name": "audio_column",
"color": "F83ACF",
"default": false,
"description": ""
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Yes, it's a known issue, and we expect to fix it soon.",
"Fixed.\r\n\r\n<img width=\"1393\" alt=\"Capture d’écran 2022-04-25 à 15 42 05\" src=\"https://user-images.githubusercontent.com/1676121/165101176-d729d85b-efff-45a8-bad1-b69223edba5f.png\">\r\n"
] | 2022-04-07T23:34:28 | 2022-04-25T13:42:17 | 2022-04-25T13:42:16 | NONE | null | ## Dataset viewer issue for 'common_voice'
**Link:** https://huggingface.co/datasets/common_voice
Server Error
Status code: 400
Exception: TypeError
Message: __init__() got an unexpected keyword argument 'audio_column'
Am I the one who added this dataset? No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4126/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4125/comments | https://api.github.com/repos/huggingface/datasets/issues/4125/events | https://github.com/huggingface/datasets/pull/4125 | 1,196,633,936 | PR_kwDODunzps411qeR | 4,125 | BIG-bench | {
"login": "andersjohanandreassen",
"id": 43357549,
"node_id": "MDQ6VXNlcjQzMzU3NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/43357549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andersjohanandreassen",
"html_url": "https://github.com/andersjohanandreassen",
"followers_url": "https://api.github.com/users/andersjohanandreassen/followers",
"following_url": "https://api.github.com/users/andersjohanandreassen/following{/other_user}",
"gists_url": "https://api.github.com/users/andersjohanandreassen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andersjohanandreassen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andersjohanandreassen/subscriptions",
"organizations_url": "https://api.github.com/users/andersjohanandreassen/orgs",
"repos_url": "https://api.github.com/users/andersjohanandreassen/repos",
"events_url": "https://api.github.com/users/andersjohanandreassen/events{/privacy}",
"received_events_url": "https://api.github.com/users/andersjohanandreassen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> It looks like the CI is failing on windows because our windows CI is unable to clone the bigbench repository (maybe it has to do with filenames that are longer than 256 characters, which windows don't like). Could the smaller installation of bigbench via pip solve this issue ?\r\n> Otherwise we can see how to remove this limitation in our windows CI.\r\n\r\nI'm not sure.\r\nIf it's git's fault that it can't handle the long filenames, it will possibly be resolved by the pip install. If it's an issue with windows not liking long filenames after it's installed, then it will not be resolved.\r\nI don't have a windows computer to try it on, but I might be able to tweek this PR and do an experiment to find out. \r\nWe're waiting for a quota increase for the pip install (https://github.com/pypa/pypi-support/issues/1782). It's been pending for 2-3 weeks, and I don't have an estimate for when it will be resolved. \r\n\r\n>Regarding the dummy data zip files, I think we can just keep datasets/bigbench/dummy/abstract_narrative_understanding/1.0.0/dummy_data.zip and remove all the other ones. We just require to have at least one dummy_data.zip file.\r\n\r\nSounds great. I will trim that down. ",
"Do you know what are the other tests dependencies that have conflicts with bigbench ? I can try to split the CI to end up with a compatible list of test dependencies",
"Hi @lhoestq,\r\n\r\nI haven't played with eliminating requirements form the test dependencies, and I've been trying to resolve this by modifying the bigbench repo to become compatible. \r\nIn the original bigbench repo, the version requirements were strict, and specifically it had a datasets==1.17.0 requirement which was causing trouble. \r\nI'm working on PR https://github.com/google/BIG-bench/pull/766 to get some more flexible versions that might be compatible with the test dependencies in HF/datasets.\r\nWe're somewhat flexible in modifying these version numbers if we can figure out what the exact conflict is. \r\n\r\nI've spent some time experimenting with different versions, but I don't have a very efficient way of doing this debugging on my work computer (which for some reason doesn't produce the same sets of errors running python 3.9 instead of 3.6 or 3.7 in the tests). \r\nIt currently fails at \r\n> The conflict is caused by:\r\n> bert-score 0.3.6 depends on matplotlib\r\n> big-bench 0.0.1 depends on matplotlib<4.0 and >=3.5.1\r\n\r\nwhich doesn't seem like it can be the real issue. \r\n\r\nIf you have any advice for how to resolve these conflicts, that would be greatly appreciated!",
"Hi again @lhoestq, \r\nAfter some more or less random guessing of conflicting packages, I've managed to find a configuration that seems to be compatible with HF/datasets. \r\n\r\nThe errors went away after removing version limits on matplotlib and scipy, and loosening numpy from 1.19 -> 1.17 in the bigbench requirements. \r\n\r\nI might do some more tweaking to see if it lets me set some minimal limits on matplotlib and scipy, but I think we at least can move forward.\r\n\r\nThe WIN tests are still failing, now because of \r\n\r\n> Did not find path entry C:\\tools\\miniconda3\\bin\r\n>C:\\tools\\miniconda3\\envs\\py37\\python.exe: No module named pytest\r\n\r\nI have no way of debugging this locally, and unless there's some way to get more verbose logs, I don't know why it's not finding pytest. Would you be able to take a quick look? \r\n\r\nUpdate: Actually, I see it's still failing because of the long filenames. So perhaps the pytest error is just because the previous steps failed. ",
"One more update on the WIN errors. \r\nI think all the long filenames are in files in the github repo that does not need to be included. \r\nWe will try to remove them .",
"Hi ! The remaining error seems to be a `UnicodeDecodeError` from `setup.py`. I think you can fix your setup.py:\r\n```diff\r\n- with open(os.path.join(os.path.dirname(__file__), fname)) as f:\r\n+ with open(os.path.join(os.path.dirname(__file__), fname), encoding=\"utf-8\") as f:\r\n```\r\nIndeed on windows, when you `open` a file it doesn't always use \"utf-8\" by default",
"Hi @lhoestq, \r\nThe dependency issues seems to now be resolved 🎉 \r\n\r\nNow, the WIN tests are failing at\r\n> ERROR tests/test_arrow_dataset.py::test_dummy_dataset_serialize_s3 - botocore...\r\n> ERROR tests/test_dataset_dict.py::test_dummy_dataset_serialize_s3 - botocore...\r\n\r\nIs this testing the dummy dataset that's added in bigbench? If so, I might need some help getting the right format in.\r\n\r\nThe error message I'm seeing is \r\n> raise EndpointConnectionError(endpoint_url=request.url, error=e)\r\n> E botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: \"http://127.0.0.1:5555/test\"\r\n\r\nWhich seems unrelated, but perhaps the real issue is somewhere I'm not seeing? ",
"Woohoo awesome !\r\n\r\nLet me check the CI error",
"Can you try to re-run the CI, just in case CircleCI messed up ?",
"Hi @lhoestq, \r\nRerunning did not seem to solve the problem. \r\nThe `test_dummy_dataset_serialize_s3` error still seems to remain.",
"Hi again @lhoestq, \r\nI'm not sure if this is informative or not in terms of debugging, but I deleted the dummy data and the errors for windows still fail and the others still pass. \r\nDo you have any idea what could be causing this error on windows?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Now the last question: let's have the dataset under`google/bigbench` @andersjohanandreassen ?\r\n\r\nI think it would be nicer, this way you and anyone in your team can update the dataset card whevener you want without going through a github PR. You just need to join the https://huggingface.co/google page using your google email :)",
"Hi @lhoestq, \r\n\r\nThank you so much for the help! I really appreciate it!!!\r\n\r\nAfter some discussion with the other bigbench organizers, I think there is a slight preference for bigbench to not be under google/bigbench since this is a collaboration with researchers from many different institutions/organizations beyond Google. \r\n\r\nI see the drawback with the updates to the dataset card having to go through a PR, but hopefully that won't be very frequent. \r\n\r\nWe're finalizing putting the bigbench api on pip, so once that's finalized I just need to update the setup.py with the correct dependency and I think we are ready to merge. ",
"Ok perfect, thank you !",
"I noticed that in the latest windows CI run it takes forever to install the dependencies, was there any change in the bigbench dependencies recently ?",
"oh, sorry! I just did a double check on the dependencies, and it seems like there is at least one left that should have been removed. There's also one new one added. \r\nLet me get those removed again. Will ping you here when it's updated. ",
"It looks like there is a circular dependency in `bigbench` at https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz\r\n\r\n```python\r\n>>> import bigbench.api.util as bb_utils\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/bigbench/api/util.py\", line 29, in <module>\r\n import bigbench.models.query_logging_model as query_logging_model\r\n File \"/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/bigbench/models/query_logging_model.py\", line 23, in <module>\r\n import bigbench.api.util as util\r\nAttributeError: module 'bigbench.api' has no attribute 'util'\r\n```",
"Hi @lhoestq , \r\nI think we are ready to merge! \r\n\r\nI have one minor question that I haven't been able to figure out: \r\nIs there a way to bypass the `verify_infos` from triggering? I have `max_examples` as an argument to allow for selecting a fixed subset of the datasets (some of the tasks have *very* many examples). But this is a variable that's not specified by the configs, so it raises an `NonMatchingSplitsSizesError`.\r\nI wasn't able to work my way around this, but perhaps there is a way to bypass this that I'm not seeing?\r\nIf this cannot be done, I'm happy to ignore this for now.\r\n\r\nRegarding pypi, we are working on a release there, but I'm told there is some issue that there is a problem regarding the upload, and we are not sure when it will be resolved, and it's not in my control. \r\nI think merging this PR with the GCS is a great idea, and I will open a new PR when the pypi version is ready. ",
"Cool ! Merging then :D\r\n\r\n> Is there a way to bypass the verify_infos from triggering? I have max_examples as an argument to allow for selecting a fixed subset of the datasets (some of the tasks have very many examples). But this is a variable that's not specified by the configs, so it raises an NonMatchingSplitsSizesError.\r\n\r\nThis is a bug, I opened an issue [here](https://github.com/huggingface/datasets/issues/4462). It should be easy to fix :)",
"The bigbench page is available here ! https://huggingface.co/datasets/bigbench\r\n\r\nI think we can update the dataset viewer to install bigbench on it, but since this is production code I'd rather use the version on pypi for bigbench when it comes out"
] | 2022-04-07T22:33:30 | 2022-06-08T17:57:48 | 2022-06-08T17:32:32 | CONTRIBUTOR | null | This PR adds all BIG-bench json tasks to huggingface/datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4125/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4125",
"html_url": "https://github.com/huggingface/datasets/pull/4125",
"diff_url": "https://github.com/huggingface/datasets/pull/4125.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4125.patch",
"merged_at": "2022-06-08T17:32:32"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4124/comments | https://api.github.com/repos/huggingface/datasets/issues/4124/events | https://github.com/huggingface/datasets/issues/4124 | 1,196,469,842 | I_kwDODunzps5HUK5S | 4,124 | Image decoding often fails when transforming Image datasets | {
"login": "RafayAK",
"id": 17025191,
"node_id": "MDQ6VXNlcjE3MDI1MTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/17025191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RafayAK",
"html_url": "https://github.com/RafayAK",
"followers_url": "https://api.github.com/users/RafayAK/followers",
"following_url": "https://api.github.com/users/RafayAK/following{/other_user}",
"gists_url": "https://api.github.com/users/RafayAK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RafayAK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RafayAK/subscriptions",
"organizations_url": "https://api.github.com/users/RafayAK/orgs",
"repos_url": "https://api.github.com/users/RafayAK/repos",
"events_url": "https://api.github.com/users/RafayAK/events{/privacy}",
"received_events_url": "https://api.github.com/users/RafayAK/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"A quick hack I have found is that we can call the image first before running the transforms and it makes sure the image is decoded before being passed on.\r\n\r\nFor this I just needed to add `example['img'] = example['img']` to the top of my `generate_flipped_data` function, defined above, so that image decode in invoked.\r\n\r\nAfter this minor change this function works:\r\n```python\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping functions that transforms some of the image up-side-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image up-side-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n example['img'] = example['img'] # <<< This is the only change\r\n if rng.random() > p: # the flip the image and set is_flipped column to 1\r\n example['img'] = example['img'].transpose(\r\n 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)\r\n example['is_flipped'] = 1\r\n\r\n return example\r\n```",
"Hi @RafayAK, thanks for reporting.\r\n\r\nCurrent implementation of the Image feature performs the decoding only if the \"img\" field is accessed by the mapped function.\r\n\r\nIn your original `generate_flipped_data` function:\r\n- it only accesses the \"img\" field (and thus performs decoding) if `rng.random() > p`;\r\n- on the other hand, for the cases where `rng.random() <= p`, the \"img\" field is not accessed and thus no decoding is performed for those examples\r\n\r\nBy adding the code line `example['img'] = example['img']`, you make sure the \"img\" field is accessed in all cases, and the decoding is done for all examples.\r\n\r\nAlso note that there is a little bug in your implementation: `p` is not the probability of flipping, but the probability of not-flipping; the larger is `p`, the smaller is the probability of flipping.\r\n\r\nSome refactoring (fixing also `p`):\r\n```python\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping functions that transforms some of the image up-side-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down.\r\n\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image up-side-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n do_flip = rng.random() < p # Note the \"<\" sign here instead of \">\"\r\n example['img'] = example['img'].transpose(1) if do_flip else example['img'] # Note \"img\" is always accessed\r\n example['is_flipped'] = 1 if do_flip else 0\r\n return example",
"@albertvillanova Thanks for letting me know this is intended behavior. The docs are severely lacking on this, if I hadn't posted this here I would have never found out how I'm actually supposed to modify images in a Dataset object.",
"@albertvillanova Secondly if you check the error message it shows that around 1999 images were successfully created, I'm pretty sure some of them were also flipped during the process. Back to my main contention, sometimes the decoding takes place other times it fails. \r\n\r\nI suppose to run `map` on any dataset all the examples should be invoked even if on some of them we end up doing nothing, is that right?",
"Hi @RafayAK! I've opened a PR with the fix, which adds a fallback to reattempt casting to PyArrow format with a more robust (but more expensive) procedure if the first attempt fails. Feel free to test it by installing `datasets` from the PR branch with the following command:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@fix-4124\r\n```",
"@mariosasko I'll try this right away and report back.",
"@mariosasko Thanks a lot for looking into this, now the `map` function at least behaves as one would expect a function to behave. \r\n\r\nLooking forward to exploring Hugging Face more and even contributing 😃.\r\n\r\n```bash\r\n $ conda list | grep datasets\r\ndatasets 2.0.1.dev0 pypi_0 pypi\r\n\r\n```\r\n\r\n```python\r\ndef preprocess_data(dataset):\r\n \"\"\"\r\n Helper funtion to pre-process HuggingFace Cifar-100 Dataset to remove fine_label and coarse_label columns and\r\n add is_flipped column\r\n Args:\r\n dataset: HuggingFace CIFAR-100 Dataset Object\r\n\r\n Returns:\r\n new_dataset: A Dataset object with \"img\" and \"is_flipped\" columns only\r\n\r\n \"\"\"\r\n # remove fine_label and coarse_label columns\r\n new_dataset = dataset.remove_columns(['fine_label', 'coarse_label'])\r\n # add the column for is_flipped\r\n new_dataset = new_dataset.add_column(name=\"is_flipped\", column=np.zeros((len(new_dataset)), dtype=np.uint8))\r\n\r\n return new_dataset\r\n\r\n\r\ndef generate_flipped_data(example, p=0.5):\r\n \"\"\"\r\n A Dataset mapping functions that transforms some of the image up-side-down.\r\n If the probability value (p) is 0.5 approximately half the images will be flipped upside-down\r\n Args:\r\n example: An example from the dataset containing a Python dictionary with \"img\" and \"is_flipped\" key-value pair\r\n p: probability of flipping the image up-side-down, Default 0.5\r\n\r\n Returns:\r\n example: A Dataset object\r\n\r\n \"\"\"\r\n # example['img'] = example['img']\r\n if rng.random() > p: # the flip the image and set is_flipped column to 1\r\n example['img'] = example['img'].transpose(\r\n 1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)\r\n example['is_flipped'] = 1\r\n\r\n return example\r\n\r\nmy_test = preprocess_data(test_dataset)\r\nmy_test = my_test.map(generate_flipped_data)\r\n```\r\n\r\nThe output now show the function was applied successfully:\r\n``` bash\r\n/home/rafay/anaconda3/envs/pytorch_new/bin/python /home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py\r\nDownloading builder script: 5.61kB [00:00, 3.16MB/s] \r\nDownloading metadata: 4.21kB [00:00, 2.56MB/s] \r\nReusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)\r\nReusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)\r\n100%|██████████| 10000/10000 [00:01<00:00, 5149.15ex/s]\r\n```\r\n"
] | 2022-04-07T19:17:25 | 2022-04-13T14:01:16 | 2022-04-13T14:01:16 | NONE | null | ## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger, it is easy to see the problem: the Image decode invocation does not take place, and the image passed around is still raw bytes:
```
[{'bytes': b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00 \x00\x00\x00 \x08\x02\x00\x00\x00\xfc\x18\xed\xa3\x00\x00\x08\x02IDATx\x9cEVIs[\xc7\x11\xeemf\xde\x82\x8d\x80\x08\x89"\xb5V\\\xb6\x94(\xe5\x9f\x90\xca5\x7f$\xa7T\xe5\x9f&9\xd9\x8a\\.\xdb\xa4$J\xa4\x00\x02x\xc0{\xb3t\xe7\x00\xca\x99\xd3\\f\xba\xba\xbf\xa5?|\xfa\xf4\xa2\xeb\xba\xedv\xa3f^\xf8\xd5\x0bY\xb6\x10\xb3\xaaDq\xcd\x83\x87\xdf5\xf3gZ\x1a\x04\x0f\xa0fp\xfa\xe0\xd4\x07?\x9dN\xc4\xb1\x99\xfd\xf2\xcb/\x97\x97\x97H\xa2\xaaf\x16\x82\xaf\xeb\xca{\xbf\xd9l.\xdf\x7f\xfa\xcb_\xff&\x88\x08\x00\x80H\xc0\x80@.;\x0f\x8c@#v\xe3\xe5\xfc\xd1\x9f\xee6q\xbf\xdf\xa6\x14\'\x93\xf1\xc3\xe5\xe3\xd1x\x14c\x8c1\xa5\x1c\x9dsM\xd3\xb4\xed\x08\x89SJ)\xa5\xedv\xbb^\xafNO\x97D\x84Hf ....
```
## Steps to reproduce the bug
```python
from datasets import load_dataset, Dataset
import numpy as np
# seeded NumPy random number generator for reproducible results.
rng = np.random.default_rng(seed=0)
test_dataset = load_dataset('cifar100', split="test")
def preprocess_data(dataset):
"""
Helper function to pre-process HuggingFace Cifar-100 Dataset to remove fine_label and coarse_label columns and
add is_flipped column
Args:
dataset: HuggingFace CIFAR-100 Dataset Object
Returns:
new_dataset: A Dataset object with "img" and "is_flipped" columns only
"""
# remove fine_label and coarse_label columns
new_dataset = dataset.remove_columns(['fine_label', 'coarse_label'])
# add the column for is_flipped
new_dataset = new_dataset.add_column(name="is_flipped", column=np.zeros((len(new_dataset)), dtype=np.uint8))
return new_dataset
def generate_flipped_data(example, p=0.5):
"""
A Dataset mapping function that flips some of the images upside-down.
If the probability value (p) is 0.5, approximately half the images will be flipped upside-down.
Args:
example: An example from the dataset containing a Python dictionary with "img" and "is_flipped" key-value pairs
p: the probability of flipping the image upside-down. Default: 0.5
Returns:
example: A Dataset object
"""
# example['img'] = example['img']
if rng.random() > p: # then flip the image and set the is_flipped column to 1
example['img'] = example['img'].transpose(
1) # ImageOps.flip(example['img']) #example['img'].transpose(Image.FLIP_TOP_BOTTOM)
example['is_flipped'] = 1
return example
my_test = preprocess_data(test_dataset)
my_test = my_test.map(generate_flipped_data)
```
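As a side note (not part of the original report), a possible alternative is to apply the flip lazily with `Dataset.with_transform`, which formats examples at access time instead of rewriting the Arrow file, so no PIL-to-Arrow serialization is involved. A minimal sketch, reusing `rng` and `my_test` from the snippet above and assuming an on-access (non-persistent) transform is acceptable:
```python
def flip_batch(batch, p=0.5):
    # Runs at access time; "img" arrives as a list of decoded PIL images.
    batch["img"] = [
        img.transpose(1) if rng.random() < p else img for img in batch["img"]
    ]
    return batch

my_test = my_test.with_transform(flip_batch)
```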
## Expected results
The dataset should be transformed without problems.
## Actual results
```
/home/rafay/anaconda3/envs/pytorch_new/bin/python /home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py
Reusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)
Reusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)
20%|█▉ | 1999/10000 [00:00<00:01, 5560.44ex/s]
Traceback (most recent call last):
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2326, in _map_single
writer.write(example)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 441, in write
self.write_examples_on_file()
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 399, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 185, in __arrow_array__
out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
File "pyarrow/array.pxi", line 316, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=32x32 at 0x7F56AEE61DE0> with type Image: did not recognize Python value type when inferring an Arrow data type
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py", line 55, in <module>
my_test = my_test.map(generate_flipped_data)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1953, in map
return self._map_single(
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 519, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 486, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2360, in _map_single
writer.finalize()
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 522, in finalize
self.write_examples_on_file()
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 399, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 185, in __arrow_array__
out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
File "pyarrow/array.pxi", line 316, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=32x32 at 0x7F56AEE61DE0> with type Image: did not recognize Python value type when inferring an Arrow data type
Process finished with exit code 1
```
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux(Fedora 35)
- Python version: 3.10
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4124/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4123/comments | https://api.github.com/repos/huggingface/datasets/issues/4123/events | https://github.com/huggingface/datasets/issues/4123 | 1,196,367,512 | I_kwDODunzps5HTx6Y | 4,123 | Building C4 takes forever | {
"login": "StellaAthena",
"id": 15899312,
"node_id": "MDQ6VXNlcjE1ODk5MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/15899312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StellaAthena",
"html_url": "https://github.com/StellaAthena",
"followers_url": "https://api.github.com/users/StellaAthena/followers",
"following_url": "https://api.github.com/users/StellaAthena/following{/other_user}",
"gists_url": "https://api.github.com/users/StellaAthena/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StellaAthena/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StellaAthena/subscriptions",
"organizations_url": "https://api.github.com/users/StellaAthena/orgs",
"repos_url": "https://api.github.com/users/StellaAthena/repos",
"events_url": "https://api.github.com/users/StellaAthena/events{/privacy}",
"received_events_url": "https://api.github.com/users/StellaAthena/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @StellaAthena, thanks for reporting.\r\n\r\nPlease note, that our `datasets` library performs several operations in order to load a dataset, among them:\r\n- it downloads all the required files: for C4 \"en\", 378.69 GB of JSON GZIPped files\r\n- it parses their content to generate the dataset\r\n- it caches the dataset in an Arrow file: for C4 \"en\", this file size is 1.87 TB\r\n- it memory-maps the Arrow file\r\n\r\nIf it suits your use case, you might load this dataset in streaming mode:\r\n- no Arrow file is generated\r\n- you can iterate over elements immediately (no need to wait to download all the entire files)\r\n\r\n```python\r\nIn [45]: from datasets import load_dataset\r\n ...: ds = load_dataset(\"c4\", \"en\", split=\"train\", streaming=True)\r\n ...: for item in ds:\r\n ...: print(item)\r\n ...: break\r\n ...: \r\n{'text': 'Beginners BBQ Class Taking Place in Missoula!\\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.', 'timestamp': '2019-04-25T12:57:54Z', 'url': 'https://klyq.com/beginners-bbq-class-taking-place-in-missoula/'}\r\n```\r\nI hope this is useful for your use case."
] | 2022-04-07T17:41:30 | 2023-06-26T22:01:29 | 2023-06-26T22:01:29 | NONE | null | ## Describe the bug
C4-en is a 300 GB dataset. However, when I try to download it through the hub, it takes over _six hours_ to generate the train/test split from the downloaded files. This is an absurd amount of time and an unnecessary waste of resources.
## Steps to reproduce the bug
```python
c4 = datasets.load("c4", "en")
```
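As an interim workaround (shown in the maintainer reply above), streaming mode skips the Arrow cache entirely; a minimal sketch:
```python
from datasets import load_dataset

# No ~1.87 TB Arrow file is written; examples are yielded as they are read.
c4 = load_dataset("c4", "en", split="train", streaming=True)
print(next(iter(c4)))
```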
## Expected results
I would like to be able to download pre-split data.
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.13.0-35-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4123/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4122/comments | https://api.github.com/repos/huggingface/datasets/issues/4122/events | https://github.com/huggingface/datasets/issues/4122 | 1,196,095,072 | I_kwDODunzps5HSvZg | 4,122 | medical_dialog zh has very slow _generate_examples | {
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @nbroad1881, thanks for reporting.\r\n\r\nLet me have a look to try to improve its performance. ",
"Thanks @nbroad1881 for reporting! I don't recall it taking so long. I will also have a look at this. \r\n@albertvillanova please let me know if I am doing something unnecessary or time consuming.",
"Hi @nbroad1881 and @vrindaprabhu,\r\n\r\nAs a workaround for the performance of the parsing of the raw data files (this could be addressed in a subsequent PR), I have found that there are also processed data files, that do not require parsing. I have added these as new configurations `processed.en` and `processed.zh`:\r\n```python\r\nds = load_dataset(\"medical_dialog\", \"processed.zh\")\r\n```"
] | 2022-04-07T14:00:51 | 2022-04-08T16:20:51 | 2022-04-08T16:20:51 | NONE | null | ## Describe the bug
After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours.
## Steps to reproduce the bug
The easiest way I've found to download the files from Google Drive is to use `gdown` in Google Colab, because both are hosted in Google Cloud and the download speeds are very high.
```python
file_ids = [
"1AnKxGEuzjeQsDHHqL3NqI_aplq2hVL_E",
"1tt7weAT1SZknzRFyLXOT2fizceUUVRXX",
"1A64VBbsQ_z8wZ2LDox586JIyyO6mIwWc",
"1AKntx-ECnrxjB07B6BlVZcFRS4YPTB-J",
"1xUk8AAua_x27bHUr-vNoAuhEAjTxOvsu",
"1ezKTfe7BgqVN5o-8Vdtr9iAF0IueCSjP",
"1tA7bSOxR1RRNqZst8cShzhuNHnayUf7c",
"1pA3bCFA5nZDhsQutqsJcH3d712giFb0S",
"1pTLFMdN1A3ro-KYghk4w4sMz6aGaMOdU",
"1dUSnG0nUPq9TEQyHd6ZWvaxO0OpxVjXD",
"1UfCH05nuWiIPbDZxQzHHGAHyMh8dmPQH",
]
for i in file_ids:
url = f"https://drive.google.com/uc?id={i}"
!gdown $url
from datasets import load_dataset
ds = load_dataset("medical_dialog", "zh", data_dir="./")
```
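For reference, a minimal sketch of the workaround the maintainers describe in the comments above: the `processed.*` configurations ship pre-parsed files, so the slow raw-file parsing is skipped.
```python
from datasets import load_dataset

# Loads the already-processed Chinese configuration; no raw-file parsing involved.
ds = load_dataset("medical_dialog", "processed.zh")
```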
## Expected results
Faster load time
## Actual results
`Generating train split: 33%: 625519/1921127 [4:31:03<31:39:20, 11.37 examples/s]`
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
@vrindaprabhu , could you take a look at this since you implemented it? I think the `_generate_examples` function might need to be rewritten | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4122/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4121/comments | https://api.github.com/repos/huggingface/datasets/issues/4121/events | https://github.com/huggingface/datasets/issues/4121 | 1,196,000,018 | I_kwDODunzps5HSYMS | 4,121 | datasets.load_metric cannot load a local metric | {
"login": "Gare-Ng",
"id": 51749469,
"node_id": "MDQ6VXNlcjUxNzQ5NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/51749469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gare-Ng",
"html_url": "https://github.com/Gare-Ng",
"followers_url": "https://api.github.com/users/Gare-Ng/followers",
"following_url": "https://api.github.com/users/Gare-Ng/following{/other_user}",
"gists_url": "https://api.github.com/users/Gare-Ng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gare-Ng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gare-Ng/subscriptions",
"organizations_url": "https://api.github.com/users/Gare-Ng/orgs",
"repos_url": "https://api.github.com/users/Gare-Ng/repos",
"events_url": "https://api.github.com/users/Gare-Ng/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gare-Ng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hello, could you tell me how this issue can be fixed? I'm coming across the same issue."
] | 2022-04-07T12:48:56 | 2023-01-18T14:30:46 | 2022-04-07T13:53:27 | NONE | null | ## Describe the bug
No matter how hard I try to tell load_metric that I want to load a local metric file, it still tries to fetch things from the Internet, and unfortunately it fails with 'ConnectionError: Couldn't reach'. However, I can download the file myself without any connection error and point load_metric to its local directory, and it comes back to where it began...
## Steps to reproduce the bug
```python
metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py')
ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py
metric = load_metric(path='bleu')
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.12.1/metrics/bleu/bleu.py
metric = load_metric(path='./blue/bleu.py')
ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py
```
## Expected results
I did read the docs [here](https://huggingface.co/docs/datasets/package_reference/loading_methods#datasets.load_metric). There is no parameter other than `path` that helps the function distinguish between a local and an online file. Given the code above, it should load from the local file.
## Actual results
> metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py')
> ~\AppData\Local\Temp\ipykernel_19636\1855752034.py in <module>
----> 1 metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py')
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)
817 if data_files is None and data_dir is not None:
818 data_files = os.path.join(data_dir, "**")
--> 819
820 self.name = name
821 self.revision = revision
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)
639 self,
640 path: str,
--> 641 download_config: Optional[DownloadConfig] = None,
642 download_mode: Optional[DownloadMode] = None,
643 dynamic_modules_path: Optional[str] = None,
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
297 token = hf_api.HfFolder.get_token()
298 if token:
--> 299 headers["authorization"] = f"Bearer {token}"
300 return headers
301
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\utils\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
604 def _resumable_file_manager():
605 with open(incomplete_path, "a+b") as f:
--> 606 yield f
607
608 temp_file_manager = _resumable_file_manager
ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py
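If the failure comes from the metric script's external helper (the URL above points at the tensorflow/nmt script that `bleu.py` imports), a hedged workaround sketch, assuming the loader resolves the script's relative import `from .nmt_bleu import compute_bleu` from the script's own directory:
```python
# Assumption: bleu.py imports a sibling module named nmt_bleu. While online,
# download https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py
# and save it as nmt_bleu.py next to the local bleu.py, then:
from datasets import load_metric

metric = load_metric(r"C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py")
```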
## Environment info
- `datasets` version: 2.0.0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.7.13
- PyArrow version: 7.0.0
- Pandas version: 1.3.4
Any advice would be appreciated. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4121/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4120/comments | https://api.github.com/repos/huggingface/datasets/issues/4120/events | https://github.com/huggingface/datasets/issues/4120 | 1,195,887,430 | I_kwDODunzps5HR8tG | 4,120 | Representing dictionaries (json) objects as features | {
"login": "yanaiela",
"id": 8031035,
"node_id": "MDQ6VXNlcjgwMzEwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8031035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanaiela",
"html_url": "https://github.com/yanaiela",
"followers_url": "https://api.github.com/users/yanaiela/followers",
"following_url": "https://api.github.com/users/yanaiela/following{/other_user}",
"gists_url": "https://api.github.com/users/yanaiela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanaiela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanaiela/subscriptions",
"organizations_url": "https://api.github.com/users/yanaiela/orgs",
"repos_url": "https://api.github.com/users/yanaiela/repos",
"events_url": "https://api.github.com/users/yanaiela/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanaiela/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2022-04-07T11:07:41 | 2022-04-07T11:07:41 | null | CONTRIBUTOR | null | In the process of adding a new dataset to the hub, I stumbled upon the inability to represent dictionaries whose key names are not known in advance (and may differ between samples); this was originally asked in the [forum](https://discuss.huggingface.co/t/representing-nested-dictionary-with-different-keys/16442).
For instance:
```
sample1 = {"nps": {
"a": {"id": 0, "text": "text1"},
"b": {"id": 1, "text": "text2"},
}}
sample2 = {"nps": {
"a": {"id": 0, "text": "text1"},
"b": {"id": 1, "text": "text2"},
"c": {"id": 2, "text": "text3"},
}}
sample3 = {"nps": {
"a": {"id": 0, "text": "text1"},
"b": {"id": 1, "text": "text2"},
"c": {"id": 2, "text": "text3"},
"d": {"id": 3, "text": "text4"},
}}
```
the `nps` field cannot be represented as a Feature while maintaining its original structure.
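For illustration (my sketch, not from the thread): if re-shaping the data is acceptable, a variable-key mapping can be stored as a sequence of fixed-schema records by moving the keys into a field of their own, e.g. a hypothetical `key` field:
```python
from datasets import Features, Sequence, Value

# {"a": {"id": 0, "text": "text1"}, ...} becomes
# [{"key": "a", "id": 0, "text": "text1"}, ...]
features = Features(
    {
        "nps": Sequence(
            {
                "key": Value("string"),
                "id": Value("int32"),
                "text": Value("string"),
            }
        )
    }
)
```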
@lhoestq suggested to add JSON as a new feature type, which will solve this problem.
An alternative solution would be to change the original data format, which isn't optimal in my case. Moreover, JSON is a common structure that will likely be useful in future datasets as well. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4120/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4120/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4119/comments | https://api.github.com/repos/huggingface/datasets/issues/4119/events | https://github.com/huggingface/datasets/pull/4119 | 1,195,641,298 | PR_kwDODunzps41yXHF | 4,119 | Hotfix failing CI tests on Windows | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-07T07:38:46 | 2022-04-07T09:47:24 | 2022-04-07T07:57:13 | MEMBER | null | This PR makes a hotfix for our CI Windows tests: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
Fix #4118
I guess this issue is related to this PR:
- huggingface/huggingface_hub#815 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4119/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4119/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4119",
"html_url": "https://github.com/huggingface/datasets/pull/4119",
"diff_url": "https://github.com/huggingface/datasets/pull/4119.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4119.patch",
"merged_at": "2022-04-07T07:57:13"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4118/comments | https://api.github.com/repos/huggingface/datasets/issues/4118/events | https://github.com/huggingface/datasets/issues/4118 | 1,195,638,944 | I_kwDODunzps5HRACg | 4,118 | Failing CI tests on Windows | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-04-07T07:36:25 | 2022-04-07T07:57:13 | 2022-04-07T07:57:13 | MEMBER | null | ## Describe the bug
Our CI tests on Windows have been failing since yesterday: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4118/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4117/comments | https://api.github.com/repos/huggingface/datasets/issues/4117/events | https://github.com/huggingface/datasets/issues/4117 | 1,195,552,406 | I_kwDODunzps5HQq6W | 4,117 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api' | {
"login": "arymbe",
"id": 4567991,
"node_id": "MDQ6VXNlcjQ1Njc5OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4567991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arymbe",
"html_url": "https://github.com/arymbe",
"followers_url": "https://api.github.com/users/arymbe/followers",
"following_url": "https://api.github.com/users/arymbe/following{/other_user}",
"gists_url": "https://api.github.com/users/arymbe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arymbe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arymbe/subscriptions",
"organizations_url": "https://api.github.com/users/arymbe/orgs",
"repos_url": "https://api.github.com/users/arymbe/repos",
"events_url": "https://api.github.com/users/arymbe/events{/privacy}",
"received_events_url": "https://api.github.com/users/arymbe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @arymbe, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your problem.\r\n\r\nCould you please write the complete stack trace? That way we will be able to see which package originates the exception.",
"Hello, thank you for your fast replied. this is the complete error that I got\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\nInput In [27], in <module>\r\n----> 1 from datasets import load_dataset\r\n\r\nvenv/lib/python3.8/site-packages/datasets/__init__.py:39, in <module>\r\n 37 from .arrow_dataset import Dataset, concatenate_datasets\r\n 38 from .arrow_reader import ReadInstruction\r\n---> 39 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n 40 from .combine import interleave_datasets\r\n 41 from .dataset_dict import DatasetDict, IterableDatasetDict\r\n\r\nvenv/lib/python3.8/site-packages/datasets/builder.py:40, in <module>\r\n 32 from .arrow_reader import (\r\n 33 HF_GCP_BASE_URL,\r\n 34 ArrowReader,\r\n (...)\r\n 37 ReadInstruction,\r\n 38 )\r\n 39 from .arrow_writer import ArrowWriter, BeamWriter\r\n---> 40 from .data_files import DataFilesDict, sanitize_patterns\r\n 41 from .dataset_dict import DatasetDict, IterableDatasetDict\r\n 42 from .features import Features\r\n\r\nvenv/lib/python3.8/site-packages/datasets/data_files.py:297, in <module>\r\n 292 except FileNotFoundError:\r\n 293 raise FileNotFoundError(f\"The directory at {base_path} doesn't contain any data file\") from None\r\n 296 def _resolve_single_pattern_in_dataset_repository(\r\n--> 297 dataset_info: huggingface_hub.hf_api.DatasetInfo,\r\n 298 pattern: str,\r\n 299 allowed_extensions: Optional[list] = None,\r\n 300 ) -> List[PurePath]:\r\n 301 data_files_ignore = FILES_TO_IGNORE\r\n 302 fs = HfFileSystem(repo_info=dataset_info)\r\n\r\nAttributeError: module 'huggingface_hub' has no attribute 'hf_api'",
"This is weird... It is long ago that the package `huggingface_hub` has a submodule called `hf_api`.\r\n\r\nMaybe you have a problem with your installed `huggingface_hub`...\r\n\r\nCould you please try to update it?\r\n```shell\r\npip install -U huggingface_hub\r\n```",
"Yap, I've updated several times. Then, I've tried numeral combination of datasets and huggingface_hub versions. However, I think your point is right that there is a problem with my huggingface_hub installation. I'll try another way to find the solution. I'll update it later when I get the solution. Thank you :)",
"I'm sorry I can't reproduce your problem.\r\n\r\nMaybe you could try to create a new Python virtual environment and install all dependencies there from scratch. You can use either:\r\n- Python venv: https://docs.python.org/3/library/venv.html\r\n- or conda venv (if you are using conda): https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html",
"Facing the same issue.\r\n\r\nResponse from `pip show datasets`\r\n```\r\nName: datasets\r\nVersion: 1.15.1\r\nSummary: HuggingFace community-driven open-source library of datasets\r\nHome-page: https://github.com/huggingface/datasets\r\nAuthor: HuggingFace Inc.\r\nAuthor-email: [email protected]\r\nLicense: Apache 2.0\r\nLocation: /usr/local/lib/python3.8/dist-packages\r\nRequires: aiohttp, dill, fsspec, huggingface-hub, multiprocess, numpy, packaging, pandas, pyarrow, requests, tqdm, xxhash\r\nRequired-by: lm-eval\r\n```\r\n\r\nResponse from `pip show huggingface_hub`\r\n\r\n```\r\nName: huggingface-hub\r\nVersion: 0.8.1\r\nSummary: Client library to download and publish models, datasets and other repos on the huggingface.co hub\r\nHome-page: https://github.com/huggingface/huggingface_hub\r\nAuthor: Hugging Face, Inc.\r\nAuthor-email: [email protected]\r\nLicense: Apache\r\nLocation: /usr/local/lib/python3.8/dist-packages\r\nRequires: filelock, packaging, pyyaml, requests, tqdm, typing-extensions\r\nRequired-by: datasets\r\n```\r\n\r\nresponse from `datasets-cli env`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/datasets-cli\", line 5, in <module>\r\n from datasets.commands.datasets_cli import main\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/__init__.py\", line 37, in <module>\r\n from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 44, in <module>\r\n from .data_files import DataFilesDict, _sanitize_patterns\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/data_files.py\", line 120, in <module>\r\n dataset_info: huggingface_hub.hf_api.DatasetInfo,\r\n File \"/usr/local/lib/python3.8/dist-packages/huggingface_hub/__init__.py\", line 105, in __getattr__\r\n raise AttributeError(f\"No {package_name} attribute {name}\")\r\nAttributeError: No huggingface_hub attribute hf_api\r\n```",
"A workaround: \r\nI changed lines around Line 125 in `__init__.py` of `huggingface_hub` to something like\r\n```\r\n__getattr__, __dir__, __all__ = _attach(\r\n __name__,\r\n submodules=['hf_api'],\r\n```\r\nand it works ( which gives `datasets` direct access to `huggingface_hub.hf_api` ).",
"I was getting the same issue. After trying a few versions, following combination worked for me.\r\ndataset==2.3.2\r\nhuggingface_hub==0.7.0\r\n\r\nIn another environment, I just installed latest repos from pip through `pip install -U transformers datasets tokenizers evaluate`, resulting in following versions. This also worked. Hope it helps someone. \r\n\r\ndatasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.20.1",
"For layoutlm_v3 finetune\r\ndatasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.12.5",
"(For layoutlmv3 fine-tuning) In my case, modifying `requirements.txt` as below worked.\r\n\r\n- python = 3.7\r\n\r\n```\r\ndatasets==2.3.2\r\nevaluate==0.1.2\r\nhuggingface-hub==0.8.1\r\nresponse==0.5.0\r\ntokenizers==0.10.1\r\ntransformers==4.12.5\r\nseqeval==1.2.2\r\ndeepspeed==0.5.7\r\ntensorboard==2.7.0\r\nseqeval==1.2.2\r\nsentencepiece\r\ntimm==0.4.12\r\nPillow\r\neinops\r\ntextdistance\r\nshapely\r\n```",
"> For layoutlm_v3 finetune datasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.12.5\r\n\r\nGOOD!! Thanks!"
] | 2022-04-07T05:52:36 | 2022-07-28T16:44:04 | 2022-04-19T15:36:35 | NONE | null | ## Describe the bug
Could you help me, please? I got the following error:
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
when I imported the datasets
# Sample code to reproduce the bug
from datasets import list_datasets, load_dataset, list_metrics, load_metric
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-12.3-x86_64-i386-64bit
- Python version: 3.8.9
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- Huggingface-hub: 0.5.0
- Transformers: 4.18.0
Thank you in advance. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4117/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4117/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4116/comments | https://api.github.com/repos/huggingface/datasets/issues/4116/events | https://github.com/huggingface/datasets/pull/4116 | 1,194,926,459 | PR_kwDODunzps41wCEO | 4,116 | Pretty print dataset info files | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"maybe just do it from now on no? (i.e. not for existing `dataset_infos.json` files)",
"_The documentation is not available anymore as the PR was closed or merged._",
"> maybe just do it from now on no? (i.e. not for existing dataset_infos.json files)\r\n\r\nYes, or do this only for datasets created with `push_to_hub` to (always) keep the GH datasets small? \r\n",
"yep sounds good too on my side! ",
"I reverted the change to avoid the size increase and added the `pretty_print` flag, which pretty-prints the JSON, and that flag is only True for datasets created with `push_to_hub`. "
] | 2022-04-06T17:40:48 | 2022-04-08T11:28:01 | 2022-04-08T11:21:53 | CONTRIBUTOR | null | Adds indentation to the `dataset_infos.json` file when saving for nicer diffs.
(suggested by @julien-c)
This PR also updates the info files of the GH datasets. Note that this change adds more than **10 MB** to the repo size (the total file size before the change: 29.672298 MB, after: 41.666475 MB), so I'm not sure this change is a good idea.
`src/datasets/info.py` is the only relevant file for reviewers.
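For illustration, a minimal sketch of the kind of change involved (hedged: the content below is illustrative, and per the comments the final PR gates this behind a `pretty_print` flag used only by `push_to_hub`):
```python
import json

info = {"splits": {"train": {"num_examples": 100}}}  # illustrative content
with open("dataset_infos.json", "w", encoding="utf-8") as f:
    json.dump(info, f, indent=4)  # indentation yields nicer diffs
```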
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4116/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4116",
"html_url": "https://github.com/huggingface/datasets/pull/4116",
"diff_url": "https://github.com/huggingface/datasets/pull/4116.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4116.patch",
"merged_at": "2022-04-08T11:21:53"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4115/comments | https://api.github.com/repos/huggingface/datasets/issues/4115/events | https://github.com/huggingface/datasets/issues/4115 | 1,194,907,555 | I_kwDODunzps5HONej | 4,115 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints' | {
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Maybe it would be nice to ignore private dirs like this one (ones starting with `.`) by default. \r\n\r\nCC @mariosasko ",
"Maybe we can add a `ignore_hidden_files` flag to the builder configs of our packaged loaders (to be consistent across all of them), wdyt @lhoestq @albertvillanova? ",
"I think they should always ignore them actually ! Not sure if adding a flag would be helpful",
"@lhoestq But what if the user explicitly requests those files via regex?\r\n\r\n`glob.glob` ignores hidden files (files starting with \".\") by default unless they are explicitly requested, but fsspec's `glob` doesn't follow this behavior, which is probably a bug, so maybe we can raise an issue or open a PR in their repo?",
"> @lhoestq But what if the user explicitly requests those files via regex?\r\n\r\nUsually hidden files are meant to be ignored. If they are data files, they must be placed outside a hidden directory in the first place right ? I think it's more sensible to explain this than adding a flag.\r\n\r\n> glob.glob ignores hidden files (files starting with \".\") by default unless they are explicitly requested, but fsspec's glob doesn't follow this behavior, which is probably a bug, so maybe we can raise an issue or open a PR in their repo?\r\n\r\nAfter globbing using `fsspec`, we already ignore files that start with a `.` in `_resolve_single_pattern_locally` and `_resolve_single_pattern_in_dataset_repository`, I guess we can just account for parent directories as well ?\r\n\r\nWe could open an issue on `fsspec` but I think they won't change this since it's an important breaking change for them."
] | 2022-04-06T17:29:43 | 2022-06-01T13:04:16 | 2022-06-01T13:04:16 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from JupyterLab, so an '.ipynb_checkpoints' folder appears where my dataset is and (I just realized) leads to accidental duplicate image additions. I think this is an easy thing to miss, especially if the dataset is very large.
**Describe the solution you'd like**
Maybe have an `ignore` option or something .gitignore-style:
`dataset = load_dataset("imagefolder", data_dir="./data/original", ignore="regex?")`
**Describe alternatives you've considered**
Could filter out hidden folders manually, along the lines of the sketch below.
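A minimal sketch of such manual filtering (an assumption-laden example: it presumes local image files and that every hidden directory should be skipped):
```python
from pathlib import Path

data_dir = Path("./data/original")
# Keep only image files whose path contains no hidden directory component,
# e.g. skip anything under '.ipynb_checkpoints'.
image_files = [
    p
    for p in data_dir.rglob("*")
    if p.suffix.lower() in {".png", ".jpg", ".jpeg"}
    and not any(part.startswith(".") for part in p.parts)
]
```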
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4115/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4114/comments | https://api.github.com/repos/huggingface/datasets/issues/4114/events | https://github.com/huggingface/datasets/issues/4114 | 1,194,855,345 | I_kwDODunzps5HOAux | 4,114 | Allow downloading just some columns of a dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"In the general case you can’t always reduce the quantity of data to download, since you can’t parse CSV or JSON data without downloading the whole files right ? ^^ However we could explore this case-by-case I guess",
"Actually for csv pandas has `usecols` which allows loading a subset of columns in a more efficient way afaik, but yes, you're right this might be more complex than I thought."
] | 2022-04-06T16:38:46 | 2022-04-07T07:56:26 | null | MEMBER | null | **Is your feature request related to a problem? Please describe.**
Some people are interested in doing label analysis of a CV dataset without downloading all the images. Downloading the whole dataset does not always make sense for this kind of use case.
**Describe the solution you'd like**
Be able to download just some columns of a dataset, for example:
```python
load_dataset("huggan/wikiart",columns=["artist", "genre"])
```
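In the meantime, a hedged sketch of a partial workaround for CSV-backed data, using pandas' `usecols` as mentioned in the comments (the CSV file name below is illustrative, not an actual artifact):
```python
import pandas as pd
from datasets import Dataset

# Only the requested columns are parsed into memory by pandas.
df = pd.read_csv("wikiart_metadata.csv", usecols=["artist", "genre"])
ds = Dataset.from_pandas(df)
```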
Supporting this natively might make things a bit complicated in terms of local caching of datasets, though. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4114/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4114/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4113/comments | https://api.github.com/repos/huggingface/datasets/issues/4113/events | https://github.com/huggingface/datasets/issues/4113 | 1,194,843,532 | I_kwDODunzps5HN92M | 4,113 | Multiprocessing with FileLock fails in python 3.9 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Closing this one because it must be used this way actually:\r\n```python\r\ndef main():\r\n with FileLock(\"tmp.lock\"):\r\n with Pool(2) as pool:\r\n pool.map(run, range(2))\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```"
] | 2022-04-06T16:27:09 | 2022-11-28T11:49:14 | 2022-11-28T11:49:14 | MEMBER | null | On python 3.9, this code hangs:
```python
from multiprocessing import Pool
from filelock import FileLock
def run(i):
    print(f"got the lock in multi process [{i}]")

with FileLock("tmp.lock"):
    with Pool(2) as pool:
        pool.map(run, range(2))
```
This is because the subprocesses try to acquire the lock from the main process for some reason. This is not the case in older versions of python.
This can cause many issues in python 3.9. In particular, we use multiprocessing to fetch data files when you load a dataset (as long as there are >16 data files). Therefore `imagefolder` hangs, and I expect any dataset that needs to download >16 files to hang as well.
Let's see if we can fix this and have a CI that runs on 3.9.
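For reference, the working pattern from the closing comment is to create the pool inside a guarded entry point; a minimal sketch:
```python
from multiprocessing import Pool
from filelock import FileLock

def run(i):
    print(f"got the lock in multi process [{i}]")

def main():
    with FileLock("tmp.lock"):
        with Pool(2) as pool:
            pool.map(run, range(2))

if __name__ == "__main__":
    main()
```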
cc @mariosasko @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4113/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4112/comments | https://api.github.com/repos/huggingface/datasets/issues/4112/events | https://github.com/huggingface/datasets/issues/4112 | 1,194,752,765 | I_kwDODunzps5HNnr9 | 4,112 | ImageFolder with Grayscale images dataset | {
"login": "chainyo",
"id": 50595514,
"node_id": "MDQ6VXNlcjUwNTk1NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/50595514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chainyo",
"html_url": "https://github.com/chainyo",
"followers_url": "https://api.github.com/users/chainyo/followers",
"following_url": "https://api.github.com/users/chainyo/following{/other_user}",
"gists_url": "https://api.github.com/users/chainyo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chainyo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chainyo/subscriptions",
"organizations_url": "https://api.github.com/users/chainyo/orgs",
"repos_url": "https://api.github.com/users/chainyo/repos",
"events_url": "https://api.github.com/users/chainyo/events{/privacy}",
"received_events_url": "https://api.github.com/users/chainyo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! Replacing:\r\n```python\r\ntransformed_dataset = dataset.with_transform(transforms)\r\ntransformed_dataset.set_format(type=\"torch\", device=\"cuda\")\r\n```\r\n\r\nwith:\r\n```python\r\ndef transform_func(examples):\r\n examples[\"image\"] = [transforms(img).to(\"cuda\") for img in examples[\"image\"]]\r\n return examples\r\n\r\ntransformed_dataset = dataset.with_transform(transform_func)\r\n```\r\nshould fix the issue. `datasets` doesn't support chaining of transforms (you can think of `set_format`/`with_format` as a predefined transform func for `set_transform`/`with_transforms`), so the last transform (in your case, `set_format`) takes precedence over the previous ones (in your case `with_format`). And the PyTorch formatter is not supported by the Image feature, hence the error (adding support for that is on our short-term roadmap).",
"Ok thanks a lot for the code snippet!\r\n\r\nI love the way `datasets` is easy to use but it made it really long to pre-process all the images (400.000 in my case) before training anything. `ImageFolder` from pytorch is faster in my case but force me to have the images on my local machine.\r\n\r\nI don't know how to speed up the process without switching to `ImageFolder` :smile: ",
"You can pass `ignore_verifications=True` in `load_dataset` to skip checksum verification, which takes a lot of time if the number of files is large. We will consider making this the default behavior."
] | 2022-04-06T15:10:00 | 2022-04-22T10:21:53 | 2022-04-22T10:21:52 | NONE | null | Hi, I'm facing a problem with a grayscale image dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP).
I'm getting an error when I try to use the images to train a model with a PyTorch DataLoader. Here is the full traceback:
```bash
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1765, in __getitem__
return self._getitem(
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1750, in _getitem
formatted_output = format_table(
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 532, in format_table
return formatter(pa_table, query_type=query_type)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 281, in __call__
return self.format_row(pa_table)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 58, in format_row
return self.recursive_tensorize(row)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 54, in recursive_tensorize
return map_nested(self._recursive_tensorize, data_struct, map_list=False)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 314, in map_nested
mapped = [
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 315, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in _single_map_nested
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in <dictcomp>
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 251, in _single_map_nested
return function(data_struct)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 51, in _recursive_tensorize
return self._tensorize(data_struct)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 38, in _tensorize
if np.issubdtype(value.dtype, np.integer):
AttributeError: 'bytes' object has no attribute 'dtype'
```
I don't really understand why the image is still a bytes object even though I applied transformations to it. Here is the code I used to upload the dataset (that part worked well):
```python
train_dataset = load_dataset("imagefolder", data_dir="data/train")
train_dataset = train_dataset["train"]
test_dataset = load_dataset("imagefolder", data_dir="data/test")
test_dataset = test_dataset["train"]
val_dataset = load_dataset("imagefolder", data_dir="data/val")
val_dataset = val_dataset["train"]
dataset = DatasetDict({
    "train": train_dataset,
    "val": val_dataset,
    "test": test_dataset
})
dataset.push_to_hub("ChainYo/rvl-cdip")
```
Now here is the code I am using to get the dataset and prepare it for training:
```python
img_size = 512
batch_size = 128
normalize = [(0.5), (0.5)]
data_dir = "ChainYo/rvl-cdip"
dataset = load_dataset(data_dir, split="train")
transforms = transforms.Compose([
    transforms.Resize(img_size),
    transforms.CenterCrop(img_size),
    transforms.ToTensor(),
    transforms.Normalize(*normalize)
])
transformed_dataset = dataset.with_transform(transforms)
transformed_dataset.set_format(type="torch", device="cuda")
train_dataloader = torch.utils.data.DataLoader(
    transformed_dataset, batch_size=batch_size, shuffle=True, num_workers=4, pin_memory=True
)
```
But this gets me the error above. I don't understand why it behaves this way.
Do I need to map something on the dataset? Something like this:
```python
labels = dataset.features["label"].names
num_labels = dataset.features["label"].num_classes
def preprocess_data(examples):
    images = [ex.convert("RGB") for ex in examples["image"]]
    labels = [ex for ex in examples["label"]]
    return {"images": images, "labels": labels}

features = Features({
    "images": Image(decode=True, id=None),
    "labels": ClassLabel(num_classes=num_labels, names=labels)
})
decoded_dataset = dataset.map(preprocess_data, remove_columns=dataset.column_names, features=features, batched=True, batch_size=100)
```
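For reference, a sketch of the fix suggested in the comments: apply a single transform function with `with_transform` (reusing the `dataset` and `transforms` objects from the snippets above) instead of chaining `with_transform` and `set_format`, since the last transform overrides the earlier ones:
```python
# Mirrors the snippet from the comments; `transforms` is the Compose object above.
def transform_func(examples):
    examples["image"] = [transforms(img).to("cuda") for img in examples["image"]]
    return examples

transformed_dataset = dataset.with_transform(transform_func)
```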
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4112/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4111/comments | https://api.github.com/repos/huggingface/datasets/issues/4111/events | https://github.com/huggingface/datasets/pull/4111 | 1,194,660,699 | PR_kwDODunzps41vJCt | 4,111 | Update security policy | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-06T13:59:51 | 2022-04-07T09:46:30 | 2022-04-07T09:40:27 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4111/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4111",
"html_url": "https://github.com/huggingface/datasets/pull/4111",
"diff_url": "https://github.com/huggingface/datasets/pull/4111.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4111.patch",
"merged_at": "2022-04-07T09:40:27"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4110/comments | https://api.github.com/repos/huggingface/datasets/issues/4110/events | https://github.com/huggingface/datasets/pull/4110 | 1,194,581,375 | PR_kwDODunzps41u4Je | 4,110 | Matthews Correlation Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-06T12:59:35 | 2022-05-03T13:43:17 | 2022-05-03T13:36:13 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4110/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4110",
"html_url": "https://github.com/huggingface/datasets/pull/4110",
"diff_url": "https://github.com/huggingface/datasets/pull/4110.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4110.patch",
"merged_at": "2022-05-03T13:36:12"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4109/comments | https://api.github.com/repos/huggingface/datasets/issues/4109/events | https://github.com/huggingface/datasets/pull/4109 | 1,194,579,257 | PR_kwDODunzps41u3sm | 4,109 | Add Spearmanr Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"changes made! @lhoestq let me know what you think ",
"The CI fail is unrelated to this PR and fixed on master, feel free to merge :)"
] | 2022-04-06T12:57:53 | 2022-05-03T16:50:26 | 2022-05-03T16:43:37 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4109/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4109",
"html_url": "https://github.com/huggingface/datasets/pull/4109",
"diff_url": "https://github.com/huggingface/datasets/pull/4109.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4109.patch",
"merged_at": "2022-05-03T16:43:37"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4108/comments | https://api.github.com/repos/huggingface/datasets/issues/4108/events | https://github.com/huggingface/datasets/pull/4108 | 1,194,578,584 | PR_kwDODunzps41u3j2 | 4,108 | Perplexity Speedup | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"WRT the high values, can you add some unit tests with some [string, model] pairs and their resulting perplexity code, and @TristanThrush can run the same pairs through his version of the code?",
"_The documentation is not available anymore as the PR was closed or merged._",
"I thought that the perplexity metric should output the average perplexity value of all the strings that it gets as input (not a perplexity value per string, as the new version does).\r\n@lhoestq , @TristanThrush thoughts?",
"> I thought that the perplexity metric should output the average perplexity value of all the strings that it gets as input (not a perplexity value per string, as the new version does). @lhoestq , @TristanThrush thoughts?\r\n\r\nI support this change from Emi. If we have a perplexity function that loads GPT2 and then returns an average over all of the strings, then it is impossible to get multiple perplexities of a batch of strings efficiently. If we have this new perplexity function that is built for batching, then it is possible to get a batch of perplexities efficiently and you can still compute the average efficiently afterwards.",
"Thanks a lot for working on this @emibaylor @TristanThrush :)\r\n\r\nFor consistency with the other metrics, I think it's nice if we return the mean perplexity. Though I agree that having the separate perplexities per sample can also be useful. What do you think about returning both ?\r\n```python\r\nreturn {\"perplexities\": ppls, \"mean_perplexity\": np.mean(ppls)}\r\n```\r\nwe're also doing this for the COMET metric.",
"> Thanks a lot for working on this @emibaylor @TristanThrush :)\r\n> \r\n> For consistency with the other metrics, I think it's nice if we return the mean perplexity. Though I agree that having the separate perplexities per sample can also be useful. What do you think about returning both ?\r\n> \r\n> ```python\r\n> return {\"perplexities\": ppls, \"mean_perplexity\": np.mean(ppls)}\r\n> ```\r\n> \r\n> we're also doing this for the COMET metric.\r\n\r\nThanks! Sounds great to me.",
"The CI fail is unrelated to your PR and has been fixed on master, feel free to merge the master branch into your PR to fix the CI ;)"
] | 2022-04-06T12:57:21 | 2022-04-20T13:00:54 | 2022-04-20T12:54:42 | CONTRIBUTOR | null | This PR makes necessary changes to perplexity such that:
- it runs much faster (via batching)
- it throws an error when the input is empty, or when the input is a single word without a <BOS> token
- it adds the option to add a <BOS> token
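As a side note, the output format settled on in the comments returns both per-example values and their mean; a minimal sketch (values illustrative):
```python
import numpy as np

ppls = [23.1, 48.7, 312.9]  # illustrative per-example perplexities
result = {"perplexities": ppls, "mean_perplexity": float(np.mean(ppls))}
```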
Issues:
- The values returned are extremely high, and I'm worried they aren't correct. Even if they are correct, they are sometimes returned as `inf`, which is not very useful (see [comment below](https://github.com/huggingface/datasets/pull/4108#discussion_r843931094) for some of the output values).
- If the values are not correct, can you help me find the error?
- If the values are correct, it might be worth it to measure something like perplexity per word, which would allow us to get actual values for the larger perplexities, instead of just `inf`
Future:
- `stride` is not currently implemented here. I have some thoughts on how to make it happen with batching, but I think it would be better to get another set of eyes to look at any possible errors causing such large values now rather than later. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4108/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4108",
"html_url": "https://github.com/huggingface/datasets/pull/4108",
"diff_url": "https://github.com/huggingface/datasets/pull/4108.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4108.patch",
"merged_at": "2022-04-20T12:54:42"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4107/comments | https://api.github.com/repos/huggingface/datasets/issues/4107/events | https://github.com/huggingface/datasets/issues/4107 | 1,194,484,885 | I_kwDODunzps5HMmSV | 4,107 | Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows | {
"login": "Pavithree",
"id": 23344465,
"node_id": "MDQ6VXNlcjIzMzQ0NDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/23344465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pavithree",
"html_url": "https://github.com/Pavithree",
"followers_url": "https://api.github.com/users/Pavithree/followers",
"following_url": "https://api.github.com/users/Pavithree/following{/other_user}",
"gists_url": "https://api.github.com/users/Pavithree/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pavithree/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pavithree/subscriptions",
"organizations_url": "https://api.github.com/users/Pavithree/orgs",
"repos_url": "https://api.github.com/users/Pavithree/repos",
"events_url": "https://api.github.com/users/Pavithree/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pavithree/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting. I'm looking at it",
" It's not related to the dataset viewer in itself. I can replicate the error with:\r\n\r\n```\r\n>>> import datasets as ds\r\n>>> d = ds.load_dataset('Pavithree/explainLikeImFive')\r\nUsing custom data configuration Pavithree--explainLikeImFive-b68b6d8112cd8a51\r\nDownloading and preparing dataset json/Pavithree--explainLikeImFive to /home/slesage/.cache/huggingface/datasets/json/Pavithree--explainLikeImFive-b68b6d8112cd8a51/0.0.0/ac0ca5f5289a6cf108e706efcf040422dbbfa8e658dee6a819f20d76bb84d26b...\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 305M/305M [00:03<00:00, 98.6MB/s]\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17.9M/17.9M [00:00<00:00, 75.7MB/s]\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.9M/11.9M [00:00<00:00, 70.6MB/s]\r\nDownloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:05<00:00, 1.92s/it]\r\nExtracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1948.42it/s]\r\nFailed to read file '/home/slesage/.cache/huggingface/datasets/downloads/5fee9c8819754df277aee6f252e4db6897d785231c21938407b8862ca871d246' with error <class 'pyarrow.lib.ArrowInvalid'>: Exceeded maximum rows\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 144, in _generate_tables\r\n dataset = json.load(f)\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/__init__.py\", line 293, in load\r\n return loads(fp.read(),\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/__init__.py\", line 357, in loads\r\n return _default_decoder.decode(s)\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/decoder.py\", line 340, in decode\r\n raise JSONDecodeError(\"Extra data\", s, end)\r\njson.decoder.JSONDecodeError: Extra data: line 1 column 916 (char 915)\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1691, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1151, in _prepare_split\r\n for key, table in logging.tqdm(\r\n File 
\"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/tqdm/std.py\", line 1168, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 146, in _generate_tables\r\n raise e\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 122, in _generate_tables\r\n pa_table = paj.read_json(\r\n File \"pyarrow/_json.pyx\", line 246, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Exceeded maximum rows\r\n```\r\n\r\ncc @lhoestq @albertvillanova @mariosasko ",
"It seems that train.json is not a valid JSON Lines file: it has several JSON objects in the first line (the 915th character in the first line starts a new object, and there's no \"\\n\")\r\n\r\nYou need to have one JSON object per line",
"I'm closing this issue.\r\n\r\n@Pavithree, please, feel free to re-open it if fixing the JSON file does not solve it.",
"Thank you! that fixes the issue."
] | 2022-04-06T11:37:15 | 2022-04-08T07:13:07 | 2022-04-06T14:39:55 | NONE | null | ## Dataset viewer issue - ArrowInvalid: Exceeded maximum rows
**Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive*
*This is a subset of the original eli5 dataset https://huggingface.co/datasets/vblagoje/lfqa. I just filtered the data samples which belong to one particular subreddit thread. However, the dataset preview for the train split returns the error mentioned below:
Status code: 400
Exception: ArrowInvalid
Message: Exceeded maximum rows
When I try to load the same dataset it returns ArrowInvalid: Exceeded maximum rows error*
Am I the one who added this dataset? Yes
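The resolution from the comments was that `train.json` must be valid JSON Lines, i.e. exactly one JSON object per line; a minimal re-serialization sketch (field names illustrative):
```python
import json

records = [{"title": "...", "answer": "..."}]  # illustrative records
with open("train.json", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")  # one JSON object per line
```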
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4107/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4106/comments | https://api.github.com/repos/huggingface/datasets/issues/4106/events | https://github.com/huggingface/datasets/pull/4106 | 1,194,393,892 | PR_kwDODunzps41uPpa | 4,106 | Support huggingface_hub 0.5 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks like GH actions is not able to resolve `huggingface_hub` 0.5.0, I'm investivating",
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm glad to see changes in `huggingface_hub` are simplifying code here.",
"seems to supersede #4102, feel free to close mine :)",
"maybe just cherry-pick the docstring fix",
"I think I've found the issue:\r\n- https://github.com/huggingface/huggingface_hub/pull/790",
"Good catch, `huggingface_hub` doesn't support python 3.6 anymore indeed, therefore we should keep support for 0.4.0. I'm reverting the requirement version bump for now.\r\n\r\nWe can update the requirement once we drop support for python 3.6 in `datasets`",
"@lhoestq, I've opened this PR on `huggingface_hub`: \r\n- https://github.com/huggingface/huggingface_hub/pull/823\r\n\r\nIs there any strong reason why `huggingface_hub` no longer supports Python 3.6? ",
"I think `datasets` can drop support for 3.6 soon. But for now maybe let's keep support for 0.4.0, python 3.6 users are not affected by https://github.com/huggingface/datasets/issues/4105 anyway.\r\n\r\n`huggingface_hub` doesn't not have to support 3.6 again just for the CI IMO",
"@lhoestq I commented on the PR, that IMO it is not a good practice to drop support for Python 3.6 without a previous deprecation cycle.",
"Re-added support for older versions. I ended up checking `huggingface_hub` version to use the old, deprecated API for <0.5.0",
"I find it good practice to have all dependency version related code in a single file so that when you decide to remove support for an old version of a dependency it's easy to find and remove them, hence suggesting `utils/_fixes.py` in https://github.com/huggingface/datasets/issues/4105#issuecomment-1090041204",
"good idea, thanks !",
"I used your suggestion @adrinjalali , I just replace the try/except with a check on the version of `huggingface_hub`"
] | 2022-04-06T10:15:25 | 2022-04-08T10:28:43 | 2022-04-08T10:22:23 | MEMBER | null | Following https://github.com/huggingface/datasets/issues/4105
`huggingface_hub` deprecated some parameters in `HfApi` in 0.5. This PR updates all the calls to HfApi to remove all the deprecations, <s>and I set the `hugginface_hub` requirement to `>=0.5.0`</s>
cc @adrinjalali @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4106/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4106",
"html_url": "https://github.com/huggingface/datasets/pull/4106",
"diff_url": "https://github.com/huggingface/datasets/pull/4106.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4106.patch",
"merged_at": "2022-04-08T10:22:23"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4105/comments | https://api.github.com/repos/huggingface/datasets/issues/4105/events | https://github.com/huggingface/datasets/issues/4105 | 1,194,297,119 | I_kwDODunzps5HL4cf | 4,105 | push to hub fails with huggingface-hub 0.5.0 | {
"login": "frascuchon",
"id": 2518789,
"node_id": "MDQ6VXNlcjI1MTg3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2518789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frascuchon",
"html_url": "https://github.com/frascuchon",
"followers_url": "https://api.github.com/users/frascuchon/followers",
"following_url": "https://api.github.com/users/frascuchon/following{/other_user}",
"gists_url": "https://api.github.com/users/frascuchon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frascuchon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frascuchon/subscriptions",
"organizations_url": "https://api.github.com/users/frascuchon/orgs",
"repos_url": "https://api.github.com/users/frascuchon/repos",
"events_url": "https://api.github.com/users/frascuchon/events{/privacy}",
"received_events_url": "https://api.github.com/users/frascuchon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! Indeed there was a breaking change in `huggingface_hub` 0.5.0 in `HfApi.create_repo`, which is called here in `datasets` by passing the org name in both the `repo_id` and the `organization` arguments:\r\n\r\nhttps://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arrow_dataset.py#L3363-L3369\r\n\r\nI think we should fix that in `huggingface_hub`, will keep you posted. In the meantime please use `huggingface_hub` 0.4.0",
"I'll be sending a fix for this later today on the `huggingface_hub` side.\r\n\r\nThe error would be converted to a `FutureWarning` if `datasets` uses kwargs instead of positional, for example here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arrow_dataset.py#L3363-L3369\r\n\r\nto be:\r\n\r\n``` python\r\n api.create_repo(\r\n name=dataset_name,\r\n token=token,\r\n repo_type=\"dataset\",\r\n organization=organization,\r\n private=private,\r\n )\r\n```\r\n\r\nBut `name` and `organization` are deprecated in `huggingface_hub=0.5`, and people should pass `repo_id='org/name` instead. Note that `repo_id` was introduced in 0.5 and if `datasets` wants to support older `huggingface_hub` versions (which I encourage it to do), there needs to be a helper function to do that. It can be something like:\r\n\r\n\r\n```python\r\ndef create_repo(\r\n client,\r\n name: str,\r\n token: Optional[str] = None,\r\n organization: Optional[str] = None,\r\n private: Optional[bool] = None,\r\n repo_type: Optional[str] = None,\r\n exist_ok: Optional[bool] = False,\r\n space_sdk: Optional[str] = None,\r\n) -> str:\r\n try:\r\n return client.create_repo(\r\n repo_id=f\"{organization}/{name}\",\r\n token=token,\r\n private=private,\r\n repo_type=repo_type,\r\n exist_ok=exist_ok,\r\n space_sdk=space_sdk,\r\n )\r\n except TypeError:\r\n return client.create_repo(\r\n name=name,\r\n organization=organization,\r\n token=token,\r\n private=private,\r\n repo_type=repo_type,\r\n exist_ok=exist_ok,\r\n space_sdk=space_sdk,\r\n )\r\n```\r\n\r\nin a `utils/_fixes.py` kinda file and and be used internally.\r\n\r\nI'll be sending a patch to `huggingface_hub` to convert the error reported in this issue to a `FutureWarning`.",
"PR with the hotfix on the `huggingface_hub` side: https://github.com/huggingface/huggingface_hub/pull/822",
"We can definitely change `push_to_hub` to use `repo_id` in `datasets` and require `huggingface_hub>=0.5.0`.\r\n\r\nLet me open a PR :)",
"`huggingface_hub` 0.5.1 just got released with a fix, feel free to update `huggingface_hub` ;)"
] | 2022-04-06T08:59:57 | 2022-04-13T14:30:47 | 2022-04-13T14:30:47 | NONE | null | ## Describe the bug
`ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id"
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("rubrix/news_test")
ds.push_to_hub("<your-user>/news_test", token="<your-token>")
```
## Expected results
The dataset is successfully uploaded
## Actual results
A validation error is raised:
```bash
    if repo_id and (name or organization):
>       raise ValueError(
            "Only pass `repo_id` and leave deprecated `name` and "
            "`organization` to be None."
E       ValueError: Only pass `repo_id` and leave deprecated `name` and `organization` to be None.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.1
- `huggingface-hub`: 0.5
- Platform: macOS
- Python version: 3.8.12
- PyArrow version: 6.0.0
cc @adrinjalali
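For reference, a sketch of the interim version-dispatch discussed in the comments (repo name and organization illustrative; assumes a stored auth token):
```python
import huggingface_hub
from huggingface_hub import HfApi
from packaging import version

api = HfApi()
if version.parse(huggingface_hub.__version__) >= version.parse("0.5.0"):
    # `repo_id` replaces the deprecated `name`/`organization` pair in 0.5+.
    api.create_repo(repo_id="rubrix/news_test", repo_type="dataset")
else:
    api.create_repo(name="news_test", organization="rubrix", repo_type="dataset")
```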
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4105/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4104/comments | https://api.github.com/repos/huggingface/datasets/issues/4104/events | https://github.com/huggingface/datasets/issues/4104 | 1,194,072,966 | I_kwDODunzps5HLBuG | 4,104 | Add time series data - stock market | {
"login": "INF800",
"id": 45640029,
"node_id": "MDQ6VXNlcjQ1NjQwMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/45640029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/INF800",
"html_url": "https://github.com/INF800",
"followers_url": "https://api.github.com/users/INF800/followers",
"following_url": "https://api.github.com/users/INF800/following{/other_user}",
"gists_url": "https://api.github.com/users/INF800/gists{/gist_id}",
"starred_url": "https://api.github.com/users/INF800/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/INF800/subscriptions",
"organizations_url": "https://api.github.com/users/INF800/orgs",
"repos_url": "https://api.github.com/users/INF800/repos",
"events_url": "https://api.github.com/users/INF800/events{/privacy}",
"received_events_url": "https://api.github.com/users/INF800/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Can I use instructions present in below link for time series dataset as well? \r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md ",
"cc'ing @kashif and @NielsRogge for visibility!",
"@INF800 happy to add this dataset! I will try to set a PR by the end of the day... if you can kindly point me to the dataset? Also, note we have a bunch of time series datasets checked in e.g. `electricity_load_diagrams` or `monash_tsf`, and ideally this dataset could also be in a similar format. ",
"Thankyou. This is how raw data looks like before cleaning for an individual stocks:\r\n\r\n1. https://github.com/INF800/marktech/tree/raw-data/f/data/raw\r\n2. https://github.com/INF800/marktech/tree/raw-data/t/data/raw\r\n3. https://github.com/INF800/marktech/tree/raw-data/rdfn/data/raw\r\n4. https://github.com/INF800/marktech/tree/raw-data/irbt/data/raw\r\n5. https://github.com/INF800/marktech/tree/raw-data/hll/data/raw\r\n6. https://github.com/INF800/marktech/tree/raw-data/infy/data/raw\r\n7. https://github.com/INF800/marktech/tree/raw-data/reli/data/raw\r\n8. https://github.com/INF800/marktech/tree/raw-data/hdbk/data/raw\r\n\r\n> Scraping is automated using GitHub Actions. So, everyday we will see a new file added in the above links.\r\n\r\nI can rewrite the cleaning scripts to make sure it fits HF dataset standards. (P.S I am very much new to HF dataset)\r\n\r\nThe data set above can be converted into univariate regression / multivariate regression / sequence to sequence generation dataset etc. So, do we have some kind of transformation modules that will read the dataset as some type of dataset (`GenericTimeData`) and convert it to other possible dataset relating to a specific ML task. **By having this kind of transformation module, I only have to add data once** and use transformation module whenever necessary\r\n\r\nAdditionally, having some kind of versioning for the dataset will be really helpful because it will keep on updating - especially time series datasets ",
"thanks @INF800 I'll have a look. I believe it should be possible to incorporate this into the time-series format.",
"Referencing https://github.com/qingsongedu/time-series-transformers-review",
"@INF800 yes I am aware of the review repository and paper which is more or less a collection of abstracts etc. I am working on a unified library of implementations of these papers together with datasets to be then able to compare/contrast and build upon the research etc. but I am not ready to share them publicly just yet.\r\n\r\nIn any case regarding your dataset at the moment its seems from looking at the csv files, its mixture of textual and numerical data, sometimes in the same column etc. As you know, for time series models we would need just numeric data so I would need your help in disambiguating the dataset you have collected and also perhaps starting with just numerical data to start with... \r\n\r\nDo you think you can make a version with just numerical data?",
"> @INF800 yes I am aware of the review repository and paper which is more or less a collection of abstracts etc. I am working on a unified library of implementations of these papers together with datasets to be then able to compare/contrast and build upon the research etc. but I am not ready to share them publicly just yet.\r\n> \r\n> In any case regarding your dataset at the moment its seems from looking at the csv files, its mixture of textual and numerical data, sometimes in the same column etc. As you know, for time series models we would need just numeric data so I would need your help in disambiguating the dataset you have collected and also perhaps starting with just numerical data to start with...\r\n> \r\n> Do you think you can make a version with just numerical data?\r\n\r\nWill share the numeric data and conversion script within end of this week. \r\n\r\nI am on a business trip currently - it is in my desktop."
] | 2022-04-06T05:46:58 | 2022-04-11T09:07:10 | null | NONE | null | ## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** 8 stocks' data collected for 1 month after the start of the Ukraine-Russia war: 4 NSE stocks and 4 NASDAQ stocks, along with technical indicators (additional features) as shown in the image below
- **Data:** Collected by myself from investing.com
- **Motivation:** Test the applicability of transformer-based models on a stock market / time series problem
![image](https://user-images.githubusercontent.com/45640029/161904077-52fe97cb-3720-4e3f-98ee-7f6720a056e2.png) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4104/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4104/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4103/comments | https://api.github.com/repos/huggingface/datasets/issues/4103/events | https://github.com/huggingface/datasets/pull/4103 | 1,193,987,104 | PR_kwDODunzps41s3T4 | 4,103 | Add the `GSM8K` dataset | {
"login": "jon-tow",
"id": 41410219,
"node_id": "MDQ6VXNlcjQxNDEwMjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41410219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jon-tow",
"html_url": "https://github.com/jon-tow",
"followers_url": "https://api.github.com/users/jon-tow/followers",
"following_url": "https://api.github.com/users/jon-tow/following{/other_user}",
"gists_url": "https://api.github.com/users/jon-tow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jon-tow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jon-tow/subscriptions",
"organizations_url": "https://api.github.com/users/jon-tow/orgs",
"repos_url": "https://api.github.com/users/jon-tow/repos",
"events_url": "https://api.github.com/users/jon-tow/events{/privacy}",
"received_events_url": "https://api.github.com/users/jon-tow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI is failing because it's outdated, but the task tags are updated on `master`, merging :)"
] | 2022-04-06T04:07:52 | 2022-04-12T15:38:28 | 2022-04-12T10:21:16 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4103/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4103",
"html_url": "https://github.com/huggingface/datasets/pull/4103",
"diff_url": "https://github.com/huggingface/datasets/pull/4103.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4103.patch",
"merged_at": "2022-04-12T10:21:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4102/comments | https://api.github.com/repos/huggingface/datasets/issues/4102/events | https://github.com/huggingface/datasets/pull/4102 | 1,193,616,722 | PR_kwDODunzps41roGx | 4,102 | [hub] Fix `api.create_repo` call? | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4102). All of your documentation changes will be reflected on that endpoint.",
"Closing in favor of https://github.com/huggingface/datasets/pull/4106"
] | 2022-04-05T19:21:52 | 2022-04-12T08:41:46 | 2022-04-12T08:41:46 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4102/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4102",
"html_url": "https://github.com/huggingface/datasets/pull/4102",
"diff_url": "https://github.com/huggingface/datasets/pull/4102.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4102.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4101/comments | https://api.github.com/repos/huggingface/datasets/issues/4101/events | https://github.com/huggingface/datasets/issues/4101 | 1,193,399,204 | I_kwDODunzps5HIdOk | 4,101 | How can I download only the train and test split for full numbers using load_dataset()? | {
"login": "Nakkhatra",
"id": 64383902,
"node_id": "MDQ6VXNlcjY0MzgzOTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/64383902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nakkhatra",
"html_url": "https://github.com/Nakkhatra",
"followers_url": "https://api.github.com/users/Nakkhatra/followers",
"following_url": "https://api.github.com/users/Nakkhatra/following{/other_user}",
"gists_url": "https://api.github.com/users/Nakkhatra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nakkhatra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nakkhatra/subscriptions",
"organizations_url": "https://api.github.com/users/Nakkhatra/orgs",
"repos_url": "https://api.github.com/users/Nakkhatra/repos",
"events_url": "https://api.github.com/users/Nakkhatra/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nakkhatra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! Can you please specify the full name of the dataset? IIRC `full_numbers` is one of the configs of the `svhn` dataset, and its generation is slow due to data being stored in binary Matlab files. Even if you specify a specific split, `datasets` downloads all of them, but we plan to fix that soon and only download the requested split.\r\n\r\nIf you are in a hurry, download the `svhn` script [here](`https://huggingface.co/datasets/svhn/blob/main/svhn.py`), remove [this code](https://huggingface.co/datasets/svhn/blob/main/svhn.py#L155-L162), and run:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"path/to/your/local/script.py\", \"full_numbers\")\r\n```\r\n\r\nAnd to make loading easier in Colab, you can create a dataset repo on the Hub and upload the script there. Or push the script to Google Drive and mount the drive in Colab."
] | 2022-04-05T16:00:15 | 2022-04-06T13:09:01 | null | NONE | null | How can I download only the train and test split for full numbers using load_dataset()?
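For reference, this is the kind of call I mean (a sketch of the `split` selection API; note that, per the comment above, the download step currently still fetches every split for this dataset):

```python
from datasets import load_dataset

# Request only the needed splits; one Dataset is returned per entry.
train_ds, test_ds = load_dataset("svhn", "full_numbers", split=["train", "test"])
```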
I do not need the extra split, and it will take 40 minutes just to download it in Colab. I have very little time on hand. Please help. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4101/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4100/comments | https://api.github.com/repos/huggingface/datasets/issues/4100/events | https://github.com/huggingface/datasets/pull/4100 | 1,193,393,959 | PR_kwDODunzps41q4ce | 4,100 | Improve RedCaps dataset card | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I find this preprocessing a bit too specific to add it as a method to `datasets` as it's only useful in the context of CV (and we support multiple modalities). However, I agree it would be great to move this code to another lib to avoid code duplication. Maybe we should create a package with preprocessing functions/transforms for this purpose?"
] | 2022-04-05T15:57:14 | 2022-04-13T14:08:54 | 2022-04-13T14:02:26 | CONTRIBUTOR | null | This PR modifies the RedCaps card to:
* fix the formatting of the Point of Contact fields on the Hub
* speed up the image fetching logic (aligning it with the [img2dataset](https://github.com/rom1504/img2dataset) tool) and make it more robust (returning None if **any** exception is thrown) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4100/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4100",
"html_url": "https://github.com/huggingface/datasets/pull/4100",
"diff_url": "https://github.com/huggingface/datasets/pull/4100.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4100.patch",
"merged_at": "2022-04-13T14:02:26"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4099/comments | https://api.github.com/repos/huggingface/datasets/issues/4099/events | https://github.com/huggingface/datasets/issues/4099 | 1,193,253,768 | I_kwDODunzps5HH5uI | 4,099 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) | {
"login": "andreybond",
"id": 20210017,
"node_id": "MDQ6VXNlcjIwMjEwMDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/20210017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andreybond",
"html_url": "https://github.com/andreybond",
"followers_url": "https://api.github.com/users/andreybond/followers",
"following_url": "https://api.github.com/users/andreybond/following{/other_user}",
"gists_url": "https://api.github.com/users/andreybond/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andreybond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andreybond/subscriptions",
"organizations_url": "https://api.github.com/users/andreybond/orgs",
"repos_url": "https://api.github.com/users/andreybond/repos",
"events_url": "https://api.github.com/users/andreybond/events{/privacy}",
"received_events_url": "https://api.github.com/users/andreybond/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @andreybond, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to able to reproduce your issue:\r\n```python\r\nIn [4]: from datasets import load_dataset\r\n ...: datasets = load_dataset(\"nielsr/XFUN\", \"xfun.ja\")\r\n\r\nIn [5]: datasets\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'input_ids', 'bbox', 'labels', 'image', 'entities', 'relations'],\r\n num_rows: 194\r\n })\r\n validation: Dataset({\r\n features: ['id', 'input_ids', 'bbox', 'labels', 'image', 'entities', 'relations'],\r\n num_rows: 71\r\n })\r\n})\r\n```\r\n\r\nThe only reason I can imagine this issue may arise is if your default encoding is not \"UTF-8\" (and it is ASCII instead). This is usually the case on Windows machines; but you say your environment is a Linux machine. Maybe you change your machine default encoding?\r\n\r\nCould you please check this?\r\n```python\r\nIn [6]: import sys\r\n\r\nIn [7]: sys.getdefaultencoding()\r\nOut[7]: 'utf-8'\r\n```",
"I opened a PR in the original dataset loading script:\r\n- microsoft/unilm#677\r\n\r\nand fixed the corresponding dataset script on the Hub:\r\n- https://huggingface.co/datasets/nielsr/XFUN/commit/73ba5e026621e05fb756ae0f267eb49971f70ebd",
"import sys\r\nsys.getdefaultencoding()\r\n\r\nreturned: 'utf-8'\r\n\r\n---------------------\r\n\r\nI've just cloned master branch - your fix works! Thank you!"
] | 2022-04-05T14:42:38 | 2022-04-06T06:37:44 | 2022-04-06T06:35:54 | NONE | null | ## Describe the bug
Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("nielsr/XFUN", "xfun.ja")
```
## Expected results
The dataset should be downloaded without exceptions.
## Actual results
Stack trace (from the second execution):
```
Downloading and preparing dataset xfun/xfun.ja to /root/.cache/huggingface/datasets/nielsr___xfun/xfun.ja/0.0.0/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477...
Downloading data files: 100%
2/2 [00:00<00:00, 88.48it/s]
Extracting data files: 100%
2/2 [00:00<00:00, 79.60it/s]
UnicodeDecodeErrorTraceback (most recent call last)
<ipython-input-31-79c26bd1109c> in <module>
1 from datasets import load_dataset
2
----> 3 datasets = load_dataset("nielsr/XFUN", "xfun.ja")
/usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
604 )
605
--> 606 # By default, return all splits
607 if split is None:
608 split = {s: s for s in self.info.splits}
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
692 Args:
693 split: `datasets.Split` which subset of the data to read.
--> 694
695 Returns:
696 `Dataset`
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys)
/usr/local/lib/python3.6/dist-packages/tqdm/notebook.py in __iter__(self)
252 if not self.disable:
253 self.display(check_delay=False)
--> 254
255 def __iter__(self):
256 try:
/usr/local/lib/python3.6/dist-packages/tqdm/std.py in __iter__(self)
1183 for obj in iterable:
1184 yield obj
-> 1185 return
1186
1187 mininterval = self.mininterval
~/.cache/huggingface/modules/datasets_modules/datasets/nielsr--XFUN/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477/XFUN.py in _generate_examples(self, filepaths)
140 logger.info("Generating examples from = %s", filepath)
141 with open(filepath[0], "r") as f:
--> 142 data = json.load(f)
143
144 for doc in data["documents"]:
/usr/lib/python3.6/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
294
295 """
--> 296 return loads(fp.read(),
297 cls=cls, object_hook=object_hook,
298 parse_float=parse_float, parse_int=parse_int,
/usr/lib/python3.6/encodings/ascii.py in decode(self, input, final)
24 class IncrementalDecoder(codecs.IncrementalDecoder):
25 def decode(self, input, final=False):
---> 26 return codecs.ascii_decode(input, self.errors)[0]
27
28 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)
```
## Environment info
- `datasets` version: 2.0.0 (but reproduced with many previous versions)
- Platform: Docker: Linux da5b74136d6b 5.3.0-1031-azure #32~18.04.1-Ubuntu SMP Mon Jun 22 15:27:23 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux ; Base docker image is : huggingface/transformers-pytorch-cpu
- Python version: 3.6.9
- PyArrow version: 6.0.1
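For reference, the fix referenced in the comments boils down to forcing UTF-8 at the `open` call instead of relying on the platform default. A minimal sketch (not the exact patch; the path is hypothetical):

```python
import json

# Passing encoding="utf-8" explicitly avoids the platform default
# (ASCII here), which is what raises the UnicodeDecodeError.
with open("path/to/annotations.json", "r", encoding="utf-8") as f:
    data = json.load(f)
```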
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4099/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4098 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4098/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4098/comments | https://api.github.com/repos/huggingface/datasets/issues/4098/events | https://github.com/huggingface/datasets/pull/4098 | 1,193,245,522 | PR_kwDODunzps41qXjo | 4,098 | Proposing WikiSplit metric card | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"A quick Github tip ;) To avoid running N times the CI, you can push all the changes at once: go to Files Changed tab, and on each suggestion there's a \"add to commit batch\" and then you can do one commit for all the suggestions you want to approve ;)",
"Oh thanks for the tip!! Haha I was wondering why it was running a bunch of\ntimes :P\n\nOn Tue, Apr 5, 2022 at 11:44 AM Quentin Lhoest ***@***.***>\nwrote:\n\n> A quick Github tip ;) To avoid running N times the CI, you can push all\n> the changes at once: go to Files Changed tab, and on each suggestion\n> there's a \"add to commit batch\" and then you can do one commit for all the\n> suggestions you want to approve ;)\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/4098#issuecomment-1088894515>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ADMMIIRZYNVFJRWRWW4VJY3VDRNUBANCNFSM5SS7L5HA>\n> .\n> You are receiving this because you modified the open/close state.Message\n> ID: ***@***.***>\n>\n\n\n-- \nSasha Luccioni, PhD\nPostdoctoral Researcher (Mila, Université de Montréal)\nChercheure postdoctorale (Mila, Université de Montréal)\nhttps://www.sashaluccioni.com/\n [image: Image result for universite de montreal logo]\n"
] | 2022-04-05T14:36:34 | 2022-10-11T09:10:21 | 2022-04-05T15:42:28 | NONE | null | Pinging @lhoestq to ensure that my distinction between the dataset and the metric is clear :sweat_smile: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4098/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4098",
"html_url": "https://github.com/huggingface/datasets/pull/4098",
"diff_url": "https://github.com/huggingface/datasets/pull/4098.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4098.patch",
"merged_at": "2022-04-05T15:42:28"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4097 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4097/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4097/comments | https://api.github.com/repos/huggingface/datasets/issues/4097/events | https://github.com/huggingface/datasets/pull/4097 | 1,193,205,751 | PR_kwDODunzps41qPEu | 4,097 | Updating FrugalScore metric card | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-05T14:09:24 | 2022-04-05T15:07:35 | 2022-04-05T15:01:46 | NONE | null | removing duplicate paragraph | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4097/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4097",
"html_url": "https://github.com/huggingface/datasets/pull/4097",
"diff_url": "https://github.com/huggingface/datasets/pull/4097.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4097.patch",
"merged_at": "2022-04-05T15:01:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4096/comments | https://api.github.com/repos/huggingface/datasets/issues/4096/events | https://github.com/huggingface/datasets/issues/4096 | 1,193,165,229 | I_kwDODunzps5HHkGt | 4,096 | Add support for streaming Zarr stores for hosted datasets | {
"login": "jacobbieker",
"id": 7170359,
"node_id": "MDQ6VXNlcjcxNzAzNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7170359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacobbieker",
"html_url": "https://github.com/jacobbieker",
"followers_url": "https://api.github.com/users/jacobbieker/followers",
"following_url": "https://api.github.com/users/jacobbieker/following{/other_user}",
"gists_url": "https://api.github.com/users/jacobbieker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jacobbieker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jacobbieker/subscriptions",
"organizations_url": "https://api.github.com/users/jacobbieker/orgs",
"repos_url": "https://api.github.com/users/jacobbieker/repos",
"events_url": "https://api.github.com/users/jacobbieker/events{/privacy}",
"received_events_url": "https://api.github.com/users/jacobbieker/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @jacobbieker, thanks for your request and study of possible alternatives.\r\n\r\nWe are very interested in finding a way to make `datasets` useful to you.\r\n\r\nLooking at the Zarr docs, I saw that among its storage alternatives, there is the ZIP file format: https://zarr.readthedocs.io/en/stable/api/storage.html#zarr.storage.ZipStore\r\n\r\nThis might be convenient for many reasons:\r\n- On the one hand, we avoid the Git issue with huge number of small files: chunks files are compressed into a single ZIP file\r\n- On the other hand, the ZIP file format is specially suited for streaming data because it allows random access to its component files (i.e. it supports random access to its chunks)\r\n\r\nAnyway, I think that a Python loading script will be necessary: you need to implement additional logic to select certain chunks (based on date or other criteria).\r\n\r\nPlease, let me know if this makes sense to you.",
"Ah okay, I missed the option of zip files for zarr, I'll try that with our repos and see if it works! Thanks a lot!",
"Hi @jacobbieker, does the Zarr ZipStore work for your use case?",
"Hi,\r\n\r\nYes, it seems to! I got it working for https://huggingface.co/datasets/openclimatefix/mrms thanks for the help! ",
"On behalf of the Zarr developers, let me say THANK YOU for working to support Zarr on HF! 🙏 Zarr is a 100% open-source and community driven project (fiscally sponsored by NumFocus). We see it as an ideal format for ML training datasets, particularly in scientific domains.\r\n\r\nI think the solution of zipping the Zarr store is a reasonable way to balance the constraints of Git LFS with the structure of Zarr.\r\n\r\nIt would be amazing to get something on the [Hugging Face Datasets Docs](https://huggingface.co/docs/datasets/index) about how to best work with Zarr. Let me know if there's a way I could help with that effort.",
"Also just noting here that I was able to lazily open @jacobbieker's dataset over the internet from HF hub 🚀 !\r\n\r\n```python\r\nimport xarray as xr\r\nurl = \"https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip\"\r\nzip_url = 'zip:///::' + url\r\nds = xr.open_dataset(zip_url, engine='zarr', chunks={})\r\n```\r\n\r\n<img width=\"740\" alt=\"image\" src=\"https://user-images.githubusercontent.com/1197350/164508663-bc75cdc0-734d-44f4-9562-2877ecfdf433.png\">\r\n",
"However, I wasn't able to get streaming working using the Datasets api:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"openclimatefix/mrms\", streaming=True, split='train')\r\nitem = next(iter(ds))\r\n```\r\n\r\n<details>\r\n<summary>FileNotFoundError traceback</summary>\r\n\r\n```\r\nNo config specified, defaulting to: mrms/2021\r\nzip://::https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip\r\ndata/2016_001.zarr.zip\r\nzip://2016_001.zarr.zip::https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip\r\n---------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\nInput In [1], in <cell line: 3>()\r\n 1 from datasets import load_dataset\r\n 2 ds = load_dataset(\"openclimatefix/mrms\", streaming=True, split='train')\r\n----> 3 item = next(iter(ds))\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/datasets/iterable_dataset.py:497, in IterableDataset.__iter__(self)\r\n 496 def __iter__(self):\r\n--> 497 for key, example in self._iter():\r\n 498 if self.features:\r\n 499 # we encode the example for ClassLabel feature types for example\r\n 500 encoded_example = self.features.encode_example(example)\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/datasets/iterable_dataset.py:494, in IterableDataset._iter(self)\r\n 492 else:\r\n 493 ex_iterable = self._ex_iterable\r\n--> 494 yield from ex_iterable\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/datasets/iterable_dataset.py:87, in ExamplesIterable.__iter__(self)\r\n 86 def __iter__(self):\r\n---> 87 yield from self.generate_examples_fn(**self.kwargs)\r\n\r\nFile ~/.cache/huggingface/modules/datasets_modules/datasets/openclimatefix--mrms/2a6f697014d7eb3caf586ca137d47ca38785ae2fe36248611b021f8248b59936/mrms.py:150, in MRMS._generate_examples(self, filepath, split)\r\n 147 filepath = \"[https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip](https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip%3C/span%3E%3Cspan) style=\"color:rgb(175,0,0)\">\"\r\n 148 # TODO: This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.\r\n 149 # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.\r\n--> 150 with zarr.storage.FSStore(fsspec.open(\"zip::\" + filepath, mode='r'), mode='r') as store:\r\n 151 data = xr.open_zarr(store)\r\n 152 for key, row in enumerate(data[\"time\"].values):\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/zarr/storage.py:1120, in FSStore.__init__(self, url, normalize_keys, key_separator, mode, exceptions, dimension_separator, **storage_options)\r\n 1117 import fsspec\r\n 1118 self.normalize_keys = normalize_keys\r\n-> 1120 protocol, _ = fsspec.core.split_protocol(url)\r\n 1121 # set auto_mkdir to True for local file system\r\n 1122 if protocol in (None, \"file\") and not storage_options.get(\"auto_mkdir\"):\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:514, in split_protocol(urlpath)\r\n 512 def split_protocol(urlpath):\r\n 513 \"\"\"Return protocol, path pair\"\"\"\r\n--> 514 urlpath = stringify_path(urlpath)\r\n 515 if \"://\" in urlpath:\r\n 516 protocol, path = urlpath.split(\"://\", 1)\r\n\r\nFile 
/opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/utils.py:315, in stringify_path(filepath)\r\n 313 return filepath\r\n 314 elif hasattr(filepath, \"__fspath__\"):\r\n--> 315 return filepath.__fspath__()\r\n 316 elif isinstance(filepath, pathlib.Path):\r\n 317 return str(filepath)\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:98, in OpenFile.__fspath__(self)\r\n 96 def __fspath__(self):\r\n 97 # may raise if cannot be resolved to local file\r\n---> 98 return self.open().__fspath__()\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:140, in OpenFile.open(self)\r\n 132 def open(self):\r\n 133 \"\"\"Materialise this as a real open file without context\r\n 134 \r\n 135 The file should be explicitly closed to avoid enclosed file\r\n (...)\r\n 138 been deleted; but a with-context is better style.\r\n 139 \"\"\"\r\n--> 140 out = self.__enter__()\r\n 141 closer = out.close\r\n 142 fobjects = self.fobjects.copy()[:-1]\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:103, in OpenFile.__enter__(self)\r\n 100 def __enter__(self):\r\n 101 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 103 f = self.fs.open(self.path, mode=mode)\r\n 105 self.fobjects = [f]\r\n 107 if self.compression is not None:\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/spec.py:1009, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1007 else:\r\n 1008 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1009 f = self._open(\r\n 1010 path,\r\n 1011 mode=mode,\r\n 1012 block_size=block_size,\r\n 1013 autocommit=ac,\r\n 1014 cache_options=cache_options,\r\n 1015 **kwargs,\r\n 1016 )\r\n 1017 if compression is not None:\r\n 1018 from fsspec.compression import compr\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/implementations/zip.py:96, in ZipFileSystem._open(self, path, mode, block_size, autocommit, cache_options, **kwargs)\r\n 94 if mode != \"rb\":\r\n 95 raise NotImplementedError\r\n---> 96 info = self.info(path)\r\n 97 out = self.zip.open(path, \"r\")\r\n 98 out.size = info[\"size\"]\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/archive.py:42, in AbstractArchiveFileSystem.info(self, path, **kwargs)\r\n 40 return self.dir_cache[path + \"/\"]\r\n 41 else:\r\n---> 42 raise FileNotFoundError(path)\r\n\r\nFileNotFoundError:\r\n```\r\n\r\n</details>\r\n\r\nIs this a bug? Or am I just doing it wrong...",
"I'm still messing around with that dataset, so the data might have moved. I currently have each year of MRMS precipitation rate data as it's own zarr, but as they are quite large (on order of 100GB each) I'm working to split them into single days, and as such they are still being moved around, I was just trying to get a proof of concept working originally. ",
"I've mostly finished rearranging the data now and uploading some more, so this works now:\r\n```python\r\nimport datasets\r\nds = datasets.load_dataset(\"openclimatefix/mrms\", streaming=True, split=\"train\")\r\nitem = next(iter(ds))\r\nprint(item.keys())\r\nprint(item[\"timestamp\"])\r\n```\r\n\r\nThe MRMS data now goes most of 2016-2022, with quite a few gaps I'm working on filling in"
] | 2022-04-05T13:38:32 | 2022-04-25T08:04:12 | 2022-04-21T08:12:58 | NONE | null | **Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming data in the Zarr format as far as I can tell. Zarr stores are designed to be easily streamed in from cloud storage, especially with xarray and fsspec. Since geospatial data tends to be very large, on the order of TBs or tens of TBs for a single dataset, it can be difficult for users to store the dataset locally. Just adding Zarr stores with HF git doesn't work well (see https://github.com/huggingface/datasets/issues/3823), as Zarr splits the data into lots of small chunks for fast loading, and that doesn't play well with git. I've somewhat gotten around that issue by tarring each Zarr store and uploading it as a single file, which seems to be working (see https://huggingface.co/datasets/openclimatefix/gfs-reforecast for example data files, although the script isn't written yet). This does mean that streaming doesn't quite work, though. On the other hand, in https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv we stream in a Zarr store from a public GCP bucket quite easily.
**Describe the solution you'd like**
A way to upload Zarr stores for hosted datasets so that we can stream them with xarray and fsspec.
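For example, something along these lines (a minimal sketch; the bucket path is hypothetical, and `gcsfs` is needed for `gs://` URLs):

```python
import fsspec
import xarray as xr

# Lazily open a Zarr store straight from object storage; chunks are only
# fetched over the network as the data is actually accessed.
store = fsspec.get_mapper("gs://some-public-bucket/satellite.zarr")
ds = xr.open_zarr(store)
print(ds)
```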
**Describe alternatives you've considered**
Tarring each Zarr store individually and just extracting them in the dataset script -> Downside: this is a lot of data that probably doesn't fit locally for many potential users.
Pre-preparing examples in a format like Parquet -> This would use a lot more storage and offer a lot less flexibility; in eumetsat_uk_hrv, we use the one Zarr store for multiple different configurations.
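For reference, the tarring workaround mentioned above is roughly (a sketch; paths hypothetical):

```python
import tarfile

# Pack a Zarr store (a directory of many small chunk files) into a single
# archive so it can be tracked as one file with git/LFS.
with tarfile.open("gfs_2016.zarr.tar", "w") as tar:
    tar.add("gfs_2016.zarr", arcname="gfs_2016.zarr")

# ...and unpack it again inside the dataset loading script before opening.
with tarfile.open("gfs_2016.zarr.tar", "r") as tar:
    tar.extractall("extracted/")
```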
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4096/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4096/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4095 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4095/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4095/comments | https://api.github.com/repos/huggingface/datasets/issues/4095/events | https://github.com/huggingface/datasets/pull/4095 | 1,192,573,353 | PR_kwDODunzps41oIFI | 4,095 | fix typo in rename_column error message | {
"login": "hunterlang",
"id": 680821,
"node_id": "MDQ6VXNlcjY4MDgyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/680821?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hunterlang",
"html_url": "https://github.com/hunterlang",
"followers_url": "https://api.github.com/users/hunterlang/followers",
"following_url": "https://api.github.com/users/hunterlang/following{/other_user}",
"gists_url": "https://api.github.com/users/hunterlang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hunterlang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hunterlang/subscriptions",
"organizations_url": "https://api.github.com/users/hunterlang/orgs",
"repos_url": "https://api.github.com/users/hunterlang/repos",
"events_url": "https://api.github.com/users/hunterlang/events{/privacy}",
"received_events_url": "https://api.github.com/users/hunterlang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4095). All of your documentation changes will be reflected on that endpoint."
] | 2022-04-05T03:55:56 | 2022-04-05T08:54:46 | 2022-04-05T08:45:53 | CONTRIBUTOR | null | I feel bad submitting such a tiny change as a PR but it confused me today 😄 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4095/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4095",
"html_url": "https://github.com/huggingface/datasets/pull/4095",
"diff_url": "https://github.com/huggingface/datasets/pull/4095.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4095.patch",
"merged_at": "2022-04-05T08:45:53"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4094 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4094/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4094/comments | https://api.github.com/repos/huggingface/datasets/issues/4094/events | https://github.com/huggingface/datasets/issues/4094 | 1,192,534,414 | I_kwDODunzps5HFKGO | 4,094 | Helo Mayfrends | {
"login": "Budigming",
"id": 102933353,
"node_id": "U_kgDOBiKjaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102933353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Budigming",
"html_url": "https://github.com/Budigming",
"followers_url": "https://api.github.com/users/Budigming/followers",
"following_url": "https://api.github.com/users/Budigming/following{/other_user}",
"gists_url": "https://api.github.com/users/Budigming/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Budigming/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Budigming/subscriptions",
"organizations_url": "https://api.github.com/users/Budigming/orgs",
"repos_url": "https://api.github.com/users/Budigming/repos",
"events_url": "https://api.github.com/users/Budigming/events{/privacy}",
"received_events_url": "https://api.github.com/users/Budigming/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 2022-04-05T02:42:57 | 2022-04-05T07:16:42 | 2022-04-05T07:16:42 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4094/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4093/comments | https://api.github.com/repos/huggingface/datasets/issues/4093/events | https://github.com/huggingface/datasets/issues/4093 | 1,192,523,161 | I_kwDODunzps5HFHWZ | 4,093 | elena-soare/crawled-ecommerce: missing dataset | {
"login": "seevaratnam",
"id": 17519354,
"node_id": "MDQ6VXNlcjE3NTE5MzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/17519354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seevaratnam",
"html_url": "https://github.com/seevaratnam",
"followers_url": "https://api.github.com/users/seevaratnam/followers",
"following_url": "https://api.github.com/users/seevaratnam/following{/other_user}",
"gists_url": "https://api.github.com/users/seevaratnam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seevaratnam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seevaratnam/subscriptions",
"organizations_url": "https://api.github.com/users/seevaratnam/orgs",
"repos_url": "https://api.github.com/users/seevaratnam/repos",
"events_url": "https://api.github.com/users/seevaratnam/events{/privacy}",
"received_events_url": "https://api.github.com/users/seevaratnam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"It's a bug! Thanks for reporting, I'm looking at it.",
"By the way, the error on our part is due to the huge size of every row (~90MB). The dataset viewer does not support such big dataset rows for the moment.\r\nAnyway, we're working to give a hint about this in the dataset viewer.",
"Fixed. See https://huggingface.co/datasets/elena-soare/crawled-ecommerce/viewer/elena-soare--crawled-ecommerce/train.\r\n\r\n<img width=\"1552\" alt=\"Capture d’écran 2022-04-12 à 11 23 51\" src=\"https://user-images.githubusercontent.com/1676121/162929722-2e2b80e2-154a-4b61-87bd-e341bd6c46e6.png\">\r\n\r\nThanks for reporting!"
] | 2022-04-05T02:25:19 | 2022-04-12T09:34:53 | 2022-04-12T09:34:53 | NONE | null | elena-soare/crawled-ecommerce
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4093/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4092/comments | https://api.github.com/repos/huggingface/datasets/issues/4092/events | https://github.com/huggingface/datasets/pull/4092 | 1,192,499,903 | PR_kwDODunzps41n40R | 4,092 | Fix dataset `amazon_us_reviews` metadata - 4/4/2022 | {
"login": "trentonstrong",
"id": 191985,
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trentonstrong",
"html_url": "https://github.com/trentonstrong",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"cc: @albertvillanova just FYI"
] | 2022-04-05T01:39:45 | 2022-04-08T12:35:41 | 2022-04-08T12:29:31 | CONTRIBUTOR | null | Fixes #4048 by running `dataset-cli test` to reprocess data and regenerate metadata. Additionally I've updated the README to include up-to-date counts for the subsets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4092/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4092",
"html_url": "https://github.com/huggingface/datasets/pull/4092",
"diff_url": "https://github.com/huggingface/datasets/pull/4092.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4092.patch",
"merged_at": "2022-04-08T12:29:30"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4091 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4091/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4091/comments | https://api.github.com/repos/huggingface/datasets/issues/4091/events | https://github.com/huggingface/datasets/issues/4091 | 1,192,023,855 | I_kwDODunzps5HDNcv | 4,091 | Build a Dataset One Example at a Time Without Loading All Data Into Memory | {
"login": "aravind-tonita",
"id": 99340348,
"node_id": "U_kgDOBevQPA",
"avatar_url": "https://avatars.githubusercontent.com/u/99340348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aravind-tonita",
"html_url": "https://github.com/aravind-tonita",
"followers_url": "https://api.github.com/users/aravind-tonita/followers",
"following_url": "https://api.github.com/users/aravind-tonita/following{/other_user}",
"gists_url": "https://api.github.com/users/aravind-tonita/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aravind-tonita/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aravind-tonita/subscriptions",
"organizations_url": "https://api.github.com/users/aravind-tonita/orgs",
"repos_url": "https://api.github.com/users/aravind-tonita/repos",
"events_url": "https://api.github.com/users/aravind-tonita/events{/privacy}",
"received_events_url": "https://api.github.com/users/aravind-tonita/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! Yes, the problem with `add_item` is that it keeps examples in memory, so you are left with these options:\r\n* writing a dataset loading script in which you iterate over `custom_example_dict_streamer` and yield the examples (in `_generate examples`)\r\n* storing the data in a JSON/CSV/Parquet/TXT file and using `Dataset.from_{format}`\r\n* using `add_item` + `save_to_disk` on smaller chunks: \r\n ```python\r\n from datasets import Dataset, concatenate_datasets\r\n MAX_SAMPLES_IN_MEMORY = 1000\r\n samples_in_dset = 0\r\n dset = Dataset.from_dict({\"col1\": [], \"col2\": []}) # empty dataset\r\n path_to_save_dir = \"path/to/save/dir\"\r\n num_chunks = 0\r\n for example_dict in custom_example_dict_streamer(\"/path/to/raw/data\"):\r\n dset = dset.add_item(example_dict)\r\n samples_in_dset += 1\r\n if samples_in_dset == MAX_SAMPLES_IN_MEMORY:\r\n samples_in_dset = 0\r\n dset.save_to_disk(f\"{path_to_save_dir}{num_chunks}\")\r\n num_chunks =+ 1\r\n dset = Dataset.from_dict({\"col1\": [], \"col2\": []}) # empty dataset\r\n if samples_in_dset > 0:\r\n dset.save_to_disk(f\"{path_to_save_dir}{num_chunks}\")\r\n num_chunks =+ 1\r\n loaded_dsets = [] # memory-mapped\r\n for chunk_num in range(num_chunks):\r\n dset = Dataset.load_from_disk(f\"{path_to_save_dir}{chunk_num}\") \r\n loaded_dsets.append(dset)\r\n final_dset = concatenate_datasets(dset)\r\n ```\r\n If you still have issues with this approach, you can try to delete unused datasets with `gc.collect()` to free some memory. ",
"This is really elegant, thank you @mariosasko! I will try this."
] | 2022-04-04T16:19:24 | 2022-04-20T14:31:00 | 2022-04-20T14:31:00 | NONE | null | **Is your feature request related to a problem? Please describe.**
I have a very large dataset stored on disk in a custom format. I have some custom code that reads one data example at a time and yields it in the form of a dictionary. I want to construct a `Dataset` with all examples, and then save it to disk. I later want to load the saved `Dataset` and use it like any other HuggingFace dataset, get splits, wrap it in a PyTorch `DataLoader`, etc. **Crucially, I do not ever want to materialize all the data in memory while building the dataset.**
**Describe the solution you'd like**
I would like to be able to do something like the following. Notice how each example is read and then immediately added to the dataset. We do not store all the data in memory when constructing the `Dataset`. If it helps, I will know the schema of my dataset beforehand.
```
# Initialize an empty Dataset, possibly from a known schema.
dataset = Dataset()
# Read in examples one by one using a custom data streamer.
for example_dict in custom_example_dict_streamer("/path/to/raw/data"):
# Add this example to the dataset but do not store it in memory.
dataset.add_item(example_dict)
# Save the final dataset to disk as an Arrow-backed dataset.
dataset.save_to_disk("/path/to/dataset")
...
# I'd like to be able to later `load_from_disk` and use the loaded Dataset
# just like any other memory-mapped pyarrow-backed HuggingFace dataset...
loaded_dataset = Dataset.load_from_disk("/path/to/dataset")
loaded_dataset.set_format(type="torch", columns=["foo", "bar", "baz"])
dataloader = torch.utils.data.DataLoader(loaded_dataset, batch_size=16)
...
```
**Describe alternatives you've considered**
I initially tried to read all the data into memory, construct a Pandas DataFrame and then call `Dataset.from_pandas`. This would not work as it requires storing all the data in memory. It seems that there is an `add_item` method already -- I tried to implement something like the desired API written above, but I've not been able to initialize an empty `Dataset` (this seems to require several layers of constructing `datasets.table.Table` which requires constructing a `pyarrow.lib.Table`, etc). I also considered writing my data to multiple sharded CSV files or JSON files and then using `from_csv` or `from_json`. I'd prefer not to do this because (1) I'd prefer to avoid the intermediate step of creating these temp CSV/JSON files and (2) I'm not sure if `from_csv` and `from_json` use memory-mapping.
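For illustration, a minimal sketch of the sharded-JSON alternative mentioned above (assumptions: `custom_example_dict_streamer` and all paths are placeholders from this issue, and the JSON loader builds a memory-mapped Arrow cache, so the full dataset never has to sit in RAM):
```python
import json
from datasets import load_dataset

SHARD_SIZE = 10_000  # examples held in memory at once
shard_idx, buffer = 0, []

def flush(buffer, shard_idx):
    # Write one JSON Lines shard to disk and let the buffer be reused.
    with open(f"shard-{shard_idx}.jsonl", "w") as f:
        f.writelines(json.dumps(example) + "\n" for example in buffer)

for example_dict in custom_example_dict_streamer("/path/to/raw/data"):  # hypothetical streamer
    buffer.append(example_dict)
    if len(buffer) == SHARD_SIZE:
        flush(buffer, shard_idx)
        shard_idx, buffer = shard_idx + 1, []
if buffer:
    flush(buffer, shard_idx)

# The JSON loader writes an Arrow cache file and memory-maps it.
dataset = load_dataset("json", data_files="shard-*.jsonl", split="train")
dataset.save_to_disk("/path/to/dataset")
```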
Do you have any suggestions on how I'd be able to achieve this use case? Does something already exist to support this? Thank you very much in advance! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4091/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4090/comments | https://api.github.com/repos/huggingface/datasets/issues/4090/events | https://github.com/huggingface/datasets/pull/4090 | 1,191,956,734 | PR_kwDODunzps41mEs5 | 4,090 | Avoid writing empty license files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-04T15:23:37 | 2022-04-07T12:46:45 | 2022-04-07T12:40:43 | MEMBER | null | This PR avoids the creation of empty `LICENSE` files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4090/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4090",
"html_url": "https://github.com/huggingface/datasets/pull/4090",
"diff_url": "https://github.com/huggingface/datasets/pull/4090.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4090.patch",
"merged_at": "2022-04-07T12:40:43"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4089/comments | https://api.github.com/repos/huggingface/datasets/issues/4089/events | https://github.com/huggingface/datasets/pull/4089 | 1,191,915,196 | PR_kwDODunzps41l7yd | 4,089 | Create metric card for Frugal Score | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-04T14:53:49 | 2022-04-05T14:14:46 | 2022-04-05T14:06:50 | NONE | null | Proposing metric card for Frugal Score.
@albertvillanova or @lhoestq -- there are certain aspects that I'm not 100% sure on (such as how exactly the distillation between BertScore and FrugalScore is done) -- so if you find that something isn't clear, please let me know! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4089/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4089",
"html_url": "https://github.com/huggingface/datasets/pull/4089",
"diff_url": "https://github.com/huggingface/datasets/pull/4089.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4089.patch",
"merged_at": "2022-04-05T14:06:50"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4088/comments | https://api.github.com/repos/huggingface/datasets/issues/4088/events | https://github.com/huggingface/datasets/pull/4088 | 1,191,901,172 | PR_kwDODunzps41l4yE | 4,088 | Remove unused legacy Beam utils | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-04T14:43:51 | 2022-04-05T15:23:27 | 2022-04-05T15:17:41 | MEMBER | null | This PR removes the unused legacy custom `WriteToParquet`, now that official Apache Beam includes the patch since version 2.22.0:
- Patch PR: https://github.com/apache/beam/pull/11699
- Issue: https://issues.apache.org/jira/browse/BEAM-10022
In relation with:
- #204 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4088/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4088",
"html_url": "https://github.com/huggingface/datasets/pull/4088",
"diff_url": "https://github.com/huggingface/datasets/pull/4088.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4088.patch",
"merged_at": "2022-04-05T15:17:41"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4087/comments | https://api.github.com/repos/huggingface/datasets/issues/4087/events | https://github.com/huggingface/datasets/pull/4087 | 1,191,819,805 | PR_kwDODunzps41lnfO | 4,087 | Fix BeamWriter output Parquet file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-04T13:46:50 | 2022-04-05T15:00:40 | 2022-04-05T14:54:48 | MEMBER | null | Until now, the `BeamWriter` saved a Parquet file with a simplified schema, where each field value was serialized to JSON. That resulted in Parquet files larger than Arrow files.
This PR:
- writes the Parquet file preserving the original schema and without serialization, thus avoiding the serialization overhead and resulting in a smaller output file size.
- fixes `parquet_to_arrow` function | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4087/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4087/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4087",
"html_url": "https://github.com/huggingface/datasets/pull/4087",
"diff_url": "https://github.com/huggingface/datasets/pull/4087.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4087.patch",
"merged_at": "2022-04-05T14:54:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4086/comments | https://api.github.com/repos/huggingface/datasets/issues/4086/events | https://github.com/huggingface/datasets/issues/4086 | 1,191,373,374 | I_kwDODunzps5HAuo- | 4,086 | Dataset viewer issue for McGill-NLP/feedbackQA | {
"login": "cslizc",
"id": 54827718,
"node_id": "MDQ6VXNlcjU0ODI3NzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/54827718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cslizc",
"html_url": "https://github.com/cslizc",
"followers_url": "https://api.github.com/users/cslizc/followers",
"following_url": "https://api.github.com/users/cslizc/following{/other_user}",
"gists_url": "https://api.github.com/users/cslizc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cslizc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cslizc/subscriptions",
"organizations_url": "https://api.github.com/users/cslizc/orgs",
"repos_url": "https://api.github.com/users/cslizc/repos",
"events_url": "https://api.github.com/users/cslizc/events{/privacy}",
"received_events_url": "https://api.github.com/users/cslizc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @cslizc, thanks for reporting.\r\n\r\nI have just forced the refresh of the corresponding cache and the preview is working now.",
"thank you so much"
] | 2022-04-04T07:27:20 | 2022-04-04T22:29:53 | 2022-04-04T08:01:45 | NONE | null | ## Dataset viewer issue for '*McGill-NLP/feedbackQA*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/McGill-NLP/feedbackQA)*
The dataset can be loaded correctly with `load_dataset` but the preview doesn't work. Error message:
```
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist.
```
Am I the one who added this dataset? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4086/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4085/comments | https://api.github.com/repos/huggingface/datasets/issues/4085/events | https://github.com/huggingface/datasets/issues/4085 | 1,190,621,345 | I_kwDODunzps5G93Ch | 4,085 | datasets.set_progress_bar_enabled(False) not working in datasets v2 | {
"login": "virilo",
"id": 3381112,
"node_id": "MDQ6VXNlcjMzODExMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3381112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/virilo",
"html_url": "https://github.com/virilo",
"followers_url": "https://api.github.com/users/virilo/followers",
"following_url": "https://api.github.com/users/virilo/following{/other_user}",
"gists_url": "https://api.github.com/users/virilo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/virilo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/virilo/subscriptions",
"organizations_url": "https://api.github.com/users/virilo/orgs",
"repos_url": "https://api.github.com/users/virilo/repos",
"events_url": "https://api.github.com/users/virilo/events{/privacy}",
"received_events_url": "https://api.github.com/users/virilo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Now, I can't find any reference to set_progress_bar_enabled in the code.\r\n\r\nI think it have been deleted",
"Hi @virilo,\r\n\r\nPlease note that since `datasets` version 2.0.0, we have aligned with `transformers` the management of the progress bar (among other things):\r\n- #3897\r\n\r\nNow, you should update your code to use `datasets.logging.disable_progress_bar`.\r\n\r\nYou have more info in our docs: [Logging methods](https://huggingface.co/docs/datasets/package_reference/logging_methods)",
"One important thing for beginner like me is: from datasets.utils.logging import disable_progress_bar\r\nDo not forget the 'utils' or you will waste a long time like me...."
] | 2022-04-02T12:40:10 | 2022-09-17T02:18:03 | 2022-04-04T06:44:34 | NONE | null | ## Describe the bug
datasets.set_progress_bar_enabled(False) not working in datasets v2
## Steps to reproduce the bug
```python
datasets.set_progress_bar_enabled(False)
```
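For reference, per the replies above, the working equivalent since `datasets` 2.0 is the `disable_progress_bar` helper (a minimal sketch):
```python
from datasets.utils.logging import disable_progress_bar, enable_progress_bar

disable_progress_bar()  # replaces datasets.set_progress_bar_enabled(False)
# ... run map/filter/download without progress bars ...
enable_progress_bar()   # replaces datasets.set_progress_bar_enabled(True)
```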
## Expected results
datasets not using any progress bar
## Actual results
AttributeError: module 'datasets' has no attribute 'set_progress_bar_enabled'
## Environment info
datasets version 2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4085/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4084/comments | https://api.github.com/repos/huggingface/datasets/issues/4084/events | https://github.com/huggingface/datasets/issues/4084 | 1,190,060,415 | I_kwDODunzps5G7uF_ | 4,084 | Errors in `Train with Datasets` Tensorflow code section on Huggingface.co | {
"login": "blackhat-coder",
"id": 57095771,
"node_id": "MDQ6VXNlcjU3MDk1Nzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/57095771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blackhat-coder",
"html_url": "https://github.com/blackhat-coder",
"followers_url": "https://api.github.com/users/blackhat-coder/followers",
"following_url": "https://api.github.com/users/blackhat-coder/following{/other_user}",
"gists_url": "https://api.github.com/users/blackhat-coder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blackhat-coder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blackhat-coder/subscriptions",
"organizations_url": "https://api.github.com/users/blackhat-coder/orgs",
"repos_url": "https://api.github.com/users/blackhat-coder/repos",
"events_url": "https://api.github.com/users/blackhat-coder/events{/privacy}",
"received_events_url": "https://api.github.com/users/blackhat-coder/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @blackhat-coder, thanks for reporting.\r\n\r\nPlease note that the `transformers` library updated their data collators API last year (version 4.10.0):\r\n- huggingface/transformers#13105\r\n\r\nnow requiring to pass `return_tensors` argument at Data Collator instantiation.\r\n\r\nAnd therefore, we also updated in the `datasets` library documentation all the examples using `transformers` data collators.\r\n\r\nIf you would like to follow our examples, please update your installed `transformers` version:\r\n```\r\npip install -U transformers\r\n```"
] | 2022-04-01T17:02:47 | 2022-04-04T07:24:37 | 2022-04-04T07:21:31 | NONE | null | ## Describe the bug
Hi
### Error 1
Running the Tensorflow code on [Huggingface](https://huggingface.co/docs/datasets/use_dataset) gives a TypeError: __init__() got an unexpected keyword argument 'return_tensors'
### Error 2
`DataCollatorWithPadding` isn't imported
## Steps to reproduce the bug
```python
import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer
dataset = load_dataset('glue', 'mrpc', split='train')
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
dataset = dataset.map(lambda e: tokenizer(e['sentence1'], truncation=True, padding='max_length'), batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
train_dataset = dataset["train"].to_tf_dataset(
columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'],
shuffle=True,
batch_size=16,
collate_fn=data_collator,
)
```
This is the same code on Huggingface.co
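As a hedged aside (per the maintainer reply above): on `transformers` >= 4.10.0, where data collators accept `return_tensors`, the snippet additionally needs the missing import:
```python
from transformers import DataCollatorWithPadding  # the import missing from the docs snippet

# Works on transformers >= 4.10.0, where collators accept `return_tensors`
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
```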
## Actual results
TypeError: __init__() got an unexpected keyword argument 'return_tensors'
## Environment info
- `datasets` version: 2.0.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.7
- PyArrow version: 6.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4084/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4083/comments | https://api.github.com/repos/huggingface/datasets/issues/4083/events | https://github.com/huggingface/datasets/pull/4083 | 1,190,025,878 | PR_kwDODunzps41gEbu | 4,083 | Add SacreBLEU Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-01T16:24:56 | 2022-04-12T20:45:00 | 2022-04-12T20:38:40 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4083/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4083",
"html_url": "https://github.com/huggingface/datasets/pull/4083",
"diff_url": "https://github.com/huggingface/datasets/pull/4083.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4083.patch",
"merged_at": "2022-04-12T20:38:40"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4082/comments | https://api.github.com/repos/huggingface/datasets/issues/4082/events | https://github.com/huggingface/datasets/pull/4082 | 1,189,965,845 | PR_kwDODunzps41f3fb | 4,082 | Add chrF(++) Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-01T15:32:12 | 2022-04-12T20:43:55 | 2022-04-12T20:38:06 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4082/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4082",
"html_url": "https://github.com/huggingface/datasets/pull/4082",
"diff_url": "https://github.com/huggingface/datasets/pull/4082.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4082.patch",
"merged_at": "2022-04-12T20:38:06"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4081/comments | https://api.github.com/repos/huggingface/datasets/issues/4081/events | https://github.com/huggingface/datasets/pull/4081 | 1,189,916,472 | PR_kwDODunzps41fsxW | 4,081 | Close parquet writer properly in `push_to_hub` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq / @albertvillanova / @mariosasko \r\nI am facing the same scenario. Let me explain the situation point. I have a glue ETL job\r\n\r\n1--> My files are in parquet format and stored in AWS s3.\r\n2--> I am iterating a loop for a data set where the same file name can occur with diffrent other data.\r\n3--> I read the parquet and saved it in a pandas data frame.\r\n4--> Done some operation on that data frame\r\n5--> upload the updated data frame into the S3 parquet file. Below are code snippet what I am using to save the updated \r\n data frame into parquet format and load into S3\r\n `header_name_column_list = dict(data_frame)\r\n header_list = []\r\n for col_id, col_type in header_name_column_list.items():\r\n header_list.append(pyarrow.field(col_id, pyarrow.string()))\r\n table_schema = pyarrow.schema(header_list)\r\n table = pyarrow.Table.from_pandas(data_frame, schema=table_schema, preserve_index=False)\r\n writer = parquet.ParquetWriter(b_buffer, table.schema)\r\n writer.write_table(table)\r\n writer.close()\r\n b_buffer.seek(0)\r\n .....\r\n ....\r\n self.s3_client.upload_fileobj(\r\n b_buffer,\r\n self.bucket,\r\n file_key,\r\n ExtraArgs=extra_args)`\r\n\r\nBut when I executed the glue etl job, the first time it works properly and but in the next iteration, when I try to open the same file got that error.\r\n\r\n\r\n<html>\r\n<body>\r\n<!--StartFragment-->\r\n\r\nINFO:Iot-dsip-de-duplication-job:Dataframe uploaded: s3://abc/2022/07/12/file1_ft_20220714122108.3065_12345.parquet INFO:Iot-dsip-de-duplication-job:Sleep for 60 sec\r\nINFO:Iot-dsip-de-duplication-job:start after sleep\r\n.......................\r\n..........................\r\n..........................\r\nERROR:Iot-dsip-de-duplication-job:Failed to read data from parquet file s3://abc/2022/07/12/file1_ft_20220714122108.3065_12345.parquet, error is : Invalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.INFO:Iot-dsip-de-duplication-job:Empty dataframe found\r\n\r\n<!--EndFragment-->\r\n</body>\r\n</html>\r\n\r\nAny clue will be really helpful..I got stuck with this problem."
] | 2022-04-01T14:58:50 | 2022-07-14T19:22:06 | 2022-04-01T16:16:19 | MEMBER | null | We don’t call writer.close(), which causes https://github.com/huggingface/datasets/issues/4077. It can happen that we upload the file before the writer is garbage collected and writes the footer.
I fixed this by explicitly closing the parquet writer.
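To illustrate the failure mode with plain `pyarrow` (a sketch, not the actual `datasets` internals): a `ParquetWriter` only writes the Parquet footer on `close()`, so a file uploaded before that point lacks the magic bytes readers look for.
```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"col": [1, 2, 3]})
writer = pq.ParquetWriter("data.parquet", table.schema)
writer.write_table(table)
# Uploading the file at this point ships it without the footer, which is
# exactly what "Parquet magic bytes not found in footer" means.
writer.close()  # writes the footer; only now is the file a valid Parquet file
```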
Close https://github.com/huggingface/datasets/issues/4077. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4081/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4081",
"html_url": "https://github.com/huggingface/datasets/pull/4081",
"diff_url": "https://github.com/huggingface/datasets/pull/4081.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4081.patch",
"merged_at": "2022-04-01T16:16:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4080/comments | https://api.github.com/repos/huggingface/datasets/issues/4080/events | https://github.com/huggingface/datasets/issues/4080 | 1,189,667,296 | I_kwDODunzps5G6OHg | 4,080 | NonMatchingChecksumError for downloading conll2012_ontonotesv5 dataset | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @richarddwang,\r\n\r\n\r\nIndeed, we have recently updated the loading script of that dataset (and fixed that bug as well):\r\n- #4002\r\n\r\nThat fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by:\r\n- installing `datasets` from our GitHub repo:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n- forcing the data files to be redownloaded\r\n```python\r\nds = load_dataset('conll2012_ontonotesv5', 'english_v4', split=\"test\", download_mode=\"force_redownload\")\r\n```\r\n\r\nFeel free to re-open this issue if the problem persists. \r\n\r\nDuplicate of:\r\n- #4031"
] | 2022-04-01T11:34:28 | 2022-04-01T13:59:10 | 2022-04-01T13:59:10 | CONTRIBUTOR | null | ## Steps to reproduce the bug
```python
datasets.load_dataset("conll2012_ontonotesv5", "english_v12")
```
## Actual results
```
Downloading builder script: 32.2kB [00:00, 9.72MB/s]
Downloading metadata: 20.0kB [00:00, 10.4MB/s]
Downloading and preparing dataset conll2012_ontonotesv5/english_v12 (download: 174.83 MiB, generated: 204.29 MiB, post-processed: Unknown size, total: 379.12 MiB) to ...
Traceback (most recent call last):
File "/home/yisiang/lgtn/conll2012/run.py", line 86, in <module>
train()
File "/home/yisiang/lgtn/conll2012/run.py", line 65, in train
trainer.fit(model, datamodule=dm)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit
self._call_and_handle_interrupt(
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_inte
rrupt
return trainer_fn(*args, **kwargs)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1131, in _run
self._data_connector.prepare_data()
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 154, in pre
pare_data
self.trainer.datamodule.prepare_data()
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/core/datamodule.py", line 474, in wrapped_fn
fn(*args, **kwargs)
File "/home/yisiang/lgtn/_abstract_task/data.py", line 43, in prepare_data
raw_dsets = datasets.load_dataset(**load_dataset_kwargs)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/load.py", line 1687, in load_dataset
builder_instance.download_and_prepare(
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 605, in download_and_prepare
self._download_and_prepare(
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 1104, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 676, in _download_and_prepare
verify_checksums(
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip']
```
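For reference, the workaround from the reply above, condensed (install `datasets` from GitHub first, then force the data files to be re-downloaded):
```python
# pip install git+https://github.com/huggingface/datasets#egg=datasets
from datasets import load_dataset

ds = load_dataset("conll2012_ontonotesv5", "english_v12", download_mode="force_redownload")
```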
## Environment info
- `datasets` version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4080/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4079/comments | https://api.github.com/repos/huggingface/datasets/issues/4079/events | https://github.com/huggingface/datasets/pull/4079 | 1,189,521,576 | PR_kwDODunzps41eYRC | 4,079 | Increase max retries for GitHub datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-01T09:34:03 | 2022-04-01T15:32:40 | 2022-04-01T15:27:11 | MEMBER | null | As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub datasets, as previously done for GitHub metrics:
- #4063
Note that this is a temporary solution, while we decide when and how to load GitHub datasets from the Hub:
- #4059
Fix #2048
Related to:
- #4051
- #3210
- #2787
- #2075
- #2036
CC: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4079/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4079",
"html_url": "https://github.com/huggingface/datasets/pull/4079",
"diff_url": "https://github.com/huggingface/datasets/pull/4079.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4079.patch",
"merged_at": "2022-04-01T15:27:10"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4078/comments | https://api.github.com/repos/huggingface/datasets/issues/4078/events | https://github.com/huggingface/datasets/pull/4078 | 1,189,513,572 | PR_kwDODunzps41eWnl | 4,078 | Fix GithubMetricModuleFactory instantiation with None download_config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-01T09:26:58 | 2022-04-01T14:44:51 | 2022-04-01T14:39:27 | MEMBER | null | Recent PR:
- #4063
introduced a potential bug if `GithubMetricModuleFactory` is instantiated with None `download_config`.
This PR adds instantiation tests and fixes that potential issue.
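A sketch of the defensive pattern for this kind of bug (hypothetical code, not the actual diff): copy or create the config before mutating it, so a caller-supplied `None` never reaches an attribute access.
```python
from copy import deepcopy
from typing import Optional

from datasets import DownloadConfig

def make_download_config(download_config: Optional[DownloadConfig] = None) -> DownloadConfig:
    # Never mutate the caller's object, and never assume it is not None.
    download_config = deepcopy(download_config) if download_config else DownloadConfig()
    download_config.max_retries = 3  # hypothetical tweak the factory applies
    return download_config
```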
CC: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4078/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4078",
"html_url": "https://github.com/huggingface/datasets/pull/4078",
"diff_url": "https://github.com/huggingface/datasets/pull/4078.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4078.patch",
"merged_at": "2022-04-01T14:39:27"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4077/comments | https://api.github.com/repos/huggingface/datasets/issues/4077/events | https://github.com/huggingface/datasets/issues/4077 | 1,189,467,585 | I_kwDODunzps5G5dXB | 4,077 | ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-04-01T08:49:13 | 2022-04-01T16:16:19 | 2022-04-01T16:16:19 | CONTRIBUTOR | null | ## Describe the bug
When uploading a relatively large image dataset of > 1GB, reloading doesn't work for me, even though pushing to the hub went just fine.
Basically, I do:
```
from datasets import load_dataset
dataset = load_dataset("imagefolder", data_files="path_to_my_files")
dataset.push_to_hub("dataset_name") # works fine, no errors
reloaded_dataset = load_dataset("dataset_name")
```
and it returns:
```
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
I created a Colab notebook to reproduce my error: https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing
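A quick local check of this hypothesis (assuming only that a complete Parquet file ends with the 4-byte magic `PAR1`): download one of the pushed parquet files from the Hub repo and inspect its last bytes.
```python
# "downloaded.parquet" is a placeholder for a file fetched from the Hub repo.
with open("downloaded.parquet", "rb") as f:
    f.seek(-4, 2)  # 2 == os.SEEK_END
    print(f.read())  # b"PAR1" for a complete file; anything else means a missing footer
```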
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4077/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4076/comments | https://api.github.com/repos/huggingface/datasets/issues/4076/events | https://github.com/huggingface/datasets/pull/4076 | 1,188,478,867 | PR_kwDODunzps41a1n2 | 4,076 | Add ROUGE Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-31T18:34:34 | 2022-04-12T20:43:45 | 2022-04-12T20:37:38 | CONTRIBUTOR | null | Add ROUGE metric card.
I've left the 'Values from popular papers' section empty for the time being because I don't know the summarization literature very well and am therefore not sure which paper(s) to pull from (note that the original ROUGE paper does not seem to present specific values, just correlations with human judgements). Any suggestions on which paper(s) to pull from would be helpful! :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4076/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4076",
"html_url": "https://github.com/huggingface/datasets/pull/4076",
"diff_url": "https://github.com/huggingface/datasets/pull/4076.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4076.patch",
"merged_at": "2022-04-12T20:37:38"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4075/comments | https://api.github.com/repos/huggingface/datasets/issues/4075/events | https://github.com/huggingface/datasets/issues/4075 | 1,188,462,162 | I_kwDODunzps5G1n5S | 4,075 | Add CCAgT dataset | {
"login": "johnnv1",
"id": 20444345,
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnv1",
"html_url": "https://github.com/johnnv1",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | closed | false | {
"login": "johnnv1",
"id": 20444345,
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnv1",
"html_url": "https://github.com/johnnv1",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "johnnv1",
"id": 20444345,
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnv1",
"html_url": "https://github.com/johnnv1",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Awesome ! Let us know if you have questions or if we can help ;) I'm assigning you\r\n\r\nPS: if possible, please try to not use Google Drive links in your dataset script, since Google Drive has download quotas and is not always reliable.",
"HI, I was waiting to come out in the second version to do the implementation.\r\n\r\n- Paper: https://dx.doi.org/10.2139/ssrn.4126881\r\n- Data: [Data mendelay](http://doi.org/10.17632/wg4bpm33hj.2)",
"Nice ! 🚀 ",
"The link of CCAgT dataset is: https://huggingface.co/datasets/lapix/CCAgT"
] | 2022-03-31T18:20:28 | 2022-07-06T19:03:42 | 2022-07-06T19:03:42 | NONE | null | ## Adding a Dataset
- **Name:** CCAgT dataset: Images of Cervical Cells with AgNOR Stain Technique
- **Description:** The dataset contains 2540 images (1600x1200, where each pixel is 0.111μm×0.111μm) from three different slides, with at least one nucleus per image. The images come from fields of a sample cervical slide, silver-stained using a method known as Argyrophilic Nucleolar Organizer Regions (AgNOR).
- **Paper:** https://doi.org/10.1109/cbms49503.2020.00110
- **Data:** https://arquivos.ufsc.br/d/373be2177a33426a9e6c/ or https://drive.google.com/drive/u/4/folders/1TBpYCv6S1ydASLauSzcsvO7Wc5O-WUw0
- **Motivation:** This is a unique dataset (because of the stain), for a major health problem, cervical cancer, with real data.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Hi, this is a public version of the dataset that I have been working on; we will have another version soon. But until the new version comes out, I thought I would add this dataset here, if it makes sense for the repository. You can assign the task to me if possible. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4075/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4074/comments | https://api.github.com/repos/huggingface/datasets/issues/4074/events | https://github.com/huggingface/datasets/issues/4074 | 1,188,449,142 | I_kwDODunzps5G1kt2 | 4,074 | Error in google/xtreme_s dataset card | {
"login": "wranai",
"id": 1048544,
"node_id": "MDQ6VXNlcjEwNDg1NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1048544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wranai",
"html_url": "https://github.com/wranai",
"followers_url": "https://api.github.com/users/wranai/followers",
"following_url": "https://api.github.com/users/wranai/following{/other_user}",
"gists_url": "https://api.github.com/users/wranai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wranai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wranai/subscriptions",
"organizations_url": "https://api.github.com/users/wranai/orgs",
"repos_url": "https://api.github.com/users/wranai/repos",
"events_url": "https://api.github.com/users/wranai/events{/privacy}",
"received_events_url": "https://api.github.com/users/wranai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi @wranai, thanks for reporting.\r\n\r\nPlease note that the information about language families and groups is taken form the original paper: [XTREME-S: Evaluating Cross-lingual Speech Representations](https://arxiv.org/abs/2203.10752).\r\n\r\nIf that information is wrong, feel free to contact the paper's authors to suggest that correction.\r\n\r\nJust note that Hungarian language (contrary to their geographically surrounding neighbor languages) belongs to the Uralic (languages) family, together with (among others) Finnish, Estonian, some other languages in northern regions of Scandinavia..."
] | 2022-03-31T18:07:45 | 2022-04-01T08:12:56 | 2022-04-01T08:12:56 | NONE | null | **Link:** https://huggingface.co/datasets/google/xtreme_s
Not a big deal, but Hungarian is considered an Eastern European language, together with Serbian, Slovak, and Slovenian (all correctly categorized; Slovenia is mostly to the west of Hungary, by the way).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4074/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4073/comments | https://api.github.com/repos/huggingface/datasets/issues/4073/events | https://github.com/huggingface/datasets/pull/4073 | 1,188,364,711 | PR_kwDODunzps41adPA | 4,073 | Create a metric card for Competition MATH | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-31T16:48:59 | 2022-04-01T19:02:39 | 2022-04-01T18:57:13 | NONE | null | Proposing metric card for Competition MATH | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4073/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4073",
"html_url": "https://github.com/huggingface/datasets/pull/4073",
"diff_url": "https://github.com/huggingface/datasets/pull/4073.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4073.patch",
"merged_at": "2022-04-01T18:57:12"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4072/comments | https://api.github.com/repos/huggingface/datasets/issues/4072/events | https://github.com/huggingface/datasets/pull/4072 | 1,188,266,410 | PR_kwDODunzps41aIUG | 4,072 | Add installation instructions to image_process doc | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-31T15:29:37 | 2022-03-31T17:05:46 | 2022-03-31T17:00:19 | CONTRIBUTOR | null | This PR adds the installation instructions for the Image feature to the image process doc. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4072/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4072",
"html_url": "https://github.com/huggingface/datasets/pull/4072",
"diff_url": "https://github.com/huggingface/datasets/pull/4072.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4072.patch",
"merged_at": "2022-03-31T17:00:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4071/comments | https://api.github.com/repos/huggingface/datasets/issues/4071/events | https://github.com/huggingface/datasets/issues/4071 | 1,187,587,683 | I_kwDODunzps5GySZj | 4,071 | Loading issue for xuyeliu/notebookCDG dataset | {
"login": "Jun-jie-Huang",
"id": 46160972,
"node_id": "MDQ6VXNlcjQ2MTYwOTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/46160972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jun-jie-Huang",
"html_url": "https://github.com/Jun-jie-Huang",
"followers_url": "https://api.github.com/users/Jun-jie-Huang/followers",
"following_url": "https://api.github.com/users/Jun-jie-Huang/following{/other_user}",
"gists_url": "https://api.github.com/users/Jun-jie-Huang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jun-jie-Huang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jun-jie-Huang/subscriptions",
"organizations_url": "https://api.github.com/users/Jun-jie-Huang/orgs",
"repos_url": "https://api.github.com/users/Jun-jie-Huang/repos",
"events_url": "https://api.github.com/users/Jun-jie-Huang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jun-jie-Huang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi @Jun-jie-Huang,\r\n\r\nAs the error message says, \".pkl\" data files are not supported.\r\n\r\nIf you would like to share your dataset on the Hub, you would need:\r\n- either to create a Python loading script, that loads the data in any format\r\n- or to transform your data files to one of the supported formats (listed in the error message above: CSV, JSON, Parquet, TXT,...)\r\n\r\nYou can find the details in our docs: \r\n- How to share a dataset: https://huggingface.co/docs/datasets/share\r\n- How to create a dataset loading script: https://huggingface.co/docs/datasets/dataset_script\r\n\r\nFeel free to re-open this issue and ping us if you need further assistance."
] | 2022-03-31T06:36:29 | 2022-03-31T08:17:01 | 2022-03-31T08:16:16 | NONE | null | ## Dataset viewer issue for '*xuyeliu/notebookCDG*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/xuyeliu/notebookCDG)*
*Couldn't load the xuyeliu/notebookCDG with provided scripts: *
```
from datasets import load_dataset
dataset = load_dataset("xuyeliu/notebookCDG/dataset_notebook.pkl")
```
I get an error message as follows:
FileNotFoundError: Couldn't find a dataset script at /home/code_documentation/code/xuyeliu/notebookCDG/notebookCDG.py or any data file in the same directory. Couldn't find 'xuyeliu/notebookCDG' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**train*'] in dataset repository xuyeliu/notebookCDG with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
Am I the one who added this dataset ? No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4071/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4070/comments | https://api.github.com/repos/huggingface/datasets/issues/4070/events | https://github.com/huggingface/datasets/pull/4070 | 1,186,810,205 | PR_kwDODunzps41VMYq | 4,070 | Create metric card for seqeval | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-30T18:08:01 | 2022-04-01T19:02:58 | 2022-04-01T18:57:25 | NONE | null | Proposing metric card for seqeval. Not sure which values to report for Popular papers though. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4070/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4070",
"html_url": "https://github.com/huggingface/datasets/pull/4070",
"diff_url": "https://github.com/huggingface/datasets/pull/4070.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4070.patch",
"merged_at": "2022-04-01T18:57:25"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4069/comments | https://api.github.com/repos/huggingface/datasets/issues/4069/events | https://github.com/huggingface/datasets/pull/4069 | 1,186,790,578 | PR_kwDODunzps41VIMJ | 4,069 | Add support for metadata files to `imagefolder` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Love it !\r\n\r\n+1 to using JSON Lines rather than CSV. I've also seen image datasets for which JSON Lines was used.\r\n\r\nA `file_name` column sounds good as well, and it means we could reuse the same name for audio. And ok to check the metadata file by default :)\r\n\r\nYou suggested to name the file infos.json - since we already have a datasets_infos.json file, maybe it would be nice to have a name for the metadata/annotations that doesn't contain \"info\" ? (e.g. metadata.json, annotations.json, labels.json)",
"@lhoestq I've addressed your comments and my TODOs. Additionally, I've updated `encode_nested_example`/`decode_nested_example` to support null values in place of a dictionary (if it's not top-level) since JSON Lines also supports this. ",
"@lhoestq Sure, feel free to add more tests if you have the time. ",
"I created a dedicated test file for `imagefolder`, moved some existing tests there from `test_packaged_modules.py`, and added an end-to-end test of `imagefolder` with metadata. I tested for train split only, and for two splits train and test.\r\n\r\nLet me know if the test looks ok to you. I'll add similar tests but with the other structures we support on tuesday",
"Thanks a lot for working on this! The test looks great :). ",
"Added a test for archives. Will also add a test when the metadata file is not named correctly, and see if we can raise an informative error"
] | 2022-03-30T17:47:51 | 2022-05-03T12:49:00 | 2022-05-03T12:42:16 | CONTRIBUTOR | null | This PR adds support for metadata files to `imagefolder`, making it possible to specify image fields other than `image` and `label`, which are inferred from the directory structure of the loaded dataset.
To be parsed as an image metadata file, a file should be named `"info.csv"` and should have the following structure:
```
image_id,some_col1_name,some_col2_name
rel/path/to/image1.jpg,image1_col1_value,image1_col2_value
rel/path/to/image2.jpg,image2_col1_value,image2_col2_value
...
```
This is how the resolution works:
```
- path/to/imagefolder/directory
- info.csv
- 10.jpg # referenced as 10.jpg in "info.csv"
- Cat
- 0.jpg # referenced as Cat/0.jpg in "info.csv"
- 1.jpg # referenced as Cat/1.jpg in "info.csv"
- Dog
- 0.jpg # referenced as Dog/0.jpg in "info.csv"
- 1.jpg # referenced as Dog/1.jpg in "info.csv"
```
Open questions:
1. IMO it makes more sense to store image metadata as JSON Lines than CSV. CSV is sufficient for textual metadata but not the best for representing bounding boxes, for instance. Also, JSON Lines is stricter, which is good in this case (CSV supports various delimiters, the header line is optional, etc., so it's easier to enforce rules on JSON Lines than it is on CSV); see the sketch after this list.
2. A better name for the `image_id` column, which contains image identifiers? Maybe `image_file` or `image_filename`?
3. WDYT about making `with_metadata=True` the default behavior if the loaded repo/directory contains an `info.csv` file?
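To make open question 1 concrete, here is a sketch of what the same metadata could look like as JSON Lines (the column names reuse the CSV example above; the `bbox` field is a hypothetical addition illustrating data that is awkward in CSV):
```python
import json

# Hypothetical rows mirroring the CSV example; "bbox" shows a nested value
# that JSON Lines represents naturally and CSV does not.
rows = [
    {"image_id": "10.jpg", "some_col1_name": "a", "bbox": [10, 20, 110, 220]},
    {"image_id": "Cat/0.jpg", "some_col1_name": "b", "bbox": [5, 5, 95, 100]},
]
with open("info.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```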
An example repository: https://huggingface.co/datasets/mariosasko/PetImages. Can be loaded by installing `datasets` from the PR branch and running `load_dataset("mariosasko/PetImages", with_metadata=True)`.
cc: @abhishekkrthakur (this PR should address https://huggingface.slack.com/archives/C02JB9L6JKF/p1645450017434029?thread_ts=1645157416.389499&cid=C02JB9L6JKF)
TODOs:
- [x] Test
- [x] Metadata file nesting
```
- path/to/imagefolder/directory
- info.csv
- 10.jpg
- Cat
- info.csv # should have higher precedence in this directory than the top-level info.csv, but we choose the first "eligible" metadata file currently
- 0.jpg
- 1.jpg
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4069/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4069",
"html_url": "https://github.com/huggingface/datasets/pull/4069",
"diff_url": "https://github.com/huggingface/datasets/pull/4069.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4069.patch",
"merged_at": "2022-05-03T12:42:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4068/comments | https://api.github.com/repos/huggingface/datasets/issues/4068/events | https://github.com/huggingface/datasets/pull/4068 | 1,186,765,422 | PR_kwDODunzps41VC0I | 4,068 | Improve out of bounds error message | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-30T17:22:10 | 2022-03-31T08:39:08 | 2022-03-31T08:33:57 | MEMBER | null | In 1.18.4 with https://github.com/huggingface/datasets/pull/3719 we introduced an error message for users using `select` with out of bounds indices. The message ended up being confusing for some users because it mentioned negative indices, which is not the main use case.
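The new behavior, in a minimal sketch (the dataset is arbitrary and the exact message text is an assumption, not copied from this PR):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
ds.select([3])  # out of range for a dataset of size 3; now fails like a plain
# list lookup, e.g.: IndexError: Index 3 out of range for dataset of size 3.
```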
I replaced it with a message that is very similar to the one you get when you try to access a list with an out-of-range index. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4068/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4068",
"html_url": "https://github.com/huggingface/datasets/pull/4068",
"diff_url": "https://github.com/huggingface/datasets/pull/4068.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4068.patch",
"merged_at": "2022-03-31T08:33:56"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4067/comments | https://api.github.com/repos/huggingface/datasets/issues/4067/events | https://github.com/huggingface/datasets/pull/4067 | 1,186,731,905 | PR_kwDODunzps41U7qc | 4,067 | Update datasets task tags to align tags with models | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks good, but I think we are missing some scripts with outdated tags (RedCaps, MNIST, ...).",
"Just updated the tags of vision datasets :)\r\nWe can figure out one for image datasets without labels like PASS - not sure how to name the task for this, maybe `image-fill-mask` (for consistency with language modeling for pretraining) / `masked-auto-encoding` (from ViT). Let's see that in another PR later"
] | 2022-03-30T16:49:32 | 2022-04-13T17:37:27 | 2022-04-13T17:31:11 | MEMBER | null | **Requires https://github.com/huggingface/datasets/pull/4066 to be merged first**
Following https://github.com/huggingface/datasets/pull/4066 we need to update many dataset tags to use the new ones. This PR takes care of this and is quite big - feel free to review only certain tags if you don't want to spend too much time on it.
Note that the CI will never be green for this PR, because many dataset cards have missing tags or sections, and fixing them is out of scope of this PR (the CI on master will be green anyway) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4067/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4067",
"html_url": "https://github.com/huggingface/datasets/pull/4067",
"diff_url": "https://github.com/huggingface/datasets/pull/4067.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4067.patch",
"merged_at": "2022-04-13T17:31:11"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4066/comments | https://api.github.com/repos/huggingface/datasets/issues/4066/events | https://github.com/huggingface/datasets/pull/4066 | 1,186,728,104 | PR_kwDODunzps41U63x | 4,066 | Tasks alignment with models | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Yay! This is exciting! Note that we would probably be able to generate this JSON directly from `huggingface/hub-docs`' `Types.ts` file (cc @osanseviero)",
"The following issue should make this much easier :smile: https://github.com/huggingface/hub-docs/issues/83",
"So far I think I've addressed all the comments that I got on slack, but feel free to do a review @osanseviero and let me know if it sounds good to you",
"It just occurred to me that we should probably restart the `datasets-tagging` space once this is merged to update all the task categories there: https://huggingface.co/spaces/huggingface/datasets-tagging",
"Yes, let me update it now",
"Updated: https://huggingface.co/spaces/huggingface/datasets-tagging",
"current automated export is visible at #4154"
] | 2022-03-30T16:45:56 | 2022-04-13T13:12:52 | 2022-04-08T12:20:00 | MEMBER | null | I updated our `tasks.json` file with the new task taxonomy that is aligned with models.
The rule that defines a task is the following:
**Two tasks are different if and only if the steps of their pipelines** are different, i.e. if they can’t reasonably be implemented using the same coherent code (level of granularity/complexity of the code to be defined - ideally I’d like to say “HF user’s level”) - this is the same definition as in `transformers`
I will update the tags of all the datasets in this repository [in another PR](https://github.com/huggingface/datasets/pull/4067) for readability.
Main changes:
- conditional-text-generation is split between summarization, translation, text-generation and text2text-generation
- speech-processing is split into automatic-speech-recognition, audio-classification, etc.
- structure-prediction is renamed token-classification
- abstractive-qa now belongs to text2text-generation
Here is just a simplified YAML dump of `tasks.json`:
```yaml
audio-classification:
- keyword-spotting
- speaker-identification
- speaker-intent-classification
- emotion-recognition
- speaker-language-identification
audio-to-audio: []
automatic-speech-recognition: []
conversational:
- dialogue-generation
feature-extraction: []
fill-mask:
- slot-filling
- masked-language-modeling
image-classification:
- multi-label-image-classification
- multi-class-image-classification
image-segmentation:
- instance-segmentation
- semantic-segmentation
- panoptic-segmentation
image-to-text:
- image-captioning
multiple-choice:
- multiple-choice-qa
- multiple-choice-coreference-resolution
object-detection:
- face-detection
- vehicle-detection
question-answering:
- extractive-qa
- open-domain-qa
- closed-domain-qa
sentence-similarity: []
tabular-classification: []
tabular-to-text:
- rdf-to-text
summarization:
- news-articles-summarization
- news-articles-headline-generation
table-to-text: []
table-question-answering: []
text-classification:
- acceptability-classification
- entity-linking-classification
- fact-checking
- intent-classification
- multi-class-classification
- multi-label-classification
- natural-language-inference
- semantic-similarity-classification
- sentiment-classification
- topic-classification
- semantic-similarity-scoring
- sentiment-scoring
- sentiment-analysis
- hate-speech-detection
- text-scoring
text-generation:
- dialogue-modeling
- language-modeling
text-retrieval:
- document-retrieval
- utterance-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
text-to-image: []
text-to-tabular:
- relation-extraction
- semantic-role-labeling
text-to-speech: []
text2text-generation:
- text-simplification
- explanation-generation
- abstractive-qa
- open-domain-abstractive-qa
- closed-domain-qa
- open-book-qa
- closed-book-qa
time-series-forecasting:
- univariate-time-series-forecasting
- multivariate-time-series-forecasting
token-classification:
- named-entity-recognition
- part-of-speech-tagging
- parsing
- lemmatization
- word-sense-disambiguation
- coreference-resolution
translation: []
visual-question-answering: []
voice-activity-detection: []
zero-shot-classification: []
zero-shot-image-classification: []
reinforcement-learning: []
other: []
```
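As a quick sanity check on the taxonomy, one can look for subtasks listed under more than one parent. This is a sketch of mine: both the file path and the assumption that the JSON schema mirrors the simplified YAML above are mine.
```python
import json
from collections import Counter

# Assumed schema: {"task-category": ["subtask", ...], ...} and assumed path.
with open("src/datasets/utils/resources/tasks.json") as f:
    tasks = json.load(f)

counts = Counter(sub for subs in tasks.values() for sub in subs)
print([sub for sub, n in counts.items() if n > 1])
# e.g. "closed-domain-qa" appears under both question-answering and
# text2text-generation in the dump above
```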
Feel free to comment and give suggestions, especially if you think we can also align this list with other projects
cc @julien-c @osanseviero @severo @lewtun @yjernite @albertvillanova @mariosasko @polinaeterna | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4066/reactions",
"total_count": 7,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4066/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4066",
"html_url": "https://github.com/huggingface/datasets/pull/4066",
"diff_url": "https://github.com/huggingface/datasets/pull/4066.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4066.patch",
"merged_at": "2022-04-08T12:20:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4065/comments | https://api.github.com/repos/huggingface/datasets/issues/4065/events | https://github.com/huggingface/datasets/pull/4065 | 1,186,722,478 | PR_kwDODunzps41U5rq | 4,065 | Create metric card for METEOR | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-30T16:40:30 | 2022-03-31T17:12:10 | 2022-03-31T17:07:50 | NONE | null | Proposing a metric card for METEOR | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4065/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4065",
"html_url": "https://github.com/huggingface/datasets/pull/4065",
"diff_url": "https://github.com/huggingface/datasets/pull/4065.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4065.patch",
"merged_at": "2022-03-31T17:07:50"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4064/comments | https://api.github.com/repos/huggingface/datasets/issues/4064/events | https://github.com/huggingface/datasets/pull/4064 | 1,186,650,321 | PR_kwDODunzps41UqXS | 4,064 | Contributing MedMCQA dataset | {
"login": "monk1337",
"id": 17107749,
"node_id": "MDQ6VXNlcjE3MTA3NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monk1337",
"html_url": "https://github.com/monk1337",
"followers_url": "https://api.github.com/users/monk1337/followers",
"following_url": "https://api.github.com/users/monk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monk1337/subscriptions",
"organizations_url": "https://api.github.com/users/monk1337/orgs",
"repos_url": "https://api.github.com/users/monk1337/repos",
"events_url": "https://api.github.com/users/monk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/monk1337/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq Could you please take a look?\r\nThank you!!",
"Hi, thank you for the modifications and suggestions. Please check the changes.",
"Can you run `make style` to fix the code formatting please ?\r\n\r\nOh and was wrong with the dummy_data.zip file, it must actually be placed at `datasets/medmcqa/dummy/1.1.0/dummy_data.zip` - sorry about that\r\n\r\nCan you also set the class label names to `names=[\"a\", \"b\", \"c\", \"d\"]` to make it explicit which label corresponds to each answer ? You might have to regenerate `dataset_infos.json` after that",
"Hi, \r\n\r\n1) Changed the dummy data folder\r\n\r\n2) The labels are not ['a', 'b', 'c', 'd'] rather the labels are [1,2,3,4] where 1 represents the 1'st option, 2nd represents 2nd option so on, and its int.\r\n\r\nI tried changing to ['a','b','c','d'] and while generating `dataset_infos.json` getting this error :\r\n\r\n`ValueError: Class label 4 greater than configured num_classes 4`\r\nPlease check.",
"@lhoestq [lhoestq](https://github.com/lhoestq) Please check",
"You have this error because we expect the labels to start at 0, not 1. I think you just need to pass `int(data[\"cop\"]) - 1` when generating the examples.\r\n\r\nSorry for the delay in responding btw",
"@lhoestq I corrected that but here is another issue I am facing while generating `dataset_infos.json`\r\n\r\nI am using `\" \"` if it's test set and otherwise it's the correct option\r\n\r\nhttps://github.com/monk1337/datasets/blob/179f81d48cdd3093302e498babce04c0bf1e33b3/datasets/medmcqa/medmcqa.py#L111\r\n` \"cop\": \"\" if split == \"test\" else int(data[\"cop\"]) -1,\r\n`\r\n\r\nbut while running this command :\r\n\r\n`datasets-cli test datasets/medmcqa --save_infos --all_configs\r\n`\r\n\r\ngiving this error:\r\n\r\n```\r\n/content/datasets# datasets-cli test datasets/medmcqa --save_infos --all_configs\r\nUsing custom data configuration default\r\nTesting builder 'default' (1/1)\r\nDownloading and preparing dataset med_mcqa/default (download: 52.72 MiB, generated: 128.73 MiB, post-processed: Unknown size, total: 181.46 MiB) to /root/.cache/huggingface/datasets/med_mcqa/default/1.1.0/4c8e418778967b6d9603f79bbfc4fdfbcfffc389664d9aeb85e102cfde418043...\r\nTraceback (most recent call last): \r\n File \"/usr/local/bin/datasets-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"/content/datasets/src/datasets/commands/datasets_cli.py\", line 33, in main\r\n service.run()\r\n File \"/content/datasets/src/datasets/commands/test.py\", line 162, in run\r\n try_from_hf_gcs=False,\r\n File \"/content/datasets/src/datasets/builder.py\", line 606, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/content/datasets/src/datasets/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"/content/datasets/src/datasets/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/content/datasets/src/datasets/builder.py\", line 1095, in _prepare_split\r\n example = self.info.features.encode_example(record)\r\n File \"/content/datasets/src/datasets/features/features.py\", line 1356, in encode_example\r\n return encode_nested_example(self, example)\r\n File \"/content/datasets/src/datasets/features/features.py\", line 1007, in encode_nested_example\r\n return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)}\r\n File \"/content/datasets/src/datasets/features/features.py\", line 1007, in <dictcomp>\r\n return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)}\r\n File \"/content/datasets/src/datasets/features/features.py\", line 1052, in encode_nested_example\r\n return schema.encode_example(obj) if obj is not None else None\r\n File \"/content/datasets/src/datasets/features/features.py\", line 897, in encode_example\r\n example_data = self.str2int(example_data)\r\n File \"/content/datasets/src/datasets/features/features.py\", line 854, in str2int\r\n output.append(self._str2int[str(value)])\r\nKeyError: ''\r\n```",
"Hey ! You can use this instead:\r\n`\"cop\": -1 if split == \"test\" else int(data[\"cop\"]) -1`",
"@lhoestq Thank you for your assistance, and I have updated the `dataset_infos.json` without any error. All the issues are resolved. Please review and approve if it's ready to merge.",
"Thanks ! There are two things to fic the CI:\r\n1. run `make style` to fix code formatting\r\n2. fix the dummy_data.zip file. Currently it's created from a directory called \"dummy\" that contains the JSON file, but it should be called \"dummy_data\" instead",
"@lhoestq Please check if anything else needs to be done :) ",
"Let me gently remind you that you can check the CI before pinging reviewers, this way you can know if something needs to be fixed right away.\r\n\r\nRight now, if you check the CI, you will see that you didn't fix the code formatting, and that you didn't fix the dummy data.\r\n\r\nLet me take a look",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @lhoestq, I am sorry if I pinged multiple times; I have already corrected the dummy_data file issues and format issue before pinging for the merge request, as you commented last time\r\n\r\n_fix the dummy_data.zip file. Currently, it's created from a directory called \"dummy\" that contains the JSON file, but it should be called \"dummy_data\" instead._\r\n\r\nI fixed the file name and location.\r\n\r\nAnd I also ran the commands last time.\r\n\r\n```\r\nmake style\r\nflake8 datasets\r\n```\r\nPlease let me know if anything else needs to be changed.",
"Thanks a lot @monk1337 ! :)"
] | 2022-03-30T15:42:47 | 2022-05-06T09:40:40 | 2022-05-06T08:42:56 | CONTRIBUTOR | null | Adding MedMCQA dataset ( https://paperswithcode.com/dataset/medmcqa )
**Name**: MedMCQA
**Description**: MedMCQA is a large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions.
MedMCQA has more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects, collected with an average token length of 12.77 and high topical diversity.
The dataset contains questions about the following topics: Anesthesia, Anatomy, Biochemistry, Dental, ENT, Forensic Medicine (FM), Obstetrics and Gynecology (O&G), Medicine, Microbiology, Ophthalmology, Orthopedics, Pathology, Pediatrics, Pharmacology, Physiology,
Psychiatry, Radiology, Skin, Preventive & Social Medicine (PSM), and Surgery
**Code**: https://github.com/medmcqa/medmcqa
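For reference, the label convention that came out of the review discussion (see the comments above) can be sketched like this; the helper name is mine, and the raw data is assumed to store the correct option as a 1-based integer in `cop`:
```python
from datasets import ClassLabel

# Explicit option names, so label 0 <-> option "a", label 1 <-> "b", etc.
cop_feature = ClassLabel(names=["a", "b", "c", "d"])

def encode_cop(raw_cop, split):
    # The raw data is 1-based; the test split has no gold answer, marked -1.
    return -1 if split == "test" else int(raw_cop) - 1
```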
All files are in place:
**a dataset script**: medmcqa.py
**a dataset card with tags and information**: README.md
**a metadata file**: dataset_infos.json
**a dummy-data file**: Please help me generate this file; I was running into a ` raise JSONDecodeError("Extra data", s, end)` error | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4064/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4064",
"html_url": "https://github.com/huggingface/datasets/pull/4064",
"diff_url": "https://github.com/huggingface/datasets/pull/4064.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4064.patch",
"merged_at": "2022-05-06T08:42:56"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4063/comments | https://api.github.com/repos/huggingface/datasets/issues/4063/events | https://github.com/huggingface/datasets/pull/4063 | 1,186,611,368 | PR_kwDODunzps41UiDm | 4,063 | Increase max retries for GitHub metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-30T15:12:48 | 2022-03-31T14:42:52 | 2022-03-31T14:37:47 | MEMBER | null | As GitHub recurrently has connectivity issues, this PR increases the maximum number of retries when requesting GitHub metrics.
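The idea in miniature (a sketch, not the PR's actual code; the function name and defaults are mine):
```python
import time

import requests

def get_with_retries(url, max_retries=5, base_wait=2.0):
    # Retry transient connection failures with a simple linear backoff.
    for attempt in range(max_retries):
        try:
            return requests.get(url, timeout=10.0)
        except requests.exceptions.ConnectionError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_wait * (attempt + 1))
```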
Related to:
- #3134
Also related to:
- #4059 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4063/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4063",
"html_url": "https://github.com/huggingface/datasets/pull/4063",
"diff_url": "https://github.com/huggingface/datasets/pull/4063.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4063.patch",
"merged_at": "2022-03-31T14:37:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4062 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4062/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4062/comments | https://api.github.com/repos/huggingface/datasets/issues/4062/events | https://github.com/huggingface/datasets/issues/4062 | 1,186,330,732 | I_kwDODunzps5Gtfhs | 4,062 | Loading mozilla-foundation/common_voice_7_0 dataset failed | {
"login": "aapot",
"id": 19529125,
"node_id": "MDQ6VXNlcjE5NTI5MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/19529125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aapot",
"html_url": "https://github.com/aapot",
"followers_url": "https://api.github.com/users/aapot/followers",
"following_url": "https://api.github.com/users/aapot/following{/other_user}",
"gists_url": "https://api.github.com/users/aapot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aapot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aapot/subscriptions",
"organizations_url": "https://api.github.com/users/aapot/orgs",
"repos_url": "https://api.github.com/users/aapot/repos",
"events_url": "https://api.github.com/users/aapot/events{/privacy}",
"received_events_url": "https://api.github.com/users/aapot/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @aapot, thanks for reporting.\r\n\r\nWe are investigating the cause of this issue. We will keep you informed. ",
"When making HTTP request from code line:\r\n```\r\nresponse = requests.get(f\"{_API_URL}/bucket/dataset/{path}/{use_cdn}\", timeout=10.0).json()\r\n```\r\nit cannot be decoded to JSON because it raises a 404 Not Found error.\r\n\r\nThe request is fixed if removing the `/{use_cdn}` from the URL.\r\n\r\nMaybe there was a change in the Common Voice API?\r\n\r\nCC: @anton-l @patrickvonplaten @polinaeterna ",
"We have contacted by email the data owners of the Common Voice dataset.",
"Hotfix: https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/commit/17b237961e4f7f84a2a0aea645abe5428a9d568e",
"I have also made the hotfix for all the rest of Common Voice script versions: 8.0, 6.1, 6.0,..., 1.0",
"Hey, is there anything new?\r\nI could not load the dataset.",
"cc @lhoestq @polinaeterna ",
"Hi @ngoquanghuy99! The dataset should load fine if you go through the following steps:\r\n\r\n1. Go to https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0 and click \"Access repository\" if you see a message about sharing your contact information with Mozilla Foundation at the top of the page. If you've already done that then skip to step 2.\r\n2. Run the command `huggingface-cli login` in your terminal or notebook to authenticate your machine.\r\n3. Load the dataset with `use_auth_token=True`:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"mozilla-foundation/common_voice_9_0\", \"ab\", use_auth_token=True)\r\n```",
"Thanks @anton-l \r\nI could load the dataset now, but in another way.\r\nThanks anyways!"
] | 2022-03-30T11:39:41 | 2022-06-21T07:36:23 | 2022-03-31T08:18:04 | NONE | null | ## Describe the bug
I wanted to load the `mozilla-foundation/common_voice_7_0` dataset with the `fi` language and the `test` split from `datasets` in a Colab/Kaggle notebook, but I get the error `JSONDecodeError: [Errno Expecting value] Not Found: 0` while loading it. The bug seems to affect other languages and splits as well, not just `fi` and `test`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="test", use_auth_token="YOUR TOKEN")
```
## Expected results
Load the `mozilla-foundation/common_voice_7_0` dataset successfully.
## Actual results
```
JSONDecodeError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs)
909 try:
--> 910 return complexjson.loads(self.text, **kwargs)
911 except JSONDecodeError as e:
/opt/conda/lib/python3.7/site-packages/simplejson/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, use_decimal, **kw)
524 and not use_decimal and not kw):
--> 525 return _default_decoder.decode(s)
526 if cls is None:
/opt/conda/lib/python3.7/site-packages/simplejson/decoder.py in decode(self, s, _w, _PY3)
369 s = str(s, self.encoding)
--> 370 obj, end = self.raw_decode(s)
371 end = _w(s, end).end()
/opt/conda/lib/python3.7/site-packages/simplejson/decoder.py in raw_decode(self, s, idx, _w, _PY3)
399 idx += 3
--> 400 return self.scan_once(s, idx=_w(s, idx).end())
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
JSONDecodeError Traceback (most recent call last)
/tmp/ipykernel_358/370980805.py in <module>
1 # load Common Voice 7.0 dataset from Huggingface with Finnish "test" split
----> 2 test_dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="test", use_auth_token=True)
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1690 ignore_verifications=ignore_verifications,
1691 try_from_hf_gcs=try_from_hf_gcs,
-> 1692 use_auth_token=use_auth_token,
1693 )
1694
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
604 if not downloaded_from_gcs:
605 self._download_and_prepare(
--> 606 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
607 )
608 # Sync info
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
1102
1103 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1104 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
1105
1106 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
670 split_dict = SplitDict(dataset_name=self.name)
671 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 672 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
673
674 # Checksums verification
~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_7_0/fe20cac47c166e25b1f096ab661832e3da7cf298ed4a91dcaa1343ad972d175b/common_voice_7_0.py in _split_generators(self, dl_manager)
151
152 self._log_download(self.config.name, bundle_version, hf_auth_token)
--> 153 archive = dl_manager.download(self._get_bundle_url(self.config.name, bundle_url_template))
154
155 if self.config.version < datasets.Version("5.0.0"):
~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_7_0/fe20cac47c166e25b1f096ab661832e3da7cf298ed4a91dcaa1343ad972d175b/common_voice_7_0.py in _get_bundle_url(self, locale, url_template)
130 path = urllib.parse.quote(path.encode("utf-8"), safe="~()*!.'")
131 use_cdn = self.config.size_bytes < 20 * 1024 * 1024 * 1024
--> 132 response = requests.get(f"{_API_URL}/bucket/dataset/{path}/{use_cdn}", timeout=10.0).json()
133 return response["url"]
134
/opt/conda/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs)
915 raise RequestsJSONDecodeError(e.message)
916 else:
--> 917 raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
918
919 @property
JSONDecodeError: [Errno Expecting value] Not Found: 0
```
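The failing call is the `use_cdn` request inside `_get_bundle_url`, visible in the traceback above. Below is a minimal sketch of the same request without the `/{use_cdn}` segment, which per the discussion in this thread no longer returns a 404 — the base URL and bundle path are assumed placeholders, not values taken from the actual script:
```python
import requests

# Hypothetical sketch of the hotfix discussed in the comments: the Common Voice
# API appears to return 404 when the `use_cdn` segment is appended, so request
# the bundle URL without it. Base URL and bundle path are assumed placeholders.
_API_URL = "https://commonvoice.mozilla.org/api/v1"
path = "cv-corpus-7.0-2021-07-21%2Fcv-corpus-7.0-2021-07-21-fi.tar.gz"
response = requests.get(f"{_API_URL}/bucket/dataset/{path}", timeout=10.0).json()
print(response["url"])
```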
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.10.90+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 5.0.0
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4062/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4061/comments | https://api.github.com/repos/huggingface/datasets/issues/4061/events | https://github.com/huggingface/datasets/issues/4061 | 1,186,317,071 | I_kwDODunzps5GtcMP | 4,061 | Loading cnn_dailymail dataset failed | {
"login": "Arij-Aladel",
"id": 68355048,
"node_id": "MDQ6VXNlcjY4MzU1MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arij-Aladel",
"html_url": "https://github.com/Arij-Aladel",
"followers_url": "https://api.github.com/users/Arij-Aladel/followers",
"following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}",
"gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions",
"organizations_url": "https://api.github.com/users/Arij-Aladel/orgs",
"repos_url": "https://api.github.com/users/Arij-Aladel/repos",
"events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arij-Aladel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @Arij-Aladel, thanks for reporting.\r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it in our 2.0.0 release. See:\r\n- #3787 \r\n\r\nPlease, update your `datasets` version:\r\n```\r\npip install -U datasets\r\n```\r\nand retry loading the dataset by forcing its redownload:\r\n```python\r\ndataset = load_dataset(\"cnn_dailymail\", \"3.0.0\", download_mode=\"force_redownload\")\r\n```"
] | 2022-03-30T11:29:02 | 2022-03-30T13:36:14 | 2022-03-30T13:36:14 | NONE | null | ## Describe the bug
I wanted to load the cnn_dailymail dataset from huggingface datasets in JupyterLab, but I am getting the error `NotADirectoryError: [Errno 20] Not a directory` while loading it.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
## Expected results
Load the `cnn_dailymail` dataset successfully.
## Actual results
Loading fails with the following error:
> NotADirectoryError: [Errno 20] Not a directory
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Ubuntu-20.04
- Python version: 3.9.10
- PyArrow version: 3.0.0
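For reference, a minimal sketch of the workaround confirmed in the comments above (upgrade `datasets` first, then force a fresh download so the corrupted cached archive is replaced):
```python
from datasets import load_dataset

# Requires upgrading first: pip install -U datasets  (the fix shipped in 2.0.0)
dataset = load_dataset("cnn_dailymail", "3.0.0", download_mode="force_redownload")
```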
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4061/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4060/comments | https://api.github.com/repos/huggingface/datasets/issues/4060/events | https://github.com/huggingface/datasets/pull/4060 | 1,186,281,033 | PR_kwDODunzps41Tbmg | 4,060 | Deprecate canonical Multilingual Librispeech | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes, as discussed in #4006 we should update facebook/multilingual_librispeech indeed before we do a release. @anton-l could you help taking care of updating facebook/multilingual_librispeech ? We need to update the task template\r\n```python\r\ntask_templates=[AutomaticSpeechRecognition(audio_column=\"audio\", transcription_column=\"text\")],\r\n```\r\nand write that `datasets>=2.1` is necessary to load it in the dataset card.\r\n\r\nOnce the change is done we can merge this PR and do the release I think",
"@polinaeterna @lhoestq \r\nUpdated the script and the dataset card: https://huggingface.co/datasets/facebook/multilingual_librispeech ",
"@anton-l @lhoestq now previewer doesn't work for this datasets as it cannot recognize new `audio_column` argument:\r\n![image](https://user-images.githubusercontent.com/16348744/161233533-3170760b-5141-4525-9592-6675669c223a.png)\r\n\r\nI'm not an expert in previewer things, where should I look into the corresponding code?",
"Yes, there are several datasets with the same error, eg https://github.com/huggingface/datasets-preview-backend/issues/188. I'm not sure what I should do to fix this? Upgrade datasets to master?\r\n",
"@anton-l ended up removing the task template in facebook/multilingual_librispeech to make it work for the current version of `datasets` and fix the viewer :) thanks !",
"@lhoestq can we merge now? ^^"
] | 2022-03-30T10:56:56 | 2022-04-01T12:54:05 | 2022-04-01T12:48:51 | CONTRIBUTOR | null | Deprecate canonical Multilingual Librispeech in favor of [the community one](https://huggingface.co/datasets/facebook/multilingual_librispeech) which supports streaming.
However, there is a problem regarding the new ASR template schema: since it has changed, I guess all community datasets that use this template no longer work with the new version of the library, including MLS. Should we somehow notify users about that, or is it possible to change this line ourselves? For MLS specifically, I cannot change the code directly as I'm not a member of the Facebook org. A sketch of the updated template line is shown below.
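For reference, a minimal sketch of the updated task-template line under the new schema (argument names taken from the discussion in this thread; treat the exact signature as an assumption):
```python
from datasets.tasks import AutomaticSpeechRecognition

# Assumed new-schema argument names: `audio_column` / `transcription_column`.
task_templates = [
    AutomaticSpeechRecognition(audio_column="audio", transcription_column="text")
]
```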
Hm, and the code should be changed after the release, no? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4060/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4060",
"html_url": "https://github.com/huggingface/datasets/pull/4060",
"diff_url": "https://github.com/huggingface/datasets/pull/4060.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4060.patch",
"merged_at": "2022-04-01T12:48:51"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4059/comments | https://api.github.com/repos/huggingface/datasets/issues/4059/events | https://github.com/huggingface/datasets/pull/4059 | 1,186,149,949 | PR_kwDODunzps41TC-o | 4,059 | Load GitHub datasets from Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Currently the github datasets versioning is synced with the `datasets` lib versioning: when you load a github dataset using `datasets==x.y.z`, then the version of the dataset will be the one at the git tag `x.y.z`. This is for reproducibility reasons.\r\n\r\nWe could stop having this behavior and always use the latest version of the dataset, but when we do a breaking change it will break github datasets for previous versions of the library. It could be nice to think about tools that will allow backward compatibility if we ever need to to a breaking change in some datasets. Maybe a way to specify which revision of the dataset to use based on the `datasets` major version.\r\n\r\nIf we keep this behavior, then maybe add a note in setup.py to push to PyPI only after the `Update Hub repositories` CI job is done. It can take a few minutes to add the version tag to all the dataset repositories on the Hub. If we push to PyPI before the tags are pushed, then some users might get some 404 if at the same time they installed `datasets` and run `load_dataset`.",
"@lhoestq I was going to increase the `max_retries` as done for metrics:\r\n- #4063 \r\n\r\nBut then I realized that loading from the Hub would work as well. That is why I opened this PR.\r\n\r\nDefinitely, we should decide which behavior we want:\r\n- We have been working in the direction of eliminating the distinctions between canonical/community datasets\r\n- If we continue to go in that direction, then passing (or not passing) `revision` should have the same behavior for canonical/community\r\n- If we want to continue to tight the library version with the canonical datasets version, that is definitely a difference between canonical and community datasets\r\n\r\nNot sure what could be better in the long term...",
"> We could stop having this behavior and always use the latest version of the dataset, but when we do a breaking change it will break github datasets for previous versions of the library. \r\n\r\nNot sure of understanding this. Previous versions of the `datasets` library will continue to download GitHub datasets from GitHub, syncing library/dataset versions... Where is the problem?",
"Yes you're right, previous versions of `datasets` will still continue to download from github, but not future versions.\r\nIf we release `datasets` 2.1 by removing this behavior and if one day we release `datasets` 3.0 with a breaking change in the dataset scripts, then all version >=2.1 will break.",
"Ideally we should drop the differences between github datasets and community datasets, and maybe provide a way to fallback on an older version of a dataset repository if the user's `datasets` version is too old and incompatible with it.",
"I just noticed I literally opened the same PR lol\r\n\r\nI'm still convinced that we should do a better version compatibility check but we can see that later IMO",
"Normally in open source projects, when there is a duplicate PR, the latter is tagged as \"duplicate\" and closed. :stuck_out_tongue_winking_eye: \r\n\r\nLet me make things clear in my mind: so you say that the blocking point that was preventing this PR from merging, now is no longer a blocking point and could be addresses in a subsequent PR?",
"Let me close the duplicate one, sorry\r\n\r\n> Let me make things clear my mind: so you say that the blocking point that was preventing this PR from merging now is no longer a blocking point and could be addresses in a subsequent PR?\r\n\r\nYes 🙈",
"> Note that after this PR, all the changes made to a dataset will affect all the datasets version from now on\r\n\r\nYes, we have aligned this behavior with Hub datasets, as this is already the case for Hub datasets."
] | 2022-03-30T09:21:56 | 2022-09-16T12:43:26 | 2022-09-16T12:40:43 | MEMBER | null | We have repeatedly hit connection errors when requesting GitHub because the site is sometimes unavailable.
This PR requests the Hub instead, once all GitHub datasets are mirrored on the Hub.
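For context, a minimal sketch of how a specific dataset revision can still be pinned after this change, using the existing `revision` parameter of `load_dataset` (the tag value below is illustrative only):
```python
from datasets import load_dataset

# The script now resolves from the Hub mirror; a git tag can still be pinned
# explicitly via `revision` if an older script version is needed:
ds = load_dataset("glue", "sst2", revision="2.0.0")  # tag value is illustrative
```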
Fix #2048
Related to:
- #4051
- #3210
- #2787
- #2075
- #2036 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4059/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4059",
"html_url": "https://github.com/huggingface/datasets/pull/4059",
"diff_url": "https://github.com/huggingface/datasets/pull/4059.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4059.patch",
"merged_at": "2022-09-16T12:40:43"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4058/comments | https://api.github.com/repos/huggingface/datasets/issues/4058/events | https://github.com/huggingface/datasets/pull/4058 | 1,185,611,600 | PR_kwDODunzps41RPhl | 4,058 | Updated annotations for nli_tr dataset | {
"login": "e-budur",
"id": 2246791,
"node_id": "MDQ6VXNlcjIyNDY3OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2246791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-budur",
"html_url": "https://github.com/e-budur",
"followers_url": "https://api.github.com/users/e-budur/followers",
"following_url": "https://api.github.com/users/e-budur/following{/other_user}",
"gists_url": "https://api.github.com/users/e-budur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-budur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-budur/subscriptions",
"organizations_url": "https://api.github.com/users/e-budur/orgs",
"repos_url": "https://api.github.com/users/e-budur/repos",
"events_url": "https://api.github.com/users/e-budur/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-budur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you so much @[lhoestq](https://github.com/lhoestq) for the time you take to your review the PR!"
] | 2022-03-29T23:46:59 | 2022-04-12T20:55:12 | 2022-04-12T10:37:22 | CONTRIBUTOR | null | This PR adds annotation tags for the `nli_tr` dataset so that it becomes searchable with respect to the relevant query parameters.
The annotations in this PR are based on the existing annotations of the `snli` and `multi_nli` datasets, as `nli_tr` is a machine-generated extension of those datasets.
This PR only updates the annotation labels; a follow-up PR will fill in the missing sections of the `README.md`.
Thanks for taking the time to review it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4058/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4058",
"html_url": "https://github.com/huggingface/datasets/pull/4058",
"diff_url": "https://github.com/huggingface/datasets/pull/4058.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4058.patch",
"merged_at": "2022-04-12T10:37:22"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4057/comments | https://api.github.com/repos/huggingface/datasets/issues/4057/events | https://github.com/huggingface/datasets/issues/4057 | 1,185,442,001 | I_kwDODunzps5GqGjR | 4,057 | `load_dataset` consumes too much memory for audio + tar archives | {
"login": "JFCeron",
"id": 50839826,
"node_id": "MDQ6VXNlcjUwODM5ODI2",
"avatar_url": "https://avatars.githubusercontent.com/u/50839826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JFCeron",
"html_url": "https://github.com/JFCeron",
"followers_url": "https://api.github.com/users/JFCeron/followers",
"following_url": "https://api.github.com/users/JFCeron/following{/other_user}",
"gists_url": "https://api.github.com/users/JFCeron/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JFCeron/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JFCeron/subscriptions",
"organizations_url": "https://api.github.com/users/JFCeron/orgs",
"repos_url": "https://api.github.com/users/JFCeron/repos",
"events_url": "https://api.github.com/users/JFCeron/events{/privacy}",
"received_events_url": "https://api.github.com/users/JFCeron/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! Could it be because you need to free the memory used by `tarfile` by emptying the tar `members` by any chance ?\r\n```python\r\n yield key, {\"audio\": {\"path\": audio_name, \"bytes\": audio_file_obj.read()}}\r\n audio_tarfile.members = [] # free memory\r\n key += 1\r\n```\r\n\r\nand then you can set `DEFAULT_WRITER_BATCH_SIZE` to whatever value makes more sense for your dataset.\r\n\r\nLet me know if the issue persists (which could happen, given that you managed to run your generator without RAM issues and using os.walk didn't solve the issue)",
"Thanks for your reply! Tried it but the issue persists. ",
"I also run out of memory when loading `mozilla-foundation/common_voice_8_0` that also uses `tarfile` via `dl_manager.iter_archive`. There seems to be some data files that stay in memory somewhere\r\n\r\nI don't have the issue with other compression formats like gzipped files",
"I'm facing a similar memory leak issue when loading cv8. As you said @lhoestq \r\n\r\n`load_dataset(\"mozilla-foundation/common_voice_8_0\", \"en\", use_auth_token=True, writer_batch_size=1)`\r\n\r\nThis issue is happening on a 32GB RAM machine. \r\n\r\nAny updates on how to fix this?",
"I've run a memory profiler to see where's the leak comes from:\r\n\r\n![image](https://user-images.githubusercontent.com/5097052/165101712-e7060ae5-77b2-4f6a-92bd-2996dbd60b36.png)\r\n\r\n... it seems that it's related to the tarfile lib buffer reader. But I don't know why it's only happening on the huggingface script",
"I have the same problem when loading video into numpy. \r\n```\r\nyield id,{ \r\n \"video\": imageio.v3.imread(video_path),\r\n \"label\": int(label)\r\n}\r\n```\r\nSince video files are heavy, it can only processes a dozen samples before OOM.",
"For video datasets I think you can just define the max number of video that can stay in memory by adding this class attribute to your dataset builer:\r\n```py\r\nDEFAULT_WRITER_BATCH_SIZE = 8 # only 8 videos at a time in memory before flushing the dataset writer\r\n```",
"same thing happens for me with `load_dataset(\"mozilla-foundation/common_voice_8_0\", \"en\", use_auth_token=True, writer_batch_size=1)` on azure ml. seems to fill up `tmp` and not release that memory until OOM",
"I'll add that I'm encountering the same issue with\r\n`load_dataset('wikipedia', 'ceb', runner='DirectRunner', split='train')`.\r\nSame for `'es'` in place of `'ceb'`.",
"> I'll add that I'm encountering the same issue with\r\n> load_dataset('wikipedia', 'ceb', runner='DirectRunner', split='train').\r\n> Same for 'es' in place of 'ceb'.\r\n\r\nThis is because the Apache Beam `DirectRunner` runs with the full data in memory unfortunately. Optimizing the `DirectRunner` is not in the scope of the `datasets` library, but rather in the Apache Beam project I believe. If you have memory issues with the `DirectRunner`, please consider switching to a machine with more RAM, or to distributed processing runtimes like Spark, Flink or DataFlow. There is a bit of documentation here: https://huggingface.co/docs/datasets/beam",
"> > I'll add that I'm encountering the same issue with\r\n> > `load_dataset('wikipedia', 'ceb', runner='DirectRunner', split='train')`.\r\n> > Same for `'es'` in place of `'ceb'`.\r\n> \r\n> This is because the Apache Beam `DirectRunner` runs with the full data in memory unfortunately. Optimizing the `DirectRunner` is not in the scope of the `datasets` library, but rather in the Apache Beam project I believe. If you have memory issues with the `DirectRunner`, please consider switching to a machine with more RAM, or to distributed processing runtimes like Spark, Flink or DataFlow. There is a bit of documentation here: https://huggingface.co/docs/datasets/beam\r\n\r\nFair enough, but this line of code crashed an AWS instance with 1024GB of RAM! I have also tried with `Runner='Flink'` on an environment with 51GB of RAM, which also failed.\r\n\r\nApache Beam has tons of open tickets already - is it worth submitting one to them over this?",
"> Fair enough, but this line of code crashed an AWS instance with 1024GB of RAM!\r\n\r\nWhat, wikipedia is not even bigger than 20GB\r\n\r\ncc @albertvillanova",
"> > Fair enough, but this line of code crashed an AWS instance with 1024GB of RAM!\r\n> \r\n> What, wikipedia is not even bigger than 20GB\r\n> \r\n> cc @albertvillanova\r\n\r\nLuckily, on Colab you can watch the call stack at the bottom of the screen - much of the time and space complexity seems to come from `_parse_and_clean_wikicode()` rather than the actual download process. As far as I can tell, the script is loading the full dataset and then cleaning it all at once, which is consuming a lot of memory.",
"I think we are mixing many different bugs in this Issue page:\r\n- TAR archive with audio files\r\n- video file\r\n- distributed parsing of Wikipedia using Apache Beam\r\n\r\n@dan-the-meme-man may I ask you to open a separate Issue for your problem? Then I will address it. It is important to fix it because we are currently working on a Datasets enhancement to be able to provide all Wikipedias already preprocessed.\r\n\r\nOn the other hand, I think we could keep this Issue page for the original problem: TAR archive with audio files. That is not fixed yet either.",
"Is there an update on the TAR archive issue with audio files? Happy to lend a hand in fixing this :)",
"I found the issue with Common Voice 8 and opened a PR to fix it: https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/discussions/2\r\n\r\nBasically the `metadata` dict that contains the transcripts per audio file was continuously getting filled with bytes from `f.read()` because of this code:\r\n```python\r\nresult = metadata[path]\r\nresult[\"audio\"] = {\"path\": path, \"bytes\": f.read()}\r\n```\r\ncopying the result with `result = dict(metadata[path])` fixes it: the bytes are no longer added to `metadata`\r\n\r\nI also opened PRs to the other CV datasets",
"Amazing, that's a great find! Thanks @lhoestq!",
"I'm closing this one for now, but feel free to reopen if you encounter other memory issues with audio datasets"
] | 2022-03-29T21:38:55 | 2022-08-16T10:22:55 | 2022-08-16T10:22:55 | NONE | null |
## Description
`load_dataset` consumes more and more memory until it is killed, even though it is built around a generator. I'm adding a loading script for a new dataset, made up of ~15 s audio clips coming from a tar file. I tried setting `DEFAULT_WRITER_BATCH_SIZE = 1`, as per the discussion in #741, but the problem persists.
## Steps to reproduce the bug
Here's my implementation of `_generate_examples`:
```python
class MyDatasetBuilder(datasets.GeneratorBasedBuilder):
DEFAULT_WRITER_BATCH_SIZE = 1
...
def _split_generators(self, dl_manager):
archive_path = dl_manager.download(_DL_URLS[self.config.name])
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"audio_tarfile_path": archive_path["audio_tarfile"]
},
),
]
def _generate_examples(self, audio_tarfile_path):
key = 0
with tarfile.open(audio_tarfile_path, mode="r|") as audio_tarfile:
for audio_tarinfo in audio_tarfile:
audio_name = audio_tarinfo.name
audio_file_obj = audio_tarfile.extractfile(audio_tarinfo)
yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}}
key += 1
```
I then try to load via `ds = load_dataset('./datasets/my_new_dataset', writer_batch_size=1)`, and memory usage grows until all 8 GB of my machine are consumed and the process is killed (`Killed`). I also tried an untarred version of this using `os.walk`, but the same thing happened.
I created a standalone script to confirm that one can safely iterate through such a generator; it runs just fine with memory below 500 MB at all times.
```python
import tarfile
def generate_examples():
audio_tarfile = tarfile.open("audios.tar", mode="r|")
key = 0
for audio_tarinfo in audio_tarfile:
audio_name = audio_tarinfo.name
audio_file_obj = audio_tarfile.extractfile(audio_tarinfo)
yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}}
key += 1
if __name__ == "__main__":
examples = generate_examples()
for example in examples:
pass
```
## Expected results
Memory consumption should be similar to the non-huggingface script.
## Actual results
Process is killed after consuming too much memory.
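For reference, a minimal sketch of the kind of fix that resolved the Common Voice variant of this leak, as discussed in the comments above: copy the per-file metadata entry before attaching the audio bytes, so the long-lived metadata mapping never retains them. All names below are assumed for illustration:
```python
def generate_examples(metadata, audio_files):
    # `metadata` maps audio path -> transcript fields and outlives the loop,
    # so never attach the raw bytes to its entries directly.
    for path, f in audio_files:
        result = dict(metadata[path])  # copy; mutating metadata[path] would leak bytes
        result["audio"] = {"path": path, "bytes": f.read()}
        yield path, result
```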
## Environment info
- `datasets` version: 2.0.1.dev0
- Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-debian-10.12
- Python version: 3.7.12
- PyArrow version: 6.0.1
- Pandas version: 1.3.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4057/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4057/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4056/comments | https://api.github.com/repos/huggingface/datasets/issues/4056/events | https://github.com/huggingface/datasets/issues/4056 | 1,185,155,775 | I_kwDODunzps5GpAq_ | 4,056 | Unexpected behavior of _TempDirWithCustomCleanup | {
"login": "JonasGeiping",
"id": 22680696,
"node_id": "MDQ6VXNlcjIyNjgwNjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/22680696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JonasGeiping",
"html_url": "https://github.com/JonasGeiping",
"followers_url": "https://api.github.com/users/JonasGeiping/followers",
"following_url": "https://api.github.com/users/JonasGeiping/following{/other_user}",
"gists_url": "https://api.github.com/users/JonasGeiping/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JonasGeiping/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JonasGeiping/subscriptions",
"organizations_url": "https://api.github.com/users/JonasGeiping/orgs",
"repos_url": "https://api.github.com/users/JonasGeiping/repos",
"events_url": "https://api.github.com/users/JonasGeiping/events{/privacy}",
"received_events_url": "https://api.github.com/users/JonasGeiping/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! Would setting TMPDIR at the beginning of your python script/session work ? I mean, even before importing transformers, datasets, etc. and using them ? I think this would be the most robust solution given any library that uses `tempfile`. I don't think we aim to support environment variables to be changed at run time",
"Hi, yeah setting the environment variable before the imports / as environment variable outside is another way to fix this. I am just arguing that `datasets` already uses its own global variable to track temporary files: `_TEMP_DIR_FOR_TEMP_CACHE_FILES`, and the creation of this global variable should respect TMPDIR instead of relying on tempfile to do so."
] | 2022-03-29T16:58:22 | 2022-03-30T15:08:04 | null | NONE | null | ## Describe the bug
This is not 100% a bug in `datasets`, but behavior that surprised me, and I think this could be made more robust on the `datasets` side.
When using `datasets.disable_caching()`, cache files are written to a temporary directory. This directory should be based on the environment variable TMPDIR. I want to set TMPDIR at runtime using `os.environ["TMPDIR"] = something`, but depending on other imported modules this can fail to take effect.
## Steps to reproduce the bug
`_TempDirWithCustomCleanup` relies on `tempfile` to generate a path to a temporary directory. However, `tempfile` generates (and caches) that path only once. This is a problem when trying to set TMPDIR at runtime, because other code may import `tempfile` first and trigger the computation prematurely.
For example (after much trial and error), I found that a different part of the code base I work with defines a class `PatchedDataCollatorForLanguageModeling(transformers.DataCollatorForLanguageModeling)` based on a `transformers` class. This import alone is enough to trigger `tempfile` to generate a temporary path, leading to the wrong path being cached in `tempfile.tempdir`.
## Suggestion:
I could also file this as a bug with `transformers`, but I think fixing it on the `datasets` side would be much more robust:
`datasets` could recompute the temporary path once (technically possible via `tempfile._get_default_tempdir`, or by resetting
the global variable `tempfile.tempdir` to None) before setting its own global `_TEMP_DIR_FOR_TEMP_CACHE_FILES`.
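Below is a minimal sketch of what that recomputation looks like from the caller's side. It relies on the documented fact that `tempfile.gettempdir()` caches its result in the module-level `tempfile.tempdir`; the override path is a hypothetical example:
```python
import os
import tempfile

os.environ["TMPDIR"] = "/path/to/custom/tmp"  # hypothetical runtime override

# Resetting the module-level cache forces gettempdir() to re-read TMPDIR
# on the next lookup:
tempfile.tempdir = None
print(tempfile.gettempdir())  # now resolves under the new TMPDIR
```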
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4056/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4055/comments | https://api.github.com/repos/huggingface/datasets/issues/4055/events | https://github.com/huggingface/datasets/pull/4055 | 1,184,976,292 | PR_kwDODunzps41PGF1 | 4,055 | [DO NOT MERGE] Test doc-builder | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Docs built successfully, so closing this."
] | 2022-03-29T14:39:02 | 2022-03-30T12:31:14 | 2022-03-30T12:25:52 | MEMBER | null | This is a test PR to ensure the changes in https://github.com/huggingface/doc-builder/pull/164 don't break anything in `datasets` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4055/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4055",
"html_url": "https://github.com/huggingface/datasets/pull/4055",
"diff_url": "https://github.com/huggingface/datasets/pull/4055.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4055.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4054/comments | https://api.github.com/repos/huggingface/datasets/issues/4054/events | https://github.com/huggingface/datasets/pull/4054 | 1,184,575,368 | PR_kwDODunzps41Nwjz | 4,054 | Support float data types in pearsonr/spearmanr metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-29T09:29:10 | 2022-03-29T14:07:59 | 2022-03-29T14:02:20 | MEMBER | null | Fix #4053. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4054/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4054",
"html_url": "https://github.com/huggingface/datasets/pull/4054",
"diff_url": "https://github.com/huggingface/datasets/pull/4054.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4054.patch",
"merged_at": "2022-03-29T14:02:20"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4053/comments | https://api.github.com/repos/huggingface/datasets/issues/4053/events | https://github.com/huggingface/datasets/issues/4053 | 1,184,500,378 | I_kwDODunzps5Gmgqa | 4,053 | Modify datatype from `int32` to `float` for pearsonr, spearmanr. | {
"login": "woodywarhol9",
"id": 86637320,
"node_id": "MDQ6VXNlcjg2NjM3MzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/86637320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/woodywarhol9",
"html_url": "https://github.com/woodywarhol9",
"followers_url": "https://api.github.com/users/woodywarhol9/followers",
"following_url": "https://api.github.com/users/woodywarhol9/following{/other_user}",
"gists_url": "https://api.github.com/users/woodywarhol9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/woodywarhol9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/woodywarhol9/subscriptions",
"organizations_url": "https://api.github.com/users/woodywarhol9/orgs",
"repos_url": "https://api.github.com/users/woodywarhol9/repos",
"events_url": "https://api.github.com/users/woodywarhol9/events{/privacy}",
"received_events_url": "https://api.github.com/users/woodywarhol9/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@Woodywarhol9 good catch, thanks for reporting.\r\n\r\nWe are fixing this."
] | 2022-03-29T08:27:41 | 2022-03-29T14:02:20 | 2022-03-29T14:02:20 | NONE | null | **Is your feature request related to a problem? Please describe.**
- Currently, [Pearsonr](https://github.com/huggingface/datasets/blob/master/metrics/pearsonr/pearsonr.py) and [Spearmanr](https://github.com/huggingface/datasets/blob/master/metrics/spearmanr/spearmanr.py) both take their input data as `int32`.
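For reference, a sketch of the current feature declaration and, anticipating the solution below, the proposed one (the exact shape is assumed from the linked metric scripts):
```python
import datasets

# Assumed current spec (per the linked scripts): int32 truncates float labels.
current = datasets.Features(
    {"predictions": datasets.Value("int32"), "references": datasets.Value("int32")}
)

# Proposed spec for STS-style float-valued similarity labels:
proposed = datasets.Features(
    {"predictions": datasets.Value("float32"), "references": datasets.Value("float32")}
)
```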
**Describe the solution you'd like**
- Considering that these metrics are widely used for the STS task (whose labels have a `float` data type),
it would be better to change the data type from `int32` to `float` in order to obtain exact similarity values. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4053/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4052/comments | https://api.github.com/repos/huggingface/datasets/issues/4052/events | https://github.com/huggingface/datasets/issues/4052 | 1,184,447,977 | I_kwDODunzps5GmT3p | 4,052 | metric = metric_cls( TypeError: 'NoneType' object is not callable | {
"login": "klyuhang9",
"id": 39409233,
"node_id": "MDQ6VXNlcjM5NDA5MjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/39409233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klyuhang9",
"html_url": "https://github.com/klyuhang9",
"followers_url": "https://api.github.com/users/klyuhang9/followers",
"following_url": "https://api.github.com/users/klyuhang9/following{/other_user}",
"gists_url": "https://api.github.com/users/klyuhang9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klyuhang9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klyuhang9/subscriptions",
"organizations_url": "https://api.github.com/users/klyuhang9/orgs",
"repos_url": "https://api.github.com/users/klyuhang9/repos",
"events_url": "https://api.github.com/users/klyuhang9/events{/privacy}",
"received_events_url": "https://api.github.com/users/klyuhang9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @klyuhang9,\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [2]: metric = load_metric('glue', 'rte')\r\nDownloading builder script: 5.76kB [00:00, 2.40MB/s]\r\n```\r\n\r\nCould you please, retry to load the metric? Sometimes there are temporary connectivity issues.\r\n\r\nFeel free to re-open this issue of the problem persists."
] | 2022-03-29T07:43:08 | 2022-03-29T14:06:01 | 2022-03-29T14:06:01 | NONE | null | Hi, friend. I've run into a problem.
When I run the code:
`metric = load_metric('glue', 'rte')`
The following error is raised:
`metric = metric_cls(`
`TypeError: 'NoneType' object is not callable`
I don't know why. Thanks for your help!
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4052/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4051/comments | https://api.github.com/repos/huggingface/datasets/issues/4051/events | https://github.com/huggingface/datasets/issues/4051 | 1,184,400,179 | I_kwDODunzps5GmIMz | 4,051 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py | {
"login": "klyuhang9",
"id": 39409233,
"node_id": "MDQ6VXNlcjM5NDA5MjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/39409233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klyuhang9",
"html_url": "https://github.com/klyuhang9",
"followers_url": "https://api.github.com/users/klyuhang9/followers",
"following_url": "https://api.github.com/users/klyuhang9/following{/other_user}",
"gists_url": "https://api.github.com/users/klyuhang9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klyuhang9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klyuhang9/subscriptions",
"organizations_url": "https://api.github.com/users/klyuhang9/orgs",
"repos_url": "https://api.github.com/users/klyuhang9/repos",
"events_url": "https://api.github.com/users/klyuhang9/events{/privacy}",
"received_events_url": "https://api.github.com/users/klyuhang9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @klyuhang9,\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [4]: ds = load_dataset(\"glue\", \"sst2\", download_mode=\"force_redownload\")\r\nDownloading builder script: 28.8kB [00:00, 9.15MB/s] \r\nDownloading metadata: 28.7kB [00:00, 10.7MB/s] \r\nDownloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.78 MiB, post-processed: Unknown size, total: 11.88 MiB) to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.44M/7.44M [00:01<00:00, 4.12MB/s]\r\nDataset glue downloaded and prepared to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data. \r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1047.96it/s]\r\n\r\nIn [5]: ds\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1821\r\n })\r\n})\r\n```\r\n\r\nPlease, note that sometimes GitHub has some temporary connectivity issues. Feel free to retry and re-open this issue if the problem persists.",
"Maybe it's because we are in China.",
"Are you able to access the URL in your web browser?",
"> Are you able to access the URL in your web browser?\r\n\r\nYes, with or without a VPN, we (people in China) can access the URL. And we can even use wget to download these files. We can download the pretrained language model automatically with the code.\r\nHowever, we CANNOT access glue.py & metric.py automatically. Every time, it will raise ConnectionError, and we have to download datasets manually (SQuAD is extremely hard to preprocess) and replace metric.py with scipy.metrics. If this problem is solved, many Chinese will save a lot of time.",
"> ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py\r\n> \r\n> I don't know why; it is ok when I use\r\n\r\nIf you would query the question `ConnectionError: Couldn't reach` in www.baidu.com (Chinese Google, Google is banned and some people cannot access it), you will find that there are so many questions about accessing `https://raw.githubusercontent.com`. There are some solutions like adding `185.199.108.133 raw.githubusercontent.com` to `C:/windows/systen32/drives/etc/hosts`, but it is time-consuming, hard for green-hand, and invalid sometimes."
] | 2022-03-29T07:00:31 | 2022-05-08T07:27:32 | 2022-03-29T08:29:25 | NONE | null | Hi, I've run into a problem.
When I run the code:
`dataset = load_dataset('glue','sst2')`
The following error is raised:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
I don't know why; the URL opens fine when I view it in Google Chrome.
Thanks for your help! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4051/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4050/comments | https://api.github.com/repos/huggingface/datasets/issues/4050/events | https://github.com/huggingface/datasets/pull/4050 | 1,184,346,501 | PR_kwDODunzps41NAMF | 4,050 | Add RVL-CDIP dataset | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot for inputs. I'll use the URL suggested and check.\r\n\r\n> we need to implement the streamable (can't use os.path.join) and the non-streamable versions of _generate_examples.\r\n\r\nSure. I will check the reference and try this out, will get back to you if I face any issues.\r\n\r\n> The labels-only data file URL doesn't work for me, so feel free to ask the authors whether they are OK with us hosting the file on the Hub/S3 (to speed up the streamable version)\r\n\r\nJust checked. The author (Adam Harley) has responded positively and allowed us to host the file. Do I share the file with you for hosting it on Hub/S3 ?",
"> Just checked. The author (Adam Harley) has responded positively and allowed us to host the file. Do I share the file with you for hosting it on Hub/S3 ?\r\n\r\nYes, feel free to e-mail me the file. Then I'll create a repo under my namespace and push the file there. We run a GH action on a GH dataset after merging to create its repo on the Hub, so after this PR is merged, I'll push the file to the \"official\" namespace and update the download link.",
"> You can use this URL to avoid manual download: https://drive.google.com/uc?export=download&id=0Bz1dfcnrpXM-MUt4cHNzUEFXcmc\r\n\r\nFor some reason, the direct download doesn't seem to work for me even with this URL. \r\n```\r\nDownloading and preparing dataset rvl_cdip/default to ~/.cache/huggingface/datasets/rvl_cdip/default/1.0.0/ea152149e06310d60a9ef3c3020199dd4780bb952a773ba5aac6b57d59f12628...\r\nDownloading data files: 100%|█████████████████████████████████████████████████████| 1/1 [00:00<00:00, 6307.22it/s]\r\n{'rvl-cdip': '~/.cache/huggingface/datasets/downloads/07ef956a33750078d570d76fefe9fed49f7dc32ecf6e872d690de11e66bbe869'}\r\n```\r\nAnd this directory does not exist. Am I doing something wrong ?\r\nTo verify, I tried using [gdown](https://github.com/wkentaro/gdown) for the above URL, we get the following : \r\n```\r\nAccess denied with the following error:\r\n\r\n Cannot retrieve the public link of the file. You may need to change\r\n the permission to 'Anyone with the link', or have had many accesses. \r\n\r\nYou may still be able to access the file from the browser:\r\n```\r\n----\r\n\r\n> Yes, feel free to e-mail me the file. Then I'll create a repo under my namespace and push the file there. We run a GH action on a GH dataset after merging to create its repo on the Hub, so after this PR is merged, I'll push the file to the \"official\" namespace and update the download link.\r\n\r\nGot it. I've sent you an email with the file. Thank you.",
"Actually this URL works for direct download :\r\n`https://drive.google.com/uc?export=download&confirm=pbef&id=0Bz1dfcnrpXM-MUt4cHNzUEFXcmc`\r\nRef : https://github.com/wkentaro/gdown/issues/146#issuecomment-1042382215\r\n\r\nI'm working on the streamable versions of _generate_examples as well, will update you regarding this.",
"Google Drive is a tricky host, and it's easy to exceed daily download quota limits, so if we are allowed to host the `rvl-cdip.tar.gz` file, I can push it to the Hub.",
"Just checked, the authors have agreed. He mentioned that he had complaints about the GDrive link.\r\nYou can push it to the Hub and share the link. :)",
"I have added :\r\n- streaming support for rvl-cdip.tar.gz file. [ Need to test this ]\r\n\r\nIs it possible for you to upload the train.txt, test.txt, val.txt files separately to the Hub instead of labels_only.tar.gz file.\r\nCurrently during the tests in stream mode, we get : \r\n`NotImplementedError: Extraction protocol for TAR archives like 'https://huggingface.co/datasets/mariosasko/rvl_cdip/resolve/main/labels_only.tar.gz' is not implemented in streaming mode. Please use dl_manager.iter_archive instead.`\r\nIf the label files are present as .txt files then we can directly use dl_manager.download.\r\n\r\n\r\n",
"The rvl-cdip.tar.gz archive and txt files with the labels are on the Hub!",
"- Added 🤗 Hub download links.\r\n- streamable and non-streamable versions of _generate_examples.\r\n- Updated dummy data, both real and dummy dataset tests have passed.\r\n\r\n",
"I've removed the extraction of the archive file locally as suggested. Let me know if any other changes are required. :)",
"The check for **Update Hub repositories / update-hub-repositories** has failed.\r\n\r\n> https://github.com/huggingface/datasets/runs/6116502392?check_suite_focus=true\r\n\r\n",
"Hi ! Thanks for reporting ;) yes this CI job has been failing for a few days. I'm working on fixing it, and I'm manually running it on my side in the meantime",
"Great. :D Thank you @lhoestq "
] | 2022-03-29T06:00:02 | 2022-04-22T09:55:07 | 2022-04-21T17:15:41 | CONTRIBUTOR | null | Resolves #2762
Dataset Request: Add RVL-CDIP dataset [#2762](https://github.com/huggingface/datasets/issues/2762)
This PR adds the RVL-CDIP dataset.
The dataset is distributed via a Google Drive link and wasn't getting downloaded automatically, so I have provided manual_download_instructions.
- I have added the dummy_data.zip as well.
I need input on how to run the real-data and dummy-data tests for datasets that require manual download.
Inputs and suggestions for improvement are welcome. Thank you. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4050/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4050",
"html_url": "https://github.com/huggingface/datasets/pull/4050",
"diff_url": "https://github.com/huggingface/datasets/pull/4050.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4050.patch",
"merged_at": "2022-04-21T17:15:41"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4049/comments | https://api.github.com/repos/huggingface/datasets/issues/4049/events | https://github.com/huggingface/datasets/pull/4049 | 1,183,832,893 | PR_kwDODunzps41LSjv | 4,049 | Create metric card for the Code Eval metric | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"if possible, give relevant names to your Pull requests @sashavor (make it easier to scan the repo activity) Thanks!",
"updating them now! thanks for the feedback @julien-c "
] | 2022-03-28T18:34:23 | 2022-03-29T13:38:12 | 2022-03-29T13:32:50 | NONE | null | Creating initial Code Eval metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4049/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4049",
"html_url": "https://github.com/huggingface/datasets/pull/4049",
"diff_url": "https://github.com/huggingface/datasets/pull/4049.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4049.patch",
"merged_at": "2022-03-29T13:32:50"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4048/comments | https://api.github.com/repos/huggingface/datasets/issues/4048/events | https://github.com/huggingface/datasets/issues/4048 | 1,183,804,576 | I_kwDODunzps5Gj2yg | 4,048 | Split size error on `amazon_us_reviews` / `PC_v1_00` dataset | {
"login": "trentonstrong",
"id": 191985,
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trentonstrong",
"html_url": "https://github.com/trentonstrong",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | {
"login": "trentonstrong",
"id": 191985,
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trentonstrong",
"html_url": "https://github.com/trentonstrong",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "trentonstrong",
"id": 191985,
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trentonstrong",
"html_url": "https://github.com/trentonstrong",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Follow-up: I have confirmed there are no duplicate lines via `sort amazon_reviews_us_PC_v1_00.tsv | uniq -cd` after extracting the raw file.",
"Hi @trentonstrong, thanks for reporting!\r\n\r\nI confirm that loading this dataset configuration throws a `NonMatchingSplitsSizesError`:\r\n```\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=350242049, num_examples=785730, dataset_name='amazon_us_reviews'), 'recorded': SplitInfo(name='train', num_bytes=3982712078, num_examples=6908554, dataset_name='amazon_us_reviews')}]\r\n```\r\n\r\nAlso thank you for your offer to fix this. You can find information about how to update the metadata JSON file here: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#automatically-add-code-metadata\r\n```shell\r\ndatasets-cli test datasets/amazon_us_reviews --save_infos --all_configs\r\n```\r\nPlease, feel free to open a PR with this fix. And do not hesitate to ping me if you need any help.",
"No sweat. Will get it patched up ASAP."
] | 2022-03-28T18:12:04 | 2022-04-08T12:29:30 | 2022-04-08T12:29:30 | CONTRIBUTOR | null | ## Describe the bug
When downloading this subset as of 3-28-2022 you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly ~6m rows while the split expects <1m.
Upon digging a little deeper, I downloaded the raw files from `https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_PC_v1_00.tsv.gz` and extracted them. A line count via `wc -l` confirms the ~6m number that we see and the data looks valid at a glance (I did not check for duplicate rows). My guess is this file has either been updated in place or there is a bug in the dataset metadata.
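For anyone who prefers to double-check without shell tools, the same line count can be reproduced in Python (the local filename below is just whatever the download was saved as):
```python
import gzip

# hypothetical local path to the downloaded raw file
with gzip.open("amazon_reviews_us_PC_v1_00.tsv.gz", "rt") as f:
    num_lines = sum(1 for _ in f)

print(num_lines)  # ~6.9M lines (including the header), vs the ~786k examples in the split metadata
```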
Happy to submit a PR and fix this up if it turns out to be a metadata issue, but wanted to get some other :eyes: on it first.
## Steps to reproduce the bug
```python
load_dataset('amazon_us_reviews', 'PC_v1_00')
```
## Expected results
Dataset is downloaded and extracted successfully.
## Actual results
A split size exception is thrown.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4048/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4047/comments | https://api.github.com/repos/huggingface/datasets/issues/4047/events | https://github.com/huggingface/datasets/issues/4047 | 1,183,789,237 | I_kwDODunzps5GjzC1 | 4,047 | Dataset.unique(column: str) -> ArrowNotImplementedError | {
"login": "orkenstein",
"id": 1461936,
"node_id": "MDQ6VXNlcjE0NjE5MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1461936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orkenstein",
"html_url": "https://github.com/orkenstein",
"followers_url": "https://api.github.com/users/orkenstein/followers",
"following_url": "https://api.github.com/users/orkenstein/following{/other_user}",
"gists_url": "https://api.github.com/users/orkenstein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orkenstein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orkenstein/subscriptions",
"organizations_url": "https://api.github.com/users/orkenstein/orgs",
"repos_url": "https://api.github.com/users/orkenstein/repos",
"events_url": "https://api.github.com/users/orkenstein/events{/privacy}",
"received_events_url": "https://api.github.com/users/orkenstein/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @orkenstein, thanks for reporting.\r\n\r\nPlease note that for this case, our `datasets` library uses under the hood the Apache Arrow `unique` function: https://arrow.apache.org/docs/python/generated/pyarrow.compute.unique.html#pyarrow.compute.unique\r\n\r\nAnd currently the Apache Arrow `unique` function is only implemented for these input types (see info in their [docs](https://arrow.apache.org/docs/cpp/compute.html#array-wise-vector-functions)): Boolean, Null, Numeric, Temporal, Binary- and String-like.\r\n\r\nHowever, the data types of the `wikiann` dataset are all `list<item: string>` (see its [dataset card](https://huggingface.co/datasets/wikiann#data-fields)), and thus, not yet supported by the Apache Arrow `unique` function.",
"As a workaround solution you can use pandas:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('wikiann', 'en', split='train')\r\ndf = dataset.to_pandas()\r\nunique_df = df[~df.tokens.apply(tuple).duplicated()] # from https://stackoverflow.com/a/46958336/17517845\r\n```\r\n\r\nNote that pandas loads the dataset in memory (this one is small so it's fine).",
"@lhoestq thank you! I will fall back to this method for now"
] | 2022-03-28T17:59:32 | 2022-04-01T18:24:57 | 2022-04-01T18:24:57 | NONE | null | ## Describe the bug
I'm trying to use `unique()` function, but it fails
## Steps to reproduce the bug
1. Get dataset
2. Call `unique`
3. Error
# Sample code to reproduce the bug
```python
!pip show datasets
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
dataset['train'].column_names
dataset['train'].unique(dataset['train'].column_names[0])
```
## Expected results
It would be nice to actually see the unique items
## Actual results
Error:
```python
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
[<ipython-input-10-5e0de07ed42c>](https://s0qyv2vjaji-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220324-060046-RC00_436956229#) in <module>()
6
7 dataset['train'].column_names
----> 8 dataset['train'].unique(dataset['train'].column_names[0])
5 frames
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowNotImplementedError: Function unique has no kernel matching input types (array[list<item: string>])
```
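For reference, a possible workaround sketch (assuming the goal is deduplicating on the `tokens` column) is to serialize the list column into strings first, since `unique` only supports primitive types:
```python
from datasets import load_dataset

dataset = load_dataset('wikiann', 'en', split='train')
# `unique` has no kernel for list columns, so join each token list into one string first
with_strings = dataset.map(lambda example: {"tokens_str": " ".join(example["tokens"])})
unique_token_sequences = with_strings.unique("tokens_str")
```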
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Google Collab
- Python version: 3.7.13
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4047/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4046/comments | https://api.github.com/repos/huggingface/datasets/issues/4046/events | https://github.com/huggingface/datasets/pull/4046 | 1,183,723,360 | PR_kwDODunzps41K6_H | 4,046 | Create metric card for XNLI | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-28T16:57:58 | 2022-03-29T13:32:59 | 2022-03-29T13:27:30 | NONE | null | Proposing a metric card for XNLI | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4046/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4046",
"html_url": "https://github.com/huggingface/datasets/pull/4046",
"diff_url": "https://github.com/huggingface/datasets/pull/4046.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4046.patch",
"merged_at": "2022-03-29T13:27:30"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4045/comments | https://api.github.com/repos/huggingface/datasets/issues/4045/events | https://github.com/huggingface/datasets/pull/4045 | 1,183,661,091 | PR_kwDODunzps41KtfV | 4,045 | Fix CLI dummy data generation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-28T16:09:15 | 2022-03-31T15:04:12 | 2022-03-31T14:59:06 | MEMBER | null | PR:
- #3868
broke the CLI dummy data generation.
Fix #4044. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4045/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4045",
"html_url": "https://github.com/huggingface/datasets/pull/4045",
"diff_url": "https://github.com/huggingface/datasets/pull/4045.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4045.patch",
"merged_at": "2022-03-31T14:59:06"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4044 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4044/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4044/comments | https://api.github.com/repos/huggingface/datasets/issues/4044/events | https://github.com/huggingface/datasets/issues/4044 | 1,183,658,942 | I_kwDODunzps5GjTO- | 4,044 | CLI dummy data generation is broken | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-03-28T16:07:37 | 2022-03-31T14:59:06 | 2022-03-31T14:59:06 | MEMBER | null | ## Describe the bug
We get a TypeError when running CLI dummy data generation:
```shell
datasets-cli dummy_data datasets/<your-dataset-folder> --auto_generate
```
gives:
```
File ".../huggingface/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data
dataset_builder._prepare_split(split_generator)
TypeError: _prepare_split() missing 1 required positional argument: 'check_duplicate_keys'
```
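For what it's worth, the traceback points at the exact call site, so presumably the dummy data command just needs to forward the new argument; a hypothetical one-line fix (whether `False` is the right value to pass is an assumption) would be:
```python
# in _autogenerate_dummy_data, forward the new required argument (value assumed):
dataset_builder._prepare_split(split_generator, check_duplicate_keys=False)
```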
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4044/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4043/comments | https://api.github.com/repos/huggingface/datasets/issues/4043/events | https://github.com/huggingface/datasets/pull/4043 | 1,183,624,475 | PR_kwDODunzps41Kl0b | 4,043 | Create metric card for CUAD | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-28T15:38:58 | 2022-03-29T15:20:56 | 2022-03-29T15:15:19 | NONE | null | Proposing a CUAD metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4043/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4043",
"html_url": "https://github.com/huggingface/datasets/pull/4043",
"diff_url": "https://github.com/huggingface/datasets/pull/4043.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4043.patch",
"merged_at": "2022-03-29T15:15:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4041/comments | https://api.github.com/repos/huggingface/datasets/issues/4041/events | https://github.com/huggingface/datasets/issues/4041 | 1,183,599,461 | I_kwDODunzps5GjEtl | 4,041 | Add support for IIIF in datasets | {
"login": "davanstrien",
"id": 8995957,
"node_id": "MDQ6VXNlcjg5OTU5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davanstrien",
"html_url": "https://github.com/davanstrien",
"followers_url": "https://api.github.com/users/davanstrien/followers",
"following_url": "https://api.github.com/users/davanstrien/following{/other_user}",
"gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions",
"organizations_url": "https://api.github.com/users/davanstrien/orgs",
"repos_url": "https://api.github.com/users/davanstrien/repos",
"events_url": "https://api.github.com/users/davanstrien/events{/privacy}",
"received_events_url": "https://api.github.com/users/davanstrien/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! Thanks for the detailed analysis of adding IIIF support. I like the idea of \"using IIIF through datasets scripts\" due to its ease of use. Another approach that I like is yielding image ids and using the `piffle` library (which offers a bit more flexibility) + `map` to download + cache images. We can handle bad URLs in `map` by returning `None`. Plus, we can add a `Dataset Preprocessing` section with the code that explains this approach to the card of such datasets. WDYT?\r\n\r\n> currently, IIIF is mainly used by cultural heritage organizations (museums, archives etc.) The adoption of IIIF in this sector has been growing but it's possible that adoption won't be extended to other industries which may also be a source of image data for training ML models.\r\n\r\nThis is why (currently) adding a new feature type would be overkill, IMO.\r\n"
] | 2022-03-28T15:19:25 | 2022-04-05T18:20:53 | null | MEMBER | null | This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred.
## What is [IIIF](https://iiif.io/)?
IIIF (International Image Interoperability Framework)
> is a set of open standards for delivering high-quality, attributed digital objects online at scale. It’s also an international community developing and implementing the IIIF APIs. IIIF is backed by a consortium of leading cultural institutions.
The tl;dr is that IIIF provides various specifications for implementing useful functionality for:
- Institutions to make available images for various use cases
- Users to have a consistent way of interacting with/requesting these images
- Developers to have a common standard for building tools that work with IIIF images across all institutions implementing a particular IIIF standard (for example, an image viewer built for the BNF can also work for the Library of Congress if both use IIIF).
Some institutions with various levels of IIIF support include: The British Library, Internet Archive, Library of Congress, Wikidata. There are also many smaller institutions that have IIIF support. An incomplete list can be found here: https://iiif.io/guides/finding_resources/
## IIIF APIs
IIIF consists of a number of APIs which could be integrated with datasets. I think the most obvious candidate for inclusion would be the [Image API](https://iiif.io/api/image/3.0/)
### IIIF Image API
The Image API https://iiif.io/api/image/3.0/ is likely the most suitable first candidate for integration with datasets. The Image API offers a consistent protocol for requesting images via a URL:
```{scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}```
A concrete example of this:
```https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg```
As you can see, the scheme offers a number of options that can be specified in the URL, for example the size. Using the example URL, we get:
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg)
We can request a size of 250 by 250 by changing the size segment from `full` to `250,250`, i.e. switching the URL to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg`
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg)
We can also request the image with max width 250 and max height 250, whilst maintaining the aspect ratio, using `!w,h`, i.e. changing the URL to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg`
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg)
A full overview of the options for size can be found here: https://iiif.io/api/image/3.0/#42-size
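To make the template above concrete, here is a minimal sketch of building Image API URLs in Python (the helper name and its defaults are my own illustration, not part of any IIIF library):
```python
def iiif_image_url(server_base, identifier, region="full", size="full",
                   rotation="0", quality="default", fmt="jpg"):
    # follow the {region}/{size}/{rotation}/{quality}.{format} template above
    return f"{server_base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# the Stanford example above, capped at 250x250 while keeping the aspect ratio
url = iiif_image_url(
    "https://stacks.stanford.edu/image/iiif",
    "hg676jb4964%2F0380_796-44",
    size="!250,250",
)
```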
## Why would/could this be useful for datasets?
There are a few reasons why support for the IIIF Image API could be useful. Broadly the ability to have more control over how an image is returned from a server is useful for many ML workflows:
- images can be requested at the right size, which avoids downloading/streaming large images when the desired size is much smaller
- a sub-region of an image can be selected: this is useful when you already have a bounding box for part of an image and want to use just that region for another task. For example, https://github.com/Living-with-machines/nnanno uses IIIF to request the parts of a newspaper image that have been detected as 'photograph', 'illustration', etc. for downstream use.
- options for quality, rotation, and format can all be encoded in the URL request.
These options become particularly useful when pre-training models on large image datasets, where the cost of downloading a 1600-pixel-wide image when you actually want 240 pixels adds up quickly.
## What could this look like in datasets?
I think there are various ways in which support for IIIF could potentially be included in `datasets`. These suggestions aren't fully fleshed out, but hopefully they give a sense of possible approaches that fit existing `datasets` conventions.
### Use through datasets scripts
Loading images via URL is already supported. There are a few possible 'extras' that could be included when using IIIF. One option is to leverage the IIIF protocol in dataset scripts, i.e. the loading script can expose the IIIF options directly:
```python
ds = load_dataset("iiif_dataset", image_size="250,250", fmt="jpg")
```
This is already possible. The approach to parsing the IIIF URLs would be left to the person creating the dataset script.
### Support through dataset scripts (with some datasets support)
This is similar to the above, but `datasets` would offer some way of declaring that this is an IIIF URL and would then expose the options associated with IIIF images automatically, i.e. if you did something like:
```python
features = {"label": ClassLabel(names=['dog','cat']),
"url": datasets.IIIFURL()}
```
inside your loading script, the `size`, `fmt`, etc. options would automatically be exposed when loading the dataset.
### Other possible integrations
Some other possible ways (in pseudocode) that a user could interact with IIIF URLs:
The ability to cast to an `IIIFImage` feature type:
```
ds.cast_column('url', IIIFImage, download=False)
```
The ability to specify some options associated with IIIF URLs.
```
ds = ds.set_iiif_options(column='url', size="250,250")
```
I think all of these would rely on having an `IIIFImage` feature type - this would be a little bit of a Frankenstein between a `string` and `datasets.Image`. I think most of the actual image behaviour would be exactly the same as `datasets.Image`; the difference would be that the underlying URL could be modified in various ways.
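Until something like `set_iiif_options` exists, it could be approximated today with `map`; a rough sketch that rewrites only the size segment of each URL (assuming every row holds a well-formed Image API URL):
```python
from datasets import Dataset

ds = Dataset.from_dict(
    {"url": ["https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg"]}
)

def resize_iiif_url(example, size="250,250"):
    # the last four path segments are {region}/{size}/{rotation}/{quality}.{format}
    base, region, _old_size, rotation, tail = example["url"].rsplit("/", 4)
    return {"url": f"{base}/{region}/{size}/{rotation}/{tail}"}

ds = ds.map(resize_iiif_url)
```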
## Prerequisite requirements
There are a few prerequisites that I can anticipate. This doesn't cover a full implementation of IIIF support, which would have different requirements depending on the approach taken. Some of these features would be useful independently of adding IIIF support:
### Support for handling failed images loaded via a URL (or a specific IIIFImage feature)
Working with images via web requests will inevitably produce the odd failed request. If an image request fails, it would be useful to have `None` returned instead of an error. For example, when using `push_to_hub`, `datasets` will try to include the image but currently fails on bad URLs.
```python
from datasets import Dataset
import datasets
urls = ['https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg']*3
urls.append("badurl.com/image.jpg")
data = {"url":urls}
ds = Dataset.from_dict(data)
ds = ds.cast_column('url', datasets.Image())
ds[3]['url']
```
returns a `FileNotFoundError`. For streaming large image datasets via their URLs, it could be useful to have `None` returned instead. This has implications for the actual training loop, i.e. you now need to somehow skip those examples, so it might not be desirable to support this.
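For illustration, a rough sketch of what the `None`-on-failure behaviour could look like today with `map` and plain `requests` (the column and helper names are my own):
```python
import requests
from datasets import Dataset

urls = ["https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg"] * 3
urls.append("https://badurl.com/image.jpg")  # deliberately broken

def fetch_image_bytes(example):
    # download the image, returning None instead of raising on failure
    try:
        response = requests.get(example["url"], timeout=10)
        response.raise_for_status()
        return {"image_bytes": response.content}
    except requests.RequestException:
        return {"image_bytes": None}

ds = Dataset.from_dict({"url": urls}).map(fetch_image_bytes)
ds = ds.filter(lambda example: example["image_bytes"] is not None)  # skip failed downloads
```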
### Caching support
Since IIIF fetches images via URLs, it would be great to have a way of avoiding requesting the same images multiple times. This is tracked in https://github.com/huggingface/datasets/issues/3142, and I think it would be very desirable here too, particularly as one of the primary use cases of IIIF may be unsupervised pre-training on large datasets of IIIF URLs.
### Support for Parsing IIIF URLs
This gets closer to the actual implementation. Here the requirement would be some way for `datasets` to parse a URL that the user specifies is an IIIF URL. An example of a Python library that does this: https://github.com/Princeton-CDH/piffle. I also have a rough version that uses `dataclasses` which I can share; a sketch of that idea follows.
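Something along these lines (a rough sketch of the `dataclasses` idea, not the `piffle` API):
```python
from dataclasses import dataclass

@dataclass
class IIIFImageURL:
    base: str      # scheme://server/prefix/identifier
    region: str
    size: str
    rotation: str
    quality: str
    fmt: str

    @classmethod
    def parse(cls, url: str) -> "IIIFImageURL":
        # the last four path segments are {region}/{size}/{rotation}/{quality}.{format}
        base, region, size, rotation, last = url.rsplit("/", 4)
        quality, _, fmt = last.partition(".")
        return cls(base, region, size, rotation, quality, fmt)

    def to_url(self) -> str:
        return f"{self.base}/{self.region}/{self.size}/{self.rotation}/{self.quality}.{self.fmt}"

parsed = IIIFImageURL.parse(
    "https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg"
)
parsed.size = "!250,250"
print(parsed.to_url())
```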
## Why it might not be worthwhile/suitable for datasets
There are some reasons that this might not be worth implementing:
- currently, IIIF is mainly used by cultural heritage organizations (museums, archives, etc.). Adoption in this sector has been growing, but it's possible that it won't extend to other industries that may also be sources of image data for training ML models.
- it may end up being better to leave this to the user. It would, for example, be possible to write `map` functions that change an IIIF URL to the correct size, etc. Adding direct support for IIIF in datasets may not be worth the trouble.
- different approaches to image scaling can affect a downstream model's performance, see: https://twitter.com/wightmanr/status/1479528581466243073?s=20. Since different IIIF image servers may implement resizing differently, this could have a downstream impact on model performance. I think this is something that could be flagged to the end user in the documentation. This probably also falls into the general "gotchas" that aren't the `datasets` library's role to protect users from.
Some of the requirements outlined above would be useful for images anyway. These could be implemented prior to a final decision about whether IIIF support could/should be added to datasets.
## Suggested next steps:
I realise this is a long and slightly open-ended issue. I am happy to clarify/answer questions on IIIF and possible integrations. If the prerequisite requirements seem worth exploring/are better explored in their own issues let me know and I can open new issues for those.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4041/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4041/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4039/comments | https://api.github.com/repos/huggingface/datasets/issues/4039/events | https://github.com/huggingface/datasets/pull/4039 | 1,183,468,927 | PR_kwDODunzps41KFIf | 4,039 | Support streaming xcopa dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-28T13:45:55 | 2022-03-28T16:26:48 | 2022-03-28T16:21:46 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4039/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4039",
"html_url": "https://github.com/huggingface/datasets/pull/4039",
"diff_url": "https://github.com/huggingface/datasets/pull/4039.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4039.patch",
"merged_at": "2022-03-28T16:21:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4038/comments | https://api.github.com/repos/huggingface/datasets/issues/4038/events | https://github.com/huggingface/datasets/pull/4038 | 1,183,189,827 | PR_kwDODunzps41JKUG | 4,038 | [DO NOT MERGE] Test doc-builder with skipped installation feature | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Fix in https://github.com/huggingface/doc-builder/pull/162 works as expected (docs build), closing this"
] | 2022-03-28T09:58:31 | 2022-03-28T12:34:05 | 2022-03-28T12:29:09 | MEMBER | null | This PR is just for testing that we can build PR docs with changes made on the [`skip-install-for-real`](https://github.com/huggingface/doc-builder/tree/skip-install-for-real) branch of `doc-builder`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4038/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4038",
"html_url": "https://github.com/huggingface/datasets/pull/4038",
"diff_url": "https://github.com/huggingface/datasets/pull/4038.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4038.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4037/comments | https://api.github.com/repos/huggingface/datasets/issues/4037/events | https://github.com/huggingface/datasets/issues/4037 | 1,183,144,486 | I_kwDODunzps5GhVom | 4,037 | Error while building documentation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"After some investigation, maybe the bug is in `doc-builder`.\r\n\r\nI've opened an issue there:\r\n- huggingface/doc-builder#160",
"Fixed by @lewtun (thank you):\r\n- huggingface/doc-builder@31fe6c8bc7225810e281c2f6c6cd32f38828c504"
] | 2022-03-28T09:22:44 | 2022-03-28T10:01:52 | 2022-03-28T10:00:48 | MEMBER | null | ## Describe the bug
Documentation building is failing:
- https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true
```
ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format.
Unable to find datasets.filesystems.S3FileSystem in datasets. Make sure the path to that object is correct.
```
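For context, here is a minimal sketch of how a documentation tool typically resolves a dotted path such as `datasets.filesystems.S3FileSystem` (an illustration using only the standard library, not `doc-builder`'s actual implementation, and assuming `datasets` is installed), which shows why the lookup can fail even though the class itself exists:
```python
import importlib

def find_object(dotted_path: str):
    """Walk a dotted path attribute by attribute, mimicking a doc tool's lookup."""
    module_name, *attrs = dotted_path.split(".")
    obj = importlib.import_module(module_name)  # imports the top-level `datasets` package
    for attr in attrs:
        # If the `filesystems` submodule is never imported (and so never bound as an
        # attribute of the package), or the class was moved or renamed, this getattr
        # raises AttributeError and the build reports "Unable to find ... in datasets".
        obj = getattr(obj, attr)
    return obj

find_object("datasets.filesystems.S3FileSystem")
```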
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4037/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4036/comments | https://api.github.com/repos/huggingface/datasets/issues/4036/events | https://github.com/huggingface/datasets/pull/4036 | 1,183,126,893 | PR_kwDODunzps41I854 | 4,036 | Fix building of documentation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Superseded by huggingface/doc-builder@31fe6c8bc7225810e281c2f6c6cd32f38828c504"
] | 2022-03-28T09:09:12 | 2022-03-28T11:18:31 | 2022-03-28T11:13:22 | MEMBER | null | Documentation building is failing:
- https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true
```
ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format.
Unable to find datasets.filesystems.S3FileSystem in datasets. Make sure the path to that object is correct.
```
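As a generic illustration (a hypothetical sketch, not necessarily the change made in this PR), the usual way to make such a dotted path resolvable is to re-export the object at the documented location:
```python
# Hypothetical addition to datasets/__init__.py (illustration only):
# importing the submodule binds it as an attribute of the package, so that
# getattr(datasets, "filesystems"), and hence the documented dotted path,
# succeeds during the doc build.
from . import filesystems  # noqa: F401
```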
Fix #4037. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4036/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4036",
"html_url": "https://github.com/huggingface/datasets/pull/4036",
"diff_url": "https://github.com/huggingface/datasets/pull/4036.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4036.patch",
"merged_at": null
} | true |