| Column | Type | Details |
| --- | --- | --- |
| url | string | lengths 58-61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 72-75 |
| comments_url | string | lengths 67-70 |
| events_url | string | lengths 65-68 |
| html_url | string | lengths 46-51 |
| id | int64 | 599M-1.5B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-5.38k |
| title | string | lengths 1-276 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | string | lengths 20-20 |
| updated_at | string | lengths 20-20 |
| closed_at | string | lengths 20-20 |
| author_association | string | 3 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 0-228k |
| reactions | dict | |
| timeline_url | string | lengths 67-70 |
| performed_via_github_app | null | |
| state_reason | string | 3 values |
| is_pull_request | bool | 1 class |
https://api.github.com/repos/huggingface/datasets/issues/2837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2837/comments
https://api.github.com/repos/huggingface/datasets/issues/2837/events
https://github.com/huggingface/datasets/issues/2837
979,298,297
MDU6SXNzdWU5NzkyOTgyOTc=
2,837
prepare_module issue when loading from read-only fs
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2021-08-25T15:21:26Z
2021-10-05T17:58:22Z
2021-10-05T17:58:22Z
CONTRIBUTOR
null
null
null
## Describe the bug When we use `prepare_module` from a read-only file system, we create a FileLock using the `local_path`. This path is not necessarily writable. `lock_path = local_path + ".lock"` ## Steps to reproduce the bug Run `load_dataset` on a read-only Python loader file. ```python ds = load_dataset( python_loader, data_files={"train": train_path, "test": test_path} ) ``` where `python_loader` is a path to a file located in a read-only folder. ## Expected results This should work, I think? ## Actual results ```python return load_dataset( File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 711, in load_dataset module_path, hash, resolved_file_path = prepare_module( File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 465, in prepare_module with FileLock(lock_path): File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 314, in __enter__ self.acquire() File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 263, in acquire self._acquire() File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 378, in _acquire fd = os.open(self._lock_file, open_mode) OSError: [Errno 30] Read-only file system: 'YOUR_FILE.py.lock' ``` ## Environment info - `datasets` version: 1.7.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.8 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2837/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2837/timeline
null
completed
true
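For the read-only filesystem issue above (#2837), a user-side workaround (not the library fix) is to copy the loader script into a writable location before calling `load_dataset`, so the `.lock` file can be created next to it. A minimal sketch; all paths are illustrative placeholders, not taken from the issue:

```python
import shutil
import tempfile

from datasets import load_dataset

# Hypothetical paths on a read-only mount.
python_loader = "/mnt/readonly/my_loader.py"
train_path = "/mnt/readonly/train.csv"
test_path = "/mnt/readonly/test.csv"

# Copy the loader script to a writable temp dir so that
# FileLock(local_path + ".lock") can create its lock file.
writable_dir = tempfile.mkdtemp()
writable_loader = shutil.copy(python_loader, writable_dir)

ds = load_dataset(
    writable_loader,
    data_files={"train": train_path, "test": test_path},
)
```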
https://api.github.com/repos/huggingface/datasets/issues/2836
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2836/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2836/comments
https://api.github.com/repos/huggingface/datasets/issues/2836/events
https://github.com/huggingface/datasets/pull/2836
979,230,142
MDExOlB1bGxSZXF1ZXN0NzE5NjY5MDUy
2,836
Optimize Dataset.filter to only compute the indices to keep
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-08-25T14:41:22Z
2021-09-14T14:51:53Z
2021-09-13T15:50:21Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2836.diff", "html_url": "https://github.com/huggingface/datasets/pull/2836", "merged_at": "2021-09-13T15:50:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/2836.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2836" }
Optimize `Dataset.filter` to only compute the indices of the rows to keep, instead of creating a new Arrow table with the rows to keep. Creating a new table was an issue because it could take a lot of disk space. This will be useful for processing audio datasets, for example. cc @patrickvonplaten
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/2836/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2836/timeline
null
null
true
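The idea behind PR #2836 above can be approximated from user code: compute the indices of the rows to keep and pass them to `Dataset.select`, which stores an indices mapping instead of materializing a new Arrow table. A rough sketch of the concept, not the PR's actual implementation:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(10))})

# Compute only the indices of the rows that pass the predicate...
indices = [i for i, example in enumerate(ds) if example["a"] % 2 == 0]

# ...then select() them: this keeps an indices mapping over the original
# table instead of writing a new table with the kept rows to disk.
filtered = ds.select(indices)
print(len(filtered))  # 5
```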
https://api.github.com/repos/huggingface/datasets/issues/2835
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2835/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2835/comments
https://api.github.com/repos/huggingface/datasets/issues/2835/events
https://github.com/huggingface/datasets/pull/2835
979,209,394
MDExOlB1bGxSZXF1ZXN0NzE5NjUxOTE4
2,835
Update: timit_asr - make the dataset streamable
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-08-25T14:22:49Z
2021-09-07T13:15:47Z
2021-09-07T13:15:46Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2835.diff", "html_url": "https://github.com/huggingface/datasets/pull/2835", "merged_at": "2021-09-07T13:15:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/2835.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2835" }
The TIMIT ASR dataset had two issues that were preventing it from being streamable: 1. it was missing a call to `open` before `pd.read_csv` 2. it was using `os.path.dirname` which is not supported for streaming I made the dataset streamable by using `open` to load the CSV, and by adding support for `os.path.dirname` in dataset scripts to stream data. You can now do ```python from datasets import load_dataset timit_asr = load_dataset("timit_asr", streaming=True) print(next(iter(timit_asr["train"]))) ``` which prints: ```json {"file": "zip://data/TRAIN/DR4/MMDM0/SI681.WAV::https://data.deepai.org/timit.zip", "phonetic_detail": {"start": [0, 1960, 2466, 3480, 4000, 5960, 7480, 7880, 9400, 9960, 10680, 13480, 15680, 15880, 16920, 18297, 18882, 19480, 21723, 22516, 24040, 25190, 27080, 28160, 28560, 30120, 31832, 33240, 34640, 35968, 37720], "utterance": ["h#", "w", "ix", "dcl", "s", "ah", "tcl", "ch", "ix", "n", "ae", "kcl", "t", "ix", "v", "r", "ix", "f", "y", "ux", "zh", "el", "bcl", "b", "iy", "y", "ux", "s", "f", "el", "h#"], "stop": [1960, 2466, 3480, 4000, 5960, 7480, 7880, 9400, 9960, 10680, 13480, 15680, 15880, 16920, 18297, 18882, 19480, 21723, 22516, 24040, 25190, 27080, 28160, 28560, 30120, 31832, 33240, 34640, 35968, 37720, 39920]}, "sentence_type": "SI", "id": "SI681", "speaker_id": "MMDM0", "dialect_region": "DR4", "text": "Would such an act of refusal be useful?", "word_detail": { "start": [1960, 4000, 9400, 10680, 15880, 18297, 27080, 30120], "utterance": ["would", "such", "an", "act", "of", "refusal", "be", "useful"], "stop": [4000, 9400, 10680, 15880, 18297, 27080, 30120, 37720] }} ``` cc @patrickvonplaten @vrindaprabhu
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2835/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2835/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2834/comments
https://api.github.com/repos/huggingface/datasets/issues/2834/events
https://github.com/huggingface/datasets/pull/2834
978,309,749
MDExOlB1bGxSZXF1ZXN0NzE4OTE5NjQ0
2,834
Fix IndexError by ignoring empty RecordBatch
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-08-24T17:06:13Z
2021-08-24T17:21:18Z
2021-08-24T17:21:18Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2834.diff", "html_url": "https://github.com/huggingface/datasets/pull/2834", "merged_at": "2021-08-24T17:21:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/2834.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2834" }
We need to ignore the empty record batches for the interpolation search to work correctly when querying Arrow tables. Close #2833 cc @SaulLu
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2834/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2834/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2833/comments
https://api.github.com/repos/huggingface/datasets/issues/2833/events
https://github.com/huggingface/datasets/issues/2833
978,296,140
MDU6SXNzdWU5NzgyOTYxNDA=
2,833
IndexError when accessing first element of a Dataset if first RecordBatch is empty
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
2021-08-24T16:49:20Z
2021-08-24T17:21:17Z
2021-08-24T17:21:17Z
MEMBER
null
null
null
The computation of the offsets of the underlying Table of a Dataset has some issues if the first RecordBatch is empty. ```python from datasets import Dataset import pyarrow as pa pa_table = pa.Table.from_pydict({"a": [1]}) pa_table2 = pa.Table.from_pydict({"a": []}, schema=pa_table.schema) ds_table = pa.concat_tables([pa_table2, pa_table]) dataset = Dataset(ds_table) print([len(b) for b in dataset.data._batches]) # [0, 1] print(dataset.data._offsets) # [0 0 1] (should be [0, 1]) dataset[0] ``` raises ```python --------------------------------------------------------------------------- IndexError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/datasets/table.py in _interpolation_search(arr, x) 90 else: 91 i, j = i, k ---> 92 raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.") 93 94 IndexError: Invalid query '0' for size 1. ``` This can be fixed by ignoring empty batches when computing `table._batches` and `table._offsets`. cc @SaulLu
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2833/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2833/timeline
null
completed
true
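To see why the duplicated offset in #2833 breaks the lookup, here is a small standalone sketch of the offsets computation described above (an illustration, not the actual `datasets.table` code):

```python
import numpy as np

# Batch lengths from the repro: the first RecordBatch is empty.
batch_lengths = [0, 1]

# Keeping the empty batch yields a non-strictly-increasing offsets array,
# which the interpolation search cannot handle.
naive_offsets = np.cumsum([0] + batch_lengths)  # array([0, 0, 1])

# The fix: ignore empty batches when building batches/offsets.
fixed_offsets = np.cumsum([0] + [n for n in batch_lengths if n > 0])  # array([0, 1])

print(naive_offsets.tolist(), fixed_offsets.tolist())
```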
https://api.github.com/repos/huggingface/datasets/issues/2832
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2832/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2832/comments
https://api.github.com/repos/huggingface/datasets/issues/2832/events
https://github.com/huggingface/datasets/issues/2832
978,012,800
MDU6SXNzdWU5NzgwMTI4MDA=
2,832
Logging levels not taken into account
{ "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LysandreJik", "id": 30755778, "login": "LysandreJik", "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "repos_url": "https://api.github.com/users/LysandreJik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "type": "User", "url": "https://api.github.com/users/LysandreJik" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
[]
2021-08-24T11:50:41Z
2022-07-15T12:16:55Z
null
MEMBER
null
null
null
## Describe the bug The `logging` module isn't working as intended relative to the verbosity levels being set. ## Steps to reproduce the bug ```python from datasets import logging logging.set_verbosity_debug() logger = logging.get_logger() logger.error("ERROR") logger.warning("WARNING") logger.info("INFO") logger.debug("DEBUG") ``` ## Expected results I expect all logs to be output since I'm setting the `debug` verbosity level. ## Actual results Only the first two logs are output. ## Environment info - `datasets` version: 1.11.0 - Platform: Linux-5.13.9-arch1-1-x86_64-with-glibc2.33 - Python version: 3.9.6 - PyArrow version: 5.0.0 ## To go further This logging issue appears in `datasets` but not in `transformers`. It happens because there is no handler defined for the logger. When no handler is defined, the `logging` library will output a one-off error to stderr, using a `StderrHandler` with level `WARNING`. `transformers` sets a default `StreamHandler` [here](https://github.com/huggingface/transformers/blob/5c6eca71a983bae2589eed01e5c04fcf88ba5690/src/transformers/utils/logging.py#L86)
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2832/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2832/timeline
null
null
true
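As a user-side workaround for #2832 above, attaching an explicit handler to the `datasets` logger makes the `info`/`debug` records visible, mirroring the default `StreamHandler` that `transformers` installs. A minimal sketch:

```python
import logging

from datasets import logging as ds_logging

ds_logging.set_verbosity_debug()
logger = ds_logging.get_logger()

# No handler is defined by default, so only the stdlib's last-resort
# WARNING-level stderr handler fires; add one explicitly.
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.info("INFO")    # now printed
logger.debug("DEBUG")  # now printed
```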
https://api.github.com/repos/huggingface/datasets/issues/2831
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2831/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2831/comments
https://api.github.com/repos/huggingface/datasets/issues/2831/events
https://github.com/huggingface/datasets/issues/2831
977,864,600
MDU6SXNzdWU5Nzc4NjQ2MDA=
2,831
ArrowInvalid when mapping dataset with missing values
{ "avatar_url": "https://avatars.githubusercontent.com/u/12694730?v=4", "events_url": "https://api.github.com/users/uniquefine/events{/privacy}", "followers_url": "https://api.github.com/users/uniquefine/followers", "following_url": "https://api.github.com/users/uniquefine/following{/other_user}", "gists_url": "https://api.github.com/users/uniquefine/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/uniquefine", "id": 12694730, "login": "uniquefine", "node_id": "MDQ6VXNlcjEyNjk0NzMw", "organizations_url": "https://api.github.com/users/uniquefine/orgs", "received_events_url": "https://api.github.com/users/uniquefine/received_events", "repos_url": "https://api.github.com/users/uniquefine/repos", "site_admin": false, "starred_url": "https://api.github.com/users/uniquefine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/uniquefine/subscriptions", "type": "User", "url": "https://api.github.com/users/uniquefine" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[]
2021-08-24T08:50:42Z
2021-08-31T14:15:34Z
null
NONE
null
null
null
## Describe the bug I encountered an `ArrowInvalid` when mapping a dataset with missing values. Here are the files for a minimal example. The exception is only thrown when the first line in the CSV has a missing value (if you move the last line to the top, it isn't thrown). [data_small.csv](https://github.com/huggingface/datasets/files/7037838/data_small.csv) [data.csv](https://github.com/huggingface/datasets/files/7037842/data.csv) ## Steps to reproduce the bug ```python from datasets import load_dataset datasets = load_dataset("csv", data_files=['data_small.csv']) datasets = datasets.map(lambda e: {'labels': e['match']}, remove_columns=['id']) ``` ## Expected results No error ## Actual results ``` File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Invalid null value ``` ## Environment info - `datasets` version: 1.5.0 - Platform: Linux-5.11.0-25-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.7.1+cpu (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2831/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2831/timeline
null
null
true
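Until #2831 is fixed, one possible (untested) workaround is to replace missing values with a sentinel inside the map function, so pyarrow never has to write a null into a column type inferred from the first batch. A sketch, reusing the column names from the snippet above:

```python
from datasets import load_dataset

datasets = load_dataset("csv", data_files=["data_small.csv"])

# Map None (missing CSV cells) to a sentinel value before writing,
# instead of passing the null through unchanged.
datasets = datasets.map(
    lambda e: {"labels": e["match"] if e["match"] is not None else -1},
    remove_columns=["id"],
)
```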
https://api.github.com/repos/huggingface/datasets/issues/2830
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2830/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2830/comments
https://api.github.com/repos/huggingface/datasets/issues/2830/events
https://github.com/huggingface/datasets/pull/2830
977,563,947
MDExOlB1bGxSZXF1ZXN0NzE4MjkyMTM2
2,830
Add imagefolder dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nateraw", "id": 32437151, "login": "nateraw", "node_id": "MDQ6VXNlcjMyNDM3MTUx", "organizations_url": "https://api.github.com/users/nateraw/orgs", "received_events_url": "https://api.github.com/users/nateraw/received_events", "repos_url": "https://api.github.com/users/nateraw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "type": "User", "url": "https://api.github.com/users/nateraw" }
[]
closed
false
null
[]
null
[]
2021-08-23T23:34:06Z
2022-03-01T16:29:44Z
2022-03-01T16:29:44Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2830.diff", "html_url": "https://github.com/huggingface/datasets/pull/2830", "merged_at": "2022-03-01T16:29:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/2830.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2830" }
A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`. Resolves #2508 --- Example Usage: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb)
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2830/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2830/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2829/comments
https://api.github.com/repos/huggingface/datasets/issues/2829/events
https://github.com/huggingface/datasets/issues/2829
977,233,360
MDU6SXNzdWU5NzcyMzMzNjA=
2,829
Optimize streaming from TAR archives
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
2021-08-23T16:56:40Z
2022-09-21T14:29:46Z
2022-09-21T14:08:39Z
MEMBER
null
null
null
Hi ! As you know TAR has some constraints for data streaming. While it is optimized for buffering, the files in the TAR archive **need to be streamed in order**. It means that we can't choose which file to stream from, and this notation is to be avoided for TAR archives: ``` tar://books_large_p1.txt::https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2 ``` Instead, I suggest we implement `iter_archive` for the `StreamingDownloadManager`. The regular `DownloadManager` already has it. Then we will have to update the json/txt/csv/etc. loaders to make them use `iter_archive` on TAR archives. That's also what Tensorflow Datasets is doing in this case. See this [dataset](https://github.com/tensorflow/datasets/blob/93895059c80a9e05805e8f32a2e310f66a23fc98/tensorflow_datasets/image_classification/flowers.py) for example. Therefore instead of doing ```python uncompressed = dl_manager.extract(tar_archive) filename = "books_large_p1.txt" with open(os.path.join(uncompressed, filename)) as f: for line in f: ... ``` we'll do ```python for filename, f in dl_manager.iter_archive(tar_archive): for line in f: ... ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2829/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2829/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2828
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2828/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2828/comments
https://api.github.com/repos/huggingface/datasets/issues/2828/events
https://github.com/huggingface/datasets/pull/2828
977,181,517
MDExOlB1bGxSZXF1ZXN0NzE3OTYwODg3
2,828
Add code-mixed Kannada Hope speech dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/46108405?v=4", "events_url": "https://api.github.com/users/adeepH/events{/privacy}", "followers_url": "https://api.github.com/users/adeepH/followers", "following_url": "https://api.github.com/users/adeepH/following{/other_user}", "gists_url": "https://api.github.com/users/adeepH/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/adeepH", "id": 46108405, "login": "adeepH", "node_id": "MDQ6VXNlcjQ2MTA4NDA1", "organizations_url": "https://api.github.com/users/adeepH/orgs", "received_events_url": "https://api.github.com/users/adeepH/received_events", "repos_url": "https://api.github.com/users/adeepH/repos", "site_admin": false, "starred_url": "https://api.github.com/users/adeepH/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adeepH/subscriptions", "type": "User", "url": "https://api.github.com/users/adeepH" }
[]
closed
false
null
[]
null
[]
2021-08-23T15:55:09Z
2021-10-01T17:21:03Z
2021-10-01T17:21:03Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2828.diff", "html_url": "https://github.com/huggingface/datasets/pull/2828", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2828.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2828" }
## Adding a Dataset - **Name:** *KanHope* - **Description:** *A code-mixed English-Kannada dataset for Hope speech detection* - **Paper:** *https://arxiv.org/abs/2108.04616* - **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset* - **Motivation:** *The dataset is amongst the very few resources available for code-mixed low-resourced Dravidian languages of India*
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2828/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2828/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2827/comments
https://api.github.com/repos/huggingface/datasets/issues/2827/events
https://github.com/huggingface/datasets/pull/2827
976,976,552
MDExOlB1bGxSZXF1ZXN0NzE3Nzg3MjEw
2,827
add a text classification dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/46108405?v=4", "events_url": "https://api.github.com/users/adeepH/events{/privacy}", "followers_url": "https://api.github.com/users/adeepH/followers", "following_url": "https://api.github.com/users/adeepH/following{/other_user}", "gists_url": "https://api.github.com/users/adeepH/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/adeepH", "id": 46108405, "login": "adeepH", "node_id": "MDQ6VXNlcjQ2MTA4NDA1", "organizations_url": "https://api.github.com/users/adeepH/orgs", "received_events_url": "https://api.github.com/users/adeepH/received_events", "repos_url": "https://api.github.com/users/adeepH/repos", "site_admin": false, "starred_url": "https://api.github.com/users/adeepH/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adeepH/subscriptions", "type": "User", "url": "https://api.github.com/users/adeepH" }
[]
closed
false
null
[]
null
[]
2021-08-23T12:24:41Z
2021-08-23T15:51:18Z
2021-08-23T15:51:18Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2827.diff", "html_url": "https://github.com/huggingface/datasets/pull/2827", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2827.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2827" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2827/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2827/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2826/comments
https://api.github.com/repos/huggingface/datasets/issues/2826/events
https://github.com/huggingface/datasets/issues/2826
976,974,254
MDU6SXNzdWU5NzY5NzQyNTQ=
2,826
Add a Text Classification dataset: KanHope
{ "avatar_url": "https://avatars.githubusercontent.com/u/46108405?v=4", "events_url": "https://api.github.com/users/adeepH/events{/privacy}", "followers_url": "https://api.github.com/users/adeepH/followers", "following_url": "https://api.github.com/users/adeepH/following{/other_user}", "gists_url": "https://api.github.com/users/adeepH/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/adeepH", "id": 46108405, "login": "adeepH", "node_id": "MDQ6VXNlcjQ2MTA4NDA1", "organizations_url": "https://api.github.com/users/adeepH/orgs", "received_events_url": "https://api.github.com/users/adeepH/received_events", "repos_url": "https://api.github.com/users/adeepH/repos", "site_admin": false, "starred_url": "https://api.github.com/users/adeepH/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adeepH/subscriptions", "type": "User", "url": "https://api.github.com/users/adeepH" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2021-08-23T12:21:58Z
2021-10-01T18:06:59Z
2021-10-01T18:06:59Z
CONTRIBUTOR
null
null
null
## Adding a Dataset - **Name:** *KanHope* - **Description:** *A code-mixed English-Kannada dataset for Hope speech detection* - **Paper:** *https://arxiv.org/abs/2108.04616* (I am the author of the paper) - **Author:** *[AdeepH](https://github.com/adeepH)* - **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset* - **Motivation:** *The dataset is amongst the very few resources available for code-mixed Dravidian languages* - I tried following the steps as per the instructions. However, I could not resolve an error. Any help would be appreciated. - The dataset card and the scripts for the dataset *https://github.com/adeepH/datasets/tree/multilingual-hope-speech/datasets/mhs_eval* ``` Using custom data configuration default Downloading and preparing dataset bn_hate_speech/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/bn_hate_speech/default/0.0.0/5f417ddc89777278abd29988f909f39495f0ec802090f7d8fa63b5bffb121762... --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-114-4a9cdb519e4c> in <module>() 1 from datasets import load_dataset 2 ----> 3 data = load_dataset('/content/bn') 9 frames /usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs) 850 ignore_verifications=ignore_verifications, 851 try_from_hf_gcs=try_from_hf_gcs, --> 852 use_auth_token=use_auth_token, 853 ) 854 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 614 if not downloaded_from_gcs: 615 self._download_and_prepare( --> 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 617 ) 618 # Sync info /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 691 try: 692 # Prepare split will record examples associated to the split --> 693 self._prepare_split(split_generator, **prepare_split_kwargs) 694 except OSError as e: 695 raise OSError( /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator) 1107 disable=bool(logging.get_verbosity() == logging.NOTSET), 1108 ): -> 1109 example = self.info.features.encode_example(record) 1110 writer.write(example, key) 1111 finally: /usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_example(self, example) 1015 """ 1016 example = cast_to_python_objects(example) -> 1017 return encode_nested_example(self, example) 1018 1019 def encode_batch(self, batch): /usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_nested_example(schema, obj) 863 if isinstance(schema, dict): 864 return { --> 865 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 866 } 867 elif isinstance(schema, (list, tuple)): /usr/local/lib/python3.7/dist-packages/datasets/features.py in <dictcomp>(.0) 863 if isinstance(schema, dict): 864 return { --> 865 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 866 } 867 elif isinstance(schema, (list, tuple)): /usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_nested_example(schema, obj) 890 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks 891 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)): --> 892 return schema.encode_example(obj) 893 # Other object should be directly convertible to a native Arrow type (like Translation and Translation) 894 return obj /usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_example(self, example_data) 665 # If a string is given, convert to associated integer 666 if isinstance(example_data, str): --> 667 example_data = self.str2int(example_data) 668 669 # Allowing -1 to mean no label. /usr/local/lib/python3.7/dist-packages/datasets/features.py in str2int(self, values) 623 if value not in self._str2int: 624 value = str(value).strip() --> 625 output.append(self._str2int[str(value)]) 626 else: 627 # No names provided, try to integerize KeyError: ' ' ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2826/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2826/timeline
null
completed
true
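The traceback in #2826 above ends in `KeyError: ' '`, i.e. `ClassLabel.str2int` receives a blank label that is not among the declared names. A hypothetical pre-check on the raw CSV; the file path and the column name `label` are assumptions for illustration, not taken from the issue:

```python
import pandas as pd

# Assumed file and column names, for illustration only.
df = pd.read_csv("dataset/train.csv")

# Find rows whose label is empty or whitespace-only; these are what
# ClassLabel.str2int fails on with KeyError: ' '.
blank = df["label"].astype(str).str.strip() == ""
print(f"{blank.sum()} blank-label rows")

# Dropping (or re-labelling) those rows before generating examples
# avoids the str2int lookup failure.
df = df[~blank]
```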
https://api.github.com/repos/huggingface/datasets/issues/2825
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2825/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2825/comments
https://api.github.com/repos/huggingface/datasets/issues/2825/events
https://github.com/huggingface/datasets/issues/2825
976,584,926
MDU6SXNzdWU5NzY1ODQ5MjY=
2,825
The datasets.map function does not load cached dataset after moving python script
{ "avatar_url": "https://avatars.githubusercontent.com/u/35392624?v=4", "events_url": "https://api.github.com/users/hobbitlzy/events{/privacy}", "followers_url": "https://api.github.com/users/hobbitlzy/followers", "following_url": "https://api.github.com/users/hobbitlzy/following{/other_user}", "gists_url": "https://api.github.com/users/hobbitlzy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hobbitlzy", "id": 35392624, "login": "hobbitlzy", "node_id": "MDQ6VXNlcjM1MzkyNjI0", "organizations_url": "https://api.github.com/users/hobbitlzy/orgs", "received_events_url": "https://api.github.com/users/hobbitlzy/received_events", "repos_url": "https://api.github.com/users/hobbitlzy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hobbitlzy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hobbitlzy/subscriptions", "type": "User", "url": "https://api.github.com/users/hobbitlzy" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
2021-08-23T03:23:37Z
2021-08-31T13:14:41Z
2021-08-31T13:13:36Z
NONE
null
null
null
## Describe the bug The `datasets.map` function caches the processed data to a certain directory. When the map function is called another time with exactly the same parameters, the cached data are supposed to be reloaded instead of re-processed. However, it sometimes doesn't reuse the cached data. I use the same data processing across different tasks, and the datasets are processed again; the only difference is that I run them from different files. ## Steps to reproduce the bug Just run the following code in different .py files. ```python if __name__ == '__main__': from datasets import load_dataset from transformers import AutoTokenizer raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1") tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) ``` ## Expected results The map function should reload the cached data on the second and any later runs. ## Actual results The processing happens in each run. ## Environment info - `datasets` version: 1.8.0 - Platform: linux - Python version: 3.7.6 - PyArrow version: 3.0.0 This is the first time I have reported a bug. If there is any problem or confusing description, please let me know 😄.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2825/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2825/timeline
null
completed
true
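For #2825 above, one way to make the cache location deterministic across scripts is to pin explicit cache files via the `cache_file_names` argument of `DatasetDict.map`, sidestepping the automatic fingerprint-based file naming. A sketch; the cache paths are illustrative:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

# One fixed cache file per split: any script passing the same paths
# reuses the same processed files instead of re-tokenizing.
tokenized_datasets = raw_datasets.map(
    tokenize_function,
    batched=True,
    cache_file_names={s: f"/tmp/wikitext_{s}_tokenized.arrow" for s in raw_datasets},
)
```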
https://api.github.com/repos/huggingface/datasets/issues/2824
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2824/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2824/comments
https://api.github.com/repos/huggingface/datasets/issues/2824/events
https://github.com/huggingface/datasets/pull/2824
976,394,721
MDExOlB1bGxSZXF1ZXN0NzE3MzIyMzY5
2,824
Fix defaults in cache_dir docstring in load.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
2021-08-22T14:48:37Z
2021-08-26T13:23:32Z
2021-08-26T11:55:16Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2824.diff", "html_url": "https://github.com/huggingface/datasets/pull/2824", "merged_at": "2021-08-26T11:55:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/2824.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2824" }
Fix defaults in the `cache_dir` docstring.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2824/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2824/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2823
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2823/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2823/comments
https://api.github.com/repos/huggingface/datasets/issues/2823/events
https://github.com/huggingface/datasets/issues/2823
976,135,355
MDU6SXNzdWU5NzYxMzUzNTU=
2,823
HF_DATASETS_CACHE variable in Windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/8453798?v=4", "events_url": "https://api.github.com/users/rp2839/events{/privacy}", "followers_url": "https://api.github.com/users/rp2839/followers", "following_url": "https://api.github.com/users/rp2839/following{/other_user}", "gists_url": "https://api.github.com/users/rp2839/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rp2839", "id": 8453798, "login": "rp2839", "node_id": "MDQ6VXNlcjg0NTM3OTg=", "organizations_url": "https://api.github.com/users/rp2839/orgs", "received_events_url": "https://api.github.com/users/rp2839/received_events", "repos_url": "https://api.github.com/users/rp2839/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rp2839/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rp2839/subscriptions", "type": "User", "url": "https://api.github.com/users/rp2839" }
[]
closed
false
null
[]
null
[]
2021-08-21T13:17:44Z
2021-08-21T13:20:11Z
2021-08-21T13:20:11Z
NONE
null
null
null
I can't seem to use a custom cache directory on Windows. I have tried: set HF_DATASETS_CACHE = "C:\Datasets" set HF_DATASETS_CACHE = "C:/Datasets" set HF_DATASETS_CACHE = "C:\\Datasets" set HF_DATASETS_CACHE = "r'C:\Datasets'" set HF_DATASETS_CACHE = "\Datasets" set HF_DATASETS_CACHE = "/Datasets" In each instance I get the "[WinError 123] The filename, directory name, or volume label syntax is incorrect" error when attempting to load a dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2823/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2823/timeline
null
completed
true
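A likely culprit in #2823 above: the `set` commands put spaces around `=`, which `cmd.exe` treats as part of the variable name and value, and the quotes end up inside the path. Setting the variable from Python before importing `datasets` avoids the quoting issues entirely. A sketch:

```python
import os

# Must be set before `datasets` is imported, since the cache path is
# read at import time; the raw string avoids backslash escapes.
os.environ["HF_DATASETS_CACHE"] = r"C:\Datasets"

from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-2-raw-v1")  # illustrative dataset
```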
https://api.github.com/repos/huggingface/datasets/issues/2822
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2822/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2822/comments
https://api.github.com/repos/huggingface/datasets/issues/2822/events
https://github.com/huggingface/datasets/pull/2822
975,744,463
MDExOlB1bGxSZXF1ZXN0NzE2ODUxMTAy
2,822
Add url prefix convention for many compression formats
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-08-20T16:11:23Z
2021-08-23T15:59:16Z
2021-08-23T15:59:14Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2822.diff", "html_url": "https://github.com/huggingface/datasets/pull/2822", "merged_at": "2021-08-23T15:59:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/2822.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2822" }
## Intro When doing dataset streaming, the uncompression of compressed files is done on the fly using `fsspec`. In particular, the download manager method `download_and_extract` doesn't return a path to the locally downloaded and extracted file, but instead a chained URL so that the uncompression can be done when the file is opened. A few examples of chained URLs: - `gz://file.txt::https://foo.bar/file.txt.gz` - `bz2://file.txt::https://foo.bar/file.txt.bz2` - `zip://::https://foo.bar/archive.zip` - `tar://::https://foo.bar/archive.tar.gz` (the TAR uncompression includes gz, bz2 etc. uncompression in `fsspec`) This syntax is highly inspired by the `fsspec` URL chaining syntax from https://filesystem-spec.readthedocs.io/en/latest/features.html#url-chaining This URL prefixing allows `open` to know what kind of uncompression to do in a dataset script when doing ```python def _generate_examples(self, urlpath): with open(urlpath) as f: .... ``` ## What it changes This changes the previous behavior from https://github.com/huggingface/datasets/pull/2786 , in which `open` was trying to infer the compression automatically. Inferring the compression made it impossible to know whether the user wanted `open` to return compressed data (as the default behavior of the builtin open), or the uncompressed data. By adding uncompression prefixes to the URL, `open` knows directly if it has to uncompress or not, and also which protocol to use. ## Additional notes This PR should close https://github.com/huggingface/datasets/issues/2813 It should also close this PR https://github.com/huggingface/datasets/pull/2811 since the oscar dataset script won't try to uncompress twice anymore. Note that I had to temporarily remove the support for passing tar and zip files to `data_files` for streaming to make it work, since it makes it ambiguous whether a zip file passed as `data_files` should be uncompressed or not. IMO we can make it work again by changing the syntax to make the glob explicit: ```python load_dataset("json", data_files="zip://*.jsonl::https://foo.bar/archive.zip") ``` This is the exact same convention as fsspec and it removes all ambiguities. cc @albertvillanova @lewtun
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2822/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2822/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2821
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2821/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2821/comments
https://api.github.com/repos/huggingface/datasets/issues/2821/events
https://github.com/huggingface/datasets/issues/2821
975,556,032
MDU6SXNzdWU5NzU1NTYwMzI=
2,821
Cannot load linnaeus dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2021-08-20T12:15:15Z
2021-08-31T13:13:02Z
2021-08-31T13:12:09Z
CONTRIBUTOR
null
null
null
## Describe the bug
The [linnaeus](https://huggingface.co/datasets/linnaeus) dataset cannot be loaded. To reproduce:
```python
from datasets import load_dataset

datasets = load_dataset("linnaeus")
```
This results in:
```
Downloading and preparing dataset linnaeus/linnaeus (download: 17.36 MiB, generated: 8.74 MiB, post-processed: Unknown size, total: 26.10 MiB) to /root/.cache/huggingface/datasets/linnaeus/linnaeus/1.0.0/2ff05dbc256108233262f596e09e322dbc3db067202de14286913607cd9cb704...
---------------------------------------------------------------------------
ConnectionError                           Traceback (most recent call last)
<ipython-input-4-7ef3a88f6276> in <module>()
      1 from datasets import load_dataset
      2
----> 3 datasets = load_dataset("linnaeus")

11 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
    603             raise FileNotFoundError("Couldn't find file at {}".format(url))
    604         _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
--> 605         raise ConnectionError("Couldn't reach {}".format(url))
    606
    607     # Try a second time

ConnectionError: Couldn't reach https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2821/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2821/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2820
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2820/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2820/comments
https://api.github.com/repos/huggingface/datasets/issues/2820/events
https://github.com/huggingface/datasets/issues/2820
975,210,712
MDU6SXNzdWU5NzUyMTA3MTI=
2,820
Downloading “reddit” dataset keeps timing out.
{ "avatar_url": "https://avatars.githubusercontent.com/u/43877130?v=4", "events_url": "https://api.github.com/users/smeyerhot/events{/privacy}", "followers_url": "https://api.github.com/users/smeyerhot/followers", "following_url": "https://api.github.com/users/smeyerhot/following{/other_user}", "gists_url": "https://api.github.com/users/smeyerhot/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/smeyerhot", "id": 43877130, "login": "smeyerhot", "node_id": "MDQ6VXNlcjQzODc3MTMw", "organizations_url": "https://api.github.com/users/smeyerhot/orgs", "received_events_url": "https://api.github.com/users/smeyerhot/received_events", "repos_url": "https://api.github.com/users/smeyerhot/repos", "site_admin": false, "starred_url": "https://api.github.com/users/smeyerhot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/smeyerhot/subscriptions", "type": "User", "url": "https://api.github.com/users/smeyerhot" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2021-08-20T02:52:36Z
2021-09-08T14:52:02Z
2021-09-08T14:52:02Z
NONE
null
null
null
## Describe the bug
Every time I try to download the reddit dataset it times out before finishing, and I have to try again. There is some timeout error that I will post once it happens again.

## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data")
```

## Expected results
I would expect the download to finish, or at least a parameter to extend the read timeout window.

## Actual results
Shown below in the error message.

## Environment info
- `datasets` version: 1.11.0
- Platform: macOS
- Python version: 3.9.6 (conda env)
- PyArrow version: N/A
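A possible workaround sketch while a configurable read timeout is not exposed, assuming a recent `datasets` 1.x release: raise the retry count and resume partial downloads via `DownloadConfig`. Whether this helps depends on where exactly the timeout occurs.

```python
from datasets import DownloadConfig, load_dataset

# Retry flaky downloads a few times and resume partially downloaded files
# instead of restarting from scratch (both fields exist on DownloadConfig
# in datasets 1.x; treat this as a sketch, not a guaranteed fix).
download_config = DownloadConfig(max_retries=5, resume_download=True)
dataset = load_dataset(
    "reddit",
    ignore_verifications=True,
    cache_dir="/Volumes/My Passport for Mac/og-chat-data",
    download_config=download_config,
)
```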
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2820/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2820/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2819
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2819/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2819/comments
https://api.github.com/repos/huggingface/datasets/issues/2819/events
https://github.com/huggingface/datasets/pull/2819
974,683,155
MDExOlB1bGxSZXF1ZXN0NzE1OTUyMjE1
2,819
Added XL-Sum dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/49608995?v=4", "events_url": "https://api.github.com/users/abhik1505040/events{/privacy}", "followers_url": "https://api.github.com/users/abhik1505040/followers", "following_url": "https://api.github.com/users/abhik1505040/following{/other_user}", "gists_url": "https://api.github.com/users/abhik1505040/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abhik1505040", "id": 49608995, "login": "abhik1505040", "node_id": "MDQ6VXNlcjQ5NjA4OTk1", "organizations_url": "https://api.github.com/users/abhik1505040/orgs", "received_events_url": "https://api.github.com/users/abhik1505040/received_events", "repos_url": "https://api.github.com/users/abhik1505040/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abhik1505040/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhik1505040/subscriptions", "type": "User", "url": "https://api.github.com/users/abhik1505040" }
[]
closed
false
null
[]
null
[]
2021-08-19T13:47:45Z
2021-09-29T08:13:44Z
2021-09-23T17:49:05Z
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2819.diff", "html_url": "https://github.com/huggingface/datasets/pull/2819", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2819.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2819" }
Added the XL-Sum dataset published in ACL-IJCNLP 2021 (https://aclanthology.org/2021.findings-acl.413/). The default timeout values in `src/datasets/utils/file_utils.py` were increased to enable downloading from the original Google Drive links.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2819/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2819/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2818
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2818/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2818/comments
https://api.github.com/repos/huggingface/datasets/issues/2818/events
https://github.com/huggingface/datasets/issues/2818
974,552,009
MDU6SXNzdWU5NzQ1NTIwMDk=
2,818
Cannot load data from my local path
{ "avatar_url": "https://avatars.githubusercontent.com/u/46920280?v=4", "events_url": "https://api.github.com/users/yang-collect/events{/privacy}", "followers_url": "https://api.github.com/users/yang-collect/followers", "following_url": "https://api.github.com/users/yang-collect/following{/other_user}", "gists_url": "https://api.github.com/users/yang-collect/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yang-collect", "id": 46920280, "login": "yang-collect", "node_id": "MDQ6VXNlcjQ2OTIwMjgw", "organizations_url": "https://api.github.com/users/yang-collect/orgs", "received_events_url": "https://api.github.com/users/yang-collect/received_events", "repos_url": "https://api.github.com/users/yang-collect/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yang-collect/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yang-collect/subscriptions", "type": "User", "url": "https://api.github.com/users/yang-collect" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[]
2021-08-19T11:13:30Z
2021-08-31T08:46:16Z
null
NONE
null
null
null
## Describe the bug
I just want to load data directly from my local path, but I found a bug. I compared with pandas to confirm that my local path is valid. Here is my code:
```python
# print my local path
print(config.train_path)
# read the data and print its length
train = pd.read_csv(config.train_path)
print(len(train))
# load the data with load_dataset
data = load_dataset('csv', data_files=config.train_path)
print(len(data))
```

## Steps to reproduce the bug
```
C:\Users\wie\Documents\项目\文本分类\data\train.csv
7613
Traceback (most recent call last):
  File "c:/Users/wie/Documents/项目/文本分类/lib/DataPrecess.py", line 17, in <module>
    data = load_dataset('csv', data_files=config.train_path)
  File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\load.py", line 830, in load_dataset
    **config_kwargs,
  File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\load.py", line 710, in load_dataset_builder
    **config_kwargs,
  File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 271, in __init__
    **config_kwargs,
  File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 386, in _create_builder_config
    config_kwargs, custom_features=custom_features, use_auth_token=self.use_auth_token
  File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 156, in create_config_id
    raise ValueError("Please provide a valid `data_files` in `DatasetBuilder`")
ValueError: Please provide a valid `data_files` in `DatasetBuilder`
```

## Environment info
- `datasets` version: 1.11.0
- Platform: win10
- Python version: 3.7.9
- PyArrow version: 5.0.0
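For comparison, a minimal sketch of `data_files` usage that is documented to work; if a plain string path fails, passing it inside a dict or list is worth trying. The path below is a placeholder.

```python
from datasets import load_dataset

# data_files accepts a string, a list of strings, or a dict mapping split
# names to paths; all three forms should be equivalent for one CSV file.
data = load_dataset("csv", data_files={"train": "data/train.csv"})
print(data["train"].num_rows)
```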
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2818/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2818/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2817
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2817/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2817/comments
https://api.github.com/repos/huggingface/datasets/issues/2817/events
https://github.com/huggingface/datasets/pull/2817
974,486,051
MDExOlB1bGxSZXF1ZXN0NzE1NzgzMDQ3
2,817
Rename The Pile subsets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-08-19T09:56:22Z
2021-08-23T16:24:10Z
2021-08-23T16:24:09Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2817.diff", "html_url": "https://github.com/huggingface/datasets/pull/2817", "merged_at": "2021-08-23T16:24:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/2817.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2817" }
After discussing with @yjernite we think it's better to have the subsets of The Pile explicitly have "the_pile" in their names. I'm doing the changes for the subsets that @richarddwang added: - [x] books3 -> the_pile_books3 https://github.com/huggingface/datasets/pull/2801 - [x] stack_exchange -> the_pile_stack_exchange https://github.com/huggingface/datasets/pull/2803 - [x] openwebtext2 -> the_pile_openwebtext2 https://github.com/huggingface/datasets/pull/2802 For consistency we should also rename `bookcorpusopen` to `the_pile_bookcorpus` IMO, but let me know what you think. (we can just add a deprecation message to `bookcorpusopen` for now and add `the_pile_bookcorpus`)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2817/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2817/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2816/comments
https://api.github.com/repos/huggingface/datasets/issues/2816/events
https://github.com/huggingface/datasets/issues/2816
974,031,404
MDU6SXNzdWU5NzQwMzE0MDQ=
2,816
Add Mostly Basic Python Problems Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
[]
2021-08-18T20:28:39Z
2021-09-10T08:04:20Z
null
MEMBER
null
null
null
## Adding a Dataset
- **Name:** Mostly Basic Python Problems Dataset
- **Description:** The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry-level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, a code solution and 3 automated test cases.
- **Paper:** *link to the dataset paper if available*
- **Data:** https://github.com/google-research/google-research/tree/master/mbpp
- **Motivation:** Simple, small dataset related to coding problems.

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/2816/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2816/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2815
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2815/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2815/comments
https://api.github.com/repos/huggingface/datasets/issues/2815/events
https://github.com/huggingface/datasets/pull/2815
973,862,024
MDExOlB1bGxSZXF1ZXN0NzE1MjUxNDQ5
2,815
Tiny typo fixes of "fo" -> "of"
{ "avatar_url": "https://avatars.githubusercontent.com/u/9934829?v=4", "events_url": "https://api.github.com/users/aronszanto/events{/privacy}", "followers_url": "https://api.github.com/users/aronszanto/followers", "following_url": "https://api.github.com/users/aronszanto/following{/other_user}", "gists_url": "https://api.github.com/users/aronszanto/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aronszanto", "id": 9934829, "login": "aronszanto", "node_id": "MDQ6VXNlcjk5MzQ4Mjk=", "organizations_url": "https://api.github.com/users/aronszanto/orgs", "received_events_url": "https://api.github.com/users/aronszanto/received_events", "repos_url": "https://api.github.com/users/aronszanto/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aronszanto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aronszanto/subscriptions", "type": "User", "url": "https://api.github.com/users/aronszanto" }
[]
closed
false
null
[]
null
[]
2021-08-18T16:36:11Z
2021-08-19T08:03:02Z
2021-08-19T08:03:02Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2815.diff", "html_url": "https://github.com/huggingface/datasets/pull/2815", "merged_at": "2021-08-19T08:03:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/2815.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2815" }
Noticed a few of these when reading the docs. Feel free to ignore the PR and just fix these on some main contributor branch if that's more helpful. Thanks for the great library! :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2815/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2815/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2814/comments
https://api.github.com/repos/huggingface/datasets/issues/2814/events
https://github.com/huggingface/datasets/pull/2814
973,632,645
MDExOlB1bGxSZXF1ZXN0NzE1MDUwODc4
2,814
Bump tqdm version
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
2021-08-18T12:51:29Z
2021-08-18T13:44:11Z
2021-08-18T13:39:50Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2814.diff", "html_url": "https://github.com/huggingface/datasets/pull/2814", "merged_at": "2021-08-18T13:39:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/2814.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2814" }
The recently released tqdm 4.62.1 includes a fix for PermissionError on Windows (submitted by me in https://github.com/tqdm/tqdm/pull/1207), which means we can remove expensive `gc.collect` calls by bumping tqdm to that version. This PR does exactly that and, additionally, fixes a `disable_tqdm` definition that would previously, if used, raise a PermissionError on Windows.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2814/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2814/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2813
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2813/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2813/comments
https://api.github.com/repos/huggingface/datasets/issues/2813/events
https://github.com/huggingface/datasets/issues/2813
973,470,580
MDU6SXNzdWU5NzM0NzA1ODA=
2,813
Remove compression from xopen
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
closed
false
null
[]
null
[]
2021-08-18T09:35:59Z
2021-08-23T15:59:14Z
2021-08-23T15:59:14Z
MEMBER
null
null
null
We implemented support for streaming with 2 requirements:
- transparent use for the end user: they just need to pass the parameter `streaming=True`
- no additional work for the contributors: previous loading scripts should also work in streaming mode with no (or minor) changes; and new loading scripts should not involve additional code to support streaming

In order to fulfill these requirements, the streaming implementation patched some Python functions:
- the `open(urlpath)` function was patched with `fsspec.open(urlpath)`
- the `os.path.join(urlpath, *others)` function was patched in order to add to `urlpath` hops (`::`) and extractor protocols (`zip://`), which are required by `fsspec.open`

Recently, we implemented support for streaming all archive+compression formats: zip, tar, gz, bz2, lz4, xz, zst; tar.gz, tar.bz2,...

Under the hood, the implementation passes an additional parameter `compression` to `fsspec.open`, so that it performs the decompression on the fly: `fsspec.open(urlpath, compression=...)`

Some concerns have been raised about passing the parameter `compression` to `fsspec.open`:
- https://github.com/huggingface/datasets/pull/2786#discussion_r689550254
- #2811

The main argument is that if `open` decompresses the file and afterwards we call `gzip.open` on it, that will raise an error in the `oscar` dataset:
```python
gzip.open(open(urlpath
```
While this is true:
- it is not natural/usual to call `open` inside `gzip.open` (never seen this before)
- indeed, this was recently (2 months ago) coded that way in `datasets` in order to allow streaming support (with the previous implementation of streaming)

In this particular case, there is a natural fix, proposed in #2811:
- Revert the `open` inside the `gzip.open` (a change done 2 months ago): `gzip.open(open(urlpath` => `gzip.open(urlpath`
- Patch `gzip.open(urlpath` with `fsspec.open(urlpath, compression="gzip"`

Are there other issues apart from this one? Note that the issue arises only because of the `open` inside the `gzip.open`. There is no issue in the other cases where dataset loading scripts use just:
- `gzip.open`
- `open` (after having called `dl_manager.download_and_extract`)

TODO:
- [ ] Is this really an issue? Please enumerate the `datasets` loading scripts where this is problematic.
  - For the moment, there are only 3 datasets with an `open` inside a `gzip.open`: oscar (since 23 June), mc4 (since 2 July) and c4 (since 2 July). In all 3 datasets, the only reason to put an `open` inside a `gzip.open` was indeed to force supporting streaming.
- [ ] If this is indeed an issue, what are the possible alternatives? Pros/cons?
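As a minimal sketch of the fix proposed in the second bullet above (the URL is a placeholder), patching `gzip.open(urlpath)` amounts to letting `fsspec` handle both the remote access and the decompression:

```python
import fsspec

# Equivalent of the patched gzip.open(urlpath): fsspec fetches the remote
# file and decompresses it on the fly, line by line.
with fsspec.open("https://foo.bar/file.txt.gz", compression="gzip") as f:
    first_line = f.readline()
```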
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2813/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2813/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2812
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2812/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2812/comments
https://api.github.com/repos/huggingface/datasets/issues/2812/events
https://github.com/huggingface/datasets/issues/2812
972,936,889
MDU6SXNzdWU5NzI5MzY4ODk=
2,812
arXiv Dataset verification problem
{ "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eladsegal", "id": 13485709, "login": "eladsegal", "node_id": "MDQ6VXNlcjEzNDg1NzA5", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "repos_url": "https://api.github.com/users/eladsegal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "type": "User", "url": "https://api.github.com/users/eladsegal" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
open
false
null
[]
null
[]
2021-08-17T18:01:48Z
2022-01-19T14:15:35Z
null
CONTRIBUTOR
null
null
null
## Describe the bug `dataset_infos.json` for `arxiv_dataset` contains a fixed number of training examples, however the data (downloaded from an external source) is updated every week with additional examples. Therefore, loading the dataset without `ignore_verifications=True` results in a verification error.
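A hedged workaround sketch based on the report: skip the size/checksum verification so the weekly-growing dump still loads. The `data_dir` argument is an assumption, since the script may require a manually downloaded copy of the dump.

```python
from datasets import load_dataset

# ignore_verifications skips the recorded split-size checks that fail when
# the upstream dump has grown; "path/to/arxiv" is a placeholder.
ds = load_dataset("arxiv_dataset", data_dir="path/to/arxiv", ignore_verifications=True)
```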
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2812/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2812/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2811
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2811/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2811/comments
https://api.github.com/repos/huggingface/datasets/issues/2811/events
https://github.com/huggingface/datasets/pull/2811
972,522,480
MDExOlB1bGxSZXF1ZXN0NzE0MTAzNDIy
2,811
Fix stream oscar
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-17T10:10:59Z
2021-08-26T10:26:15Z
2021-08-26T10:26:14Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2811.diff", "html_url": "https://github.com/huggingface/datasets/pull/2811", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2811.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2811" }
Previously, an additional `open` was added to oscar to make it stream-compatible: 587bbb94e891b22863b312b99696e32708c379f4.

It was argued that this might be problematic: https://github.com/huggingface/datasets/pull/2786#discussion_r690045921

This PR:
- removes that additional `open`
- patches `gzip.open` with `xopen` + `compression="gzip"`
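A rough sketch of the patching idea (an illustration, not the actual `datasets` patching code; `xgzip_open` is a hypothetical name):

```python
import fsspec

def xgzip_open(filepath, *args, **kwargs):
    # Hypothetical streaming-mode replacement for gzip.open: delegate the
    # remote access and the gzip decompression to fsspec.
    return fsspec.open(filepath, compression="gzip").open()

# Usage sketch with a placeholder URL:
# with xgzip_open("https://foo.bar/file.txt.gz") as f:
#     line = f.readline()
```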
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2811/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2811/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2810
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2810/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2810/comments
https://api.github.com/repos/huggingface/datasets/issues/2810/events
https://github.com/huggingface/datasets/pull/2810
972,040,022
MDExOlB1bGxSZXF1ZXN0NzEzNjkzMTI1
2,810
Add WIT Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/13920778?v=4", "events_url": "https://api.github.com/users/hassiahk/events{/privacy}", "followers_url": "https://api.github.com/users/hassiahk/followers", "following_url": "https://api.github.com/users/hassiahk/following{/other_user}", "gists_url": "https://api.github.com/users/hassiahk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hassiahk", "id": 13920778, "login": "hassiahk", "node_id": "MDQ6VXNlcjEzOTIwNzc4", "organizations_url": "https://api.github.com/users/hassiahk/orgs", "received_events_url": "https://api.github.com/users/hassiahk/received_events", "repos_url": "https://api.github.com/users/hassiahk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hassiahk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hassiahk/subscriptions", "type": "User", "url": "https://api.github.com/users/hassiahk" }
[]
closed
false
null
[]
null
[]
2021-08-16T19:34:09Z
2022-05-06T12:27:29Z
2022-05-06T12:26:16Z
NONE
null
true
{ "diff_url": "https://github.com/huggingface/datasets/pull/2810.diff", "html_url": "https://github.com/huggingface/datasets/pull/2810", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2810.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2810" }
Adds Google's [WIT](https://github.com/google-research-datasets/wit) dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2810/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2810/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2809
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2809/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2809/comments
https://api.github.com/repos/huggingface/datasets/issues/2809/events
https://github.com/huggingface/datasets/pull/2809
971,902,613
MDExOlB1bGxSZXF1ZXN0NzEzNTc2Njcz
2,809
Add Beans Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nateraw", "id": 32437151, "login": "nateraw", "node_id": "MDQ6VXNlcjMyNDM3MTUx", "organizations_url": "https://api.github.com/users/nateraw/orgs", "received_events_url": "https://api.github.com/users/nateraw/received_events", "repos_url": "https://api.github.com/users/nateraw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "type": "User", "url": "https://api.github.com/users/nateraw" }
[]
closed
false
null
[]
null
[]
2021-08-16T16:22:33Z
2021-08-26T11:42:27Z
2021-08-26T11:42:27Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2809.diff", "html_url": "https://github.com/huggingface/datasets/pull/2809", "merged_at": "2021-08-26T11:42:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/2809.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2809" }
Adds the [beans](https://github.com/AI-Lab-Makerere/ibean/) image classification dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2809/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2809/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2808
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2808/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2808/comments
https://api.github.com/repos/huggingface/datasets/issues/2808/events
https://github.com/huggingface/datasets/issues/2808
971,882,320
MDU6SXNzdWU5NzE4ODIzMjA=
2,808
Enable streaming for Wikipedia corpora
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2021-08-16T15:59:12Z
2021-08-16T15:59:12Z
null
MEMBER
null
null
null
**Is your feature request related to a problem? Please describe.**
Several of the [Wikipedia corpora](https://huggingface.co/datasets?search=wiki) on the Hub involve quite large files that would be a good candidate for streaming. Currently it is not possible to stream these corpora:

```python
from datasets import load_dataset

# Throws ValueError: Builder wikipedia is not streamable.
wiki_dataset_streamed = load_dataset("wikipedia", "20200501.en", split="train", streaming=True)
```

Given that these corpora are derived from Wikipedia dumps in XML format which are then processed with Apache Beam, I am not sure whether streaming is possible in principle. The goal of this issue is to discuss whether this feature even makes sense :)

**Describe the solution you'd like**
It would be nice to be able to stream Wikipedia corpora from the Hub with something like

```python
from datasets import load_dataset

wiki_dataset_streamed = load_dataset("wikipedia", "20200501.en", split="train", streaming=True)
```
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/2808/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2808/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2807
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2807/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2807/comments
https://api.github.com/repos/huggingface/datasets/issues/2807/events
https://github.com/huggingface/datasets/pull/2807
971,849,863
MDExOlB1bGxSZXF1ZXN0NzEzNTMxNjIw
2,807
Add cats_vs_dogs dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nateraw", "id": 32437151, "login": "nateraw", "node_id": "MDQ6VXNlcjMyNDM3MTUx", "organizations_url": "https://api.github.com/users/nateraw/orgs", "received_events_url": "https://api.github.com/users/nateraw/received_events", "repos_url": "https://api.github.com/users/nateraw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "type": "User", "url": "https://api.github.com/users/nateraw" }
[]
closed
false
null
[]
null
[]
2021-08-16T15:21:11Z
2021-08-30T16:35:25Z
2021-08-30T16:35:24Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2807.diff", "html_url": "https://github.com/huggingface/datasets/pull/2807", "merged_at": "2021-08-30T16:35:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/2807.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2807" }
Adds Microsoft's [Cats vs. Dogs](https://www.microsoft.com/en-us/download/details.aspx?id=54765) dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2807/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2807/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2806
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2806/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2806/comments
https://api.github.com/repos/huggingface/datasets/issues/2806/events
https://github.com/huggingface/datasets/pull/2806
971,625,449
MDExOlB1bGxSZXF1ZXN0NzEzMzM5NDUw
2,806
Fix streaming tar files from canonical datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-16T11:10:28Z
2021-10-13T09:04:03Z
2021-10-13T09:04:02Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2806.diff", "html_url": "https://github.com/huggingface/datasets/pull/2806", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2806.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2806" }
Previous PR #2800 implemented support to stream remote tar files when passing the parameter `data_files`: they required a glob string `"*"`. However, this glob string creates an error when streaming canonical datasets (with a `join` after the `open`).

This PR fixes this issue and allows streaming tar files both from:
- canonical dataset scripts and
- data files

This PR also adds support for compressed tar files: `.tar.gz`, `.tar.bz2`, ...
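For background, a hedged sketch of how `fsspec` can expose a (possibly compressed) remote tar archive as a read-only filesystem, which is what the canonical-script pattern relies on; the URL and member name are placeholders:

```python
import fsspec

# Mount a remote tar.gz archive as a filesystem; recent fsspec versions can
# usually infer the compression, or it can be passed explicitly as here.
fs = fsspec.filesystem("tar", fo="https://foo.bar/archive.tar.gz", compression="gzip")
print(fs.ls("/"))            # list archive members
with fs.open("file.txt") as f:
    print(f.readline())
```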
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2806/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2806/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2805
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2805/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2805/comments
https://api.github.com/repos/huggingface/datasets/issues/2805/events
https://github.com/huggingface/datasets/pull/2805
971,436,456
MDExOlB1bGxSZXF1ZXN0NzEzMTc3MTI4
2,805
Fix streaming zip files from canonical datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-16T07:11:40Z
2021-08-16T10:34:00Z
2021-08-16T10:34:00Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2805.diff", "html_url": "https://github.com/huggingface/datasets/pull/2805", "merged_at": "2021-08-16T10:34:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/2805.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2805" }
Previous PR #2798 fixed streaming remote zip files when passing the parameter `data_files`. However, that broke streaming zip files used in canonical `datasets` scripts, which normally have a subsequent `join()` (patched with `xjoin()`) after `StreamingDownloadManager.download_and_extract()` is called.

This PR fixes this issue and allows streaming zip files both from:
- canonical dataset scripts and
- data files
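To illustrate why the `join` matters in streaming mode, here is a toy sketch (not the real `xjoin` implementation) of how joining a member name onto a chained zip URL must rewrite the URL rather than append a path segment:

```python
def toy_xjoin(base_url: str, member: str) -> str:
    """Toy illustration of chained-URL joining; not datasets' actual xjoin."""
    if "::" in base_url:
        protocol, rest = base_url.split("://", 1)
        inner = rest.split("::", 1)[1]
        return f"{protocol}://{member}::{inner}"
    return f"{base_url}/{member}"

print(toy_xjoin("zip://::https://foo.bar/data.zip", "train.csv"))
# -> zip://train.csv::https://foo.bar/data.zip
```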
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2805/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2805/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2804/comments
https://api.github.com/repos/huggingface/datasets/issues/2804/events
https://github.com/huggingface/datasets/pull/2804
971,353,437
MDExOlB1bGxSZXF1ZXN0NzEzMTA2NTMw
2,804
Add Food-101
{ "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nateraw", "id": 32437151, "login": "nateraw", "node_id": "MDQ6VXNlcjMyNDM3MTUx", "organizations_url": "https://api.github.com/users/nateraw/orgs", "received_events_url": "https://api.github.com/users/nateraw/received_events", "repos_url": "https://api.github.com/users/nateraw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "type": "User", "url": "https://api.github.com/users/nateraw" }
[]
closed
false
null
[]
null
[]
2021-08-16T04:26:15Z
2021-08-20T14:31:33Z
2021-08-19T12:48:06Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2804.diff", "html_url": "https://github.com/huggingface/datasets/pull/2804", "merged_at": "2021-08-19T12:48:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/2804.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2804" }
Adds image classification dataset [Food-101](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2804/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2804/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2803
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2803/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2803/comments
https://api.github.com/repos/huggingface/datasets/issues/2803/events
https://github.com/huggingface/datasets/pull/2803
970,858,928
MDExOlB1bGxSZXF1ZXN0NzEyNzQxODMz
2,803
add stack exchange
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[]
closed
false
null
[]
null
[]
2021-08-14T08:11:02Z
2021-08-19T10:07:33Z
2021-08-19T08:07:38Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2803.diff", "html_url": "https://github.com/huggingface/datasets/pull/2803", "merged_at": "2021-08-19T08:07:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/2803.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2803" }
Stack Exchange is part of EleutherAI/The Pile, but AFAIK The Pile dataset blends all its sub-datasets together, so we are not able to use just one of them on its own. So I created an independent dataset using The Pile preliminary components.

I also changed the default `timeout` to 100 seconds instead of 10 seconds, otherwise I kept getting read timeouts when downloading the source data of the stack exchange and cc100 datasets.

While creating the dataset card, I found there is room for improvement in how dataset cards are created and edited; I've made it an issue: #2797

Also, I am wondering whether the import of The Pile dataset is actively being worked on (because I may need it soon)? #1675
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2803/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2803/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2802
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2802/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2802/comments
https://api.github.com/repos/huggingface/datasets/issues/2802/events
https://github.com/huggingface/datasets/pull/2802
970,848,302
MDExOlB1bGxSZXF1ZXN0NzEyNzM0MTc3
2,802
add openwebtext2
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[]
closed
false
null
[]
null
[]
2021-08-14T07:09:03Z
2021-08-23T14:06:14Z
2021-08-23T14:06:14Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2802.diff", "html_url": "https://github.com/huggingface/datasets/pull/2802", "merged_at": "2021-08-23T14:06:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/2802.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2802" }
openwebtext2 is part of EleutherAI/The Pile, but AFAIK The Pile dataset blends all its sub-datasets together, so we are not able to use just one of them on its own. So I created an independent dataset using The Pile preliminary components.

While creating the dataset card, I found there is room for improvement in how dataset cards are created and edited; I've made it an issue: #2797

Also, I am wondering whether the import of The Pile dataset is actively being worked on (because I may need it soon)? #1675
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2802/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2802/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2801
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2801/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2801/comments
https://api.github.com/repos/huggingface/datasets/issues/2801/events
https://github.com/huggingface/datasets/pull/2801
970,844,617
MDExOlB1bGxSZXF1ZXN0NzEyNzMwODEz
2,801
add books3
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[]
closed
false
null
[]
null
[]
2021-08-14T07:04:25Z
2021-08-19T16:43:09Z
2021-08-18T15:36:59Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2801.diff", "html_url": "https://github.com/huggingface/datasets/pull/2801", "merged_at": "2021-08-18T15:36:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/2801.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2801" }
books3 is part of EleutherAI/The Pile, but AFAIK The Pile dataset blends all sub-datasets together, so we are not able to use just one of its sub-datasets from The Pile data. I therefore created an independent dataset using The Pile preliminary components. While creating the dataset card, I found there is room for improvement in creating/editing dataset cards, and I have opened an issue for it: #2797. I am also wondering whether the import of The Pile dataset is actively being undertaken (because I may need it soon)? #1675
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2801/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2801/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2800
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2800/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2800/comments
https://api.github.com/repos/huggingface/datasets/issues/2800/events
https://github.com/huggingface/datasets/pull/2800
970,819,988
MDExOlB1bGxSZXF1ZXN0NzEyNzExNTcx
2,800
Support streaming tar files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-14T04:40:17Z
2021-08-26T10:02:30Z
2021-08-14T04:55:57Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2800.diff", "html_url": "https://github.com/huggingface/datasets/pull/2800", "merged_at": "2021-08-14T04:55:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/2800.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2800" }
This PR adds support to stream tar files by using the `fsspec` tar protocol. It also uses the custom `readline` implemented in PR #2786. The corresponding test is implemented in PR #2786.
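For context, a minimal usage sketch of what this enables; this is not taken from the PR itself, and the archive URL and `text` builder choice are illustrative assumptions:

```python
from datasets import load_dataset

# Hypothetical tar archive of line-oriented text files (URL is made up).
data_files = "https://example.com/data/sample.tar"

# With fsspec's tar protocol, the archive can be iterated
# without downloading and extracting it first.
ds = load_dataset("text", data_files=data_files, split="train", streaming=True)
print(next(iter(ds)))
```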
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2800/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2800/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2799
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2799/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2799/comments
https://api.github.com/repos/huggingface/datasets/issues/2799/events
https://github.com/huggingface/datasets/issues/2799
970,507,351
MDU6SXNzdWU5NzA1MDczNTE=
2,799
Loading JSON throws ArrowNotImplementedError
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2021-08-13T15:31:48Z
2022-01-10T18:59:32Z
2022-01-10T18:59:32Z
MEMBER
null
null
null
## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no problem loading the dataset with `pandas` which suggests some incorrect type inference is being made on the `datasets` side. For example, the stack trace indicates that some URL fields are being parsed as timestamps. You can find a Colab notebook which reproduces the error [here](https://colab.research.google.com/drive/1YUCM0j1vx5ZrouQbYSzal6RwB4-Aoh4o?usp=sharing). **Edit:** If one repeatedly tries to load the dataset, it _eventually_ works but I think it would still be good to understand why it fails in the first place :) ## Steps to reproduce the bug ```python from datasets import load_dataset from huggingface_hub import hf_hub_url import pandas as pd # returns https://huggingface.co/datasets/lewtun/github-issues-test/resolve/main/issues-datasets.jsonl data_files = hf_hub_url(repo_id="lewtun/github-issues-test", filename="issues-datasets.jsonl", repo_type="dataset") # throws ArrowNotImplementedError dset = load_dataset("json", data_files=data_files, split="test") # no problem with pandas ... df = pd.read_json(data_files, orient="records", lines=True) df.head() ``` ## Expected results I can load any line-separated JSON file, similar to `pandas`. ## Actual results ``` --------------------------------------------------------------------------- ArrowNotImplementedError Traceback (most recent call last) <ipython-input-7-5b8e82b6c3a2> in <module>() ----> 1 dset = load_dataset("json", data_files=data_files, split="test") 9 frames /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowNotImplementedError: JSON conversion to struct<url: timestamp[s], html_url: timestamp[s], labels_url: timestamp[s], id: int64, node_id: timestamp[s], number: int64, title: timestamp[s], description: timestamp[s], creator: struct<login: timestamp[s], id: int64, node_id: timestamp[s], avatar_url: timestamp[s], gravatar_id: timestamp[s], url: timestamp[s], html_url: timestamp[s], followers_url: timestamp[s], following_url: timestamp[s], gists_url: timestamp[s], starred_url: timestamp[s], subscriptions_url: timestamp[s], organizations_url: timestamp[s], repos_url: timestamp[s], events_url: timestamp[s], received_events_url: timestamp[s], type: timestamp[s], site_admin: bool>, open_issues: int64, closed_issues: int64, state: timestamp[s], created_at: timestamp[s], updated_at: timestamp[s], due_on: timestamp[s], closed_at: timestamp[s]> is not supported ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyArrow version: 3.0.0
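One mitigation sketch, not proposed in the issue itself: pin the ambiguous columns to explicit types so that pyarrow's schema inference never runs on them. Only a few columns are shown for brevity; in practice every column in the file would likely need to be declared:

```python
from datasets import load_dataset, Features, Value

data_files = "https://huggingface.co/datasets/lewtun/github-issues-test/resolve/main/issues-datasets.jsonl"

# Assumed partial schema: URL-like fields are forced to strings so they
# cannot be mis-inferred as timestamps.
features = Features({
    "url": Value("string"),
    "html_url": Value("string"),
    "id": Value("int64"),
    "title": Value("string"),
})

dset = load_dataset("json", data_files=data_files, features=features, split="test")
```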
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2799/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2799/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2798
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2798/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2798/comments
https://api.github.com/repos/huggingface/datasets/issues/2798/events
https://github.com/huggingface/datasets/pull/2798
970,493,126
MDExOlB1bGxSZXF1ZXN0NzEyNDM3ODc2
2,798
Fix streaming zip files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-13T15:17:01Z
2021-08-16T14:16:50Z
2021-08-13T15:38:28Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2798.diff", "html_url": "https://github.com/huggingface/datasets/pull/2798", "merged_at": "2021-08-13T15:38:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/2798.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2798" }
Currently, streaming remote zip data files raises a `FileNotFoundError`: ```python data_files = f"https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip" ds = load_dataset("json", split="train", data_files=data_files, streaming=True) next(iter(ds)) ``` This PR fixes it by adding a glob string. The corresponding test is implemented in PR #2786.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2798/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2798/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2797
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2797/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2797/comments
https://api.github.com/repos/huggingface/datasets/issues/2797/events
https://github.com/huggingface/datasets/issues/2797
970,331,634
MDU6SXNzdWU5NzAzMzE2MzQ=
2,797
Make creating/editing dataset cards easier, by editing on site and dumping info from test command.
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2021-08-13T11:54:49Z
2021-08-14T08:42:09Z
null
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** Creating and editing dataset cards should be easy, but currently it is not: - If someone else knows information I don't (bias of the dataset, dataset curation, supported tasks, ...), they need to know that the description on hf.co comes from the README.md under github huggingface/datasets/datasets/<the dataset>, and be willing to make a PR to add or fix the information. - Much of the information is also saved in `dataset_info.json` (citation, description), but it still has to be written out again in README.md. - Contributors need to pip install and start a local server just to tag the dataset's size, and they may be creating the dataset on a lab server that cannot open a browser. - If anyone proposes a new tag, it doesn't show up in the list that other creators see (a Stack Overflow-style approach may be ideal). - The dataset card generator web app doesn't generate the required `Contributions` subsection for us. **Describe the solution you'd like** - Everyone (or at least the author/contributors) can edit the description, information, and tags of the dataset on the hf.co website, just like Wikipedia + Stack Overflow. - We can infer the actual data size, citation, data instances, ... from `dataset_info.json` and `dataset.arrow` via `datasets-cli test` (see the sketch below).
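As a hedged sketch of that last point (the file layout is assumed from how `datasets-cli test --save_infos` writes `dataset_infos.json`): much of the card could be pre-filled programmatically instead of retyped:

```python
import json

# Hypothetical path to a dataset's generated infos file.
with open("datasets/my_dataset/dataset_infos.json") as f:
    infos = json.load(f)

# Each top-level key is a config name mapping to its metadata.
for config_name, info in infos.items():
    print(config_name, info.get("dataset_size"), info.get("citation", "")[:80])
```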
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2797/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2797/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2796
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2796/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2796/comments
https://api.github.com/repos/huggingface/datasets/issues/2796/events
https://github.com/huggingface/datasets/pull/2796
970,235,846
MDExOlB1bGxSZXF1ZXN0NzEyMjE1ODM2
2,796
add cedr dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/22640075?v=4", "events_url": "https://api.github.com/users/naumov-al/events{/privacy}", "followers_url": "https://api.github.com/users/naumov-al/followers", "following_url": "https://api.github.com/users/naumov-al/following{/other_user}", "gists_url": "https://api.github.com/users/naumov-al/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/naumov-al", "id": 22640075, "login": "naumov-al", "node_id": "MDQ6VXNlcjIyNjQwMDc1", "organizations_url": "https://api.github.com/users/naumov-al/orgs", "received_events_url": "https://api.github.com/users/naumov-al/received_events", "repos_url": "https://api.github.com/users/naumov-al/repos", "site_admin": false, "starred_url": "https://api.github.com/users/naumov-al/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/naumov-al/subscriptions", "type": "User", "url": "https://api.github.com/users/naumov-al" }
[]
closed
false
null
[]
null
[]
2021-08-13T09:37:35Z
2021-08-27T16:01:36Z
2021-08-27T16:01:36Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2796.diff", "html_url": "https://github.com/huggingface/datasets/pull/2796", "merged_at": "2021-08-27T16:01:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/2796.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2796" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2796/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2796/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2794
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2794/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2794/comments
https://api.github.com/repos/huggingface/datasets/issues/2794/events
https://github.com/huggingface/datasets/issues/2794
969,728,545
MDU6SXNzdWU5Njk3Mjg1NDU=
2,794
Warnings and documentation about pickling incorrect
{ "avatar_url": "https://avatars.githubusercontent.com/u/1170062?v=4", "events_url": "https://api.github.com/users/mbforbes/events{/privacy}", "followers_url": "https://api.github.com/users/mbforbes/followers", "following_url": "https://api.github.com/users/mbforbes/following{/other_user}", "gists_url": "https://api.github.com/users/mbforbes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mbforbes", "id": 1170062, "login": "mbforbes", "node_id": "MDQ6VXNlcjExNzAwNjI=", "organizations_url": "https://api.github.com/users/mbforbes/orgs", "received_events_url": "https://api.github.com/users/mbforbes/received_events", "repos_url": "https://api.github.com/users/mbforbes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mbforbes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mbforbes/subscriptions", "type": "User", "url": "https://api.github.com/users/mbforbes" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[]
2021-08-12T23:09:13Z
2021-08-12T23:09:31Z
null
NONE
null
null
null
## Describe the bug I have a docs bug and a closely related docs enhancement suggestion! ### Bug The warning and documentation say "either `dill` or `pickle`" for fingerprinting. But it seems that `dill`, which is installed by `datasets` by default, _must_ work, or else the fingerprinting fails. Warning: https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L262 Docs: > For a transform to be hashable, it needs to be pickleable using dill or pickle. > – [docs](https://huggingface.co/docs/datasets/processing.html#fingerprinting) For my code, `pickle` works, but `dill` fails. The `dill` failure has already been reported in https://github.com/huggingface/datasets/issues/2643. However, the `dill` failure causes a hashing failure in the datasets library, without any falling back to `pickle`. This implies that it is not the case that either `dill` **or** `pickle` can work; rather, `dill` must work if it is installed. I think this would be more accurate wording, since `dill` is installed and used by default: https://github.com/huggingface/datasets/blob/c93525dc291346e54212567fa72d7d607befe937/setup.py#L83 ... and the hashing will fail if it fails. ### Enhancement I think it'd be very helpful to add to the documentation how to debug hashing failures. It took me a while to figure out how to diagnose this. There is a very nice two-liner by @lhoestq in https://github.com/huggingface/datasets/issues/2516#issuecomment-865173139: ```python from datasets.fingerprint import Hasher Hasher.hash(my_object) ``` I think adding this to the docs will help future users quickly debug any hashing troubles of their own :-) ## Steps to reproduce the bug `dill` but not `pickle` hashing failure in https://github.com/huggingface/datasets/issues/2643 ## Expected results If either `dill` or `pickle` can successfully hash, the hashing will succeed. ## Actual results If `dill` cannot hash, the hashing fails, even if `pickle` can. ## Environment info - `datasets` version: 1.9.0 - Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
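In the same spirit as that two-liner, a small diagnostic sketch (not from the issue) that checks each serializer independently to localize the failure:

```python
import pickle

import dill


def which_serializers_work(obj):
    """Report whether pickle and dill can each serialize obj."""
    results = {}
    for name, dumps in (("pickle", pickle.dumps), ("dill", dill.dumps)):
        try:
            dumps(obj)
            results[name] = "ok"
        except Exception as e:
            results[name] = f"fails: {e!r}"
    return results


# Lambdas are a classic case where the two serializers disagree.
print(which_serializers_work(lambda x: x + 1))
```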
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2794/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2794/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2793
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2793/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2793/comments
https://api.github.com/repos/huggingface/datasets/issues/2793/events
https://github.com/huggingface/datasets/pull/2793
968,967,773
MDExOlB1bGxSZXF1ZXN0NzExMDQ4NDY2
2,793
Fix type hint for data_files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-12T14:42:37Z
2021-08-12T15:35:29Z
2021-08-12T15:35:29Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2793.diff", "html_url": "https://github.com/huggingface/datasets/pull/2793", "merged_at": "2021-08-12T15:35:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/2793.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2793" }
Fix type hint for `data_files` in signatures and docstrings.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2793/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2793/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2792
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2792/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2792/comments
https://api.github.com/repos/huggingface/datasets/issues/2792/events
https://github.com/huggingface/datasets/pull/2792
968,650,274
MDExOlB1bGxSZXF1ZXN0NzEwNzUyMjc0
2,792
Update: GooAQ - add train/val/test splits
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
[]
closed
false
null
[]
null
[]
2021-08-12T11:40:18Z
2021-08-27T15:58:45Z
2021-08-27T15:58:14Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2792.diff", "html_url": "https://github.com/huggingface/datasets/pull/2792", "merged_at": "2021-08-27T15:58:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/2792.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2792" }
The [GooAQ](https://github.com/allenai/gooaq) dataset was recently updated upstream with train/val/test splits. This PR updates GooAQ here accordingly, adding the new splits and an updated README.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2792/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2792/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2791
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2791/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2791/comments
https://api.github.com/repos/huggingface/datasets/issues/2791/events
https://github.com/huggingface/datasets/pull/2791
968,360,314
MDExOlB1bGxSZXF1ZXN0NzEwNDgxNDAy
2,791
Fix typo in cnn_dailymail
{ "avatar_url": "https://avatars.githubusercontent.com/u/42531544?v=4", "events_url": "https://api.github.com/users/omaralsayed/events{/privacy}", "followers_url": "https://api.github.com/users/omaralsayed/followers", "following_url": "https://api.github.com/users/omaralsayed/following{/other_user}", "gists_url": "https://api.github.com/users/omaralsayed/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/omaralsayed", "id": 42531544, "login": "omaralsayed", "node_id": "MDQ6VXNlcjQyNTMxNTQ0", "organizations_url": "https://api.github.com/users/omaralsayed/orgs", "received_events_url": "https://api.github.com/users/omaralsayed/received_events", "repos_url": "https://api.github.com/users/omaralsayed/repos", "site_admin": false, "starred_url": "https://api.github.com/users/omaralsayed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omaralsayed/subscriptions", "type": "User", "url": "https://api.github.com/users/omaralsayed" }
[]
closed
false
null
[]
null
[]
2021-08-12T08:38:42Z
2021-08-12T11:17:59Z
2021-08-12T11:17:59Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2791.diff", "html_url": "https://github.com/huggingface/datasets/pull/2791", "merged_at": "2021-08-12T11:17:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/2791.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2791" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2791/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2791/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2790
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2790/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2790/comments
https://api.github.com/repos/huggingface/datasets/issues/2790/events
https://github.com/huggingface/datasets/pull/2790
967,772,181
MDExOlB1bGxSZXF1ZXN0NzA5OTI3NjM2
2,790
Fix typo in test_dataset_common
{ "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nateraw", "id": 32437151, "login": "nateraw", "node_id": "MDQ6VXNlcjMyNDM3MTUx", "organizations_url": "https://api.github.com/users/nateraw/orgs", "received_events_url": "https://api.github.com/users/nateraw/received_events", "repos_url": "https://api.github.com/users/nateraw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "type": "User", "url": "https://api.github.com/users/nateraw" }
[]
closed
false
null
[]
null
[]
2021-08-12T01:10:29Z
2021-08-12T11:31:29Z
2021-08-12T11:31:29Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2790.diff", "html_url": "https://github.com/huggingface/datasets/pull/2790", "merged_at": "2021-08-12T11:31:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/2790.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2790" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2790/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2790/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2789
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2789/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2789/comments
https://api.github.com/repos/huggingface/datasets/issues/2789/events
https://github.com/huggingface/datasets/pull/2789
967,361,934
MDExOlB1bGxSZXF1ZXN0NzA5NTQwMzY5
2,789
Updated dataset description of DaNE
{ "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}", "followers_url": "https://api.github.com/users/KennethEnevoldsen/followers", "following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}", "gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/KennethEnevoldsen", "id": 23721977, "login": "KennethEnevoldsen", "node_id": "MDQ6VXNlcjIzNzIxOTc3", "organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs", "received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events", "repos_url": "https://api.github.com/users/KennethEnevoldsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions", "type": "User", "url": "https://api.github.com/users/KennethEnevoldsen" }
[]
closed
false
null
[]
null
[]
2021-08-11T19:58:48Z
2021-08-12T16:10:59Z
2021-08-12T16:06:01Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2789.diff", "html_url": "https://github.com/huggingface/datasets/pull/2789", "merged_at": "2021-08-12T16:06:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/2789.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2789" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2789/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2789/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2788
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2788/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2788/comments
https://api.github.com/repos/huggingface/datasets/issues/2788/events
https://github.com/huggingface/datasets/issues/2788
967,149,389
MDU6SXNzdWU5NjcxNDkzODk=
2,788
How to sample every file in a list of files making up a split in a dataset when loading?
{ "avatar_url": "https://avatars.githubusercontent.com/u/11220949?v=4", "events_url": "https://api.github.com/users/brijow/events{/privacy}", "followers_url": "https://api.github.com/users/brijow/followers", "following_url": "https://api.github.com/users/brijow/following{/other_user}", "gists_url": "https://api.github.com/users/brijow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/brijow", "id": 11220949, "login": "brijow", "node_id": "MDQ6VXNlcjExMjIwOTQ5", "organizations_url": "https://api.github.com/users/brijow/orgs", "received_events_url": "https://api.github.com/users/brijow/received_events", "repos_url": "https://api.github.com/users/brijow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/brijow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brijow/subscriptions", "type": "User", "url": "https://api.github.com/users/brijow" }
[]
open
false
null
[]
null
[]
2021-08-11T17:43:21Z
2021-08-23T17:12:22Z
null
NONE
null
null
null
I am loading a dataset with multiple train, test, and validation files like this: ``` data_files_dict = { "train": [train_file1, train_file2], "test": [test_file1, test_file2], "val": [val_file1, val_file2] } dataset = datasets.load_dataset( "csv", data_files=data_files_dict, split=['train[:8]', 'test[:8]', 'val[:8]'] ) ``` However, this only selects the first 8 rows from train_file1, test_file1, val_file1, since they are the first files in the lists. I'm trying to formulate a split argument that can sample from each file specified in my list of files that make up each split. Is this type of splitting supported? If so, how can I do it?
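One possible workaround, sketched here rather than confirmed by the thread: load each file as its own dataset, slice it, and concatenate the slices (file names are the placeholders from the question):

```python
from datasets import concatenate_datasets, load_dataset

train_files = ["train_file1.csv", "train_file2.csv"]  # placeholder names

# Take the first 8 rows of *each* file, then merge the slices.
parts = [load_dataset("csv", data_files=f, split="train[:8]") for f in train_files]
train_sample = concatenate_datasets(parts)
print(len(train_sample))  # 16
```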
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2788/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2788/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2787
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2787/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2787/comments
https://api.github.com/repos/huggingface/datasets/issues/2787/events
https://github.com/huggingface/datasets/issues/2787
967,018,406
MDU6SXNzdWU5NjcwMTg0MDY=
2,787
ConnectionError: Couldn't reach https://raw.githubusercontent.com
{ "avatar_url": "https://avatars.githubusercontent.com/u/39627475?v=4", "events_url": "https://api.github.com/users/jinec/events{/privacy}", "followers_url": "https://api.github.com/users/jinec/followers", "following_url": "https://api.github.com/users/jinec/following{/other_user}", "gists_url": "https://api.github.com/users/jinec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jinec", "id": 39627475, "login": "jinec", "node_id": "MDQ6VXNlcjM5NjI3NDc1", "organizations_url": "https://api.github.com/users/jinec/orgs", "received_events_url": "https://api.github.com/users/jinec/received_events", "repos_url": "https://api.github.com/users/jinec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jinec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jinec/subscriptions", "type": "User", "url": "https://api.github.com/users/jinec" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2021-08-11T16:19:01Z
2021-11-24T06:25:38Z
2021-08-18T15:09:18Z
NONE
null
null
null
Hello, I am trying to run run_glue.py and it gives me this error - Traceback (most recent call last): File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module> main() File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 250, in main datasets = load_dataset("glue", data_args.task_name, cache_dir=model_args.cache_dir) File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\load.py", line 718, in load_dataset use_auth_token=use_auth_token, File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\load.py", line 320, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\utils\file_utils.py", line 291, in cached_path use_auth_token=download_config.use_auth_token, File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\utils\file_utils.py", line 623, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py This happens when trying to run: python run_glue.py --model_name_or_path bert-base-cased --task_name mrpc --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ./tmp/mrpc/ Is this something on my end? From what I can tell, this was re-fixed by @fullyz a few months ago. Thank you!
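A workaround sketch, assumed here rather than stated in the issue: fetch `glue.py` once by hand and point `load_dataset` at the local copy, so raw.githubusercontent.com is not contacted at runtime:

```python
from datasets import load_dataset

# Assumes glue.py was downloaded manually, e.g. from
# https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py
datasets = load_dataset("./glue.py", "mrpc")
```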
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2787/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2787/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2786
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2786/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2786/comments
https://api.github.com/repos/huggingface/datasets/issues/2786/events
https://github.com/huggingface/datasets/pull/2786
966,282,934
MDExOlB1bGxSZXF1ZXN0NzA4NTQwMzU0
2,786
Support streaming compressed files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-11T09:02:06Z
2021-08-17T05:28:39Z
2021-08-16T06:36:19Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2786.diff", "html_url": "https://github.com/huggingface/datasets/pull/2786", "merged_at": "2021-08-16T06:36:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/2786.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2786" }
Add support to stream compressed files (current options in fsspec): - bz2 - lz4 - xz - zstd cc: @lewtun
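For illustration, a streaming call over one of the newly supported compressions; the file URL is invented, and only the `.xz` suffix matters here:

```python
from datasets import load_dataset

data_files = "https://example.com/data/sample.jsonl.xz"  # hypothetical URL

ds = load_dataset("json", data_files=data_files, split="train", streaming=True)
print(next(iter(ds)))
```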
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2786/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2786/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2783
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2783/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2783/comments
https://api.github.com/repos/huggingface/datasets/issues/2783/events
https://github.com/huggingface/datasets/pull/2783
965,461,382
MDExOlB1bGxSZXF1ZXN0NzA3NzcxOTM3
2,783
Add KS task to SUPERB
{ "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anton-l", "id": 26864830, "login": "anton-l", "node_id": "MDQ6VXNlcjI2ODY0ODMw", "organizations_url": "https://api.github.com/users/anton-l/orgs", "received_events_url": "https://api.github.com/users/anton-l/received_events", "repos_url": "https://api.github.com/users/anton-l/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "type": "User", "url": "https://api.github.com/users/anton-l" }
[]
closed
false
null
[]
null
[]
2021-08-10T22:14:07Z
2021-08-12T16:45:01Z
2021-08-11T20:19:17Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2783.diff", "html_url": "https://github.com/huggingface/datasets/pull/2783", "merged_at": "2021-08-11T20:19:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/2783.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2783" }
Add the KS (keyword spotting) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051). - [s3prl instructions](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/README.md#ks-keyword-spotting) - [s3prl implementation](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/speech_commands/dataset.py) - [TFDS implementation](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/audio/speech_commands.py) Some notable quirks: - The dataset is originally single-archive (train+val+test all in one), but the test set has a "canonical" distribution in a separate archive, which is also used here (see `_split_ks_files()`). - The `_background_noise_`/`_silence_` audio files are much longer than others, so they require some sort of slicing for downstream training. I decided to leave the implementation of that up to the users, since TFDS and s3prl take different approaches (either slicing wavs deterministically, or subsampling randomly at runtime) Related to #2619.
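As a sketch of the deterministic option left to users (assumptions: audio is read with `soundfile` at 16 kHz, non-overlapping 1-second windows as in the Speech Commands setup):

```python
import soundfile as sf

# Slice a long _background_noise_ wav into fixed 1-second clips.
audio, sr = sf.read("_background_noise_/white_noise.wav")  # path is illustrative
clip_len = sr  # one second of samples
clips = [audio[i : i + clip_len] for i in range(0, len(audio) - clip_len + 1, clip_len)]
print(len(clips), "clips of", clip_len, "samples each")
```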
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/2783/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2783/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2782
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2782/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2782/comments
https://api.github.com/repos/huggingface/datasets/issues/2782/events
https://github.com/huggingface/datasets/pull/2782
964,858,439
MDExOlB1bGxSZXF1ZXN0NzA3MjQ5NDE5
2,782
Fix renaming of corpus_bleu args
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-10T11:02:34Z
2021-08-10T11:16:07Z
2021-08-10T11:16:07Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2782.diff", "html_url": "https://github.com/huggingface/datasets/pull/2782", "merged_at": "2021-08-10T11:16:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/2782.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2782" }
Last `sacrebleu` release (v2.0.0) has renamed the `sacrebleu.corpus_bleu` args from `(sys_stream, ref_streams)` to `(hypotheses, references)`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR passes the args without parameter names, so that the call is valid for all versions of `sacrebleu`. This is a partial hotfix of #2781. Close #2781.
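A minimal sketch of the version-agnostic positional call (example sentences invented):

```python
import sacrebleu

hyps = ["the cat sat on the mat"]
refs = [["the cat is on the mat"]]  # one reference stream

# Positional arguments work on both sacrebleu 1.x (sys_stream, ref_streams)
# and 2.x (hypotheses, references).
score = sacrebleu.corpus_bleu(hyps, refs)
print(score.score)
```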
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2782/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2782/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2781
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2781/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2781/comments
https://api.github.com/repos/huggingface/datasets/issues/2781/events
https://github.com/huggingface/datasets/issues/2781
964,805,351
MDU6SXNzdWU5NjQ4MDUzNTE=
2,781
Latest v2.0.0 release of sacrebleu has broken some metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2021-08-10T09:59:41Z
2021-08-10T11:16:07Z
2021-08-10T11:16:07Z
MEMBER
null
null
null
## Describe the bug After the `sacrebleu` v2.0.0 release (see changes here: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15), some of the `datasets` metrics are broken: - Default tokenizer `sacrebleu.DEFAULT_TOKENIZER` no longer exists: - #2739 - #2778 - Bleu tokenizers are no longer accessible with `sacrebleu.TOKENIZERS`: - #2779 - `corpus_bleu` args have been renamed from `(sys_stream, ref_streams)` to `(hypotheses, references)`: - #2782
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2781/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2781/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2780
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2780/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2780/comments
https://api.github.com/repos/huggingface/datasets/issues/2780/events
https://github.com/huggingface/datasets/pull/2780
964,794,764
MDExOlB1bGxSZXF1ZXN0NzA3MTk2NjA3
2,780
VIVOS dataset for Vietnamese ASR
{ "avatar_url": "https://avatars.githubusercontent.com/u/57580923?v=4", "events_url": "https://api.github.com/users/binh234/events{/privacy}", "followers_url": "https://api.github.com/users/binh234/followers", "following_url": "https://api.github.com/users/binh234/following{/other_user}", "gists_url": "https://api.github.com/users/binh234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/binh234", "id": 57580923, "login": "binh234", "node_id": "MDQ6VXNlcjU3NTgwOTIz", "organizations_url": "https://api.github.com/users/binh234/orgs", "received_events_url": "https://api.github.com/users/binh234/received_events", "repos_url": "https://api.github.com/users/binh234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/binh234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/binh234/subscriptions", "type": "User", "url": "https://api.github.com/users/binh234" }
[]
closed
false
null
[]
null
[]
2021-08-10T09:47:36Z
2021-08-12T11:09:30Z
2021-08-12T11:09:30Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2780.diff", "html_url": "https://github.com/huggingface/datasets/pull/2780", "merged_at": "2021-08-12T11:09:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/2780.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2780" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2780/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2780/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2779
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2779/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2779/comments
https://api.github.com/repos/huggingface/datasets/issues/2779/events
https://github.com/huggingface/datasets/pull/2779
964,775,085
MDExOlB1bGxSZXF1ZXN0NzA3MTgwNTgw
2,779
Fix sacrebleu tokenizers
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-10T09:24:27Z
2021-08-10T11:03:08Z
2021-08-10T10:57:54Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2779.diff", "html_url": "https://github.com/huggingface/datasets/pull/2779", "merged_at": "2021-08-10T10:57:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/2779.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2779" }
Last `sacrebleu` release (v2.0.0) has removed `sacrebleu.TOKENIZERS`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR hotfixes the bug by using a private function in `sacrebleu`: `sacrebleu.metrics.bleu._get_tokenizer()`. Eventually, this should be fixed further so that only public functions are used. This is a partial hotfix of #2781.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2779/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2779/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2778
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2778/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2778/comments
https://api.github.com/repos/huggingface/datasets/issues/2778/events
https://github.com/huggingface/datasets/pull/2778
964,737,422
MDExOlB1bGxSZXF1ZXN0NzA3MTQ5MTk2
2,778
Do not pass tokenize to sacrebleu
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-10T08:40:37Z
2021-08-10T10:03:37Z
2021-08-10T10:03:37Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2778.diff", "html_url": "https://github.com/huggingface/datasets/pull/2778", "merged_at": "2021-08-10T10:03:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/2778.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2778" }
The latest `sacrebleu` release (v2.0.0) has removed `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR stops passing `tokenize` to `sacrebleu` (note that the user cannot pass it anyway), so `sacrebleu` will use its own default, no matter where that default lives or how it is invoked. Related to #2739. This is a partial hotfix of #2781.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2778/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2778/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2777
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2777/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2777/comments
https://api.github.com/repos/huggingface/datasets/issues/2777/events
https://github.com/huggingface/datasets/pull/2777
964,696,380
MDExOlB1bGxSZXF1ZXN0NzA3MTEzNzg3
2,777
Use packaging to handle versions
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-10T07:51:39Z
2021-08-18T13:56:27Z
2021-08-18T13:56:27Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2777.diff", "html_url": "https://github.com/huggingface/datasets/pull/2777", "merged_at": "2021-08-18T13:56:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/2777.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2777" }
Use the `packaging` module to handle/validate/check versions of Python packages. Related to #2769.
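For context, a minimal illustration of what handling versions with the `packaging` module looks like in practice; this is a sketch of the stated intent only, not the exact code merged in this PR:

```python
# Minimal sketch of version handling with the `packaging` module
# (assumption: this mirrors the PR's intent, not its implementation).
from packaging import version

assert version.parse("1.11.0") >= version.parse("1.10")    # comparisons are semantic, not lexicographic
print(version.parse("2.1.0.dev612").base_version)          # -> "2.1.0", dev/post suffixes stripped
```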
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2777/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2777/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2776
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2776/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2776/comments
https://api.github.com/repos/huggingface/datasets/issues/2776/events
https://github.com/huggingface/datasets/issues/2776
964,400,596
MDU6SXNzdWU5NjQ0MDA1OTY=
2,776
document `config.HF_DATASETS_OFFLINE` and precedence
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2021-08-09T21:23:17Z
2021-08-09T21:23:17Z
null
MEMBER
null
null
null
https://github.com/huggingface/datasets/pull/1976 implemented `HF_DATASETS_OFFLINE`, but: 1. `config.HF_DATASETS_OFFLINE` is not documented 2. the precedence is not documented (env, config) I think the documentation should probably mirror what https://huggingface.co/docs/datasets/loading_datasets.html#from-the-huggingface-hub says about `datasets.config.IN_MEMORY_MAX_SIZE`: Quote: > The default in 🤗 Datasets is to memory-map the dataset on disk unless you set datasets.config.IN_MEMORY_MAX_SIZE different from 0 bytes (default). In that case, the dataset will be copied in-memory if its size is smaller than datasets.config.IN_MEMORY_MAX_SIZE bytes, and memory-mapped otherwise. This behavior can be enabled by setting either the configuration option datasets.config.IN_MEMORY_MAX_SIZE (higher precedence) or the environment variable HF_DATASETS_IN_MEMORY_MAX_SIZE (lower precedence) to nonzero. Context: we are trying to use `config.HF_DATASETS_OFFLINE` here: https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/48 but are uncertain whether it is safe, since it's not documented as a public API. Thank you! @lhoestq, @albertvillanova
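A short sketch of the two configuration routes the issue asks to document; both names are taken from the issue itself, and the precedence between them is exactly the open question:

```python
# Sketch of the two routes discussed above; which one wins is the
# undocumented part this issue asks to clarify.
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"   # env-var route: must be set before importing datasets

import datasets

print(datasets.config.HF_DATASETS_OFFLINE)  # reflects the env var, parsed at import time
datasets.config.HF_DATASETS_OFFLINE = True  # config-attribute route used in the linked PR
```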
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2776/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2776/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2775
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2775/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2775/comments
https://api.github.com/repos/huggingface/datasets/issues/2775/events
https://github.com/huggingface/datasets/issues/2775
964,303,626
MDU6SXNzdWU5NjQzMDM2MjY=
2,775
`generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()`
{ "avatar_url": "https://avatars.githubusercontent.com/u/1170062?v=4", "events_url": "https://api.github.com/users/mbforbes/events{/privacy}", "followers_url": "https://api.github.com/users/mbforbes/followers", "following_url": "https://api.github.com/users/mbforbes/following{/other_user}", "gists_url": "https://api.github.com/users/mbforbes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mbforbes", "id": 1170062, "login": "mbforbes", "node_id": "MDQ6VXNlcjExNzAwNjI=", "organizations_url": "https://api.github.com/users/mbforbes/orgs", "received_events_url": "https://api.github.com/users/mbforbes/received_events", "repos_url": "https://api.github.com/users/mbforbes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mbforbes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mbforbes/subscriptions", "type": "User", "url": "https://api.github.com/users/mbforbes" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[]
2021-08-09T19:28:51Z
2021-08-26T08:30:54Z
null
NONE
null
null
null
## Describe the bug **Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_seed()` has been called, and I think that using `set_seed()` is a standard procedure to aid reproducibility. I've added more details to reproduce this below. Hi there! I'm using my own local dataset and custom preprocessing function. My preprocessing function seems to be unpickle-able, perhaps because it is from a closure (will debug this separately). I get this warning, which is expected: https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L260-L265 However, what's not expected is that the `datasets` actually _does_ seem to cache and reuse this dataset between runs! After that line, the next thing that's logged looks like: ```text Loading cached processed dataset at /home/xxx/.cache/huggingface/datasets/csv/default-xxx/0.0.0/xxx/cache-xxx.arrow ``` The path is exactly the same each run (e.g., last 26 runs). This becomes a problem because I'll pass in the `--max_eval_samples` flag to the HuggingFace example script I'm running off of ([run_swag.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/multiple-choice/run_swag.py)). The fact that the cached dataset is reused means this flag gets ignored. I'll try to load 100 examples, and it will load the full cached 1,000,000. I think that https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L248 ... is actually consistent because randomness is being controlled in HuggingFace/Transformers for reproducibility. I've added a demo of this below. ## Steps to reproduce the bug ```python # Contents of print_fingerprint.py from transformers import set_seed from datasets.fingerprint import generate_random_fingerprint set_seed(42) print(generate_random_fingerprint()) ``` ```bash for i in {0..10}; do python print_fingerprint.py done 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d ``` ## Expected results After the "random hash" warning is emitted, a random hash is generated, and no outdated cached datasets are reused. ## Actual results After the "random hash" warning is emitted, an identical hash is generated each time, and an outdated cached dataset is reused each run. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
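A hedged workaround sketch for the stale-cache reuse described above: `load_from_cache_file` is a real parameter of `Dataset.map` that forces recomputation, though whether it addresses the underlying fingerprint collision is a separate question:

```python
# Minimal sketch: skip the (possibly stale) cached arrow file instead of
# trusting a fingerprint that transformers.set_seed() made deterministic.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello there", "general kenobi"]})
ds = ds.map(lambda ex: {"text": ex["text"].upper()}, load_from_cache_file=False)
```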
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2775/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2775/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2774
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2774/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2774/comments
https://api.github.com/repos/huggingface/datasets/issues/2774/events
https://github.com/huggingface/datasets/pull/2774
963,932,199
MDExOlB1bGxSZXF1ZXN0NzA2NDY2MDc0
2,774
Prevent .map from using multiprocessing when loading from cache
{ "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomasw21", "id": 24695242, "login": "thomasw21", "node_id": "MDQ6VXNlcjI0Njk1MjQy", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "repos_url": "https://api.github.com/users/thomasw21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "type": "User", "url": "https://api.github.com/users/thomasw21" }
[]
closed
false
null
[]
null
[]
2021-08-09T12:11:38Z
2021-09-09T10:20:28Z
2021-09-09T10:20:28Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2774.diff", "html_url": "https://github.com/huggingface/datasets/pull/2774", "merged_at": "2021-09-09T10:20:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/2774.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2774" }
## Context On our setup, we use different setups for training vs preprocessing datasets. Usually we are able to obtain a high number of cpus to preprocess, which allows us to use `num_proc`; however, we can't use as many during the training phase. Currently if we use `num_proc={whatever the preprocessing value was}` we load from cache, but we get: ``` Traceback (most recent call last): File "lib/python3.8/site-packages/multiprocess/pool.py", line 131, in worker put((job, i, result)) File "lib/python3.8/site-packages/multiprocess/queues.py", line 371, in put self._writer.send_bytes(obj) File "lib/python3.8/site-packages/multiprocess/connection.py", line 203, in send_bytes self._send_bytes(m[offset:offset + size]) File "lib/python3.8/site-packages/multiprocess/connection.py", line 414, in _send_bytes self._send(header + buf) File "lib/python3.8/site-packages/multiprocess/connection.py", line 371, in _send n = write(self._handle, buf) BrokenPipeError: [Errno 32] Broken pipe ``` Our current guess is that we're spawning too many processes compared to the number of cpus available, and it's running OOM. Also, we're loading this in a DDP setting, which means that for each GPU I need to spawn a high number of processes to match the preprocessing fingerprint. Instead, what we suggest: - Allow loading shards sequentially, sharing the same fingerprint as the multiprocessed one, in order to leverage multiprocessing when we actually generate the cache, and to skip it when loading from cache. ## Current issues ~I'm having a hard time making fingerprints match. For some reason, the multiprocessing and the sequential version generate two different hashes.~ **EDIT**: Turns out multiprocessing and sequential have different `transform` values for fingerprinting (check `fingerprint_transform`) when running `_map_single`: - sequential : `datasets.arrow_dataset.Dataset._map_single` - multiprocessing: `datasets.arrow_dataset._map_single` This discrepancy is caused by multiprocessing pickling the transform function; it doesn't seem to keep the `Dataset` hierarchy. I'm still unclear on why `func.__qualname__` isn't handled correctly in multiprocessing, but replacing `__qualname__` with `__name__` fixes the issue. ## What was done ~We tried to prevent the usage of multiprocessing when loading a dataset. Instead we load all cached shards sequentially.~ I couldn't find a nice way to obtain the cached_file_name and check that they all exist before deciding whether to use the multiprocessing flow or not. Instead I expose an optional boolean `sequential` in the `map` method. 
## TODO - [x] Check that the multiprocessed version and the sequential version produce the same output - [x] Check that sequential can load multiprocessed - [x] Check that multiprocessed can load sequential ## Test ```python from datasets import load_dataset from multiprocessing import Pool import random def process(batch, rng): length = len(batch["text"]) return {**batch, "processed_text": [f"PROCESSED {rng.random()}" for _ in range(length)]} dataset = load_dataset("stas/openwebtext-10k", split="train") print(dataset.column_names) print(type(dataset)) rng = random.Random(42) dataset1 = dataset.map(process, batched=True, batch_size=50, num_proc=4, fn_kwargs={"rng": rng}) # This one should be loaded from cache rng = random.Random(42) dataset2 = dataset.map(process, batched=True, batch_size=50, num_proc=4, fn_kwargs={"rng": rng}, sequential=True) # Just to check that the random generator was correct print(dataset1[-1]["processed_text"]) print(dataset2[-1]["processed_text"]) ``` ## Other solutions I chose to load everything sequentially, but we can probably find a way to load shards in parallel using another number of workers (essentially this would be an argument not used for fingerprinting, allowing us to load `m` shards using `n` processes, which would be very useful when the same dataset has to be loaded on two different setups, and we still want to leverage the cache). Also, we could use an env variable similar to `TOKENIZERS_PARALLELISM`, as this seems generally setup related (though this changes slightly if we use multiprocessing). cc @lhoestq (since I had previously asked you about `num_proc` being used for fingerprinting). Don't know if this is acceptable.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2774/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2774/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2773
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2773/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2773/comments
https://api.github.com/repos/huggingface/datasets/issues/2773/events
https://github.com/huggingface/datasets/issues/2773
963,730,497
MDU6SXNzdWU5NjM3MzA0OTc=
2,773
Remove dataset_infos.json
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
open
false
null
[]
null
[]
2021-08-09T07:43:19Z
2021-08-09T07:43:19Z
null
MEMBER
null
null
null
**Is your feature request related to a problem? Please describe.** As discussed, there are infos in the `dataset_infos.json` which are redundant and we could have them only in the README file. Others could be migrated to the README, like: "dataset_size", "size_in_bytes", "download_size", "splits.split_name.[num_bytes, num_examples]",... However, there are others that do not seem too meaningful in the README, like the checksums. **Describe the solution you'd like** Open a discussion to decide what to do with the `dataset_infos.json` files: which information to be migrated and/or which information to be kept. cc: @julien-c @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2773/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2773/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2772
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2772/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2772/comments
https://api.github.com/repos/huggingface/datasets/issues/2772/events
https://github.com/huggingface/datasets/issues/2772
963,348,834
MDU6SXNzdWU5NjMzNDg4MzQ=
2,772
Remove returned feature constrain
{ "avatar_url": "https://avatars.githubusercontent.com/u/33200481?v=4", "events_url": "https://api.github.com/users/PosoSAgapo/events{/privacy}", "followers_url": "https://api.github.com/users/PosoSAgapo/followers", "following_url": "https://api.github.com/users/PosoSAgapo/following{/other_user}", "gists_url": "https://api.github.com/users/PosoSAgapo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PosoSAgapo", "id": 33200481, "login": "PosoSAgapo", "node_id": "MDQ6VXNlcjMzMjAwNDgx", "organizations_url": "https://api.github.com/users/PosoSAgapo/orgs", "received_events_url": "https://api.github.com/users/PosoSAgapo/received_events", "repos_url": "https://api.github.com/users/PosoSAgapo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PosoSAgapo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PosoSAgapo/subscriptions", "type": "User", "url": "https://api.github.com/users/PosoSAgapo" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2021-08-08T04:01:30Z
2021-08-08T08:48:01Z
null
NONE
null
null
null
In the current version, the returned value of the map function has to be a list or ndarray. However, this makes it unsuitable for many tasks. In NLP, many features are sparse (e.g., verb words or noun chunks): if we want to assign different values to different words, we end up with a large sparse matrix when we only score useful words such as verbs. At large scale, saving such a matrix densely takes a lot of disk storage and makes it hard to read, so the normal approach is to save it in sparse form. However, NumPy does not support sparse arrays, so I have to use PyTorch or scipy to transform a matrix into a special sparse form, which cannot be converted into a list or ndarray. This violates the feature constraints of the map function. I do appreciate the convenience of the Datasets package, but I do not think the compulsory datatype constraint is necessary; in some cases we simply cannot transform the value into a list or ndarray. Is there any way to fix this, or to disable the compulsory datatype constraint?
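One workaround sketch under the current constraint: store only the COO components of the sparse matrix, which are plain lists and therefore pass the type check. This assumes scipy is available, and the scoring logic is a toy placeholder:

```python
import numpy as np
import scipy.sparse as sp
from datasets import Dataset

def sparse_to_lists(example):
    # Toy placeholder for a real verb/noun-chunk scorer producing a sparse row.
    dense = np.zeros(10_000, dtype=np.float32)
    dense[42] = 1.5
    coo = sp.coo_matrix(dense)
    # Keep only the COO components: plain lists satisfy map's returned-feature constraint.
    return {"score_indices": coo.col.tolist(), "score_values": coo.data.tolist()}

ds = Dataset.from_dict({"text": ["a verb-heavy sentence"]})
ds = ds.map(sparse_to_lists)
```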
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2772/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2772/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2771
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2771/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2771/comments
https://api.github.com/repos/huggingface/datasets/issues/2771/events
https://github.com/huggingface/datasets/pull/2771
963,257,036
MDExOlB1bGxSZXF1ZXN0NzA1OTExMDMw
2,771
[WIP][Common Voice 7] Add common voice 7.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[]
2021-08-07T16:01:10Z
2021-12-06T23:24:02Z
2021-12-06T23:24:02Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2771.diff", "html_url": "https://github.com/huggingface/datasets/pull/2771", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2771.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2771" }
This PR allows loading the new Common Voice dataset manually, as explained when doing: ```python from datasets import load_dataset ds = load_dataset("./datasets/datasets/common_voice_7", "ab") ``` => ``` Please follow the manual download instructions: You need to manually download the dataset from `https://commonvoice.mozilla.org/en/datasets`. Make sure you choose the version `Common Voice Corpus 7.0`. Choose a language of your choice and find the corresponding language-id, *e.g.*, `Abkhaz` with language-id `ab`. The following language-ids are available: ['ab', 'ar', 'as', 'az', 'ba', 'bas', 'be', 'bg', 'br', 'ca', 'cnh', 'cs', 'cv', 'cy', 'de', 'dv', 'el', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fr', 'fy-NL', 'ga-IE', 'gl', 'gn', 'ha', 'hi', 'hsb', 'hu', 'hy-AM', 'ia', 'id', 'it', 'ja', 'ka', 'kab', 'kk', 'kmr', 'ky', 'lg', 'lt', 'lv', 'mn', 'mt', 'nl', 'or', 'pa-IN', 'pl', 'pt', 'rm-sursilv', 'rm-vallader', 'ro', 'ru', 'rw', 'sah', 'sk', 'sl', 'sr', 'sv-SE', 'ta', 'th', 'tr', 'tt', 'ug', 'uk', 'ur', 'uz', 'vi', 'vot', 'zh-CN', 'zh-HK', 'zh-TW'] Next, you will have to enter your email address to download the dataset in the `tar.gz` format. Save the file under <path-to-file>. The file should then be extracted with: ``tar -xvzf <path-to-file>`` which will extract a folder called ``cv-corpus-7.0-2021-07-21``. The dataset can then be loaded with `datasets.load_dataset("common_voice", <language-id>, data_dir="<path-to-'cv-corpus-7.0-2021-07-21'-folder>", ignore_verifications=True)`. ``` Having followed those instructions, one can then load the data as follows: ```python from datasets import load_dataset ds = load_dataset("./datasets/datasets/common_voice_7", "ab", data_dir="./cv-corpus-7.0-2021-07-21/", ignore_verifications=True) ``` ## TODO - [ ] Discuss naming. Is the name "common_voice_7" ok here? The dataset script really differs in only one point from `common_voice.py`: all the metadata is different (more hours etc.) and it has to use a manual data dir for now - [ ] Ideally we should get a bundled download link. For `common_voice.py` there is a bundled download link: `https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/{}.tar.gz` that allows one to directly download the data. However, such a link is missing for Common Voice 7. I guess we should try to contact Common Voice about it and ask whether we could host the data or help otherwise somehow. See: https://github.com/common-voice/common-voice-bundler/issues/15 cc @yjernite - [ ] I did not compute the dataset.json, as it would mean that I'd have to download 76 datasets totalling around 1TB manually before running the checksum command. This just takes too much time. For now the user will have to add `ignore_verifications=True` to load the data. This step would also be much easier if we could get a bundled link - [ ] Add dummy data
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2771/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2771/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2770
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2770/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2770/comments
https://api.github.com/repos/huggingface/datasets/issues/2770/events
https://github.com/huggingface/datasets/pull/2770
963,246,512
MDExOlB1bGxSZXF1ZXN0NzA1OTAzMzIy
2,770
Add support for fast tokenizer in BertScore
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
2021-08-07T15:00:03Z
2021-08-09T12:34:43Z
2021-08-09T11:16:25Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2770.diff", "html_url": "https://github.com/huggingface/datasets/pull/2770", "merged_at": "2021-08-09T11:16:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/2770.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2770" }
This PR adds support for the fast tokenizer in BertScore, which was recently added to that lib. Fixes #2765
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2770/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2770/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2769
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2769/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2769/comments
https://api.github.com/repos/huggingface/datasets/issues/2769/events
https://github.com/huggingface/datasets/pull/2769
963,240,802
MDExOlB1bGxSZXF1ZXN0NzA1ODk5MTYy
2,769
Allow PyArrow from source
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[]
2021-08-07T14:26:44Z
2021-08-09T15:38:39Z
2021-08-09T15:38:39Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2769.diff", "html_url": "https://github.com/huggingface/datasets/pull/2769", "merged_at": "2021-08-09T15:38:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/2769.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2769" }
When installing pyarrow from source, the version looks like: ```python >>> import pyarrow; pyarrow.__version__ '2.1.0.dev612' ``` However, this breaks the install check at init of `datasets`. This PR makes sure that everything coming after the last `'.'` is removed.
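The normalization the PR describes can be sketched in one line; this illustrates the stated behavior, not necessarily the merged code:

```python
raw = "2.1.0.dev612"             # version string from a source install of pyarrow
base = raw.rsplit(".", 1)[0]     # drop everything after the last '.' -> "2.1.0"
print(base)
```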
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2769/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2769/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2768
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2768/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2768/comments
https://api.github.com/repos/huggingface/datasets/issues/2768/events
https://github.com/huggingface/datasets/issues/2768
963,229,173
MDU6SXNzdWU5NjMyMjkxNzM=
2,768
`ArrowInvalid: Added column's length must match table's length.` after using `select`
{ "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lvwerra", "id": 8264887, "login": "lvwerra", "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "repos_url": "https://api.github.com/users/lvwerra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "type": "User", "url": "https://api.github.com/users/lvwerra" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2021-08-07T13:17:29Z
2021-08-09T11:26:43Z
2021-08-09T11:26:43Z
MEMBER
null
null
null
## Describe the bug I would like to add a column to a downsampled dataset. However, I get an error message saying that the added column's length doesn't match the table's length, where the indicated length is that of the unsampled dataset. I suspect that the dataset size is not updated when calling `select`. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("tweets_hate_speech_detection")['train'].select(range(128)) ds = ds.add_column('ones', [1]*128) ``` ## Expected results I would expect a new column named `ones` filled with `1`. When I check the length of `ds` it says `128`. Interestingly, it works when calling `ds = ds.map(lambda x: x)` before adding the column. ## Actual results ```python --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) /var/folders/l4/2905jygx4tx5jv8_kn03vxsw0000gn/T/ipykernel_6301/868709636.py in <module> 1 from datasets import load_dataset 2 ds = load_dataset("tweets_hate_speech_detection")['train'].select(range(128)) ----> 3 ds = ds.add_column('ones', [0]*128) ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 183 } 184 # apply actual function --> 185 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 186 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 187 # re-apply format to the output ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 395 # Call actual function 396 --> 397 out = func(self, *args, **kwargs) 398 399 # Update fingerprint of in-place transforms + update in-place history of transforms ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_column(self, name, column, new_fingerprint) 2965 column_table = InMemoryTable.from_pydict({name: column}) 2966 # Concatenate tables horizontally -> 2967 table = ConcatenationTable.from_tables([self._data, column_table], axis=1) 2968 # Update features 2969 info = self.info.copy() ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in from_tables(cls, tables, axis) 715 table_blocks = to_blocks(table) 716 blocks = _extend_blocks(blocks, table_blocks, axis=axis) --> 717 return cls.from_blocks(blocks) 718 719 @property ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in from_blocks(cls, blocks) 663 return cls(table, blocks) 664 else: --> 665 table = cls._concat_blocks_horizontally_and_vertically(blocks) 666 return cls(table, blocks) 667 ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in _concat_blocks_horizontally_and_vertically(cls, blocks) 623 if not tables: 624 continue --> 625 pa_table_horizontally_concatenated = cls._concat_blocks(tables, axis=1) 626 pa_tables_to_concat_vertically.append(pa_table_horizontally_concatenated) 627 return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0) ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in _concat_blocks(blocks, axis) 612 else: 613 for name, col in zip(table.column_names, table.columns): --> 614 pa_table = pa_table.append_column(name, col) 615 return pa_table 616 else: ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.append_column() ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.add_column() 
~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Added column's length must match table's length. Expected length 31962 but got length 128 ``` ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 5.0.0
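A hedged workaround sketch, building on the report's own observation that a no-op `map` fixes it: `flatten_indices()` is a real `Dataset` method that materializes the selection, so the underlying arrow table length matches before the column is appended:

```python
from datasets import load_dataset

ds = load_dataset("tweets_hate_speech_detection")["train"].select(range(128))
ds = ds.flatten_indices()              # materialize the 128-row selection in the arrow table
ds = ds.add_column("ones", [1] * 128)  # now the lengths agree
```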
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2768/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2768/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2767
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2767/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2767/comments
https://api.github.com/repos/huggingface/datasets/issues/2767/events
https://github.com/huggingface/datasets/issues/2767
963,002,120
MDU6SXNzdWU5NjMwMDIxMjA=
2,767
equal operation to perform unbatch for huggingface datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/79288051?v=4", "events_url": "https://api.github.com/users/dorooddorood606/events{/privacy}", "followers_url": "https://api.github.com/users/dorooddorood606/followers", "following_url": "https://api.github.com/users/dorooddorood606/following{/other_user}", "gists_url": "https://api.github.com/users/dorooddorood606/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorooddorood606", "id": 79288051, "login": "dorooddorood606", "node_id": "MDQ6VXNlcjc5Mjg4MDUx", "organizations_url": "https://api.github.com/users/dorooddorood606/orgs", "received_events_url": "https://api.github.com/users/dorooddorood606/received_events", "repos_url": "https://api.github.com/users/dorooddorood606/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorooddorood606/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorooddorood606/subscriptions", "type": "User", "url": "https://api.github.com/users/dorooddorood606" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2021-08-06T19:45:52Z
2022-03-07T13:58:00Z
2022-03-07T13:58:00Z
NONE
null
null
null
Hi, I need to use an "unbatch" operation (like the one in TensorFlow) on a huggingface dataset. I could not find this operation; could you kindly direct me to how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGLUE, and I need to replicate each entry of the dataset for each answer, to make it similar to what T5 originally did: https://github.com/google-research/text-to-text-transfer-transformer/blob/3c58859b8fe72c2dbca6a43bc775aa510ba7e706/t5/data/preprocessors.py#L925 For example, a typical example from ReCoRD might look like { 'passage': 'This is the passage.', 'query': 'A @placeholder is a bird.', 'entities': ['penguin', 'potato', 'pigeon'], 'answers': ['penguin', 'pigeon'], } and I need a processor which would turn this example into the following two examples: { 'inputs': 'record query: A @placeholder is a bird. entities: penguin, ' 'potato, pigeon passage: This is the passage.', 'targets': 'penguin', } and { 'inputs': 'record query: A @placeholder is a bird. entities: penguin, ' 'potato, pigeon passage: This is the passage.', 'targets': 'pigeon', } For doing this, one needs unbatch, as each entry can map to multiple samples depending on the number of answers. I am not sure how to perform this operation with the huggingface datasets library and would greatly appreciate your help @lhoestq Thank you very much.
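For reference, a hedged sketch of one way to emulate "unbatch" with `datasets`: a batched `map` may return more rows than it receives as long as the original columns are removed, which is enough to replicate each ReCoRD entry once per answer:

```python
from datasets import Dataset

# Toy dataset mirroring the ReCoRD example above.
ds = Dataset.from_dict({
    "passage": ["This is the passage."],
    "query": ["A @placeholder is a bird."],
    "entities": [["penguin", "potato", "pigeon"]],
    "answers": [["penguin", "pigeon"]],
})

def explode(batch):
    inputs, targets = [], []
    for passage, query, entities, answers in zip(
        batch["passage"], batch["query"], batch["entities"], batch["answers"]
    ):
        prompt = f"record query: {query} entities: {', '.join(entities)} passage: {passage}"
        for answer in answers:  # one output row per answer
            inputs.append(prompt)
            targets.append(answer)
    return {"inputs": inputs, "targets": targets}

# remove_columns lets the output batch be a different size than the input batch
ds = ds.map(explode, batched=True, remove_columns=ds.column_names)
```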
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2767/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2767/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2766
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2766/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2766/comments
https://api.github.com/repos/huggingface/datasets/issues/2766/events
https://github.com/huggingface/datasets/pull/2766
962,994,198
MDExOlB1bGxSZXF1ZXN0NzA1NzAyNjM5
2,766
fix typo (ShuffingConfig -> ShufflingConfig)
{ "avatar_url": "https://avatars.githubusercontent.com/u/4944007?v=4", "events_url": "https://api.github.com/users/daleevans/events{/privacy}", "followers_url": "https://api.github.com/users/daleevans/followers", "following_url": "https://api.github.com/users/daleevans/following{/other_user}", "gists_url": "https://api.github.com/users/daleevans/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/daleevans", "id": 4944007, "login": "daleevans", "node_id": "MDQ6VXNlcjQ5NDQwMDc=", "organizations_url": "https://api.github.com/users/daleevans/orgs", "received_events_url": "https://api.github.com/users/daleevans/received_events", "repos_url": "https://api.github.com/users/daleevans/repos", "site_admin": false, "starred_url": "https://api.github.com/users/daleevans/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daleevans/subscriptions", "type": "User", "url": "https://api.github.com/users/daleevans" }
[]
closed
false
null
[]
null
[]
2021-08-06T19:31:40Z
2021-08-10T14:17:03Z
2021-08-10T14:17:02Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2766.diff", "html_url": "https://github.com/huggingface/datasets/pull/2766", "merged_at": "2021-08-10T14:17:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/2766.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2766" }
Pretty straightforward: it should be "Shuffling" instead of "Shuffing".
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2766/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2766/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2765
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2765/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2765/comments
https://api.github.com/repos/huggingface/datasets/issues/2765/events
https://github.com/huggingface/datasets/issues/2765
962,861,395
MDU6SXNzdWU5NjI4NjEzOTU=
2,765
BERTScore Error
{ "avatar_url": "https://avatars.githubusercontent.com/u/49101362?v=4", "events_url": "https://api.github.com/users/gagan3012/events{/privacy}", "followers_url": "https://api.github.com/users/gagan3012/followers", "following_url": "https://api.github.com/users/gagan3012/following{/other_user}", "gists_url": "https://api.github.com/users/gagan3012/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gagan3012", "id": 49101362, "login": "gagan3012", "node_id": "MDQ6VXNlcjQ5MTAxMzYy", "organizations_url": "https://api.github.com/users/gagan3012/orgs", "received_events_url": "https://api.github.com/users/gagan3012/received_events", "repos_url": "https://api.github.com/users/gagan3012/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gagan3012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gagan3012/subscriptions", "type": "User", "url": "https://api.github.com/users/gagan3012" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2021-08-06T15:58:57Z
2021-08-09T11:16:25Z
2021-08-09T11:16:25Z
NONE
null
null
null
## Steps to reproduce the bug ```python predictions = ["hello there", "general kenobi"] references = ["hello there", "general kenobi"] bert = load_metric('bertscore') bert.compute(predictions=predictions, references=references,lang='en') ``` # Bug `TypeError: get_hash() missing 1 required positional argument: 'use_fast_tokenizer'` ## Environment info - `datasets` version: - Platform: Colab - Python version: - PyArrow version:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2765/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2765/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2764
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2764/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2764/comments
https://api.github.com/repos/huggingface/datasets/issues/2764/events
https://github.com/huggingface/datasets/pull/2764
962,554,799
MDExOlB1bGxSZXF1ZXN0NzA1MzI3MDQ5
2,764
Add DER metric for SUPERB speaker diarization task
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "E3165C", "default": false, "description": "", "id": 4190228726, "name": "transfer-to-evaluate", "node_id": "LA_kwDODunzps75wdD2", "url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate" } ]
open
false
null
[]
null
[]
2021-08-06T09:12:36Z
2022-09-23T08:10:39Z
null
MEMBER
null
true
{ "diff_url": "https://github.com/huggingface/datasets/pull/2764.diff", "html_url": "https://github.com/huggingface/datasets/pull/2764", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2764.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2764" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2764/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2764/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2763
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2763/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2763/comments
https://api.github.com/repos/huggingface/datasets/issues/2763/events
https://github.com/huggingface/datasets/issues/2763
961,895,523
MDU6SXNzdWU5NjE4OTU1MjM=
2,763
English wikipedia datasets is not clean
{ "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "events_url": "https://api.github.com/users/lucadiliello/events{/privacy}", "followers_url": "https://api.github.com/users/lucadiliello/followers", "following_url": "https://api.github.com/users/lucadiliello/following{/other_user}", "gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucadiliello", "id": 23355969, "login": "lucadiliello", "node_id": "MDQ6VXNlcjIzMzU1OTY5", "organizations_url": "https://api.github.com/users/lucadiliello/orgs", "received_events_url": "https://api.github.com/users/lucadiliello/received_events", "repos_url": "https://api.github.com/users/lucadiliello/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions", "type": "User", "url": "https://api.github.com/users/lucadiliello" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[]
2021-08-05T14:37:24Z
2021-08-23T17:00:16Z
null
CONTRIBUTOR
null
null
null
## Describe the bug Wikipedia english dumps contain many wikipedia paragraphs like "References", "Category:" and "See Also" that should not be used for training. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset w = load_dataset('wikipedia', '20200501.en') print(w['train'][0]['text']) ``` > 'Yangliuqing () is a market town in Xiqing District, in the western suburbs of Tianjin, People\'s Republic of China. Despite its relatively small size, it has been named since 2006 in the "famous historical and cultural market towns in China".\n\nIt is best known in China for creating nianhua or Yangliuqing nianhua. For more than 400 years, Yangliuqing has in effect specialised in the creation of these woodcuts for the New Year. wood block prints using vivid colourschemes to portray traditional scenes of children\'s games often interwoven with auspiciouse objects.\n\n, it had 27 residential communities () and 25 villages under its administration.\n\nShi Family Grand Courtyard\n\nShi Family Grand Courtyard (Tiānjīn Shí Jiā Dà Yuàn, 天津石家大院) is situated in Yangliuqing Town of Xiqing District, which is the former residence of wealthy merchant Shi Yuanshi - the 4th son of Shi Wancheng, one of the eight great masters in Tianjin. First built in 1875, it covers over 6,000 square meters, including large and small yards and over 200 folk houses, a theater and over 275 rooms that served as apartments and places of business and worship for this powerful family. Shifu Garden, which finished its expansion in October 2003, covers 1,200 square meters, incorporates the elegance of imperial garden and delicacy of south garden. Now the courtyard of Shi family covers about 10,000 square meters, which is called the first mansion in North China. Now it serves as the folk custom museum in Yangliuqing, which has a large collection of folk custom museum in Yanliuqing, which has a large collection of folk art pieces like Yanliuqing New Year pictures, brick sculpture.\n\nShi\'s ancestor came from Dong\'e County in Shandong Province, engaged in water transport of grain. As the wealth gradually accumulated, the Shi Family moved to Yangliuqing and bought large tracts of land and set up their residence. Shi Yuanshi came from the fourth generation of the family, who was a successful businessman and a good household manager, and the residence was thus enlarged for several times until it acquired the present scale. It is believed to be the first mansion in the west of Tianjin.\n\nThe residence is symmetric based on the axis formed by a passageway in the middle, on which there are four archways. On the east side of the courtyard, there are traditional single-story houses with rows of rooms around the four sides, which was once the living area for the Shi Family. The rooms on north side were the accountants\' office. On the west are the major constructions including the family hall for worshipping Buddha, theater and the south reception room. On both sides of the residence are side yard rooms for maids and servants.\n\nToday, the Shi mansion, located in the township of Yangliuqing to the west of central Tianjin, stands as a surprisingly well-preserved monument to China\'s pre-revolution mercantile spirit. It also serves as an on-location shoot for many of China\'s popular historical dramas. 
Many of the rooms feature period furniture, paintings and calligraphy, and the extensive Shifu Garden.\n\nPart of the complex has been turned into the Yangliuqing Museum, which includes displays focused on symbolic aspects of the courtyards\' construction, local folk art and customs, and traditional period furnishings and crafts.\n\n**See also \n\nList of township-level divisions of Tianjin\n\nReferences \n\n http://arts.cultural-china.com/en/65Arts4795.html\n\nCategory:Towns in Tianjin'** ## Expected results I expect no junk in the data. ## Environment info - `datasets` version: 1.10.2 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 3.0.0
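Until the dump is cleaned upstream, a hypothetical post-processing sketch that strips the trailing sections shown above; the header list is an assumption and not exhaustive:

```python
import re

def strip_wiki_tail(text: str) -> str:
    # Hypothetical cleanup: cut at the first trailing section header, then drop Category lines.
    text = re.split(r"\nSee also\b|\nReferences\b|\nExternal links\b", text, maxsplit=1)[0]
    return "\n".join(line for line in text.splitlines() if not line.startswith("Category:"))

sample = "Main article text.\n\nReferences \n\nhttp://example.com\n\nCategory:Towns in Tianjin"
print(strip_wiki_tail(sample))  # -> "Main article text.\n"
```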
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2763/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2763/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2762
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2762/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2762/comments
https://api.github.com/repos/huggingface/datasets/issues/2762/events
https://github.com/huggingface/datasets/issues/2762
961,652,046
MDU6SXNzdWU5NjE2NTIwNDY=
2,762
Add RVL-CDIP dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" } ]
null
[]
2021-08-05T09:57:05Z
2022-04-21T17:15:41Z
2022-04-21T17:15:41Z
CONTRIBUTOR
null
null
null
## Adding a Dataset - **Name:** RVL-CDIP - **Description:** The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels. - **Paper:** https://www.cs.cmu.edu/~aharley/icdar15/ - **Data:** https://www.cs.cmu.edu/~aharley/rvl-cdip/ - **Motivation:** I'm currently adding LayoutLMv2 and LayoutXLM to HuggingFace Transformers. LayoutLM (v1) already exists in the library. This dataset is of great value for document image classification (i.e. classifying scanned documents). LayoutLM models obtain SOTA results on this dataset, so it would be great to be able to use it directly in notebooks. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2762/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2762/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2761
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2761/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2761/comments
https://api.github.com/repos/huggingface/datasets/issues/2761/events
https://github.com/huggingface/datasets/issues/2761
961,568,287
MDU6SXNzdWU5NjE1NjgyODc=
2,761
Error loading C4 realnewslike dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/32061512?v=4", "events_url": "https://api.github.com/users/danshirron/events{/privacy}", "followers_url": "https://api.github.com/users/danshirron/followers", "following_url": "https://api.github.com/users/danshirron/following{/other_user}", "gists_url": "https://api.github.com/users/danshirron/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/danshirron", "id": 32061512, "login": "danshirron", "node_id": "MDQ6VXNlcjMyMDYxNTEy", "organizations_url": "https://api.github.com/users/danshirron/orgs", "received_events_url": "https://api.github.com/users/danshirron/received_events", "repos_url": "https://api.github.com/users/danshirron/repos", "site_admin": false, "starred_url": "https://api.github.com/users/danshirron/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danshirron/subscriptions", "type": "User", "url": "https://api.github.com/users/danshirron" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2021-08-05T08:16:58Z
2021-08-08T19:44:34Z
2021-08-08T19:44:34Z
NONE
null
null
null
## Describe the bug Error loading the C4 realnewslike dataset: the recorded validation split does not match the expected split sizes. ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ``` ## Expected results Successful data loading. ## Actual results ``` Downloading: 100%|██████████| 15.3M/15.3M [00:00<00:00, 28.1MB/s] Traceback (most recent call last): File "run_mlm_tf.py", line 794, in <module> main() File "run_mlm_tf.py", line 425, in main raw_datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir) File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/load.py", line 843, in load_dataset builder_instance.download_and_prepare( File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py", line 608, in download_and_prepare self._download_and_prepare( File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py", line 698, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='validation', num_bytes=38165657946, num_examples=13799838, dataset_name='c4'), 'recorded': SplitInfo(name='validation', num_bytes=37875873, num_examples=13863, dataset_name='c4')}] ``` ## Environment info - `datasets` version: 1.10.2 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 4.0.1
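If the split metadata is simply stale, a possible stopgap is to skip the split-size verification; a minimal sketch, assuming the `ignore_verifications` flag available in this version of `datasets`:

```python
from datasets import load_dataset

# Bypass the NonMatchingSplitsSizesError while the recorded split sizes
# are out of date; the downloaded data itself is still used as-is.
raw_datasets = load_dataset(
    "c4",
    "realnewslike",
    cache_dir="./cache",        # hypothetical cache directory
    ignore_verifications=True,  # skip checksum/split-size checks
)
print(raw_datasets["validation"].num_rows)
```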
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2761/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2761/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2760
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2760/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2760/comments
https://api.github.com/repos/huggingface/datasets/issues/2760/events
https://github.com/huggingface/datasets/issues/2760
961,372,667
MDU6SXNzdWU5NjEzNzI2Njc=
2,760
Add Nuswide dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/19774925?v=4", "events_url": "https://api.github.com/users/shivangibithel/events{/privacy}", "followers_url": "https://api.github.com/users/shivangibithel/followers", "following_url": "https://api.github.com/users/shivangibithel/following{/other_user}", "gists_url": "https://api.github.com/users/shivangibithel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shivangibithel", "id": 19774925, "login": "shivangibithel", "node_id": "MDQ6VXNlcjE5Nzc0OTI1", "organizations_url": "https://api.github.com/users/shivangibithel/orgs", "received_events_url": "https://api.github.com/users/shivangibithel/received_events", "repos_url": "https://api.github.com/users/shivangibithel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shivangibithel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shivangibithel/subscriptions", "type": "User", "url": "https://api.github.com/users/shivangibithel" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
open
false
null
[]
null
[]
2021-08-05T03:00:41Z
2021-12-08T12:06:23Z
null
NONE
null
null
null
## Adding a Dataset - **Name:** *NUSWIDE* - **Description:** *[A Real-World Web Image Dataset from National University of Singapore](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html)* - **Paper:** *[here](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/nuswide-civr2009.pdf)* - **Data:** *[here](https://github.com/wenting-zhao/nuswide)* - **Motivation:** *This dataset is a benchmark in the Text Retrieval task.* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2760/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2760/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2758
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2758/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2758/comments
https://api.github.com/repos/huggingface/datasets/issues/2758/events
https://github.com/huggingface/datasets/pull/2758
960,206,575
MDExOlB1bGxSZXF1ZXN0NzAzMjQ5Nzky
2,758
Raise ManualDownloadError when loading a dataset that requires previous manual download
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-04T10:19:55Z
2021-08-04T11:36:30Z
2021-08-04T11:36:30Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2758.diff", "html_url": "https://github.com/huggingface/datasets/pull/2758", "merged_at": "2021-08-04T11:36:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/2758.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2758" }
This PR raises a `ManualDownloadError` when loading a dataset that requires a previous manual download and the manually downloaded data is missing. The `ManualDownloadError` is raised whether the dataset is loaded in normal or streaming mode. Close #2749. cc: @severo
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2758/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2758/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2757
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2757/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2757/comments
https://api.github.com/repos/huggingface/datasets/issues/2757/events
https://github.com/huggingface/datasets/issues/2757
959,984,081
MDU6SXNzdWU5NTk5ODQwODE=
2,757
Unexpected type after `concatenate_datasets`
{ "avatar_url": "https://avatars.githubusercontent.com/u/32683010?v=4", "events_url": "https://api.github.com/users/JulesBelveze/events{/privacy}", "followers_url": "https://api.github.com/users/JulesBelveze/followers", "following_url": "https://api.github.com/users/JulesBelveze/following{/other_user}", "gists_url": "https://api.github.com/users/JulesBelveze/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JulesBelveze", "id": 32683010, "login": "JulesBelveze", "node_id": "MDQ6VXNlcjMyNjgzMDEw", "organizations_url": "https://api.github.com/users/JulesBelveze/orgs", "received_events_url": "https://api.github.com/users/JulesBelveze/received_events", "repos_url": "https://api.github.com/users/JulesBelveze/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JulesBelveze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JulesBelveze/subscriptions", "type": "User", "url": "https://api.github.com/users/JulesBelveze" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2021-08-04T07:10:39Z
2021-08-04T16:01:24Z
2021-08-04T16:01:23Z
NONE
null
null
null
## Describe the bug I am trying to concatenate two `Dataset` objects using `concatenate_datasets`, but it turns out that after concatenation the features are cast from `torch.Tensor` to `list`. This then leads to weird tensors when wrapping the result in a `DataLoader`. However, if I use each `Dataset` separately, everything behaves as expected. ## Steps to reproduce the bug ```python >>> featurized_teacher Dataset({ features: ['t_labels', 't_input_ids', 't_token_type_ids', 't_attention_mask'], num_rows: 502 }) >>> for f in featurized_teacher.features: print(featurized_teacher[f].shape) torch.Size([502]) torch.Size([502, 300]) torch.Size([502, 300]) torch.Size([502, 300]) >>> featurized_student Dataset({ features: ['s_features', 's_labels'], num_rows: 502 }) >>> for f in featurized_student.features: print(featurized_student[f].shape) torch.Size([502, 64]) torch.Size([502]) ``` The shapes seem alright to me. Then the results after concatenation are as follows: ```python >>> concat_dataset = datasets.concatenate_datasets([featurized_student, featurized_teacher], axis=1) >>> type(concat_dataset["t_labels"]) <class 'list'> ``` One would expect to obtain the same type as before concatenation. Am I doing something wrong here? Any idea on how to fix this unexpected behavior? ## Environment info - `datasets` version: 1.9.0 - Platform: macOS-10.14.6-x86_64-i386-64bit - Python version: 3.9.5 - PyArrow version: 3.0.0
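A possible workaround, assuming the torch output format is simply not carried over by `concatenate_datasets`, is to re-apply it on the result (`featurized_student` and `featurized_teacher` are the datasets from the snippet above); a minimal sketch:

```python
import datasets

# Concatenate along columns as in the report above.
concat_dataset = datasets.concatenate_datasets(
    [featurized_student, featurized_teacher], axis=1
)

# Ask the dataset to return torch tensors again for all columns
# (the output format is assumed to be reset by the concatenation).
concat_dataset.set_format(type="torch", columns=list(concat_dataset.features))
print(type(concat_dataset["t_labels"]))  # expected: <class 'torch.Tensor'>
```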
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2757/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2757/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2756
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2756/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2756/comments
https://api.github.com/repos/huggingface/datasets/issues/2756/events
https://github.com/huggingface/datasets/pull/2756
959,255,646
MDExOlB1bGxSZXF1ZXN0NzAyMzk4Mjk1
2,756
Fix metadata JSON for ubuntu_dialogs_corpus dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-03T15:48:59Z
2021-08-04T09:43:25Z
2021-08-04T09:43:25Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2756.diff", "html_url": "https://github.com/huggingface/datasets/pull/2756", "merged_at": "2021-08-04T09:43:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/2756.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2756" }
Related to #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2756/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2756/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2755
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2755/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2755/comments
https://api.github.com/repos/huggingface/datasets/issues/2755/events
https://github.com/huggingface/datasets/pull/2755
959,115,888
MDExOlB1bGxSZXF1ZXN0NzAyMjgwMjI4
2,755
Fix metadata JSON for turkish_movie_sentiment dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-03T13:25:44Z
2021-08-04T09:06:54Z
2021-08-04T09:06:53Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2755.diff", "html_url": "https://github.com/huggingface/datasets/pull/2755", "merged_at": "2021-08-04T09:06:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/2755.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2755" }
Related to #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2755/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2755/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2754
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2754/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2754/comments
https://api.github.com/repos/huggingface/datasets/issues/2754/events
https://github.com/huggingface/datasets/pull/2754
959,105,577
MDExOlB1bGxSZXF1ZXN0NzAyMjcxMjM4
2,754
Generate metadata JSON for telugu_books dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-03T13:14:52Z
2021-08-04T08:49:02Z
2021-08-04T08:49:02Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2754.diff", "html_url": "https://github.com/huggingface/datasets/pull/2754", "merged_at": "2021-08-04T08:49:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/2754.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2754" }
Related to #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2754/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2754/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2753
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2753/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2753/comments
https://api.github.com/repos/huggingface/datasets/issues/2753/events
https://github.com/huggingface/datasets/pull/2753
959,036,995
MDExOlB1bGxSZXF1ZXN0NzAyMjEyMjMz
2,753
Generate metadata JSON for reclor dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-03T11:52:29Z
2021-08-04T08:07:15Z
2021-08-04T08:07:15Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2753.diff", "html_url": "https://github.com/huggingface/datasets/pull/2753", "merged_at": "2021-08-04T08:07:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/2753.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2753" }
Related to #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2753/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2753/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2752
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2752/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2752/comments
https://api.github.com/repos/huggingface/datasets/issues/2752/events
https://github.com/huggingface/datasets/pull/2752
959,023,608
MDExOlB1bGxSZXF1ZXN0NzAyMjAxMjAy
2,752
Generate metadata JSON for lm1b dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-03T11:34:56Z
2021-08-04T06:40:40Z
2021-08-04T06:40:39Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2752.diff", "html_url": "https://github.com/huggingface/datasets/pull/2752", "merged_at": "2021-08-04T06:40:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/2752.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2752" }
Related to #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2752/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2752/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2751
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2751/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2751/comments
https://api.github.com/repos/huggingface/datasets/issues/2751/events
https://github.com/huggingface/datasets/pull/2751
959,021,262
MDExOlB1bGxSZXF1ZXN0NzAyMTk5MjA5
2,751
Update metadata for wikihow dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-03T11:31:57Z
2021-08-03T15:52:09Z
2021-08-03T15:52:09Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2751.diff", "html_url": "https://github.com/huggingface/datasets/pull/2751", "merged_at": "2021-08-03T15:52:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/2751.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2751" }
Update metadata for wikihow dataset: - Remove leading new line character in description and citation - Update metadata JSON - Remove no longer necessary `urls_checksums/checksums.txt` file Related to #2748.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2751/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2751/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2750
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2750/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2750/comments
https://api.github.com/repos/huggingface/datasets/issues/2750/events
https://github.com/huggingface/datasets/issues/2750
958,984,730
MDU6SXNzdWU5NTg5ODQ3MzA=
2,750
Second concatenation of datasets produces errors
{ "avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4", "events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}", "followers_url": "https://api.github.com/users/Aktsvigun/followers", "following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}", "gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aktsvigun", "id": 36672861, "login": "Aktsvigun", "node_id": "MDQ6VXNlcjM2NjcyODYx", "organizations_url": "https://api.github.com/users/Aktsvigun/orgs", "received_events_url": "https://api.github.com/users/Aktsvigun/received_events", "repos_url": "https://api.github.com/users/Aktsvigun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions", "type": "User", "url": "https://api.github.com/users/Aktsvigun" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2021-08-03T10:47:04Z
2022-01-19T14:23:43Z
2022-01-19T14:19:05Z
NONE
null
null
null
Hi, I need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of features (e.g. tag names) are collapsed. This hinders, for instance, the usage of a tokenize function with `data.map`. ``` from datasets import load_dataset, concatenate_datasets data = load_dataset('trec')['train'] concatenated = concatenate_datasets([data, data]) concatenated_2 = concatenate_datasets([concatenated, concatenated]) print('True features of features:', concatenated.features) print('\nProduced features of features:', concatenated_2.features) ``` outputs ``` True features of features: {'label-coarse': ClassLabel(num_classes=6, names=['DESC', 'ENTY', 'ABBR', 'HUM', 'NUM', 'LOC'], names_file=None, id=None), 'label-fine': ClassLabel(num_classes=47, names=['manner', 'cremat', 'animal', 'exp', 'ind', 'gr', 'title', 'def', 'date', 'reason', 'event', 'state', 'desc', 'count', 'other', 'letter', 'religion', 'food', 'country', 'color', 'termeq', 'city', 'body', 'dismed', 'mount', 'money', 'product', 'period', 'substance', 'sport', 'plant', 'techmeth', 'volsize', 'instru', 'abb', 'speed', 'word', 'lang', 'perc', 'code', 'dist', 'temp', 'symbol', 'ord', 'veh', 'weight', 'currency'], names_file=None, id=None), 'text': Value(dtype='string', id=None)} Produced features of features: {'label-coarse': Value(dtype='int64', id=None), 'label-fine': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)} ``` I am using `datasets` v1.11.0
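Until the feature schema survives repeated concatenation, a possible workaround is to cast the result back to the original features, which restores the `ClassLabel` metadata without touching the stored values; a minimal sketch:

```python
from datasets import load_dataset, concatenate_datasets

data = load_dataset('trec')['train']
concatenated = concatenate_datasets([data, data])
concatenated_2 = concatenate_datasets([concatenated, concatenated])

# Re-attach the original ClassLabel features; the stored int64 label
# values are unchanged, only the schema metadata is restored.
concatenated_2 = concatenated_2.cast(data.features)
print(concatenated_2.features['label-coarse'])  # a ClassLabel again
```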
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2750/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2750/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2749
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2749/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2749/comments
https://api.github.com/repos/huggingface/datasets/issues/2749/events
https://github.com/huggingface/datasets/issues/2749
958,968,748
MDU6SXNzdWU5NTg5Njg3NDg=
2,749
Raise a proper exception when trying to stream a dataset that requires to manually download files
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2021-08-03T10:26:27Z
2021-08-09T08:53:35Z
2021-08-04T11:36:30Z
CONTRIBUTOR
null
null
null
## Describe the bug At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it fails. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("reclor", streaming=True) ``` ## Expected results Ideally: raise a specific exception, something like `ManualDownloadError`. Or at least give the reason in the message, as when we load in normal mode: ```python from datasets import load_dataset dataset = load_dataset("reclor") ``` ``` AssertionError: The dataset reclor with config default requires manual data. Please follow the manual download instructions: to use ReClor you need to download it manually. Please go to its homepage (http://whyu.me/reclor/) fill the google form and you will receive a download link and a password to extract it.Please extract all files in one folder and use the path folder in datasets.load_dataset('reclor', data_dir='path/to/folder/folder_name') . Manual data can be loaded with `datasets.load_dataset(reclor, data_dir='<path/to/manual/data>') ``` ## Actual results ``` TypeError: expected str, bytes or os.PathLike object, not NoneType ``` ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-11.5-x86_64-i386-64bit - Python version: 3.8.11 - PyArrow version: 4.0.1
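In the meantime, a caller can detect such datasets before trying to stream them; a minimal sketch, assuming `load_dataset_builder` and the builder's `manual_download_instructions` attribute:

```python
from datasets import load_dataset, load_dataset_builder

builder = load_dataset_builder("reclor")
if builder.manual_download_instructions is not None:
    # Surface the manual-download instructions instead of streaming
    # and hitting an opaque TypeError.
    raise RuntimeError(
        "reclor requires manual download:\n"
        + builder.manual_download_instructions
    )
dataset = load_dataset("reclor", streaming=True)
```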
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2749/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2749/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2748/comments
https://api.github.com/repos/huggingface/datasets/issues/2748/events
https://github.com/huggingface/datasets/pull/2748
958,889,041
MDExOlB1bGxSZXF1ZXN0NzAyMDg4NTk4
2,748
Generate metadata JSON for wikihow dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-03T08:55:40Z
2021-08-03T10:17:51Z
2021-08-03T10:17:51Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2748.diff", "html_url": "https://github.com/huggingface/datasets/pull/2748", "merged_at": "2021-08-03T10:17:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/2748.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2748" }
Related to #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2748/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2748/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2747
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2747/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2747/comments
https://api.github.com/repos/huggingface/datasets/issues/2747/events
https://github.com/huggingface/datasets/pull/2747
958,867,627
MDExOlB1bGxSZXF1ZXN0NzAyMDcwOTgy
2,747
add multi-proc in `to_json`
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
[]
closed
false
null
[]
null
[]
2021-08-03T08:30:13Z
2021-10-19T18:24:21Z
2021-09-13T13:56:37Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2747.diff", "html_url": "https://github.com/huggingface/datasets/pull/2747", "merged_at": "2021-09-13T13:56:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/2747.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2747" }
Closes #2663. I've tried adding multiprocessing in `to_json`. Here's some benchmarking I did to compare the timings of the current version (say v1) and the multi-proc version (say v2). I did this with `cpu_count` 4 (2015 MacBook Air). 1. Dataset name: `ascent_kb` - 8.9M samples (all samples were used, reporting this for a single run) v1- ~225 seconds for converting the whole dataset to json v2- ~200 seconds for converting the whole dataset to json 2. Dataset name: `lama` - 1.3M samples (all samples were used, reporting this for 2 runs) v1- ~26 seconds for converting the whole dataset to json v2- ~23.6 seconds for converting the whole dataset to json I think it's safe to say that v2 is about 10% faster than v1. Timings may improve further with better configuration. The main bottleneck, I think, is writing to the file from the output list; if we can improve that aspect, timings may improve further. Let me know if any changes/improvements can be made here, @stas00, @lhoestq, @albertvillanova. @lhoestq even suggested extending this work to other export methods as well, like `csv` or `parquet`.
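For reference, a minimal usage sketch of the parallel export this PR describes (the `num_proc` parameter name and the `lama` config are assumptions):

```python
from datasets import load_dataset

# ~1.3M samples in the benchmark above (config name assumed)
ds = load_dataset("lama", "trex", split="train")

# Export with 4 worker processes; each worker serializes a share of the
# rows and the results are written out to a single JSON-lines file.
ds.to_json("lama.jsonl", num_proc=4)
```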
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2747/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2747/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2746
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2746/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2746/comments
https://api.github.com/repos/huggingface/datasets/issues/2746/events
https://github.com/huggingface/datasets/issues/2746
958,551,619
MDU6SXNzdWU5NTg1NTE2MTk=
2,746
Cannot load `few-nerd` dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4", "events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}", "followers_url": "https://api.github.com/users/Mehrad0711/followers", "following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}", "gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mehrad0711", "id": 28717374, "login": "Mehrad0711", "node_id": "MDQ6VXNlcjI4NzE3Mzc0", "organizations_url": "https://api.github.com/users/Mehrad0711/orgs", "received_events_url": "https://api.github.com/users/Mehrad0711/received_events", "repos_url": "https://api.github.com/users/Mehrad0711/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions", "type": "User", "url": "https://api.github.com/users/Mehrad0711" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2021-08-02T22:18:57Z
2021-11-16T08:51:34Z
2021-08-03T19:45:43Z
NONE
null
null
null
## Describe the bug Cannot load `few-nerd` dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('few-nerd', 'supervised') ``` ## Actual results Executing above code will give the following error: ``` Using the latest cached version of the module from /Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53 (last modified on Wed Jun 2 11:34:25 2021) since it couldn't be found locally at /Users/Mehrad/Documents/GitHub/genienlp/few-nerd/few-nerd.py, or remotely (FileNotFoundError). Downloading and preparing dataset few_nerd/supervised (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/Mehrad/.cache/huggingface/datasets/few_nerd/supervised/0.0.0/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53... Traceback (most recent call last): File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 693, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1107, in _prepare_split disable=bool(logging.get_verbosity() == logging.NOTSET), File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__ for obj in iterable: File "/Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53/few-nerd.py", line 196, in _generate_examples with open(filepath, encoding="utf-8") as f: FileNotFoundError: [Errno 2] No such file or directory: '/Users/Mehrad/.cache/huggingface/datasets/downloads/supervised/train.json' ``` The bug is probably in identifying and downloading the dataset. If I download the json splits directly from [link](https://github.com/nbroad1881/few-nerd/tree/main/uncompressed) and put them under the downloads directory, they will be processed into arrow format correctly. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Python version: 3.8 - PyArrow version: 1.0.1
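As a stopgap, the manually downloaded JSON splits can be loaded with the generic `json` builder; a minimal sketch (the local paths and file names are assumptions based on the linked repository):

```python
from datasets import load_dataset

# Point the generic JSON builder at the files downloaded from the
# linked repository; the paths below are hypothetical.
dataset = load_dataset(
    "json",
    data_files={
        "train": "few-nerd/train.json",
        "validation": "few-nerd/dev.json",
        "test": "few-nerd/test.json",
    },
)
```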
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2746/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2746/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2745
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2745/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2745/comments
https://api.github.com/repos/huggingface/datasets/issues/2745/events
https://github.com/huggingface/datasets/pull/2745
958,269,579
MDExOlB1bGxSZXF1ZXN0NzAxNTc0Mjcz
2,745
added semeval18_emotion_classification dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/31095360?v=4", "events_url": "https://api.github.com/users/maxpel/events{/privacy}", "followers_url": "https://api.github.com/users/maxpel/followers", "following_url": "https://api.github.com/users/maxpel/following{/other_user}", "gists_url": "https://api.github.com/users/maxpel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maxpel", "id": 31095360, "login": "maxpel", "node_id": "MDQ6VXNlcjMxMDk1MzYw", "organizations_url": "https://api.github.com/users/maxpel/orgs", "received_events_url": "https://api.github.com/users/maxpel/received_events", "repos_url": "https://api.github.com/users/maxpel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maxpel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maxpel/subscriptions", "type": "User", "url": "https://api.github.com/users/maxpel" }
[]
closed
false
null
[]
null
[]
2021-08-02T15:39:55Z
2021-10-29T09:22:05Z
2021-09-21T09:48:35Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2745.diff", "html_url": "https://github.com/huggingface/datasets/pull/2745", "merged_at": "2021-09-21T09:48:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/2745.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2745" }
I added the SemEval 2018 Task 1 (Subtask 5) dataset for emotion detection in three languages. ``` datasets-cli test datasets/semeval18_emotion_classification/ --save_infos --all_configs RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_semeval18_emotion_classification ``` Both commands ran successfully. I couldn't create the dummy data (the files are TSVs but have a .txt ending; maybe that's the problem?), so the test on the dummy data fails; maybe someone can help here. I also formatted the code: ``` black --line-length 119 --target-version py36 datasets/semeval18_emotion_classification/ isort datasets/semeval18_emotion_classification/ flake8 datasets/semeval18_emotion_classification/ ``` This is the publication for reference: Mohammad, S., Bravo-Marquez, F., Salameh, M., & Kiritchenko, S. (2018). SemEval-2018 task 1: Affect in tweets. Proceedings of the 12th International Workshop on Semantic Evaluation, 1–17. https://doi.org/10.18653/v1/S18-1001
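For the dummy data, the CLI can often generate it automatically; a sketch assuming the `--auto_generate` flag described in the contribution guide:

```
datasets-cli dummy_data datasets/semeval18_emotion_classification/ --auto_generate
```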
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2745/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2745/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2744
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2744/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2744/comments
https://api.github.com/repos/huggingface/datasets/issues/2744/events
https://github.com/huggingface/datasets/pull/2744
958,146,637
MDExOlB1bGxSZXF1ZXN0NzAxNDY4NDcz
2,744
Fix key by recreating metadata JSON for journalists_questions dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-02T13:27:53Z
2021-08-03T09:25:34Z
2021-08-03T09:25:33Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2744.diff", "html_url": "https://github.com/huggingface/datasets/pull/2744", "merged_at": "2021-08-03T09:25:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/2744.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2744" }
Close #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2744/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2744/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2743
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2743/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2743/comments
https://api.github.com/repos/huggingface/datasets/issues/2743/events
https://github.com/huggingface/datasets/issues/2743
958,119,251
MDU6SXNzdWU5NTgxMTkyNTE=
2,743
Dataset JSON is incorrect
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2021-08-02T13:01:26Z
2021-08-03T10:06:57Z
2021-08-03T09:25:33Z
CONTRIBUTOR
null
null
null
## Describe the bug
The JSON file generated for https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/journalists_questions.py is https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/dataset_infos.json.

The only config should be `plain_text`, but the first key in the JSON is `journalists_questions` (the dataset id) instead.

```json
{
  "journalists_questions": {
    "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n",
    ...
```

## Steps to reproduce the bug
Look at the files.

## Expected results
The first key should be `plain_text`:

```json
{
  "plain_text": {
    "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n",
    ...
```

## Actual results
```json
{
  "journalists_questions": {
    "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n",
    ...
```
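The canonical fix (PR #2744) recreates the metadata JSON; as a quick illustration only, the wrong top-level key could also be renamed in place. A minimal sketch, assuming the file sits at the repository path shown above:

```python
import json

path = "datasets/journalists_questions/dataset_infos.json"

with open(path, encoding="utf-8") as f:
    infos = json.load(f)

# Rename the wrong top-level key (the dataset id) to the config name.
if "journalists_questions" in infos:
    infos["plain_text"] = infos.pop("journalists_questions")

with open(path, "w", encoding="utf-8") as f:
    json.dump(infos, f)
```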
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2743/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2743/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2742
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2742/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2742/comments
https://api.github.com/repos/huggingface/datasets/issues/2742/events
https://github.com/huggingface/datasets/issues/2742
958,114,064
MDU6SXNzdWU5NTgxMTQwNjQ=
2,742
Improve detection of streamable file types
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
2021-08-02T12:55:09Z
2021-11-12T17:18:10Z
2021-11-12T17:18:10Z
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.**
```python
from datasets import load_dataset_builder
from datasets.utils.streaming_download_manager import StreamingDownloadManager

builder = load_dataset_builder("journalists_questions", name="plain_text")
builder._split_generators(StreamingDownloadManager(base_path=builder.base_path))
```
raises
```
NotImplementedError: Extraction protocol for file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is not implemented yet
```
But the file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is a text file and it can be streamed:
```bash
curl --header "Range: bytes=0-100" -L https://drive.google.com/uc\?export\=download\&id\=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U
506938088174940160 yes 1
302221719412830209 yes 1
289761704907268096 yes 1
513820885032378369 yes
```
Yet it is wrongly categorized as a file type that cannot be streamed, because the test is currently based on 1. the presence of a file extension at the end of the URL (here: no extension), and 2. the inclusion of this extension in a list of supported formats.

**Describe the solution you'd like**
In the case of a URL (instead of a local path), ask the server for the MIME type and decide based on that value. Note that it would not work in this particular case, because the reported `content_type` is `text/html; charset=UTF-8`.

**Describe alternatives you've considered**
Add a variable in the dataset script to set the data format by hand.
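To make the proposal concrete, here is a minimal sketch of the MIME-type check described above. `is_streamable` and `STREAMABLE_MIME_TYPES` are hypothetical names, not part of the library's API, and the set of streamable types is only illustrative:

```python
import mimetypes
import requests

# Illustrative whitelist of MIME types that can be streamed line by line.
STREAMABLE_MIME_TYPES = {"text/plain", "text/csv", "application/json"}

def is_streamable(url: str) -> bool:
    # First try the URL extension, as the current extension-based check does.
    guessed, _ = mimetypes.guess_type(url)
    if guessed in STREAMABLE_MIME_TYPES:
        return True
    # Fall back to asking the server for the Content-Type header. As noted
    # above, this still fails for Google Drive links, which report text/html.
    response = requests.head(url, allow_redirects=True, timeout=10)
    content_type = response.headers.get("content-type", "").split(";")[0].strip()
    return content_type in STREAMABLE_MIME_TYPES
```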
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2742/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2742/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2741
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2741/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2741/comments
https://api.github.com/repos/huggingface/datasets/issues/2741/events
https://github.com/huggingface/datasets/issues/2741
957,979,559
MDU6SXNzdWU5NTc5Nzk1NTk=
2,741
Add Hypersim dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
open
false
null
[]
null
[]
2021-08-02T10:06:50Z
2021-12-08T12:06:51Z
null
MEMBER
null
null
null
## Adding a Dataset
- **Name:** Hypersim
- **Description:** photorealistic synthetic dataset for holistic indoor scene understanding
- **Paper:** *link to the dataset paper if available*
- **Data:** https://github.com/apple/ml-hypersim

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2741/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2741/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2740
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2740/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2740/comments
https://api.github.com/repos/huggingface/datasets/issues/2740/events
https://github.com/huggingface/datasets/pull/2740
957,911,035
MDExOlB1bGxSZXF1ZXN0NzAxMjY0NTI3
2,740
Update release instructions
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-02T08:46:00Z
2021-08-02T14:39:56Z
2021-08-02T14:39:56Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2740.diff", "html_url": "https://github.com/huggingface/datasets/pull/2740", "merged_at": "2021-08-02T14:39:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/2740.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2740" }
Update release instructions.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2740/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2740/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2739
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2739/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2739/comments
https://api.github.com/repos/huggingface/datasets/issues/2739/events
https://github.com/huggingface/datasets/pull/2739
957,751,260
MDExOlB1bGxSZXF1ZXN0NzAxMTI0ODQ3
2,739
Pass tokenize to sacrebleu only if explicitly passed by user
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-02T05:09:05Z
2021-08-03T04:23:37Z
2021-08-03T04:23:37Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2739.diff", "html_url": "https://github.com/huggingface/datasets/pull/2739", "merged_at": "2021-08-03T04:23:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/2739.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2739" }
The next `sacrebleu` release (v2.0.0) will remove `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15

This PR passes `tokenize` to `sacrebleu` only if it is explicitly set by the user; otherwise it omits the argument entirely, so `sacrebleu` falls back to its own default, wherever that default is defined and however it is named.

Close: #2737.
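The pattern looks roughly like the sketch below (illustrative only, not the metric's actual code): build the keyword arguments dynamically so `tokenize` is forwarded only when the caller provided it.

```python
import sacrebleu

def compute_bleu(predictions, references, tokenize=None):
    # `references` is a list of reference streams, one per reference set.
    kwargs = {}
    if tokenize is not None:
        # Only forward `tokenize` when set explicitly, so sacrebleu applies
        # its own default otherwise (works on both sacrebleu 1.x and 2.x).
        kwargs["tokenize"] = tokenize
    output = sacrebleu.corpus_bleu(predictions, references, **kwargs)
    return {"score": output.score}

# Usage: one prediction, one reference stream
compute_bleu(
    ["It is a guide to action"],
    [["It is a guide to action"]],
)
```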
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2739/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2739/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2738
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2738/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2738/comments
https://api.github.com/repos/huggingface/datasets/issues/2738/events
https://github.com/huggingface/datasets/pull/2738
957,517,746
MDExOlB1bGxSZXF1ZXN0NzAwOTI5NzA4
2,738
Sunbird AI Ugandan low resource language dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/12105163?v=4", "events_url": "https://api.github.com/users/ak3ra/events{/privacy}", "followers_url": "https://api.github.com/users/ak3ra/followers", "following_url": "https://api.github.com/users/ak3ra/following{/other_user}", "gists_url": "https://api.github.com/users/ak3ra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ak3ra", "id": 12105163, "login": "ak3ra", "node_id": "MDQ6VXNlcjEyMTA1MTYz", "organizations_url": "https://api.github.com/users/ak3ra/orgs", "received_events_url": "https://api.github.com/users/ak3ra/received_events", "repos_url": "https://api.github.com/users/ak3ra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ak3ra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ak3ra/subscriptions", "type": "User", "url": "https://api.github.com/users/ak3ra" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[]
2021-08-01T15:18:00Z
2022-10-03T09:37:30Z
2022-10-03T09:37:30Z
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2738.diff", "html_url": "https://github.com/huggingface/datasets/pull/2738", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2738.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2738" }
A multi-way parallel text corpus of five key Ugandan languages for the task of machine translation.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2738/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2738/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2737
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2737/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2737/comments
https://api.github.com/repos/huggingface/datasets/issues/2737/events
https://github.com/huggingface/datasets/issues/2737
957,124,881
MDU6SXNzdWU5NTcxMjQ4ODE=
2,737
SacreBLEU update
{ "avatar_url": "https://avatars.githubusercontent.com/u/46989091?v=4", "events_url": "https://api.github.com/users/devrimcavusoglu/events{/privacy}", "followers_url": "https://api.github.com/users/devrimcavusoglu/followers", "following_url": "https://api.github.com/users/devrimcavusoglu/following{/other_user}", "gists_url": "https://api.github.com/users/devrimcavusoglu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/devrimcavusoglu", "id": 46989091, "login": "devrimcavusoglu", "node_id": "MDQ6VXNlcjQ2OTg5MDkx", "organizations_url": "https://api.github.com/users/devrimcavusoglu/orgs", "received_events_url": "https://api.github.com/users/devrimcavusoglu/received_events", "repos_url": "https://api.github.com/users/devrimcavusoglu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/devrimcavusoglu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/devrimcavusoglu/subscriptions", "type": "User", "url": "https://api.github.com/users/devrimcavusoglu" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2021-07-30T23:53:08Z
2021-09-22T10:47:41Z
2021-08-03T04:23:37Z
NONE
null
null
null
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken and raises the following error:

```
AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'
```

This happens because the new version of sacrebleu no longer defines `DEFAULT_TOKENIZER`, but sacrebleu.py still tries to import it. For now this can be worked around by pinning `sacrebleu==1.5.0`.

## Steps to reproduce the bug
```python
import datasets

sacrebleu = datasets.load_metric('sacrebleu')
predictions = ["It is a guide to action which ensures that the military always obeys the commands of the party"]
references = ["It is a guide to action that ensures that the military will forever heed Party commands"]
results = sacrebleu.compute(predictions=predictions, references=references)
print(results)
```

## Environment info
- `datasets` version: 1.11.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: Python 3.8.0
- PyArrow version: 5.0.0
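Besides pinning the dependency, a version-tolerant lookup could paper over the removed attribute. This is only a sketch of one possible guard, not the fix that was merged (the merged fix in #2739 stops passing `tokenize` altogether); the `"13a"` fallback is an assumption based on sacrebleu's historical default:

```python
import sacrebleu

# Works on sacrebleu < 2.0.0 (attribute exists) and >= 2.0.0 (fallback).
# "13a" was the long-standing default tokenizer name; treat it as an
# assumption rather than a guaranteed constant.
DEFAULT_TOKENIZER = getattr(sacrebleu, "DEFAULT_TOKENIZER", "13a")
```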
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2737/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2737/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/2736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2736/comments
https://api.github.com/repos/huggingface/datasets/issues/2736/events
https://github.com/huggingface/datasets/issues/2736
956,895,199
MDU6SXNzdWU5NTY4OTUxOTk=
2,736
Add Microsoft Building Footprints dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
open
false
null
[]
null
[]
2021-07-30T16:17:08Z
2021-12-08T12:09:03Z
null
MEMBER
null
null
null
## Adding a Dataset
- **Name:** Microsoft Building Footprints
- **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data available to download free of charge.
- **Paper:** *link to the dataset paper if available*
- **Data:** https://www.microsoft.com/en-us/maps/building-footprints
- **Motivation:** this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc.

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).

Reported by: @sashavor
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2736/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2736/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2735
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2735/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2735/comments
https://api.github.com/repos/huggingface/datasets/issues/2735/events
https://github.com/huggingface/datasets/issues/2735
956,889,365
MDU6SXNzdWU5NTY4ODkzNjU=
2,735
Add Open Buildings dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
[]
2021-07-30T16:08:39Z
2021-07-31T05:01:25Z
null
MEMBER
null
null
null
## Adding a Dataset
- **Name:** Open Buildings
- **Description:** A dataset of building footprints to support social good applications. Building footprints are useful for a range of important applications, from population estimation, urban planning and humanitarian response, to environmental and climate science. This large-scale open dataset contains the outlines of buildings derived from high-resolution satellite imagery in order to support these types of uses. Since the project is based in Ghana, the current focus is on the continent of Africa. See: "Mapping Africa's Buildings with Satellite Imagery" https://ai.googleblog.com/2021/07/mapping-africas-buildings-with.html
- **Paper:** https://arxiv.org/abs/2107.12283
- **Data:** https://sites.research.google/open-buildings/
- **Motivation:** *what are some good reasons to have this dataset*

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).

Reported by: @osanseviero
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2735/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2735/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2734
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2734/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2734/comments
https://api.github.com/repos/huggingface/datasets/issues/2734/events
https://github.com/huggingface/datasets/pull/2734
956,844,874
MDExOlB1bGxSZXF1ZXN0NzAwMzc4NjI4
2,734
Update BibTeX entry
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-07-30T15:22:51Z
2021-07-30T15:47:58Z
2021-07-30T15:47:58Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2734.diff", "html_url": "https://github.com/huggingface/datasets/pull/2734", "merged_at": "2021-07-30T15:47:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/2734.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2734" }
Update BibTeX entry.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2734/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2734/timeline
null
null
true