Column types: url: string (58–61 chars); repository_url: string (1 class); labels_url: string (72–75); comments_url: string (67–70); events_url: string (65–68); html_url: string (46–51); id: int64 (599M–1.5B); node_id: string (18–32); number: int64 (1–5.38k); title: string (1–276); user: dict; labels: list; state: string (2 classes); locked: bool (1 class); assignee: dict; assignees: list; milestone: dict; comments: sequence; created_at: string (20); updated_at: string (20); closed_at: string (20, nullable ⌀); author_association: string (3 classes); active_lock_reason: null; draft: bool (2 classes); pull_request: dict; body: string (0–228k, nullable ⌀); reactions: dict; timeline_url: string (67–70); performed_via_github_app: null; state_reason: string (3 classes); is_pull_request: bool (1 class)

url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2214/comments | https://api.github.com/repos/huggingface/datasets/issues/2214/events | https://github.com/huggingface/datasets/issues/2214 | 856,333,657 | MDU6SXNzdWU4NTYzMzM2NTc= | 2,214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | {
"avatar_url": "https://avatars.githubusercontent.com/u/414788?v=4",
"events_url": "https://api.github.com/users/nsaphra/events{/privacy}",
"followers_url": "https://api.github.com/users/nsaphra/followers",
"following_url": "https://api.github.com/users/nsaphra/following{/other_user}",
"gists_url": "https://api.github.com/users/nsaphra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nsaphra",
"id": 414788,
"login": "nsaphra",
"node_id": "MDQ6VXNlcjQxNDc4OA==",
"organizations_url": "https://api.github.com/users/nsaphra/orgs",
"received_events_url": "https://api.github.com/users/nsaphra/received_events",
"repos_url": "https://api.github.com/users/nsaphra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nsaphra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsaphra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nsaphra"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2021-04-12T20:26:01Z | 2021-04-23T15:20:02Z | 2021-04-23T15:20:02Z | NONE | null | null | null | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 502, in load_metric
File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 66, in import_main_class
File "/ext3/miniconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/ns4008/.cache/huggingface/modules/datasets_modules/metrics/glue/e4606ab9804a36bcd5a9cebb2cb65bb14b6ac78ee9e6d5981fa679a495dd55de/glue.py", line 105, in <module>
@datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
AttributeError: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2214/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2214/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2213/comments | https://api.github.com/repos/huggingface/datasets/issues/2213/events | https://github.com/huggingface/datasets/pull/2213 | 856,025,320 | MDExOlB1bGxSZXF1ZXN0NjEzNjcwODk2 | 2,213 | Fix lc_quad download checksum | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | 2021-04-12T14:16:59Z | 2021-04-14T22:04:54Z | 2021-04-14T13:42:25Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2213.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2213",
"merged_at": "2021-04-14T13:42:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2213.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2213"
} | Fixes #2211 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2213/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2213/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2212/comments | https://api.github.com/repos/huggingface/datasets/issues/2212/events | https://github.com/huggingface/datasets/issues/2212 | 855,999,133 | MDU6SXNzdWU4NTU5OTkxMzM= | 2,212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4",
"events_url": "https://api.github.com/users/hanss0n/events{/privacy}",
"followers_url": "https://api.github.com/users/hanss0n/followers",
"following_url": "https://api.github.com/users/hanss0n/following{/other_user}",
"gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hanss0n",
"id": 21348833,
"login": "hanss0n",
"node_id": "MDQ6VXNlcjIxMzQ4ODMz",
"organizations_url": "https://api.github.com/users/hanss0n/orgs",
"received_events_url": "https://api.github.com/users/hanss0n/received_events",
"repos_url": "https://api.github.com/users/hanss0n/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hanss0n"
} | [] | open | false | null | [] | null | [] | 2021-04-12T13:49:56Z | 2021-05-17T22:17:06Z | null | NONE | null | null | null | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, post-processed: Unknown size, total: 9.76 MiB) to /root/.cache/huggingface/datasets/fquad/default/0.1.0/778dc2c85813d05ddd0c17087294d5f8f24820752340958070876b677af9f061...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-48-a2721797e23b> in <module>()
----> 1 fquad = load_dataset("fquad")
11 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
614 raise FileNotFoundError("Couldn't find file at {}".format(url))
615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
--> 616 raise ConnectionError("Couldn't reach {}".format(url))
617
618 # Try a second time
ConnectionError: Couldn't reach https://storage.googleapis.com/illuin/fquad/train.json.zip
```
Does anyone know why that is and how to fix it? | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2212/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2212/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2211/comments | https://api.github.com/repos/huggingface/datasets/issues/2211/events | https://github.com/huggingface/datasets/issues/2211 | 855,988,410 | MDU6SXNzdWU4NTU5ODg0MTA= | 2,211 | Getting checksum error when trying to load lc_quad dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4",
"events_url": "https://api.github.com/users/hanss0n/events{/privacy}",
"followers_url": "https://api.github.com/users/hanss0n/followers",
"following_url": "https://api.github.com/users/hanss0n/following{/other_user}",
"gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hanss0n",
"id": 21348833,
"login": "hanss0n",
"node_id": "MDQ6VXNlcjIxMzQ4ODMz",
"organizations_url": "https://api.github.com/users/hanss0n/orgs",
"received_events_url": "https://api.github.com/users/hanss0n/received_events",
"repos_url": "https://api.github.com/users/hanss0n/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hanss0n"
} | [] | closed | false | null | [] | null | [] | 2021-04-12T13:38:58Z | 2021-04-14T13:42:25Z | 2021-04-14T13:42:25Z | NONE | null | null | null | I'm having issues loading the [lc_quad](https://huggingface.co/datasets/lc_quad) dataset by running:
```Python
lc_quad = load_dataset("lc_quad")
```
which is giving me the following error:
```
Using custom data configuration default
Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, generated: 19.77 MiB, post-processed: Unknown size, total: 23.46 MiB) to /root/.cache/huggingface/datasets/lc_quad/default/2.0.0/5a98fe174603f5dec6df07edf1c2b4d2317210d2ad61f5a393839bca4d64e5a7...
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-42-404ace83f73c> in <module>()
----> 1 lc_quad = load_dataset("lc_quad")
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/AskNowQA/LC-QuAD2.0/archive/master.zip']
```
Does anyone know why this could be and how I fix it? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2211/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2211/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2210/comments | https://api.github.com/repos/huggingface/datasets/issues/2210/events | https://github.com/huggingface/datasets/issues/2210 | 855,709,400 | MDU6SXNzdWU4NTU3MDk0MDA= | 2,210 | dataloading slow when using HUGE dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hwijeen",
"id": 29157715,
"login": "hwijeen",
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hwijeen"
} | [] | closed | false | null | [] | null | [] | 2021-04-12T08:33:02Z | 2021-04-13T02:03:05Z | 2021-04-13T02:03:05Z | NONE | null | null | null | Hi,
When I use datasets with 600GB of data, data loading slows down significantly.
I am experimenting with two datasets: one is about 60GB and the other 600GB.
Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle ddp training (a minimal sketch of this setup follows the profiles below).
When looking at the profiles reported by pytorch-lightning for the two runs, I see that fetching a batch (`get_train_batch`) consumes an unreasonable amount of time when the data is large. What could be the cause?
* 60GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 200.33 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 71.994 |1 | 71.994 | 35.937 |
run_training_batch | 0.64373 |100 | 64.373 | 32.133 |
optimizer_step_and_closure_0 | 0.64322 |100 | 64.322 | 32.108 |
training_step_and_backward | 0.61004 |100 | 61.004 | 30.452 |
model_backward | 0.37552 |100 | 37.552 | 18.745 |
model_forward | 0.22813 |100 | 22.813 | 11.387 |
training_step | 0.22759 |100 | 22.759 | 11.361 |
get_train_batch | 0.066385 |100 | 6.6385 | 3.3138 |
```
* 600GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 3285.6 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 1397.9 |1 | 1397.9 | 42.546 |
run_training_batch | 7.2596 |100 | 725.96 | 22.095 |
optimizer_step_and_closure_0 | 7.2589 |100 | 725.89 | 22.093 |
training_step_and_backward | 7.223 |100 | 722.3 | 21.984 |
model_backward | 6.9662 |100 | 696.62 | 21.202 |
get_train_batch | 6.322 |100 | 632.2 | 19.241 |
model_forward | 0.24902 |100 | 24.902 | 0.75789 |
training_step | 0.2485 |100 | 24.85 | 0.75633 |
```
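For reference, a minimal sketch of the setup described above (the file name and loading details are placeholders I'm assuming, not the actual pipeline):

```python
import datasets

# Build a memory-mapped arrow dataset from a large json file.
ds = datasets.load_dataset("json", data_files="train.json", split="train")
ds.set_format("torch")  # rows now come back as torch tensors

# pytorch-lightning's ddp training then draws batches from this dataset;
# each get_train_batch call reads the requested rows from the arrow file on disk.
```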
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2210/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2210/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2209/comments | https://api.github.com/repos/huggingface/datasets/issues/2209/events | https://github.com/huggingface/datasets/pull/2209 | 855,638,232 | MDExOlB1bGxSZXF1ZXN0NjEzMzQwMTI2 | 2,209 | Add code of conduct to the project | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | [] | 2021-04-12T07:16:14Z | 2021-04-12T17:55:52Z | 2021-04-12T17:55:52Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2209.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2209",
"merged_at": "2021-04-12T17:55:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2209.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2209"
} | Add code of conduct to the project and link it from README and CONTRIBUTING.
This was already done in `transformers`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2209/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2209/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2208/comments | https://api.github.com/repos/huggingface/datasets/issues/2208/events | https://github.com/huggingface/datasets/pull/2208 | 855,343,835 | MDExOlB1bGxSZXF1ZXN0NjEzMTAxMzMw | 2,208 | Remove Python2 leftovers | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | 2021-04-11T16:08:03Z | 2021-04-14T22:05:36Z | 2021-04-14T13:40:51Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2208.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2208",
"merged_at": "2021-04-14T13:40:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2208.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2208"
} | This PR removes Python2 leftovers since this project aims for Python3.6+ (and as of 2020 Python2 is no longer officially supported) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2208/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2208/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2207/comments | https://api.github.com/repos/huggingface/datasets/issues/2207/events | https://github.com/huggingface/datasets/issues/2207 | 855,267,383 | MDU6SXNzdWU4NTUyNjczODM= | 2,207 | making labels consistent across the datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | [] | 2021-04-11T10:03:56Z | 2022-06-01T16:23:08Z | 2022-06-01T16:21:10Z | NONE | null | null | null | Hi
To access the labels, one can type
```
>>> a.features['label']
ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None)
```
The label names, however, are sometimes inconsistent with the actual stored labels: for instance, in the case of XNLI the stored labels are 0, 1, 2, but accessing them as above returns entailment, neutral, contradiction.
It would be great to have the labels consistent.
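For illustration, a short sketch of how the integer ids relate to the names above, using the public `ClassLabel` helpers (this only demonstrates the mapping, it is not a fix):

```python
from datasets import ClassLabel

label = ClassLabel(names=["entailment", "neutral", "contradiction"])
print(label.int2str(0))          # entailment
print(label.str2int("neutral"))  # 1
```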
thanks
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2207/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2207/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2206/comments | https://api.github.com/repos/huggingface/datasets/issues/2206/events | https://github.com/huggingface/datasets/issues/2206 | 855,252,415 | MDU6SXNzdWU4NTUyNTI0MTU= | 2,206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | {
"avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4",
"events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}",
"followers_url": "https://api.github.com/users/yana-xuyan/followers",
"following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}",
"gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yana-xuyan",
"id": 38536635,
"login": "yana-xuyan",
"node_id": "MDQ6VXNlcjM4NTM2NjM1",
"organizations_url": "https://api.github.com/users/yana-xuyan/orgs",
"received_events_url": "https://api.github.com/users/yana-xuyan/received_events",
"repos_url": "https://api.github.com/users/yana-xuyan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yana-xuyan"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2021-04-11T08:40:09Z | 2021-11-10T12:18:30Z | 2021-11-10T12:04:28Z | NONE | null | null | null | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
```
Traceback (most recent call last):
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_single
    writer.write(example)
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 296, in write
    self.write_on_file()
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 270, in write_on_file
    pa_array = pa.array(typed_sequence)
  File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 108, in __arrow_array__
    out = out.cast(pa.list_(self.optimized_int_type))
  File "pyarrow/array.pxi", line 810, in pyarrow.lib.Array.cast
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/pyarrow/compute.py", line 281, in cast
    return call_function("cast", [arr], options)
  File "pyarrow/_compute.pyx", line 465, in pyarrow._compute.call_function
  File "pyarrow/_compute.pyx", line 294, in pyarrow._compute.Function.call
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Integer value 50259 not in range: -128 to 127
```
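For what it's worth, the final error can be reproduced directly in pyarrow. My reading of the trace (an assumption, not a confirmed diagnosis) is that the writer downcasts the token ids to int8, and any id above 127, such as a newly added special token, overflows:

```python
import pyarrow as pa

arr = pa.array([[50259]])  # a token id above the int8 range
try:
    arr.cast(pa.list_(pa.int8()))  # the optimized downcast seen in the trace
except pa.ArrowInvalid as e:
    print(e)  # Integer value 50259 not in range: -128 to 127
```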
Do you have any idea about it? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2206/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2206/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2205 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2205/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2205/comments | https://api.github.com/repos/huggingface/datasets/issues/2205/events | https://github.com/huggingface/datasets/pull/2205 | 855,207,605 | MDExOlB1bGxSZXF1ZXN0NjEzMDAwMzYw | 2,205 | Updating citation information on LinCE readme | {
"avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4",
"events_url": "https://api.github.com/users/gaguilar/events{/privacy}",
"followers_url": "https://api.github.com/users/gaguilar/followers",
"following_url": "https://api.github.com/users/gaguilar/following{/other_user}",
"gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gaguilar",
"id": 5833357,
"login": "gaguilar",
"node_id": "MDQ6VXNlcjU4MzMzNTc=",
"organizations_url": "https://api.github.com/users/gaguilar/orgs",
"received_events_url": "https://api.github.com/users/gaguilar/received_events",
"repos_url": "https://api.github.com/users/gaguilar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gaguilar"
} | [] | closed | false | null | [] | null | [] | 2021-04-11T03:18:05Z | 2021-04-12T17:53:34Z | 2021-04-12T17:53:34Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2205.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2205",
"merged_at": "2021-04-12T17:53:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2205.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2205"
} | Hi!
I just updated the citation information in this PR. The readme had an additional bibtex entry from one of the datasets used in LinCE in addition to the LinCE bibtex. I removed the former and added a link that shows the full list of citations for each dataset.
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2205/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2205/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2204 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2204/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2204/comments | https://api.github.com/repos/huggingface/datasets/issues/2204/events | https://github.com/huggingface/datasets/pull/2204 | 855,144,431 | MDExOlB1bGxSZXF1ZXN0NjEyOTU1MzM2 | 2,204 | Add configurable options to `seqeval` metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4",
"events_url": "https://api.github.com/users/marrodion/events{/privacy}",
"followers_url": "https://api.github.com/users/marrodion/followers",
"following_url": "https://api.github.com/users/marrodion/following{/other_user}",
"gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marrodion",
"id": 44571847,
"login": "marrodion",
"node_id": "MDQ6VXNlcjQ0NTcxODQ3",
"organizations_url": "https://api.github.com/users/marrodion/orgs",
"received_events_url": "https://api.github.com/users/marrodion/received_events",
"repos_url": "https://api.github.com/users/marrodion/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marrodion/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marrodion"
} | [] | closed | false | null | [] | null | [] | 2021-04-10T19:58:19Z | 2021-04-15T13:49:46Z | 2021-04-15T13:49:46Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2204.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2204",
"merged_at": "2021-04-15T13:49:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2204.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2204"
} | Fixes #2148
Adds options to use strict mode, different evaluation schemes, and sample weights, and to adjust the zero_division behavior, if encountered.
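Since schemes are looked up from string names, the resolution could schematically work like this (a sketch on my part, assuming `seqeval` is installed; the PR's actual helper may differ):

```python
import importlib

def get_scheme(name: str):
    """Resolve a seqeval scheme class (e.g. "IOB2") from its string name."""
    module = importlib.import_module("seqeval.scheme")
    return getattr(module, name)

scheme = get_scheme("IOB2")
```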
`seqeval` provides schemes as objects, hence dynamic import from string, to avoid making the user do the import (thanks to @albertvillanova for the `importlib` idea). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2204/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2204/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2203/comments | https://api.github.com/repos/huggingface/datasets/issues/2203/events | https://github.com/huggingface/datasets/pull/2203 | 855,053,595 | MDExOlB1bGxSZXF1ZXN0NjEyODg4MzA5 | 2,203 | updated banking77 train and test data | {
"avatar_url": "https://avatars.githubusercontent.com/u/6765330?v=4",
"events_url": "https://api.github.com/users/hsali/events{/privacy}",
"followers_url": "https://api.github.com/users/hsali/followers",
"following_url": "https://api.github.com/users/hsali/following{/other_user}",
"gists_url": "https://api.github.com/users/hsali/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hsali",
"id": 6765330,
"login": "hsali",
"node_id": "MDQ6VXNlcjY3NjUzMzA=",
"organizations_url": "https://api.github.com/users/hsali/orgs",
"received_events_url": "https://api.github.com/users/hsali/received_events",
"repos_url": "https://api.github.com/users/hsali/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hsali/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hsali/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hsali"
} | [] | closed | false | null | [] | null | [] | 2021-04-10T12:10:10Z | 2021-04-23T14:33:39Z | 2021-04-23T14:33:39Z | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2203.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2203",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2203.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2203"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2203/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2203/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2202/comments | https://api.github.com/repos/huggingface/datasets/issues/2202/events | https://github.com/huggingface/datasets/pull/2202 | 854,501,109 | MDExOlB1bGxSZXF1ZXN0NjEyNDM2ODMx | 2,202 | Add classes GenerateMode, DownloadConfig and Version to the documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2021-04-09T12:58:19Z | 2021-04-12T17:58:00Z | 2021-04-12T17:57:59Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2202.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2202",
"merged_at": "2021-04-12T17:57:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2202.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2202"
} | Add documentation for classes `GenerateMode`, `DownloadConfig` and `Version`.
Update the docstring of `load_dataset` to create cross-reference links to the classes.
Related to #2187. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2202/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2202/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2201/comments | https://api.github.com/repos/huggingface/datasets/issues/2201/events | https://github.com/huggingface/datasets/pull/2201 | 854,499,563 | MDExOlB1bGxSZXF1ZXN0NjEyNDM1NTE3 | 2,201 | Fix ArrowWriter overwriting features in ArrowBasedBuilder | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-04-09T12:56:19Z | 2021-04-12T13:32:17Z | 2021-04-12T13:32:16Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2201.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2201",
"merged_at": "2021-04-12T13:32:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2201.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2201"
} | This should fix the issues with CSV loading experienced in #2153 and #2200.
The CSV builder is an ArrowBasedBuilder that had an issue with its ArrowWriter used to write the arrow file from the csv data.
The writer wasn't initialized with the features passed by the user. Therefore the writer was inferring the features from the arrow data, discarding the features passed by the user.
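Schematically, the fix amounts to constructing the writer with the user-provided features so the schema is not re-inferred (a sketch based on the public `ArrowWriter` signature; the actual patch may differ):

```python
from datasets import Features, Value
from datasets.arrow_writer import ArrowWriter

features = Features({"label": Value("float32")})
# Passing features keeps the writer from inferring the schema from the arrow data:
writer = ArrowWriter(features=features, path="split.arrow")
writer.write({"label": 1.0})
writer.finalize()
```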
I fixed that and I updated the tests | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2201/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2201/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2200/comments | https://api.github.com/repos/huggingface/datasets/issues/2200/events | https://github.com/huggingface/datasets/issues/2200 | 854,449,656 | MDU6SXNzdWU4NTQ0NDk2NTY= | 2,200 | _prepare_split will overwrite DatasetBuilder.info.features | {
"avatar_url": "https://avatars.githubusercontent.com/u/4157614?v=4",
"events_url": "https://api.github.com/users/Gforky/events{/privacy}",
"followers_url": "https://api.github.com/users/Gforky/followers",
"following_url": "https://api.github.com/users/Gforky/following{/other_user}",
"gists_url": "https://api.github.com/users/Gforky/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Gforky",
"id": 4157614,
"login": "Gforky",
"node_id": "MDQ6VXNlcjQxNTc2MTQ=",
"organizations_url": "https://api.github.com/users/Gforky/orgs",
"received_events_url": "https://api.github.com/users/Gforky/received_events",
"repos_url": "https://api.github.com/users/Gforky/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Gforky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gforky/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Gforky"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | 2021-04-09T11:47:13Z | 2021-06-04T10:37:35Z | 2021-06-04T10:37:35Z | NONE | null | null | null | Hi, here is my issue:
I initialized a Csv datasetbuilder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")})
if data_args.num_features:
features.update({text_feature: hf_features.Value("float32") for text_feature in data_args.num_features.strip().split(",")})
if data_args.label_classes:
features["label"] = hf_features.ClassLabel(names=data_args.label_classes.strip().split(","))
else:
features["label"] = hf_features.Value("float32")
return hf_features.Features(features)
datasets = load_dataset(extension,
data_files=data_files,
sep=data_args.delimiter,
header=data_args.header,
column_names=data_args.column_names.split(",") if data_args.column_names else None,
features=get_dataset_features(data_args=data_args))
```
The `features` are printed out as below before `builder_instance.as_dataset` is called:
```
{'label': ClassLabel(num_classes=2, names=['unacceptable', 'acceptable'], names_file=None, id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)}
```
But after `builder_instance.as_dataset` is called for the CSV dataset builder, the `features` are changed to:
```
{'label': Value(dtype='int64', id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)}
```
After digging into the code, I realized that in `ArrowBasedBuilder._prepare_split`, the `DatasetBuilder.info.features` are overwritten by the `ArrowWriter`'s `_features`.
But `ArrowWriter` is initialized without passing `features`.
So my concern is:
must this overwrite happen, or should there be an option to pass `features` to the `_prepare_split` function? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2200/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2200/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2199/comments | https://api.github.com/repos/huggingface/datasets/issues/2199/events | https://github.com/huggingface/datasets/pull/2199 | 854,417,318 | MDExOlB1bGxSZXF1ZXN0NjEyMzY0ODU3 | 2,199 | Fix backward compatibility in Dataset.load_from_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2021-04-09T11:01:10Z | 2021-04-09T15:57:05Z | 2021-04-09T15:57:05Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2199.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2199",
"merged_at": "2021-04-09T15:57:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2199.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2199"
} | Fix backward compatibility when loading from disk an old dataset that was saved with indices under the key "_indices_data_files".
Related to #2195. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2199/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2199/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2198/comments | https://api.github.com/repos/huggingface/datasets/issues/2198/events | https://github.com/huggingface/datasets/pull/2198 | 854,357,481 | MDExOlB1bGxSZXF1ZXN0NjEyMzE0MTIz | 2,198 | added file_permission in load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | [] | 2021-04-09T09:39:06Z | 2021-04-16T14:11:46Z | 2021-04-16T14:11:46Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2198.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2198",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2198.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2198"
As discussed in #2065, I've added a `file_permission` argument in `load_dataset`.
Added mainly 2 things here:
1) The permission of downloaded datasets, once converted to `.arrow` files, can be changed with the `file_permission` argument in `load_dataset` (the default is 0o644).
2) In case the user later uses `map` to generate another cache file for the dataset, it ensures the permissions of the newly generated file match those of the `*-train.arrow` file inside that dataset's cache_dir. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2198/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2198/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2197/comments | https://api.github.com/repos/huggingface/datasets/issues/2197/events | https://github.com/huggingface/datasets/pull/2197 | 854,356,559 | MDExOlB1bGxSZXF1ZXN0NjEyMzEzMzQw | 2,197 | fix missing indices_files in load_form_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-04-09T09:37:57Z | 2021-04-09T09:54:40Z | 2021-04-09T09:54:39Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2197.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2197",
"merged_at": "2021-04-09T09:54:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2197.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2197"
} | This should fix #2195
`load_from_disk` was failing if there was no "_indices_files" field in state.json. This can happen if the dataset has no indices mapping. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2197/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2197/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2196/comments | https://api.github.com/repos/huggingface/datasets/issues/2196/events | https://github.com/huggingface/datasets/issues/2196 | 854,126,114 | MDU6SXNzdWU4NTQxMjYxMTQ= | 2,196 | `load_dataset` caches two arrow files? | {
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hwijeen",
"id": 29157715,
"login": "hwijeen",
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hwijeen"
} | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | [] | null | [] | 2021-04-09T03:49:19Z | 2021-04-12T05:25:29Z | 2021-04-12T05:25:29Z | NONE | null | null | null | Hi,
I am using datasets to load a large json file of 587G.
I checked the cache folder and found that two arrow files were created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2196/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2196/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2195/comments | https://api.github.com/repos/huggingface/datasets/issues/2195/events | https://github.com/huggingface/datasets/issues/2195 | 854,070,194 | MDU6SXNzdWU4NTQwNzAxOTQ= | 2,195 | KeyError: '_indices_files' in `arrow_dataset.py` | {
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/samsontmr",
"id": 15007950,
"login": "samsontmr",
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"organizations_url": "https://api.github.com/users/samsontmr/orgs",
"received_events_url": "https://api.github.com/users/samsontmr/received_events",
"repos_url": "https://api.github.com/users/samsontmr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/samsontmr"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2021-04-09T01:37:12Z | 2021-04-09T09:55:09Z | 2021-04-09T09:54:39Z | NONE | null | null | null | After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.
Trace:
```
Traceback (most recent call last):
File "load_data.py", line 11, in <module>
dataset = load_from_disk(SRC)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line 784, in load_from_disk
return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 692, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 634, in load_from_disk
if state["_indices_files"]:
KeyError: '_indices_files'
```
I believe this is the line causing the error since there may not be a `_indices_files` key in the older versions:
https://github.com/huggingface/datasets/blob/b70141e3c5149430951773aaa0155555c5fb3e76/src/datasets/arrow_dataset.py#L634
May I suggest using `state.get()` instead of directly indexing the dictionary?
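For illustration, a minimal sketch of the suggested change (a hypothetical snippet, not an actual patch):
```python
state = {}  # e.g. a state dict written by an older version, missing "_indices_files"

# before: state["_indices_files"] raises KeyError
# after: a missing key is simply treated as falsy
if state.get("_indices_files"):
    print("dataset has an indices mapping")
```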
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2195/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2195/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2194/comments | https://api.github.com/repos/huggingface/datasets/issues/2194/events | https://github.com/huggingface/datasets/issues/2194 | 853,909,452 | MDU6SXNzdWU4NTM5MDk0NTI= | 2,194 | py3.7: TypeError: can't pickle _LazyModule objects | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | [] | 2021-04-08T21:02:48Z | 2021-04-09T16:56:50Z | 2021-04-09T01:52:57Z | MEMBER | null | null | null | While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[testing]
export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \
examples/language-modeling/run_clm.py --model_name_or_path distilgpt2 --dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 --do_train --max_train_samples 1 \
--per_device_train_batch_size $BS --output_dir /tmp/test-clm --block_size 128 --logging_steps 1 \
--fp16
```
```
Traceback (most recent call last):
File "examples/language-modeling/run_clm.py", line 453, in <module>
main()
File "examples/language-modeling/run_clm.py", line 336, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1259, in map
update_data=update_data,
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 157, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 389, in dumps
dump(obj, file)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 361, in dump
Pickler(file, recurse=True).dump(obj)
File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
StockPickler.dump(self, obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 556, in save_function
obj=obj,
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 524, in save
rv = reduce(self.proto)
TypeError: can't pickle _LazyModule objects
```
```
$ python --version
Python 3.7.4
$ python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.8.0.dev20210110+cu110
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
```
Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2194/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2193/comments | https://api.github.com/repos/huggingface/datasets/issues/2193/events | https://github.com/huggingface/datasets/issues/2193 | 853,725,707 | MDU6SXNzdWU4NTM3MjU3MDc= | 2,193 | Filtering/mapping on one column is very slow | {
"avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4",
"events_url": "https://api.github.com/users/norabelrose/events{/privacy}",
"followers_url": "https://api.github.com/users/norabelrose/followers",
"following_url": "https://api.github.com/users/norabelrose/following{/other_user}",
"gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/norabelrose",
"id": 39116809,
"login": "norabelrose",
"node_id": "MDQ6VXNlcjM5MTE2ODA5",
"organizations_url": "https://api.github.com/users/norabelrose/orgs",
"received_events_url": "https://api.github.com/users/norabelrose/received_events",
"repos_url": "https://api.github.com/users/norabelrose/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions",
"type": "User",
"url": "https://api.github.com/users/norabelrose"
} | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | [] | null | [] | 2021-04-08T18:16:14Z | 2021-04-26T16:13:59Z | 2021-04-26T16:13:59Z | CONTRIBUTOR | null | null | null | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should.
Indeed, `filter` currently just calls `map`, and I found that in `_map_single` on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function.
It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that; I'm not very familiar with the pyarrow API.
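For reference, a rough sketch of what I mean in pyarrow (hedged: the toy table stands in for the real tokenized dataset, and I'm assuming the underlying `pa.Table` is reachable):
```python
import pyarrow as pa

def iter_column_batches(table: pa.Table, columns, batch_size=1000):
    # select only the requested columns once, then slice the narrow table
    narrow = table.select(columns)
    for offset in range(0, narrow.num_rows, batch_size):
        yield narrow.slice(offset, batch_size).to_pydict()

# toy table standing in for the tokenized wikipedia dataset
table = pa.table({"text": ["a", "b"], "num_tokens": [1, 2]})
for batch in iter_column_batches(table, ["num_tokens"]):
    print(batch)
```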
I know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset.
PS: This is definitely not a "dataset request." I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2193/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2193/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2192/comments | https://api.github.com/repos/huggingface/datasets/issues/2192/events | https://github.com/huggingface/datasets/pull/2192 | 853,547,910 | MDExOlB1bGxSZXF1ZXN0NjExNjE5NTY0 | 2,192 | Fix typo in huggingface hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LysandreJik",
"id": 30755778,
"login": "LysandreJik",
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LysandreJik"
} | [] | closed | false | null | [] | null | [] | 2021-04-08T14:42:24Z | 2021-04-08T15:47:41Z | 2021-04-08T15:47:40Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2192.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2192",
"merged_at": "2021-04-08T15:47:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2192.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2192"
} | pip knows how to resolve to `huggingface_hub`, but conda doesn't!
The `packaging` dependency is also required for the build to complete. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2192/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2192/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2191/comments | https://api.github.com/repos/huggingface/datasets/issues/2191/events | https://github.com/huggingface/datasets/pull/2191 | 853,364,204 | MDExOlB1bGxSZXF1ZXN0NjExNDY1Nzc0 | 2,191 | Refactorize tests to use Dataset as context manager | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior",
"id": 2851292821,
"name": "refactoring",
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring"
}
] | closed | false | null | [] | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-04-16T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"id": 6644198,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"open_issues": 0,
"state": "closed",
"title": "1.6",
"updated_at": "2021-04-20T16:50:46Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1"
} | [] | 2021-04-08T11:21:04Z | 2021-04-19T07:53:11Z | 2021-04-19T07:53:10Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2191.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2191",
"merged_at": "2021-04-19T07:53:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2191.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2191"
} | Refactorize Dataset tests to use Dataset as context manager. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2191/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2191/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2190/comments | https://api.github.com/repos/huggingface/datasets/issues/2190/events | https://github.com/huggingface/datasets/issues/2190 | 853,181,564 | MDU6SXNzdWU4NTMxODE1NjQ= | 2,190 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | {
"avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4",
"events_url": "https://api.github.com/users/anassalamah/events{/privacy}",
"followers_url": "https://api.github.com/users/anassalamah/followers",
"following_url": "https://api.github.com/users/anassalamah/following{/other_user}",
"gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anassalamah",
"id": 8571003,
"login": "anassalamah",
"node_id": "MDQ6VXNlcjg1NzEwMDM=",
"organizations_url": "https://api.github.com/users/anassalamah/orgs",
"received_events_url": "https://api.github.com/users/anassalamah/received_events",
"repos_url": "https://api.github.com/users/anassalamah/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anassalamah"
} | [] | closed | false | null | [] | null | [] | 2021-04-08T07:53:43Z | 2021-05-24T10:03:55Z | 2021-05-24T10:03:55Z | NONE | null | null | null | I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
from itertools import chain
from datasets import load_dataset

train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
# filtering out examples that are not ar-en translations but ar-hi
val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312, 1327), range(1384, 1399), range(1030, 1042)), with_indices=True)
```
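As a sanity check before filtering, one can eyeball a few of the flagged indices (a rough sketch reusing the split definition above; the `translation` field layout is the standard one for this dataset):
```python
from itertools import chain

from datasets import load_dataset

val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
suspect = list(chain(range(1312, 1327), range(1384, 1399), range(1030, 1042)))
for i in suspect[:5]:
    print(val_ds[i]["translation"])  # expected keys: "ar" and "en"
```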
* I'm fairly new to using datasets so I might be doing something wrong | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2190/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2190/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2189/comments | https://api.github.com/repos/huggingface/datasets/issues/2189/events | https://github.com/huggingface/datasets/issues/2189 | 853,052,891 | MDU6SXNzdWU4NTMwNTI4OTE= | 2,189 | save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object. | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [] | closed | false | null | [] | null | [] | 2021-04-08T04:42:53Z | 2022-06-01T16:32:15Z | 2022-06-01T16:32:15Z | NONE | null | null | null | As you can see, it saves the entire dataset.
@lhoestq
You can check by going through the following example,
```
from datasets import load_from_disk, concatenate_datasets
loaded_data=load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n=20
kb_list=[loaded_data.shard(n, i, contiguous=True) for i in range(n)]
final_dataset=concatenate_datasets([kb_list[1],kb_list[2]])
final_dataset.save_to_disk('/home/gsir059/haha/k.arrow')
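# a possible workaround (an assumption on my part, not something I've verified at scale):
# flattening the indices mapping first should make save_to_disk write only the two shards
# final_dataset = final_dataset.flatten_indices()
# final_dataset.save_to_disk('/home/gsir059/haha/k.arrow')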
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2189/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2189/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2188/comments | https://api.github.com/repos/huggingface/datasets/issues/2188/events | https://github.com/huggingface/datasets/issues/2188 | 853,044,166 | MDU6SXNzdWU4NTMwNDQxNjY= | 2,188 | Duplicate data in Timit dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/78190188?v=4",
"events_url": "https://api.github.com/users/BHM-RB/events{/privacy}",
"followers_url": "https://api.github.com/users/BHM-RB/followers",
"following_url": "https://api.github.com/users/BHM-RB/following{/other_user}",
"gists_url": "https://api.github.com/users/BHM-RB/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BHM-RB",
"id": 78190188,
"login": "BHM-RB",
"node_id": "MDQ6VXNlcjc4MTkwMTg4",
"organizations_url": "https://api.github.com/users/BHM-RB/orgs",
"received_events_url": "https://api.github.com/users/BHM-RB/received_events",
"repos_url": "https://api.github.com/users/BHM-RB/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BHM-RB/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BHM-RB/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BHM-RB"
} | [] | closed | false | null | [] | null | [] | 2021-04-08T04:21:54Z | 2021-04-08T12:13:19Z | 2021-04-08T12:13:19Z | NONE | null | null | null | I ran a simple code to list all texts in Timit dataset and the texts were all the same.
Is this dataset corrupted?
**Code:**
```python
from datasets import load_dataset

timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
```
**Result:**
```
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
...
...
Would such an act of refusal be useful?
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2188/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2188/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2187/comments | https://api.github.com/repos/huggingface/datasets/issues/2187/events | https://github.com/huggingface/datasets/issues/2187 | 852,939,736 | MDU6SXNzdWU4NTI5Mzk3MzY= | 2,187 | Question (potential issue?) related to datasets caching | {
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ioana-blue",
"id": 17202292,
"login": "ioana-blue",
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ioana-blue"
} | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | open | false | null | [] | null | [] | 2021-04-08T00:16:28Z | 2021-04-14T14:55:58Z | null | NONE | null | null | null | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
    # disable caching in datasets
    set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.builder - Using custom data configuration default-888a87931cbc5877
04/07/2021 18:34:42 - WARNING - datasets.builder - Reusing dataset csv (xxxx/cache-transformers/datasets/csv/default-888a87931cbc5877/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93
```
Can you please let me know what this "Reusing dataset csv" message means? I wouldn't expect any reuse with datasets caching disabled. Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2187/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2187/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2186/comments | https://api.github.com/repos/huggingface/datasets/issues/2186/events | https://github.com/huggingface/datasets/pull/2186 | 852,840,819 | MDExOlB1bGxSZXF1ZXN0NjExMDMxNzE0 | 2,186 | GEM: new challenge sets | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [] | 2021-04-07T21:39:07Z | 2021-04-07T21:56:35Z | 2021-04-07T21:56:35Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2186.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2186",
"merged_at": "2021-04-07T21:56:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2186.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2186"
} | This PR updates the GEM dataset to:
- remove extraneous fields in WikiAuto after https://github.com/huggingface/datasets/pull/2171 fixed the source
- add context and services to Schema Guided Dialog
- add new or update existing challenge sets for MLSUM ES and DE, XSUM, and SGD | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2186/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2186/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2185/comments | https://api.github.com/repos/huggingface/datasets/issues/2185/events | https://github.com/huggingface/datasets/issues/2185 | 852,684,395 | MDU6SXNzdWU4NTI2ODQzOTU= | 2,185 | .map() and distributed training | {
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VictorSanh",
"id": 16107619,
"login": "VictorSanh",
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VictorSanh"
} | [] | closed | false | null | [] | null | [] | 2021-04-07T18:22:14Z | 2021-10-23T07:11:15Z | 2021-04-09T15:38:31Z | MEMBER | null | null | null | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokenize_function(examples):
    return tokenizer(examples[text_column_name])
logger.info("Mapping dataset to tokenized dataset.")
tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=preprocessing_num_workers,
    remove_columns=column_names,
    load_from_cache_file=True,
)
```
I am using 31 workers (`preprocessing_num_workers=31`) and thus it creates 31 `cache*.arrow` files in `my_path/train` (there is only a train split).
When I relaunch the script, the tokenization map is skipped in favor of loading the 31 previously cached files, and that's perfect.
Everything so far was done by launching a **single-process script**.
I now launch the same training script in **distributed mode** (`python -m torch.distributed.launch --nproc_per_node 2`). However, once it reaches the map call, it re-does the tokenization... instead of loading the 31 cached files.
I tried adding the `cache_file_name` argument: `cache_file_name={"train": my_path/one_of_the_arrow_file}`, but I can't pass all 31 cached files this way, so it probably isn't the right way to do it.
**My question: what is the best way to load cached files if they were pre-processed and dumped in multiple arrow files?** It seems automatically handled for single processes but fails on distributed training.
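In case it helps, here is a sketch of the workaround I am considering (hedged: it assumes `torch.distributed` is already initialized by the launcher, and it only helps if the cache lookup itself is deterministic across processes; `datasets`, `tokenize_function` and the other names are from my snippet above):
```python
import torch.distributed as dist

# let rank 0 build the cache first, then release the other ranks
if dist.get_rank() > 0:
    dist.barrier()

tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=preprocessing_num_workers,
    remove_columns=column_names,
    load_from_cache_file=True,
)

if dist.get_rank() == 0:
    dist.barrier()
```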
- I am following the same structure as the examples of transformers (more specifically [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) in my case)
- I am using 1.5.0 version of datasets if that matters. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2185/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2185/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2184/comments | https://api.github.com/repos/huggingface/datasets/issues/2184/events | https://github.com/huggingface/datasets/pull/2184 | 852,597,258 | MDExOlB1bGxSZXF1ZXN0NjEwODIxMTc0 | 2,184 | Implementation of class_encode_column | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
} | [] | closed | false | null | [] | null | [] | 2021-04-07T16:47:43Z | 2021-04-16T11:44:37Z | 2021-04-16T11:26:59Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2184.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2184",
"merged_at": "2021-04-16T11:26:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2184.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2184"
} | Addresses #2176
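A minimal usage sketch of what this enables (assuming the method lands under this name and signature):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"], "label": ["pos", "neg", "pos"]})
# cast the string column to a ClassLabel feature, encoding the values as ints
ds = ds.class_encode_column("label")
print(ds.features["label"])  # a ClassLabel with names ["neg", "pos"]
```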
I'm happy to discuss the API and internals! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2184/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2184/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2183/comments | https://api.github.com/repos/huggingface/datasets/issues/2183/events | https://github.com/huggingface/datasets/pull/2183 | 852,518,411 | MDExOlB1bGxSZXF1ZXN0NjEwNzU3MjUz | 2,183 | Fix s3fs tests for py36 and py37+ | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-04-07T15:17:11Z | 2021-04-08T08:54:45Z | 2021-04-08T08:54:44Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2183.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2183",
"merged_at": "2021-04-08T08:54:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2183.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2183"
} | Recently several changes happened:
1. latest versions of `fsspec` require python>3.7 for async features
2. `s3fs` added a dependency on `aiobotocore`, which is not compatible with the `moto` s3 mock context manager
This PR fixes both issues, by pinning `fsspec` and `s3fs` for python 3.6, and by using `moto` in server mode to support running the tests on python>=3.7 with the latest version of `fsspec` and `s3fs`.
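Roughly, the server-mode setup looks like this (a sketch from memory rather than the exact diff; the `moto_server` entry point, the port, and the fixture name are assumptions):
```python
import subprocess
import time

import pytest
import s3fs

@pytest.fixture(scope="session")
def s3_endpoint():
    # run moto as a standalone server instead of the in-process mock
    proc = subprocess.Popen(["moto_server", "s3", "-p", "5555"])
    time.sleep(1)  # crude wait for the server to come up
    yield "http://127.0.0.1:5555"
    proc.terminate()

def test_s3_filesystem(s3_endpoint):
    fs = s3fs.S3FileSystem(client_kwargs={"endpoint_url": s3_endpoint})
    assert isinstance(fs, s3fs.S3FileSystem)
```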
cc @philschmid | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2183/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2183/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2182/comments | https://api.github.com/repos/huggingface/datasets/issues/2182/events | https://github.com/huggingface/datasets/pull/2182 | 852,384,872 | MDExOlB1bGxSZXF1ZXN0NjEwNjQ2MDIy | 2,182 | Set default in-memory value depending on the dataset size | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-04-16T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"id": 6644198,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"open_issues": 0,
"state": "closed",
"title": "1.6",
"updated_at": "2021-04-20T16:50:46Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1"
} | [] | 2021-04-07T13:00:18Z | 2021-04-20T14:20:12Z | 2021-04-20T10:04:04Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2182.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2182",
"merged_at": "2021-04-20T10:04:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2182.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2182"
} | Set a default value for `in_memory` depending on the size of the dataset to be loaded.
Close #2179.
TODO:
- [x] Add a section in the docs about this.
- ~~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~~ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2182/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2182/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2181/comments | https://api.github.com/repos/huggingface/datasets/issues/2181/events | https://github.com/huggingface/datasets/issues/2181 | 852,261,607 | MDU6SXNzdWU4NTIyNjE2MDc= | 2,181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | {
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hwijeen",
"id": 29157715,
"login": "hwijeen",
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hwijeen"
} | [] | closed | false | null | [] | null | [] | 2021-04-07T10:26:46Z | 2021-04-12T07:15:55Z | 2021-04-12T07:15:55Z | NONE | null | null | null | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 531, in incomplete_dir
yield tmp_dir
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 650, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 1027, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__
for obj in iterable:
File "/app/.cache/huggingface/modules/datasets_modules/datasets/json/9498524fd296a6cca99c66d6c5be507d1c0991f5a814e535b507f4a66096a641/json.py", line 83, in _generate_tables
parse_options=self.config.pa_parse_options,
File "pyarrow/_json.pyx", line 247, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
When using only a small portion of the sample file, say the first 100 lines, it works perfectly well.
I see that the error comes from pyarrow, but could you give me a hint or possible solutions?
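For anyone else hitting this, one workaround to experiment with (a sketch using plain pyarrow rather than the datasets json loader, with a placeholder file name) is to raise the read block size:
```python
import pyarrow.json as paj

# larger blocks make it less likely that a single json object straddles two of them
read_options = paj.ReadOptions(block_size=1 << 26)  # 64 MiB
table = paj.read_json("sample.json", read_options=read_options)
```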
#369 describes the same error and #372 claims to have fixed the issue, but I have no clue why I am still getting this one. Thanks in advance! | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2181/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2181/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2180/comments | https://api.github.com/repos/huggingface/datasets/issues/2180/events | https://github.com/huggingface/datasets/pull/2180 | 852,258,635 | MDExOlB1bGxSZXF1ZXN0NjEwNTQxOTA2 | 2,180 | Add tel to xtreme tatoeba | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-04-07T10:23:15Z | 2021-04-07T15:50:35Z | 2021-04-07T15:50:34Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2180.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2180",
"merged_at": "2021-04-07T15:50:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2180.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2180"
} | This should fix issue #2149 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2180/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2180/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2179/comments | https://api.github.com/repos/huggingface/datasets/issues/2179/events | https://github.com/huggingface/datasets/issues/2179 | 852,237,957 | MDU6SXNzdWU4NTIyMzc5NTc= | 2,179 | Load small datasets in-memory instead of using memory map | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2021-04-07T09:58:16Z | 2021-04-20T10:04:04Z | 2021-04-20T10:04:03Z | MEMBER | null | null | null | Currently all datasets are loaded using memory mapping by default in `load_dataset`.
However, this might not be necessary for small datasets. If a dataset is small enough, then it can be loaded in-memory and:
- its memory footprint would be small so it's ok
- in-memory computations/queries would be faster
- the caching on-disk would be disabled, making computations even faster (no I/O bound because of the disk)
- but running the same computation a second time would recompute everything since there would be no cached results on-disk. But this is probably fine since computations would be fast anyway + users should be able to provide a cache filename if needed.
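Concretely, the decision could look something like this (a sketch; the threshold value and its name are placeholders, not a settled API):
```python
from typing import Optional

IN_MEMORY_MAX_SIZE = 250 * 2**20  # placeholder threshold: 250 MiB

def resolve_in_memory(dataset_size: Optional[int], keep_in_memory: Optional[bool] = None) -> bool:
    # an explicit user choice always wins; otherwise decide from the on-disk size
    if keep_in_memory is not None:
        return keep_in_memory
    return dataset_size is not None and dataset_size < IN_MEMORY_MAX_SIZE
```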
Therefore, maybe the default behavior of `load_dataset` should be to load small datasets in-memory and big datasets using memory mapping. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2179/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2179/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2178/comments | https://api.github.com/repos/huggingface/datasets/issues/2178/events | https://github.com/huggingface/datasets/pull/2178 | 852,215,058 | MDExOlB1bGxSZXF1ZXN0NjEwNTA1Mjg1 | 2,178 | Fix cast memory usage by using map on subtables | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-04-16T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"id": 6644198,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"open_issues": 0,
"state": "closed",
"title": "1.6",
"updated_at": "2021-04-20T16:50:46Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1"
} | [] | 2021-04-07T09:30:50Z | 2021-04-20T14:20:44Z | 2021-04-13T09:28:16Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2178.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2178",
"merged_at": "2021-04-13T09:28:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2178.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2178"
} | The `cast` operation on a pyarrow Table may create new arrays in memory.
This is an issue since users expect memory mapped datasets to not fill up the RAM.
To fix that I used `map` to write a new arrow file on disk when cast is used.
To make things more convenient I introduced the `arrow` formatting of a dataset, to make it return pyarrow tables instead of python dicts. This way one can use pyarrow transforms directly when using `map`.
Edit: we'll use the same mechanism for `filter`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2178/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2178/timeline | null | null | true |
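A minimal sketch of the user-facing `cast` call the PR above is about. The toy table and the `int32` target type are illustrative assumptions; the point of the PR is that the rewrite happens on disk via `map` instead of materializing new arrays in RAM (exact internals may differ by version):

```python
from datasets import Dataset, Features, Value

# illustrative toy table; the real use case is a memory-mapped dataset on disk
dset = Dataset.from_dict({"id": [1, 2, 3], "text": ["a", "b", "c"]})

# target schema: same columns, but "id" downcast to int32
new_features = Features({"id": Value("int32"), "text": Value("string")})

# after this PR, cast rewrites the data chunk by chunk rather than in RAM
dset = dset.cast(new_features)
print(dset.features)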
https://api.github.com/repos/huggingface/datasets/issues/2177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2177/comments | https://api.github.com/repos/huggingface/datasets/issues/2177/events | https://github.com/huggingface/datasets/pull/2177 | 852,065,307 | MDExOlB1bGxSZXF1ZXN0NjEwMzc5MDYx | 2,177 | add social thumbnail | {
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/philschmid",
"id": 32632186,
"login": "philschmid",
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"repos_url": "https://api.github.com/users/philschmid/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"type": "User",
"url": "https://api.github.com/users/philschmid"
} | [] | closed | false | null | [] | null | [] | 2021-04-07T06:40:06Z | 2021-04-07T08:16:01Z | 2021-04-07T08:16:01Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2177.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2177",
"merged_at": "2021-04-07T08:16:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2177.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2177"
} | # What does this PR do?
I added OpenGraph/Twitter Card support to the docs to create nice social thumbnails.
![Bildschirmfoto 2021-04-06 um 14 21 37](https://user-images.githubusercontent.com/32632186/113721947-34c22800-96e8-11eb-9ac5-1be1a98b1b89.png)
To be able to add these I needed to install `sphinxext-opengraph`. I came across this [issue](https://github.com/readthedocs/readthedocs.org/issues/1758) on the readthedocs repo saying that since someone has already built this plugin, they won't integrate or document one themselves. That's why I added it for building the documentation. The repository can be found [here](https://github.com/wpilibsuite/sphinxext-opengraph/tree/main).
P.S. It seems that `make style` never ran for `docs/`; I hope the changes are okay, otherwise I'll revert them. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2177/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2177/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2176/comments | https://api.github.com/repos/huggingface/datasets/issues/2176/events | https://github.com/huggingface/datasets/issues/2176 | 851,865,795 | MDU6SXNzdWU4NTE4NjU3OTU= | 2,176 | Converting a Value to a ClassLabel | {
"avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4",
"events_url": "https://api.github.com/users/nelson-liu/events{/privacy}",
"followers_url": "https://api.github.com/users/nelson-liu/followers",
"following_url": "https://api.github.com/users/nelson-liu/following{/other_user}",
"gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nelson-liu",
"id": 7272031,
"login": "nelson-liu",
"node_id": "MDQ6VXNlcjcyNzIwMzE=",
"organizations_url": "https://api.github.com/users/nelson-liu/orgs",
"received_events_url": "https://api.github.com/users/nelson-liu/received_events",
"repos_url": "https://api.github.com/users/nelson-liu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nelson-liu"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [] | 2021-04-06T22:54:16Z | 2022-06-01T16:31:49Z | 2022-06-01T16:31:49Z | NONE | null | null | null | Hi!
In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.`
Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2176/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2176/timeline | null | completed | true |
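One possible answer to the question in the record above, as a hedged sketch: the toy columns and label names are assumptions, while `ClassLabel.str2int` and `map(..., features=...)` are existing `datasets` APIs:

```python
from datasets import ClassLabel, Dataset, Features, Value

# hypothetical toy data with a string label column
dset = Dataset.from_dict({"text": ["good", "bad"], "label": ["pos", "neg"]})

features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})

# map converts strings to integer ids; passing `features` updates the schema
dset = dset.map(
    lambda ex: {"label": features["label"].str2int(ex["label"])},
    features=features,
)
print(dset.features["label"])  # now a ClassLabel instead of a string Value
```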
https://api.github.com/repos/huggingface/datasets/issues/2175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2175/comments | https://api.github.com/repos/huggingface/datasets/issues/2175/events | https://github.com/huggingface/datasets/issues/2175 | 851,836,096 | MDU6SXNzdWU4NTE4MzYwOTY= | 2,175 | dataset.search_batch() function outputs all -1 indices sometimes. | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [] | closed | false | null | [] | null | [] | 2021-04-06T21:50:49Z | 2021-04-16T12:21:16Z | 2021-04-16T12:21:15Z | NONE | null | null | null | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase, exactly at [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L231), an error occurs when all retrieved indices are -1. Please refer to the screenshot of a PID worker.
![image](https://user-images.githubusercontent.com/16892570/113428345-bd4f4480-9417-11eb-8c60-bfe4dc4a3b55.png)
Here my retrieval batch size is 2 and n_docs is 5. I can work around this at the np.stack call, but I want to ask why we get an output index of -1. Do you have any idea :)?
Is this a problem with the index, where faiss can't find any similar vector?
Is there documentation on the output index being -1?
@lhoestq
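Not the RAG code itself, but a self-contained illustration of when faiss returns -1: whenever the index cannot supply k results (for instance because the probed clusters hold fewer than k vectors), it pads the id array with -1. The toy dimensions below are assumptions:

```python
import numpy as np
import faiss

d, k = 64, 5
xt = np.random.rand(1000, d).astype("float32")  # training vectors
index = faiss.index_factory(d, "IVF16,Flat")
index.train(xt)
index.add(xt[:3])  # deliberately index fewer vectors than k

D, I = index.search(xt[:1], k)
print(I)  # e.g. [[ 1  0  2 -1 -1]] -- missing neighbors are padded with -1
# with IVF indexes, raising index.nprobe probes more clusters and usually
# removes the -1 padding when enough vectors actually exist in the index
```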
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2175/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2175/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2174 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2174/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2174/comments | https://api.github.com/repos/huggingface/datasets/issues/2174/events | https://github.com/huggingface/datasets/pull/2174 | 851,383,675 | MDExOlB1bGxSZXF1ZXN0NjA5ODE2OTQ2 | 2,174 | Pin docutils for better doc | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [] | closed | false | null | [] | null | [] | 2021-04-06T12:40:20Z | 2021-04-06T12:55:53Z | 2021-04-06T12:55:53Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2174.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2174",
"merged_at": "2021-04-06T12:55:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2174.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2174"
} | The latest release of docutils makes the navbar in the documentation look weird and causes the Markdown to be wrongly interpreted:

We had the same problem in Transformers and solved it by pinning docutils (a dep of sphinx).
You can see the version after the change [here](https://32769-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2174/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2174/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2173/comments | https://api.github.com/repos/huggingface/datasets/issues/2173/events | https://github.com/huggingface/datasets/pull/2173 | 851,359,284 | MDExOlB1bGxSZXF1ZXN0NjA5Nzk2NzI2 | 2,173 | Add OpenSLR dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cahya-wirawan",
"id": 7669893,
"login": "cahya-wirawan",
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cahya-wirawan"
} | [] | closed | false | null | [] | null | [] | 2021-04-06T12:08:34Z | 2021-04-12T16:54:46Z | 2021-04-12T16:54:46Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2173.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2173",
"merged_at": "2021-04-12T16:54:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2173.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2173"
} | OpenSLR (https://openslr.org/) is a site devoted to hosting speech and language resources, such as training corpora for speech recognition, and software related to speech recognition. There are around 80 speech datasets listed on OpenSLR; currently this PR includes only 9 of them: SLR41, SLR42, SLR43, SLR44, SLR63, SLR64, SLR65, SLR66 and SLR69 (Javanese, Khmer, Nepali and Sundanese, Malayalam, Marathi, Tamil, Telugu and Catalan). I can add the other speech datasets gradually later. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2173/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2173/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2172/comments | https://api.github.com/repos/huggingface/datasets/issues/2172/events | https://github.com/huggingface/datasets/pull/2172 | 851,229,399 | MDExOlB1bGxSZXF1ZXN0NjA5Njg4ODgx | 2,172 | Pin fsspec lower than 0.9.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-04-06T09:19:09Z | 2021-04-06T09:49:27Z | 2021-04-06T09:49:26Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2172.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2172",
"merged_at": "2021-04-06T09:49:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2172.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2172"
} | Today's release of `fsspec` 0.9.0 was accompanied by a new release of `s3fs`, 0.6.0, but this version breaks the CI (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/5312/workflows/490f3240-cd1c-4dd1-bb60-b416771c5584/jobs/32734) for example).
I'm pinning `fsspec` until this has been resolved. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2172/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2172/timeline | null | null | true |
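As a sketch of what such a pin typically looks like in `setup.py`, the exact version spec used in the PR above is an assumption here:

```python
# hypothetical excerpt from setup.py
install_requires = [
    # fsspec 0.9.0 pulled in s3fs 0.6.0, which broke the CI, so stay below it
    "fsspec<0.9.0",
]
```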
https://api.github.com/repos/huggingface/datasets/issues/2171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2171/comments | https://api.github.com/repos/huggingface/datasets/issues/2171/events | https://github.com/huggingface/datasets/pull/2171 | 851,090,662 | MDExOlB1bGxSZXF1ZXN0NjA5NTY4MDcw | 2,171 | Fixed the link to wikiauto training data. | {
"avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4",
"events_url": "https://api.github.com/users/mounicam/events{/privacy}",
"followers_url": "https://api.github.com/users/mounicam/followers",
"following_url": "https://api.github.com/users/mounicam/following{/other_user}",
"gists_url": "https://api.github.com/users/mounicam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mounicam",
"id": 11708999,
"login": "mounicam",
"node_id": "MDQ6VXNlcjExNzA4OTk5",
"organizations_url": "https://api.github.com/users/mounicam/orgs",
"received_events_url": "https://api.github.com/users/mounicam/received_events",
"repos_url": "https://api.github.com/users/mounicam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mounicam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mounicam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mounicam"
} | [] | closed | false | null | [] | null | [] | 2021-04-06T07:13:11Z | 2021-04-06T16:05:42Z | 2021-04-06T16:05:09Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2171.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2171",
"merged_at": "2021-04-06T16:05:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2171.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2171"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2171/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2171/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2170/comments | https://api.github.com/repos/huggingface/datasets/issues/2170/events | https://github.com/huggingface/datasets/issues/2170 | 850,913,228 | MDU6SXNzdWU4NTA5MTMyMjg= | 2,170 | Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date | {
"avatar_url": "https://avatars.githubusercontent.com/u/946903?v=4",
"events_url": "https://api.github.com/users/leezu/events{/privacy}",
"followers_url": "https://api.github.com/users/leezu/followers",
"following_url": "https://api.github.com/users/leezu/following{/other_user}",
"gists_url": "https://api.github.com/users/leezu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leezu",
"id": 946903,
"login": "leezu",
"node_id": "MDQ6VXNlcjk0NjkwMw==",
"organizations_url": "https://api.github.com/users/leezu/orgs",
"received_events_url": "https://api.github.com/users/leezu/received_events",
"repos_url": "https://api.github.com/users/leezu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leezu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leezu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leezu"
} | [] | open | false | null | [] | null | [] | 2021-04-06T03:13:18Z | 2021-06-16T01:10:50Z | null | NONE | null | null | null | Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides
```
20201220/ 02-Feb-2021 01:36 -
20210101/ 21-Feb-2021 01:26 -
20210120/ 02-Mar-2021 01:25 -
20210201/ 21-Mar-2021 01:26 -
20210220/ 02-Apr-2021 01:26 -
20210301/ 03-Mar-2021 08:10 -
20210320/ 21-Mar-2021 18:13 -
20210401/ 03-Apr-2021 10:08 -
latest/ 03-Apr-2021 10:08 -
```
However, the wikipedia dataset provided in the library only supports the following configs, none of which are applicable anymore once the cached datasets are disregarded:
```
ValueError: BuilderConfig 20210401.ko not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', 
'20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu']
```
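If the wikipedia script accepts ad-hoc config kwargs (an assumption worth checking against the installed version), a dump that still exists on dumps.wikimedia.org can be built locally instead of relying on the hardcoded configs; processing requires Apache Beam:

```python
from datasets import load_dataset

# hypothetical: build the 2021-04-01 Korean dump on the fly
wiki = load_dataset(
    "wikipedia",
    language="ko",
    date="20210401",
    beam_runner="DirectRunner",  # local Beam runner; large languages need more resources
)
```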
The cached datasets:
```
% aws s3 --no-sign-request --endpoint-url https://storage.googleapis.com ls s3://huggingface-nlp/cache/datasets/wikipedia/
PRE 20200501.de/
PRE 20200501.en/
PRE 20200501.fr/
PRE 20200501.frr/
PRE 20200501.it/
PRE 20200501.simple/
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2170/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2170/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2169/comments | https://api.github.com/repos/huggingface/datasets/issues/2169/events | https://github.com/huggingface/datasets/pull/2169 | 850,456,180 | MDExOlB1bGxSZXF1ZXN0NjA5MDI2ODUz | 2,169 | Updated WER metric implementation to avoid memory issues | {
"avatar_url": "https://avatars.githubusercontent.com/u/5707233?v=4",
"events_url": "https://api.github.com/users/diego-fustes/events{/privacy}",
"followers_url": "https://api.github.com/users/diego-fustes/followers",
"following_url": "https://api.github.com/users/diego-fustes/following{/other_user}",
"gists_url": "https://api.github.com/users/diego-fustes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/diego-fustes",
"id": 5707233,
"login": "diego-fustes",
"node_id": "MDQ6VXNlcjU3MDcyMzM=",
"organizations_url": "https://api.github.com/users/diego-fustes/orgs",
"received_events_url": "https://api.github.com/users/diego-fustes/received_events",
"repos_url": "https://api.github.com/users/diego-fustes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/diego-fustes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diego-fustes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/diego-fustes"
} | [] | closed | false | null | [] | null | [] | 2021-04-05T15:43:20Z | 2021-04-06T15:02:58Z | 2021-04-06T15:02:58Z | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2169.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2169",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2169.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2169"
} | This is in order to fix this issue:
https://github.com/huggingface/datasets/issues/2078
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2169/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2169/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2168/comments | https://api.github.com/repos/huggingface/datasets/issues/2168/events | https://github.com/huggingface/datasets/pull/2168 | 849,957,941 | MDExOlB1bGxSZXF1ZXN0NjA4NjA4Nzg5 | 2,168 | Preserve split type when reloading dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | 2021-04-04T20:46:21Z | 2021-04-19T10:57:05Z | 2021-04-19T09:08:55Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2168.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2168",
"merged_at": "2021-04-19T09:08:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2168.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2168"
} | Fixes #2167
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arrow_reader # gives us access to ReadInstruction and _RelativeInstruction
from . import splits # gives us access to NamedSplit
```
and then define the `eval` globals as follows:
```python
{**arrow_reader.__dict__, **splits.__dict__}
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2168/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2168/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2167/comments | https://api.github.com/repos/huggingface/datasets/issues/2167/events | https://github.com/huggingface/datasets/issues/2167 | 849,944,891 | MDU6SXNzdWU4NDk5NDQ4OTE= | 2,167 | Split type not preserved when reloading the dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | 2021-04-04T19:29:54Z | 2021-04-19T09:08:55Z | 2021-04-19T09:08:55Z | CONTRIBUTOR | null | null | null | A minimal reproducible example:
```python
>>> from datasets import load_dataset, Dataset
>>> dset = load_dataset("sst", split="train")
>>> dset.save_to_disk("sst")
>>> type(dset.split)
<class 'datasets.splits.NamedSplit'>
>>> dset = Dataset.load_from_disk("sst")
>>> type(dset.split) # NamedSplit expected
<class 'str'>
```
It seems like this bug was introduced in #2025. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2167/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2167/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2166/comments | https://api.github.com/repos/huggingface/datasets/issues/2166/events | https://github.com/huggingface/datasets/issues/2166 | 849,778,545 | MDU6SXNzdWU4NDk3Nzg1NDU= | 2,166 | Regarding Test Sets for the GEM datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4",
"events_url": "https://api.github.com/users/vyraun/events{/privacy}",
"followers_url": "https://api.github.com/users/vyraun/followers",
"following_url": "https://api.github.com/users/vyraun/following{/other_user}",
"gists_url": "https://api.github.com/users/vyraun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vyraun",
"id": 17217068,
"login": "vyraun",
"node_id": "MDQ6VXNlcjE3MjE3MDY4",
"organizations_url": "https://api.github.com/users/vyraun/orgs",
"received_events_url": "https://api.github.com/users/vyraun/received_events",
"repos_url": "https://api.github.com/users/vyraun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vyraun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vyraun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vyraun"
} | [
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] | closed | false | null | [] | null | [] | 2021-04-04T02:02:45Z | 2021-04-06T08:13:12Z | 2021-04-06T08:13:12Z | NONE | null | null | null | @yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)?
e.g.
```
from datasets import load_dataset
DATASET_NAME="common_gen"
data = load_dataset("gem", DATASET_NAME)
```
The test set doesn't have the target or references.
```
data['test'][0]
{'concept_set_id': 0, 'concepts': ['drill', 'field', 'run', 'team'], 'gem_id': 'common_gen-test-0', 'gem_parent_id': 'common_gen-test-0', 'references': [], 'target': ''}
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2166/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2166/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2165/comments | https://api.github.com/repos/huggingface/datasets/issues/2165/events | https://github.com/huggingface/datasets/issues/2165 | 849,771,665 | MDU6SXNzdWU4NDk3NzE2NjU= | 2,165 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/24562381?v=4",
"events_url": "https://api.github.com/users/y-rokutan/events{/privacy}",
"followers_url": "https://api.github.com/users/y-rokutan/followers",
"following_url": "https://api.github.com/users/y-rokutan/following{/other_user}",
"gists_url": "https://api.github.com/users/y-rokutan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/y-rokutan",
"id": 24562381,
"login": "y-rokutan",
"node_id": "MDQ6VXNlcjI0NTYyMzgx",
"organizations_url": "https://api.github.com/users/y-rokutan/orgs",
"received_events_url": "https://api.github.com/users/y-rokutan/received_events",
"repos_url": "https://api.github.com/users/y-rokutan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/y-rokutan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/y-rokutan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/y-rokutan"
} | [] | closed | false | null | [] | null | [] | 2021-04-04T01:01:48Z | 2021-08-24T15:55:35Z | 2021-04-07T15:06:04Z | NONE | null | null | null | Hi,
I'm trying to pretrain a DeepSpeed model using the HF arxiv dataset like:
```
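# note: `nlp` is the pre-1.0 package name of what is now the `datasets` library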
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
args=args,
model=model,
model_parameters=[p for p in model.parameters() if p.requires_grad],
training_data=train_ds)
```
but `deepspeed.initialize` accepts a `torch.utils.data.Dataset` only. How can I convert an HF-style dataset to a torch-style dataset?
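One possible answer, as a hedged sketch: since a `datasets.Dataset` already implements `__len__` and `__getitem__`, a thin wrapper is enough to satisfy the `torch.utils.data.Dataset` isinstance check (the class name here is made up):

```python
import torch

class TorchDatasetWrapper(torch.utils.data.Dataset):
    """Hypothetical thin adapter around a datasets.arrow_dataset.Dataset."""

    def __init__(self, hf_dataset):
        self.hf_dataset = hf_dataset

    def __len__(self):
        return len(self.hf_dataset)

    def __getitem__(self, idx):
        # with set_format("torch", ...) each row already comes back as tensors
        return self.hf_dataset[idx]

# engine, _, _, _ = deepspeed.initialize(..., training_data=TorchDatasetWrapper(train_ds))
```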
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2165/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2165/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2164/comments | https://api.github.com/repos/huggingface/datasets/issues/2164/events | https://github.com/huggingface/datasets/pull/2164 | 849,739,759 | MDExOlB1bGxSZXF1ZXN0NjA4NDQ0MTE3 | 2,164 | Replace assertTrue(isinstance with assertIsInstance in tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | 2021-04-03T21:07:02Z | 2021-04-06T14:41:09Z | 2021-04-06T14:41:08Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2164.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2164",
"merged_at": "2021-04-06T14:41:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2164.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2164"
} | Replaces all the occurrences of the `assertTrue(isinstance(` pattern with `assertIsInstance`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2164/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2164/timeline | null | null | true |
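A minimal before/after illustration of the pattern change in the PR above (a toy test case; the point is the more informative failure message from `assertIsInstance`):

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_type(self):
        value = []
        # before: a failure only reports "False is not true"
        self.assertTrue(isinstance(value, list))
        # after: a failure names the object and the expected type
        self.assertIsInstance(value, list)
```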
https://api.github.com/repos/huggingface/datasets/issues/2163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2163/comments | https://api.github.com/repos/huggingface/datasets/issues/2163/events | https://github.com/huggingface/datasets/pull/2163 | 849,669,366 | MDExOlB1bGxSZXF1ZXN0NjA4Mzk0NDMz | 2,163 | Concat only unique fields in DatasetInfo.from_merge | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | 2021-04-03T14:31:30Z | 2021-04-06T14:40:00Z | 2021-04-06T14:39:59Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2163.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2163",
"merged_at": "2021-04-06T14:39:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2163.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2163"
} | I thought someone from the community with less experience would be interested in fixing this issue, but that wasn't the case.
Fixes #2103 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2163/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2163/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2162/comments | https://api.github.com/repos/huggingface/datasets/issues/2162/events | https://github.com/huggingface/datasets/issues/2162 | 849,129,201 | MDU6SXNzdWU4NDkxMjkyMDE= | 2,162 | visualization for cc100 is broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | [] | 2021-04-02T10:11:13Z | 2022-10-05T13:20:24Z | 2022-10-05T13:20:24Z | NONE | null | null | null | Hi
the visualization through the dataset viewer for cc100 is broken:
https://huggingface.co/datasets/viewer/
thanks a lot
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2162/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2162/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2161/comments | https://api.github.com/repos/huggingface/datasets/issues/2161/events | https://github.com/huggingface/datasets/issues/2161 | 849,127,041 | MDU6SXNzdWU4NDkxMjcwNDE= | 2,161 | any possibility to download part of large datasets only? | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | [] | 2021-04-02T10:06:46Z | 2022-10-05T13:26:51Z | 2022-10-05T13:26:51Z | NONE | null | null | null | Hi
Some of the datasets I need, like cc100, are very large, so I wonder: can I download just the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2161/timeline | null | completed | true |
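For reference on the question above: split slicing reads only the first X examples, but, as far as I know, the full archive is still downloaded and processed first, so it does not answer the bandwidth part of the question. The `lang` kwarg follows the cc100 script's config:

```python
from datasets import load_dataset

# reads only the first 1000 examples of the generated split...
sample = load_dataset("cc100", lang="en", split="train[:1000]")
# ...but the underlying download/preparation still covers the whole corpus
```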
https://api.github.com/repos/huggingface/datasets/issues/2160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2160/comments | https://api.github.com/repos/huggingface/datasets/issues/2160/events | https://github.com/huggingface/datasets/issues/2160 | 849,052,921 | MDU6SXNzdWU4NDkwNTI5MjE= | 2,160 | data_args.preprocessing_num_workers almost freezes | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | [] | 2021-04-02T07:56:13Z | 2021-04-02T10:14:32Z | 2021-04-02T10:14:31Z | NONE | null | null | null | Hi @lhoestq
I am running this code from huggingface transformers https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
To speed up tokenization, since I am running on multiple datasets, I am using data_args.preprocessing_num_workers = 4 with the opus100 corpus. Tokenization proceeds up to a point, then almost freezes for some time before resuming, and overall it takes longer than the normal single-process case. I'd appreciate your advice on how to use this option properly to speed things up (the relevant call is sketched below).
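For reference, `data_args.preprocessing_num_workers` in `run_mlm.py` is forwarded to `Dataset.map(num_proc=...)`. A self-contained sketch with a toy function standing in for the real tokenizer call:

```python
from datasets import load_dataset

dset = load_dataset("opus100", "de-en", split="train")

def fake_tokenize(batch):
    # stand-in for the real tokenizer call in run_mlm.py
    return {"n_chars": [len(pair["en"]) for pair in batch["translation"]]}

# each of the 4 worker processes handles one shard of the dataset;
# very uneven shard runtimes can make the job look frozen for a while
dset = dset.map(fake_tokenize, batched=True, num_proc=4)
```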
thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2160/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2160/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2159/comments | https://api.github.com/repos/huggingface/datasets/issues/2159/events | https://github.com/huggingface/datasets/issues/2159 | 848,851,962 | MDU6SXNzdWU4NDg4NTE5NjI= | 2,159 | adding ccnet dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [] | 2021-04-01T23:28:36Z | 2021-04-02T10:05:19Z | 2021-04-02T10:05:19Z | NONE | null | null | null | ## Adding a Dataset
- **Name:** ccnet
- **Description:**
Common Crawl
- **Paper:**
https://arxiv.org/abs/1911.00359
- **Data:**
https://github.com/facebookresearch/cc_net
- **Motivation:**
This is one of the most comprehensive clean monolingual datasets across a variety of languages, and quite important for cross-lingual research.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2159/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2159/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2158/comments | https://api.github.com/repos/huggingface/datasets/issues/2158/events | https://github.com/huggingface/datasets/issues/2158 | 848,506,746 | MDU6SXNzdWU4NDg1MDY3NDY= | 2,158 | viewer "fake_news_english" error | {
"avatar_url": "https://avatars.githubusercontent.com/u/9447991?v=4",
"events_url": "https://api.github.com/users/emanuelevivoli/events{/privacy}",
"followers_url": "https://api.github.com/users/emanuelevivoli/followers",
"following_url": "https://api.github.com/users/emanuelevivoli/following{/other_user}",
"gists_url": "https://api.github.com/users/emanuelevivoli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emanuelevivoli",
"id": 9447991,
"login": "emanuelevivoli",
"node_id": "MDQ6VXNlcjk0NDc5OTE=",
"organizations_url": "https://api.github.com/users/emanuelevivoli/orgs",
"received_events_url": "https://api.github.com/users/emanuelevivoli/received_events",
"repos_url": "https://api.github.com/users/emanuelevivoli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emanuelevivoli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emanuelevivoli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emanuelevivoli"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | [] | 2021-04-01T14:13:20Z | 2022-10-05T13:22:02Z | 2022-10-05T13:22:02Z | NONE | null | null | null | When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error:
> ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional dependency for reading xlsx files' for instance'
as well as the error Traceback.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2158/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2158/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2157/comments | https://api.github.com/repos/huggingface/datasets/issues/2157/events | https://github.com/huggingface/datasets/pull/2157 | 847,205,239 | MDExOlB1bGxSZXF1ZXN0NjA2MjM1NjUx | 2,157 | updated user permissions based on umask | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | [] | 2021-03-31T19:38:29Z | 2021-04-06T07:19:19Z | 2021-04-06T07:19:19Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2157.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2157",
"merged_at": "2021-04-06T07:19:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2157.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2157"
} | Updated user permissions based on the running user's umask (#2065). Let me know if `0o666` looks good, or whether I should change it to `~umask` only (to give execute permissions as well). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2157/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2157/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2156/comments | https://api.github.com/repos/huggingface/datasets/issues/2156/events | https://github.com/huggingface/datasets/pull/2156 | 847,198,295 | MDExOlB1bGxSZXF1ZXN0NjA2MjI5MTky | 2,156 | User permissions | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | [] | 2021-03-31T19:33:48Z | 2021-03-31T19:34:24Z | 2021-03-31T19:34:24Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2156.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2156",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2156.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2156"
} | Updated user permissions based on the running user's umask. Let me know if `0o666` looks good, or whether I should change it to `~umask` only (to give execute permissions as well). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2156/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2156/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2155/comments | https://api.github.com/repos/huggingface/datasets/issues/2155/events | https://github.com/huggingface/datasets/pull/2155 | 846,786,897 | MDExOlB1bGxSZXF1ZXN0NjA1ODU3MTU4 | 2,155 | Add table classes to the documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-03-31T14:36:10Z | 2021-04-01T16:46:30Z | 2021-03-31T15:42:08Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2155.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2155",
"merged_at": "2021-03-31T15:42:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2155.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2155"
} | Following #2025, I added the table classes to the documentation.
cc @albertvillanova | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2155/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2155/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2154/comments | https://api.github.com/repos/huggingface/datasets/issues/2154/events | https://github.com/huggingface/datasets/pull/2154 | 846,763,960 | MDExOlB1bGxSZXF1ZXN0NjA1ODM2Mjc1 | 2,154 | Adding the NorNE dataset for Norwegian POS and NER | {
"avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4",
"events_url": "https://api.github.com/users/versae/events{/privacy}",
"followers_url": "https://api.github.com/users/versae/followers",
"following_url": "https://api.github.com/users/versae/following{/other_user}",
"gists_url": "https://api.github.com/users/versae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/versae",
"id": 173537,
"login": "versae",
"node_id": "MDQ6VXNlcjE3MzUzNw==",
"organizations_url": "https://api.github.com/users/versae/orgs",
"received_events_url": "https://api.github.com/users/versae/received_events",
"repos_url": "https://api.github.com/users/versae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/versae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/versae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/versae"
} | [] | closed | false | null | [] | null | [] | 2021-03-31T14:22:50Z | 2021-04-01T09:27:00Z | 2021-04-01T09:16:08Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2154.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2154",
"merged_at": "2021-04-01T09:16:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2154.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2154"
} | NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.
See #1720. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2154/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2154/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2153/comments | https://api.github.com/repos/huggingface/datasets/issues/2153/events | https://github.com/huggingface/datasets/issues/2153 | 846,181,502 | MDU6SXNzdWU4NDYxODE1MDI= | 2,153 | load_dataset ignoring features | {
"avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4",
"events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}",
"followers_url": "https://api.github.com/users/GuillemGSubies/followers",
"following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}",
"gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/GuillemGSubies",
"id": 37592763,
"login": "GuillemGSubies",
"node_id": "MDQ6VXNlcjM3NTkyNzYz",
"organizations_url": "https://api.github.com/users/GuillemGSubies/orgs",
"received_events_url": "https://api.github.com/users/GuillemGSubies/received_events",
"repos_url": "https://api.github.com/users/GuillemGSubies/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions",
"type": "User",
"url": "https://api.github.com/users/GuillemGSubies"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | 2021-03-31T08:30:09Z | 2022-10-05T13:29:12Z | 2022-10-05T13:29:12Z | NONE | null | null | null | First of all, I'm sorry if this is a repeated issue or the changes are already in master; I searched and didn't find anything.
I'm using datasets 1.5.0

As you can see, when I load the dataset, the ClassLabels are ignored; I have to cast the dataset in order to make it work.
Code to reproduce:
```python
import datasets
data_location = "/data/prueba_multiclase"
features = datasets.Features(
{"texto": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["false", "true"])}
)
dataset = datasets.load_dataset(
"csv", data_files=data_location, delimiter="\t", features=features
)
```
Dataset I used:
[prueba_multiclase.zip](https://github.com/huggingface/datasets/files/6235022/prueba_multiclase.zip) (it has to be unzipped)
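For anyone else hitting this, a workaround sketch that re-applies the intended features after loading — the method name is my assumption here; on datasets 1.5 the in-place variant may be `cast_`:
```python
# hypothetical workaround: force the ClassLabel features after loading
dataset = dataset.cast(features)  # on datasets 1.5 this may be dataset.cast_(features)
```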
Thank you! ❤️
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2153/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2153/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2152/comments | https://api.github.com/repos/huggingface/datasets/issues/2152/events | https://github.com/huggingface/datasets/pull/2152 | 845,751,273 | MDExOlB1bGxSZXF1ZXN0NjA0ODk0MDkz | 2,152 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}",
"followers_url": "https://api.github.com/users/JieyuZhao/followers",
"following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JieyuZhao",
"id": 22306304,
"login": "JieyuZhao",
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"organizations_url": "https://api.github.com/users/JieyuZhao/orgs",
"received_events_url": "https://api.github.com/users/JieyuZhao/received_events",
"repos_url": "https://api.github.com/users/JieyuZhao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JieyuZhao"
} | [] | closed | false | null | [] | null | [] | 2021-03-31T03:21:19Z | 2021-04-01T10:20:37Z | 2021-04-01T10:20:36Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2152.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2152",
"merged_at": "2021-04-01T10:20:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2152.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2152"
} | Updated some descriptions of the Wino_Bias dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2152/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2152/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2151/comments | https://api.github.com/repos/huggingface/datasets/issues/2151/events | https://github.com/huggingface/datasets/pull/2151 | 844,886,081 | MDExOlB1bGxSZXF1ZXN0NjA0MDg5MDMw | 2,151 | Add support for axis in concatenate datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-04-16T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"id": 6644198,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"open_issues": 0,
"state": "closed",
"title": "1.6",
"updated_at": "2021-04-20T16:50:46Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1"
} | [] | 2021-03-30T16:58:44Z | 2021-06-23T17:41:02Z | 2021-04-19T16:07:18Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2151.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2151",
"merged_at": "2021-04-19T16:07:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2151.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2151"
} | Add support for `axis` (0 or 1) in `concatenate_datasets`.
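A usage sketch of what this enables (my illustration, based on the PR description):
```python
from datasets import Dataset, concatenate_datasets

ds_a = Dataset.from_dict({"text": ["a", "b"]})
ds_b = Dataset.from_dict({"text": ["c", "d"]})
ds_c = Dataset.from_dict({"label": [0, 1]})

rows = concatenate_datasets([ds_a, ds_b], axis=0)  # 4 rows, same columns
cols = concatenate_datasets([ds_a, ds_c], axis=1)  # 2 rows, columns: text + label
```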
Close #853. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2151/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2151/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2150/comments | https://api.github.com/repos/huggingface/datasets/issues/2150/events | https://github.com/huggingface/datasets/pull/2150 | 844,776,448 | MDExOlB1bGxSZXF1ZXN0NjAzOTg3OTcx | 2,150 | Allow pickling of big in-memory tables | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-03-30T15:51:56Z | 2021-03-31T10:37:15Z | 2021-03-31T10:37:14Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2150.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2150",
"merged_at": "2021-03-31T10:37:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2150.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2150"
} | This should fix issue #2134
Pickling is limited to objects smaller than 4 GiB, so it's not possible to pickle a big Arrow table (for multiprocessing, for example). For big tables, we have to write them to disk and only pickle the path to the table, as sketched below.
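A minimal sketch of the idea, assuming pyarrow's feather I/O (my own illustration — the PR's actual mechanism may differ):
```python
import pyarrow as pa
import pyarrow.feather as feather

class BigTableHandle:
    """Wraps a pyarrow Table; pickles as a file path instead of the table bytes."""

    def __init__(self, table: pa.Table, path: str):
        self.table = table
        self.path = path

    def __reduce__(self):
        feather.write_feather(self.table, self.path)  # materialize the table on disk
        return (BigTableHandle._from_path, (self.path,))

    @staticmethod
    def _from_path(path: str) -> "BigTableHandle":
        return BigTableHandle(feather.read_table(path), path)
```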
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2150/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2150/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2149/comments | https://api.github.com/repos/huggingface/datasets/issues/2149/events | https://github.com/huggingface/datasets/issues/2149 | 844,734,076 | MDU6SXNzdWU4NDQ3MzQwNzY= | 2,149 | Telugu subset missing for xtreme tatoeba dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}",
"followers_url": "https://api.github.com/users/jerryIsHere/followers",
"following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jerryIsHere",
"id": 50871412,
"login": "jerryIsHere",
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"organizations_url": "https://api.github.com/users/jerryIsHere/orgs",
"received_events_url": "https://api.github.com/users/jerryIsHere/received_events",
"repos_url": "https://api.github.com/users/jerryIsHere/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jerryIsHere"
} | [] | closed | false | null | [] | null | [] | 2021-03-30T15:26:34Z | 2022-10-05T13:28:30Z | 2022-10-05T13:28:30Z | CONTRIBUTOR | null | null | null | from nlp import load_dataset
train_dataset = load_dataset('xtreme', 'tatoeba.tel')['validation']
ValueError: BuilderConfig tatoeba.tel not found.
However, the language `tel` is actually included in xtreme:
https://github.com/google-research/xtreme/blob/master/utils_preprocess.py
```python
def tatoeba_preprocess(args):
    lang3_dict = {
        'afr':'af', 'ara':'ar', 'bul':'bg', 'ben':'bn',
        'deu':'de', 'ell':'el', 'spa':'es', 'est':'et',
        'eus':'eu', 'pes':'fa', 'fin':'fi', 'fra':'fr',
        'heb':'he', 'hin':'hi', 'hun':'hu', 'ind':'id',
        'ita':'it', 'jpn':'ja', 'jav':'jv', 'kat':'ka',
        'kaz':'kk', 'kor':'ko', 'mal':'ml', 'mar':'mr',
        'nld':'nl', 'por':'pt', 'rus':'ru', 'swh':'sw',
        'tam':'ta', 'tel':'te',  # <---- 'tel' is here
        'tha':'th', 'tgl':'tl',
        'tur':'tr', 'urd':'ur', 'vie':'vi', 'cmn':'zh',
        'eng':'en',
    }
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2149/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2149/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2148/comments | https://api.github.com/repos/huggingface/datasets/issues/2148/events | https://github.com/huggingface/datasets/issues/2148 | 844,700,910 | MDU6SXNzdWU4NDQ3MDA5MTA= | 2,148 | Add configurable options to `seqeval` metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4",
"events_url": "https://api.github.com/users/marrodion/events{/privacy}",
"followers_url": "https://api.github.com/users/marrodion/followers",
"following_url": "https://api.github.com/users/marrodion/following{/other_user}",
"gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marrodion",
"id": 44571847,
"login": "marrodion",
"node_id": "MDQ6VXNlcjQ0NTcxODQ3",
"organizations_url": "https://api.github.com/users/marrodion/orgs",
"received_events_url": "https://api.github.com/users/marrodion/received_events",
"repos_url": "https://api.github.com/users/marrodion/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marrodion/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marrodion"
} | [] | closed | false | null | [] | null | [] | 2021-03-30T15:04:06Z | 2021-04-15T13:49:46Z | 2021-04-15T13:49:46Z | CONTRIBUTOR | null | null | null | Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation).
However, the seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in simply by supporting additional kwargs in `Seqeval._compute`:
https://github.com/huggingface/datasets/blob/85cf7ff920c90ca2e12bedca12b36d2a043c3da2/metrics/seqeval/seqeval.py#L109
Things that would be relevant are, for example, supporting `mode="strict", scheme=IOB2` to count only a full entity match as a true positive and omit partial matches.
The only problem I see is that the spirit of `metrics` seems to be to not require additional imports from the user, while `seqeval` only supports schemes as objects, without any string aliases.
It can be solved naively with a mapping like `{"IOB2": seqeval.scheme.IOB2}`. Or it can be left as is, requiring the user to explicitly import the scheme from `seqeval` if they want to configure it past the default implementation.
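For reference, a sketch of how that naive mapping could be used with seqeval directly (the mapping name is mine; the seqeval calls are its documented API):
```python
from seqeval.metrics import classification_report
from seqeval.scheme import IOB1, IOB2, IOBES

SCHEMES = {"IOB1": IOB1, "IOB2": IOB2, "IOBES": IOBES}  # illustrative string aliases

y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "O", "O", "B-LOC"]]

# strict mode with an explicit scheme counts only full entity matches
print(classification_report(y_true, y_pred, mode="strict", scheme=SCHEMES["IOB2"]))
```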
If that makes sense, I am happy to implement the change. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2148/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2148/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2147/comments | https://api.github.com/repos/huggingface/datasets/issues/2147/events | https://github.com/huggingface/datasets/pull/2147 | 844,687,831 | MDExOlB1bGxSZXF1ZXN0NjAzOTA3NjM4 | 2,147 | Render docstring return type as inline | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | [] | 2021-03-30T14:55:43Z | 2021-03-31T13:11:05Z | 2021-03-31T13:11:05Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2147.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2147",
"merged_at": "2021-03-31T13:11:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2147.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2147"
} | This documentation setting will avoid having the return type in a separate line under `Return type`.
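Presumably this refers to the `sphinx.ext.napoleon` option below (my assumption — the diff itself isn't shown here):
```python
# docs/source/conf.py (assumed location)
napoleon_use_rtype = False  # render the return type inline instead of under "Return type"
```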
See e.g. current docs for `Dataset.to_csv`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2147/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2147/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2146/comments | https://api.github.com/repos/huggingface/datasets/issues/2146/events | https://github.com/huggingface/datasets/issues/2146 | 844,673,244 | MDU6SXNzdWU4NDQ2NzMyNDQ= | 2,146 | Dataset file size on disk is very large with 3D Array | {
"avatar_url": "https://avatars.githubusercontent.com/u/22685854?v=4",
"events_url": "https://api.github.com/users/jblemoine/events{/privacy}",
"followers_url": "https://api.github.com/users/jblemoine/followers",
"following_url": "https://api.github.com/users/jblemoine/following{/other_user}",
"gists_url": "https://api.github.com/users/jblemoine/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jblemoine",
"id": 22685854,
"login": "jblemoine",
"node_id": "MDQ6VXNlcjIyNjg1ODU0",
"organizations_url": "https://api.github.com/users/jblemoine/orgs",
"received_events_url": "https://api.github.com/users/jblemoine/received_events",
"repos_url": "https://api.github.com/users/jblemoine/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jblemoine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jblemoine/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jblemoine"
} | [] | open | false | null | [] | null | [] | 2021-03-30T14:46:09Z | 2021-04-16T13:07:02Z | null | NONE | null | null | null | Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as a 3D array (`Array3D`) with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
```
{
"description": "",
"citation": "",
"homepage": "",
"license": "",
"features": {
"image": {
"shape": [224, 224, 3],
"dtype": "uint8",
"id": null,
"_type": "Array3D",
}
},
"post_processed": null,
"supervised_keys": null,
"builder_name": "shot_type_image_dataset",
"config_name": "default",
"version": {
"version_str": "0.0.0",
"description": null,
"major": 0,
"minor": 0,
"patch": 0,
},
"splits": {
"train": {
"name": "train",
"num_bytes": 520803408,
"num_examples": 1479,
"dataset_name": "shot_type_image_dataset",
}
},
"download_checksums": {
"": {
"num_bytes": 16940447118,
"checksum": "5854035705efe08b0ed8f3cf3da7b4d29cba9055c2d2d702c79785350d72ee03",
}
},
"download_size": 16940447118,
"post_processing_size": null,
"dataset_size": 520803408,
"size_in_bytes": 17461250526,
}
```
I have created the same dataset with tensorflow_datasets and it takes only 125 MB on disk.
I am wondering: is this normal behavior? I understand `Datasets` uses Arrow for serialization, whereas TF uses TFRecords.
This might be a problem for large datasets.
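For what it's worth, a quick back-of-the-envelope check on the numbers above (my own arithmetic, not from the report):
```python
num_examples = 1479
raw_bytes = num_examples * 224 * 224 * 3  # one byte per uint8 element
print(raw_bytes)                # 222630912, i.e. ~212 MiB of raw pixels
print(520_803_408 / raw_bytes)  # ~2.34: on-disk size is over twice the raw payload
```
So the reported size exceeds even the uncompressed pixel payload, on top of the compression difference with TFRecords; I haven't verified where that overhead comes from.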
Thanks for your help.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2146/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2146/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2145/comments | https://api.github.com/repos/huggingface/datasets/issues/2145/events | https://github.com/huggingface/datasets/pull/2145 | 844,603,518 | MDExOlB1bGxSZXF1ZXN0NjAzODMxOTE2 | 2,145 | Implement Dataset add_column | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-05-31T16:20:53Z",
"closed_issues": 3,
"created_at": "2021-04-09T13:16:31Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-05-14T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/3",
"id": 6644287,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==",
"number": 3,
"open_issues": 0,
"state": "closed",
"title": "1.7",
"updated_at": "2021-05-31T16:20:53Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/3"
} | [] | 2021-03-30T14:02:14Z | 2021-04-29T14:50:44Z | 2021-04-29T14:50:43Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2145.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2145",
"merged_at": "2021-04-29T14:50:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2145.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2145"
} | Implement `Dataset.add_column`.
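A usage sketch (my illustration; the method is the one this PR implements):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
ds = ds.add_column("label", [0, 1, 0])  # the new column must have one entry per row
print(ds.column_names)  # ['text', 'label']
```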
Close #1954. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2145/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2145/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2144/comments | https://api.github.com/repos/huggingface/datasets/issues/2144/events | https://github.com/huggingface/datasets/issues/2144 | 844,352,067 | MDU6SXNzdWU4NDQzNTIwNjc= | 2,144 | Loading wikipedia 20200501.en throws pyarrow related error | {
"avatar_url": "https://avatars.githubusercontent.com/u/26637405?v=4",
"events_url": "https://api.github.com/users/TomPyonsuke/events{/privacy}",
"followers_url": "https://api.github.com/users/TomPyonsuke/followers",
"following_url": "https://api.github.com/users/TomPyonsuke/following{/other_user}",
"gists_url": "https://api.github.com/users/TomPyonsuke/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TomPyonsuke",
"id": 26637405,
"login": "TomPyonsuke",
"node_id": "MDQ6VXNlcjI2NjM3NDA1",
"organizations_url": "https://api.github.com/users/TomPyonsuke/orgs",
"received_events_url": "https://api.github.com/users/TomPyonsuke/received_events",
"repos_url": "https://api.github.com/users/TomPyonsuke/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TomPyonsuke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomPyonsuke/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TomPyonsuke"
} | [] | open | false | null | [] | null | [] | 2021-03-30T10:38:31Z | 2021-04-01T09:21:17Z | null | NONE | null | null | null | **Problem description**
I am getting the following error when trying to load the wikipedia/20200501.en dataset.
**Error log**
```
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931...
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14.6k/14.6k [00:00<00:00, 5.41MB/s]
Downloading: 59%|███████████████████████████████████████████████████████████████████████████████████████▊ | 10.7G/18.3G [11:30<08:08, 15.5MB/s]
Dataset wikipedia downloaded and prepared to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931. Subsequent calls will reuse this data.
Traceback (most recent call last):
File "load_wiki.py", line 2, in <module>
ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')
File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 751, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 746, in as_dataset
map_tuple=True,
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested
return function(data_struct)
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 763, in _build_single_dataset
in_memory=in_memory,
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 835, in _as_dataset
in_memory=in_memory,
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 215, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 236, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 171, in _read_files
pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename
pa_table = ArrowReader.read_table(filename, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 324, in read_table
pa_table = f.read_all()
File "pyarrow/ipc.pxi", line 544, in pyarrow.lib.RecordBatchReader.read_all
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
OSError: Expected to be able to read 9176784 bytes for message body, got 4918712
```
**Detailed version info**
```
datasets==1.5.0
- dataclasses [required: Any, installed: 0.8]
- dill [required: Any, installed: 0.3.3]
- fsspec [required: Any, installed: 0.8.7]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- huggingface-hub [required: <0.1.0, installed: 0.0.7]
- filelock [required: Any, installed: 3.0.12]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- requests [required: Any, installed: 2.24.0]
- certifi [required: >=2017.4.17, installed: 2020.6.20]
- chardet [required: >=3.0.2,<4, installed: 3.0.4]
- idna [required: >=2.5,<3, installed: 2.6]
- urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10]
- tqdm [required: Any, installed: 4.49.0]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- multiprocess [required: Any, installed: 0.70.11.1]
- dill [required: >=0.3.3, installed: 0.3.3]
- numpy [required: >=1.17, installed: 1.17.0]
- pandas [required: Any, installed: 1.1.5]
- numpy [required: >=1.15.4, installed: 1.17.0]
- python-dateutil [required: >=2.7.3, installed: 2.8.0]
- six [required: >=1.5, installed: 1.15.0]
- pytz [required: >=2017.2, installed: 2020.1]
- pyarrow [required: >=0.17.1, installed: 3.0.0]
- numpy [required: >=1.16.6, installed: 1.17.0]
- requests [required: >=2.19.0, installed: 2.24.0]
- certifi [required: >=2017.4.17, installed: 2020.6.20]
- chardet [required: >=3.0.2,<4, installed: 3.0.4]
- idna [required: >=2.5,<3, installed: 2.6]
- urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10]
- tqdm [required: >=4.27,<4.50.0, installed: 4.49.0]
- xxhash [required: Any, installed: 2.0.0]
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2144/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2144/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2143/comments | https://api.github.com/repos/huggingface/datasets/issues/2143/events | https://github.com/huggingface/datasets/pull/2143 | 844,313,228 | MDExOlB1bGxSZXF1ZXN0NjAzNTc0NjI0 | 2,143 | task casting via load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
}
] | null | [] | 2021-03-30T10:00:42Z | 2021-06-11T13:20:41Z | 2021-06-11T13:20:36Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2143.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2143",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2143.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2143"
} | wip
Not satisfied with the API: it means that, as a dataset implementer, I need to write a boilerplate function and classes for each `<dataset><task>` "facet". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2143/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2143/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2142/comments | https://api.github.com/repos/huggingface/datasets/issues/2142/events | https://github.com/huggingface/datasets/pull/2142 | 843,919,420 | MDExOlB1bGxSZXF1ZXN0NjAzMjQwMzUy | 2,142 | Gem V1.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [] | 2021-03-29T23:47:02Z | 2021-03-30T00:10:02Z | 2021-03-30T00:10:02Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2142.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2142",
"merged_at": "2021-03-30T00:10:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2142.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2142"
} | This branch updates the GEM benchmark to its 1.1 version, which includes:
- challenge sets for most tasks
- detokenized TurkCorpus to match the rest of the text simplification subtasks
- fixed inputs for TurkCorpus and ASSET test sets
- 18 languages in WikiLingua
cc @sebastianGehrmann | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2142/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2142/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2141/comments | https://api.github.com/repos/huggingface/datasets/issues/2141/events | https://github.com/huggingface/datasets/pull/2141 | 843,914,790 | MDExOlB1bGxSZXF1ZXN0NjAzMjM2MjUw | 2,141 | added spans field for the wikiann datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehk",
"id": 6278280,
"login": "rabeehk",
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehk"
} | [] | closed | false | null | [] | null | [] | 2021-03-29T23:38:26Z | 2021-03-31T13:27:50Z | 2021-03-31T13:27:50Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2141.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2141",
"merged_at": "2021-03-31T13:27:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2141.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2141"
} | Hi @lhoestq
I tried to add spans to the wikiann datasets.
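For context, a rough sketch of how such a spans field could be derived from IOB tags (my own illustration — the actual field format in this PR may differ):
```python
def iob_to_spans(tokens, ner_tags):
    """Collapse IOB-tagged tokens into 'TYPE: surface text' strings (illustrative)."""
    spans, current = [], None  # current = (entity_type, [tokens])
    for token, tag in zip(tokens, ner_tags):
        if tag.startswith("I-") and current is not None and tag[2:] == current[0]:
            current[1].append(token)  # continue the open span
            continue
        if current is not None:       # close the open span
            spans.append(f"{current[0]}: {' '.join(current[1])}")
            current = None
        if tag != "O":                # "B-XXX" (or a stray "I-XXX") opens a new span
            current = (tag[2:], [token])
    if current is not None:
        spans.append(f"{current[0]}: {' '.join(current[1])}")
    return spans

print(iob_to_spans(["John", "lives", "in", "New", "York"],
                   ["B-PER", "O", "O", "B-LOC", "I-LOC"]))
# ['PER: John', 'LOC: New York']
```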
Thanks a lot for kindly having a look.
This addresses https://github.com/huggingface/datasets/issues/2130.
Best regards
Rabeeh | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2141/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2141/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2140/comments | https://api.github.com/repos/huggingface/datasets/issues/2140/events | https://github.com/huggingface/datasets/pull/2140 | 843,830,451 | MDExOlB1bGxSZXF1ZXN0NjAzMTYxMjYx | 2,140 | add banking77 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4",
"events_url": "https://api.github.com/users/dkajtoch/events{/privacy}",
"followers_url": "https://api.github.com/users/dkajtoch/followers",
"following_url": "https://api.github.com/users/dkajtoch/following{/other_user}",
"gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dkajtoch",
"id": 32985207,
"login": "dkajtoch",
"node_id": "MDQ6VXNlcjMyOTg1MjA3",
"organizations_url": "https://api.github.com/users/dkajtoch/orgs",
"received_events_url": "https://api.github.com/users/dkajtoch/received_events",
"repos_url": "https://api.github.com/users/dkajtoch/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dkajtoch"
} | [] | closed | false | null | [] | null | [] | 2021-03-29T21:32:23Z | 2021-04-09T09:32:18Z | 2021-04-09T09:32:18Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2140.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2140",
"merged_at": "2021-04-09T09:32:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2140.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2140"
} | An intent classification/detection dataset from the banking domain with 77 unique intents. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2140/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2140/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2139/comments | https://api.github.com/repos/huggingface/datasets/issues/2139/events | https://github.com/huggingface/datasets/issues/2139 | 843,662,613 | MDU6SXNzdWU4NDM2NjI2MTM= | 2,139 | TypeError when using save_to_disk in a dataset loaded with ReadInstruction split | {
"avatar_url": "https://avatars.githubusercontent.com/u/22480495?v=4",
"events_url": "https://api.github.com/users/PedroMLF/events{/privacy}",
"followers_url": "https://api.github.com/users/PedroMLF/followers",
"following_url": "https://api.github.com/users/PedroMLF/following{/other_user}",
"gists_url": "https://api.github.com/users/PedroMLF/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PedroMLF",
"id": 22480495,
"login": "PedroMLF",
"node_id": "MDQ6VXNlcjIyNDgwNDk1",
"organizations_url": "https://api.github.com/users/PedroMLF/orgs",
"received_events_url": "https://api.github.com/users/PedroMLF/received_events",
"repos_url": "https://api.github.com/users/PedroMLF/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PedroMLF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PedroMLF/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PedroMLF"
} | [] | closed | false | null | [] | null | [] | 2021-03-29T18:23:54Z | 2021-03-30T09:12:53Z | 2021-03-30T09:12:53Z | NONE | null | null | null | Hi,
Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`.
Here is the minimal reproducible example:
```python
from datasets import load_dataset
from datasets import ReadInstruction
data_1 = load_dataset(
    "wikiann",
    "en",
    split="validation",
)
data_1.save_to_disk("temporary_path_1")
print("Save with regular split works.")
data_2 = load_dataset(
    "wikiann",
    "en",
    split=ReadInstruction("validation", to=50, unit="%"),
)
data_2.save_to_disk("temporary_path_2")
```
and the corresponding output:
```
Reusing dataset wikiann (/xxxxx/.cache/huggingface/datasets/wikiann/en/1.1.0/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9)
Save with regular split works.
Reusing dataset wikiann (/xxxxx/.cache/huggingface/datasets/wikiann/en/1.1.0/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9)
Traceback (most recent call last):
File "bug.py", line 20, in <module>
data_2.save_to_disk("temporary_path_2")
File "/xxxxx/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 645, in save_to_disk
json.dump(state, state_file, indent=2, sort_keys=True)
File "/usr/lib/python3.7/json/__init__.py", line 179, in dump
for chunk in iterable:
File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/usr/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type ReadInstruction is not JSON serializable
```
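As a stopgap, the equivalent string slicing syntax for the split avoids putting a `ReadInstruction` object into the state that gets JSON-serialized (a sketch of the workaround, not a fix for the underlying bug):
```python
data_2 = load_dataset(
    "wikiann",
    "en",
    split="validation[:50%]",  # equivalent to ReadInstruction("validation", to=50, unit="%")
)
data_2.save_to_disk("temporary_path_2")
```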
Let me know if there is some misuse from my end.
Thanks in advance.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2139/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2139/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2138/comments | https://api.github.com/repos/huggingface/datasets/issues/2138/events | https://github.com/huggingface/datasets/pull/2138 | 843,508,402 | MDExOlB1bGxSZXF1ZXN0NjAyODc4NzU2 | 2,138 | Add CER metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/6931004?v=4",
"events_url": "https://api.github.com/users/chutaklee/events{/privacy}",
"followers_url": "https://api.github.com/users/chutaklee/followers",
"following_url": "https://api.github.com/users/chutaklee/following{/other_user}",
"gists_url": "https://api.github.com/users/chutaklee/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chutaklee",
"id": 6931004,
"login": "chutaklee",
"node_id": "MDQ6VXNlcjY5MzEwMDQ=",
"organizations_url": "https://api.github.com/users/chutaklee/orgs",
"received_events_url": "https://api.github.com/users/chutaklee/received_events",
"repos_url": "https://api.github.com/users/chutaklee/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chutaklee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chutaklee/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chutaklee"
} | [] | closed | false | null | [] | null | [] | 2021-03-29T15:52:27Z | 2021-04-06T16:16:11Z | 2021-04-06T07:14:38Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2138.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2138",
"merged_at": "2021-04-06T07:14:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2138.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2138"
} | Add the Character Error Rate (CER) metric, which is used to evaluate ASR systems. I have also written unit tests (hopefully thorough enough), but I'm not sure how to integrate them into the existing codebase.
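For context, this is how I would expect the metric to be used once integrated (a sketch; loading it by the name "cer" assumes it lands under the usual `metrics/cer/` layout):
```python
from datasets import load_metric

cer = load_metric("cer")
# returns the character error rate as a float
score = cer.compute(predictions=["hello duck"], references=["hello world"])
```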
```python
import unittest

from cer import CER

cer = CER()


class TestCER(unittest.TestCase):
    def test_cer_case_sensitive(self):
        refs = ['White House']
        preds = ['white house']
        # S = 2, D = 0, I = 0, N = 11, CER = 2 / 11
        char_error_rate = cer.compute(predictions=preds, references=refs)
        self.assertTrue(abs(char_error_rate - 0.1818181818) < 1e-6)

    def test_cer_whitespace(self):
        refs = ['were wolf']
        preds = ['werewolf']
        # S = 0, D = 0, I = 1, N = 9, CER = 1 / 9
        char_error_rate = cer.compute(predictions=preds, references=refs)
        self.assertTrue(abs(char_error_rate - 0.1111111) < 1e-6)

        refs = ['werewolf']
        preds = ['weae wolf']
        # S = 1, D = 1, I = 0, N = 8, CER = 0.25
        char_error_rate = cer.compute(predictions=preds, references=refs)
        self.assertTrue(abs(char_error_rate - 0.25) < 1e-6)

        # consecutive whitespaces case 1
        refs = ['were wolf']
        preds = ['were wolf']
        # S = 0, D = 0, I = 0, N = 9, CER = 0
        char_error_rate = cer.compute(predictions=preds, references=refs)
        self.assertTrue(abs(char_error_rate - 0.0) < 1e-6)

        # consecutive whitespaces case 2
        refs = ['were wolf']
        preds = ['were wolf']
        # S = 0, D = 0, I = 0, N = 9, CER = 0
        char_error_rate = cer.compute(predictions=preds, references=refs)
        self.assertTrue(abs(char_error_rate - 0.0) < 1e-6)

    def test_cer_sub(self):
        refs = ['werewolf']
        preds = ['weaewolf']
        # S = 1, D = 0, I = 0, N = 8, CER = 0.125
        char_error_rate = cer.compute(predictions=preds, references=refs)
        self.assertTrue(abs(char_error_rate - 0.125) < 1e-6)

    def test_cer_del(self):
        refs = ['werewolf']
        preds = ['wereawolf']
        # S = 0, D = 1, I = 0, N = 8, CER = 0.125
        char_error_rate = cer.compute(predictions=preds, references=refs)
        self.assertTrue(abs(char_error_rate - 0.125) < 1e-6)

    def test_cer_insert(self):
        refs = ['werewolf']
        preds = ['wereolf']
        # S = 0, D = 0, I = 1, N = 8, CER = 0.125
        char_error_rate = cer.compute(predictions=preds, references=refs)
        self.assertTrue(abs(char_error_rate - 0.125) < 1e-6)

    def test_cer_equal(self):
        refs = ['werewolf']
        char_error_rate = cer.compute(predictions=refs, references=refs)
        self.assertEqual(char_error_rate, 0.0)

    def test_cer_list_of_seqs(self):
        refs = ['werewolf', 'I am your father']
        char_error_rate = cer.compute(predictions=refs, references=refs)
        self.assertEqual(char_error_rate, 0.0)

        refs = ['werewolf', 'I am your father', 'doge']
        preds = ['werxwolf', 'I am your father', 'doge']
        # S = 1, D = 0, I = 0, N = 28, CER = 1 / 28
        char_error_rate = cer.compute(predictions=preds, references=refs)
        self.assertTrue(abs(char_error_rate - 0.03571428) < 1e-6)

    def test_cer_unicode(self):
        ref = [u'我能吞下玻璃而不伤身体']
        pred = [u' 能吞虾玻璃而 不霜身体啦']
        # S = 3, D = 2, I = 0, N = 11
        # CER = 5 / 11
        char_error_rate = cer.compute(predictions=pred, references=ref)
        self.assertTrue(abs(char_error_rate - 0.4545454545) < 1e-6)

        ref = [u'我能吞', u'下玻璃而不伤身体']
        pred = [u'我 能 吞 下 玻 璃', u'而不伤身体']
        # S = 0, D = 5, I = 0, N = 11
        # CER = 5 / 11
        char_error_rate = cer.compute(predictions=pred, references=ref)
        self.assertTrue(abs(char_error_rate - 0.454545454545) < 1e-6)

        ref = [u'我能吞下玻璃而不伤身体']
        char_error_rate = cer.compute(predictions=ref, references=ref)
        self.assertEqual(char_error_rate, 0.0)

    def test_cer_empty(self):
        ref = ''
        pred = 'Hypothesis'
        with self.assertRaises(ValueError):
            char_error_rate = cer.compute(predictions=pred, references=ref)


if __name__ == '__main__':
    unittest.main()
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2138/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2138/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2137/comments | https://api.github.com/repos/huggingface/datasets/issues/2137/events | https://github.com/huggingface/datasets/pull/2137 | 843,502,835 | MDExOlB1bGxSZXF1ZXN0NjAyODc0MDYw | 2,137 | Fix missing infos from concurrent dataset loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-03-29T15:46:12Z | 2021-03-31T10:35:56Z | 2021-03-31T10:35:55Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2137.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2137",
"merged_at": "2021-03-31T10:35:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2137.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2137"
} | This should fix issue #2131
When calling `load_dataset` at the same time from 2 workers, one of the workers could end up with missing split infos when reloading the dataset from the cache.
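A minimal way to picture the race (a hedged sketch, not the actual test added in this PR; "squad" is just an arbitrary example dataset):
```python
from multiprocessing import Process
from datasets import load_dataset

def worker():
    ds = load_dataset("squad", split="train")
    # before this fix, one process could reload from the cache while the
    # other was still writing the split infos, leaving them empty here
    print(ds.info.splits)

processes = [Process(target=worker) for _ in range(2)]
for p in processes:
    p.start()
for p in processes:
    p.join()
```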
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2137/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2137/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2136/comments | https://api.github.com/repos/huggingface/datasets/issues/2136/events | https://github.com/huggingface/datasets/pull/2136 | 843,492,015 | MDExOlB1bGxSZXF1ZXN0NjAyODY0ODY5 | 2,136 | fix dialogue action slot name and value | {
"avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4",
"events_url": "https://api.github.com/users/adamlin120/events{/privacy}",
"followers_url": "https://api.github.com/users/adamlin120/followers",
"following_url": "https://api.github.com/users/adamlin120/following{/other_user}",
"gists_url": "https://api.github.com/users/adamlin120/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adamlin120",
"id": 31605305,
"login": "adamlin120",
"node_id": "MDQ6VXNlcjMxNjA1MzA1",
"organizations_url": "https://api.github.com/users/adamlin120/orgs",
"received_events_url": "https://api.github.com/users/adamlin120/received_events",
"repos_url": "https://api.github.com/users/adamlin120/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adamlin120/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamlin120/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adamlin120"
} | [] | closed | false | null | [] | null | [] | 2021-03-29T15:34:13Z | 2021-03-31T12:48:02Z | 2021-03-31T12:48:01Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2136.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2136",
"merged_at": "2021-03-31T12:48:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2136.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2136"
} | fix #2128 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2136/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2136/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2135/comments | https://api.github.com/repos/huggingface/datasets/issues/2135/events | https://github.com/huggingface/datasets/issues/2135 | 843,246,344 | MDU6SXNzdWU4NDMyNDYzNDQ= | 2,135 | en language data from MLQA dataset is missing | {
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehk",
"id": 6278280,
"login": "rabeehk",
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehk"
} | [] | closed | false | null | [] | null | [] | 2021-03-29T10:47:50Z | 2021-03-30T10:20:23Z | 2021-03-30T10:20:23Z | CONTRIBUTOR | null | null | null | Hi
I need the mlqa-translate-train.en data, but it is missing from the MLQA dataset. Could you have a look, please? @lhoestq, thank you for your help in fixing this issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2135/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2135/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2134/comments | https://api.github.com/repos/huggingface/datasets/issues/2134/events | https://github.com/huggingface/datasets/issues/2134 | 843,242,849 | MDU6SXNzdWU4NDMyNDI4NDk= | 2,134 | Saving large in-memory datasets with save_to_disk crashes because of pickling | {
"avatar_url": "https://avatars.githubusercontent.com/u/5815801?v=4",
"events_url": "https://api.github.com/users/prokopCerny/events{/privacy}",
"followers_url": "https://api.github.com/users/prokopCerny/followers",
"following_url": "https://api.github.com/users/prokopCerny/following{/other_user}",
"gists_url": "https://api.github.com/users/prokopCerny/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/prokopCerny",
"id": 5815801,
"login": "prokopCerny",
"node_id": "MDQ6VXNlcjU4MTU4MDE=",
"organizations_url": "https://api.github.com/users/prokopCerny/orgs",
"received_events_url": "https://api.github.com/users/prokopCerny/received_events",
"repos_url": "https://api.github.com/users/prokopCerny/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/prokopCerny/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prokopCerny/subscriptions",
"type": "User",
"url": "https://api.github.com/users/prokopCerny"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | 2021-03-29T10:43:15Z | 2021-05-03T17:59:21Z | 2021-05-03T17:59:21Z | NONE | null | null | null | Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium-to-large datasets (pretokenized raw text from a few gigabytes to the low tens of gigabytes), and I have found that several preprocessing steps are massively faster when done in memory. Since I have the ability to requisition a lot of RAM, I decided to do these steps completely outside the datasets library.
So my workflow is to run several `.map()` calls on the Dataset object, then, for the operation that is faster in memory, extract the necessary columns from the dataset, drop the dataset entirely, do the transformation in memory, and then create a fresh Dataset object using `.from_dict()` or another method.
When I then try to call `save_to_disk(path)` on the dataset, it crashes during pickling, which appears to be caused by the use of an old pickle protocol that doesn't support objects larger than 4 GiB.
```
Traceback (most recent call last):
File "./tokenize_and_chunkify_in_memory.py", line 80, in <module>
main()
File "./tokenize_and_chunkify_in_memory.py", line 75, in main
tokenize_and_chunkify(config)
File "./tokenize_and_chunkify_in_memory.py", line 60, in tokenize_and_chunkify
contexts_dataset.save_to_disk(chunked_path)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 457, in save_to_disk
self = pickle.loads(pickle.dumps(self))
OverflowError: cannot serialize a bytes object larger than 4 GiB
```
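For reference, the 4 GiB limit can be demonstrated with the pickle module alone (a sketch; actually running it needs more than 4 GiB of free RAM):
```python
import pickle

blob = b"\x00" * (4 * 1024**3 + 1)  # just over 4 GiB
pickle.dumps(blob, protocol=3)  # protocol 3 (the Python 3.7 default) raises the OverflowError above
pickle.dumps(blob, protocol=4)  # protocol 4 supports objects larger than 4 GiB
```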
From what I've seen, this issue may already be fixed, as the line `self = pickle.loads(pickle.dumps(self))` no longer appears in the current state of the repository.
To save these datasets to disk, I've resorted to calling `.map()` over them with `function=None` and specifying the `.arrow` cache file, and then creating a new dataset using the `.from_file()` method, which I can then safely save to disk, as sketched below.
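Concretely, the workaround looks like this (a minimal sketch of what I described; `contexts_dataset` and `output_dir_path` are from my script above, and the output path name is a placeholder):
```python
from datasets import Dataset

# an identity .map() spills the in-memory table into an on-disk Arrow file
contexts_dataset.map(
    function=None,
    cache_file_name=str(output_dir_path / "tmp.arrow"),
    writer_batch_size=50000,
)
# reload a dataset backed by the file instead of memory, then save it
on_disk = Dataset.from_file(str(output_dir_path / "tmp.arrow"))
on_disk.save_to_disk(str(output_dir_path / "chunked"))
```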
An additional issue when working with these large in-memory datasets arises when using multiprocessing, and is again related to pickling. I tried to speed up the mapping with `function=None` by setting `num_proc` to the available CPU count, and I again got issues with transferring the dataset, with the following traceback. I am not sure if I should open a separate issue for that.
```
Traceback (most recent call last):
File "./tokenize_and_chunkify_in_memory.py", line 94, in <module>
main()
File "./tokenize_and_chunkify_in_memory.py", line 89, in main
tokenize_and_chunkify(config)
File "./tokenize_and_chunkify_in_memory.py", line 67, in tokenize_and_chunkify
contexts_dataset.map(function=None, cache_file_name=str(output_dir_path / "tmp.arrow"), writer_batch_size=50000, num_proc=config.threads)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in map
transformed_shards = [r.get() for r in results]
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 657, in get
raise self._value
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 431, in _handle_tasks
put(task)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py", line 209, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
StockPickler.dump(self, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 662, in save_reduce
save(state)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 732, in save_bytes
self._write_large_bytes(BINBYTES + pack("<I", n), obj)
struct.error: 'I' format requires 0 <= number <= 4294967295
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2134/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2134/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2133/comments | https://api.github.com/repos/huggingface/datasets/issues/2133/events | https://github.com/huggingface/datasets/issues/2133 | 843,149,680 | MDU6SXNzdWU4NDMxNDk2ODA= | 2,133 | bug in mlqa dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | [] | 2021-03-29T09:03:09Z | 2021-03-30T17:40:57Z | 2021-03-30T17:40:57Z | NONE | null | null | null | Hi
Looking into the MLQA dataset for language "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
"\u0643\u0645 \u0645\u0631\u0629 \u064a\u062a\u0645 \u0646\u0634\u0631\u0647\u0627 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0645\u0627 \u0647\u064a \u0627\u0644\u0648\u0631\u0642\u0629 \u0627\u0644\u064a\u0648\u0645\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0643\u0645 \u0639\u062f\u062f \u0627\u0644\u0627\u0648\u0631\u0627\u0642 \u0627\u0644\u0627\u062e\u0628\u0627\u0631\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0627\u0644\u062a\u064a \u0648\u062c\u062f\u062a \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0641\u064a \u0627\u064a \u0633\u0646\u0629 \u0628\u062f\u0627\u062a \u0648\u0631\u0642\u0629 \u0627\u0644\u0637\u0627\u0644\u0628 \u0627\u0644\u062d\u0633 \u0627\u0644\u0633\u0644\u064a\u0645 \u0628\u0627\u0644\u0646\u0634\u0631 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?"
]
```
The questions are in the wrong format (raw `\uXXXX` escapes) and are not readable. Could you please have a look? Thanks @lhoestq
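For what it's worth, these look like raw JSON `\uXXXX` escapes rather than corrupted data; decoding the first word yields readable Arabic:
```python
import json

escaped = '"\\u0645\\u062a\\u0649"'  # the first word of the first question above
print(json.loads(escaped))  # متى ("when")
```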
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2133/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2132/comments | https://api.github.com/repos/huggingface/datasets/issues/2132/events | https://github.com/huggingface/datasets/issues/2132 | 843,142,822 | MDU6SXNzdWU4NDMxNDI4MjI= | 2,132 | TydiQA dataset is mixed and is not split per language | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | open | false | null | [] | null | [] | 2021-03-29T08:56:21Z | 2021-04-04T09:57:15Z | null | NONE | null | null | null | Hi @lhoestq
Currently TydiQA is mixed, and users can only access the whole training set with all languages combined:
https://www.tensorflow.org/datasets/catalog/tydi_qa
To use this dataset, one needs to train and evaluate on each language separately, and having them all mixed makes the dataset hard to use. It would be much more convenient for users to have it split per language, and I would appreciate your help on this; a possible interim filter is sketched below.
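In the meantime, a per-language view can be derived with a filter (a hedged sketch: it assumes each example's `id` is prefixed with its language name, which appears to hold for the GoldP/`secondary_task` data and is worth verifying):
```python
from datasets import load_dataset

tydiqa = load_dataset("tydiqa", "secondary_task", split="train")
arabic_only = tydiqa.filter(lambda example: example["id"].startswith("arabic"))
```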
Meanwhile, until this is hopefully split per language, I would greatly appreciate guidance on how to preprocess and obtain the data per language. Thanks a lot | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2132/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2132/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2131/comments | https://api.github.com/repos/huggingface/datasets/issues/2131/events | https://github.com/huggingface/datasets/issues/2131 | 843,133,112 | MDU6SXNzdWU4NDMxMzMxMTI= | 2,131 | When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object | {
"avatar_url": "https://avatars.githubusercontent.com/u/23011317?v=4",
"events_url": "https://api.github.com/users/andy-yangz/events{/privacy}",
"followers_url": "https://api.github.com/users/andy-yangz/followers",
"following_url": "https://api.github.com/users/andy-yangz/following{/other_user}",
"gists_url": "https://api.github.com/users/andy-yangz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andy-yangz",
"id": 23011317,
"login": "andy-yangz",
"node_id": "MDQ6VXNlcjIzMDExMzE3",
"organizations_url": "https://api.github.com/users/andy-yangz/orgs",
"received_events_url": "https://api.github.com/users/andy-yangz/received_events",
"repos_url": "https://api.github.com/users/andy-yangz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andy-yangz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andy-yangz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andy-yangz"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | 2021-03-29T08:45:58Z | 2021-04-10T11:08:55Z | 2021-04-10T11:08:55Z | NONE | null | null | null | version: 1.5.0
I met a very strange error. I am training a large-scale language model and need to train on 2 machines (workers).
Sometimes I get this error: `TypeError: 'NoneType' object is not iterable`.
This is the traceback:
```
Traceback (most recent call last):
  File "run_gpt.py", line 316, in <module>
    main()
  File "run_gpt.py", line 222, in main
    delimiter="\t", column_names=["input_ids", "attention_mask", "chinese_ref"])
  File "/data/miniconda3/lib/python3.7/site-packages/datasets/load.py", line 747, in load_dataset
    use_auth_token=use_auth_token,
  File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 513, in download_and_prepare
    self.download_post_processing_resources(dl_manager)
  File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 673, in download_post_processing_resources
    for split in self.info.splits:
TypeError: 'NoneType' object is not iterable
WARNING:datasets.builder:Reusing dataset csv (/usr/local/app/.cache/huggingface/datasets/csv/default-1c257ebd48e225e7/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2)
Traceback (most recent call last):
  File "/data/miniconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/data/miniconda3/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in <module>
    main()
  File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main
    sigkill_handler(signal.SIGTERM, None)  # not coming back
  File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
    raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
```
On worker 1 the dataset loads fine; however, worker 2 gets this error.
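A mitigation that is often suggested for this kind of cache race (a hedged sketch, not an official fix; `data.tsv` is a placeholder and an initialized process group is assumed) is to let a single rank prepare the cache first:
```python
import torch.distributed as dist
from datasets import load_dataset

# assumes torch.distributed.init_process_group(...) has already been called
if dist.get_rank() == 0:
    # rank 0 downloads/prepares the dataset and fills the cache
    load_dataset("csv", data_files="data.tsv", delimiter="\t",
                 column_names=["input_ids", "attention_mask", "chinese_ref"])
dist.barrier()  # the other ranks wait until the cache exists

dataset = load_dataset("csv", data_files="data.tsv", delimiter="\t",
                       column_names=["input_ids", "attention_mask", "chinese_ref"])
```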
I run into this error only from time to time; sometimes everything just works. | {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2131/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2131/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2130/comments | https://api.github.com/repos/huggingface/datasets/issues/2130/events | https://github.com/huggingface/datasets/issues/2130 | 843,111,936 | MDU6SXNzdWU4NDMxMTE5MzY= | 2,130 | wikiann dataset is missing columns | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | [] | 2021-03-29T08:23:00Z | 2021-08-27T14:44:18Z | 2021-08-27T14:44:18Z | NONE | null | null | null | Hi
The wikiann dataset needs a "spans" column, which is necessary to make the dataset usable, but this column is missing from the huggingface datasets version. Could you please have a look? Thank you @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2130/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2130/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2129/comments | https://api.github.com/repos/huggingface/datasets/issues/2129/events | https://github.com/huggingface/datasets/issues/2129 | 843,033,656 | MDU6SXNzdWU4NDMwMzM2NTY= | 2,129 | How to train BERT model with next sentence prediction? | {
"avatar_url": "https://avatars.githubusercontent.com/u/836541?v=4",
"events_url": "https://api.github.com/users/jnishi/events{/privacy}",
"followers_url": "https://api.github.com/users/jnishi/followers",
"following_url": "https://api.github.com/users/jnishi/following{/other_user}",
"gists_url": "https://api.github.com/users/jnishi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jnishi",
"id": 836541,
"login": "jnishi",
"node_id": "MDQ6VXNlcjgzNjU0MQ==",
"organizations_url": "https://api.github.com/users/jnishi/orgs",
"received_events_url": "https://api.github.com/users/jnishi/received_events",
"repos_url": "https://api.github.com/users/jnishi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jnishi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jnishi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jnishi"
} | [] | closed | false | null | [] | null | [] | 2021-03-29T06:48:03Z | 2021-04-01T04:58:40Z | 2021-04-01T04:58:40Z | NONE | null | null | null | Hello.
I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction, like `TextDatasetForNextSentencePrediction` in `huggingface/transformers`?
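For reference, NSP pairs can be built manually along these lines (a rough sketch, not an official `datasets` feature; `corpus.txt` is a placeholder for a line-per-sentence file, and the labels follow `BertForNextSentencePrediction`, where 0 means the real next sentence):
```python
import random

from datasets import Dataset, load_dataset

sentences = load_dataset("text", data_files="corpus.txt")["train"]["text"]

pairs = {"sentence_a": [], "sentence_b": [], "next_sentence_label": []}
for i in range(len(sentences) - 1):
    pairs["sentence_a"].append(sentences[i])
    if random.random() < 0.5:
        pairs["sentence_b"].append(sentences[i + 1])
        pairs["next_sentence_label"].append(0)  # actual next sentence
    else:
        pairs["sentence_b"].append(random.choice(sentences))
        pairs["next_sentence_label"].append(1)  # random sentence

nsp_dataset = Dataset.from_dict(pairs)
```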
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2129/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2129/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2128/comments | https://api.github.com/repos/huggingface/datasets/issues/2128/events | https://github.com/huggingface/datasets/issues/2128 | 843,023,910 | MDU6SXNzdWU4NDMwMjM5MTA= | 2,128 | Dialogue action slot name and value are reversed in MultiWoZ 2.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4",
"events_url": "https://api.github.com/users/adamlin120/events{/privacy}",
"followers_url": "https://api.github.com/users/adamlin120/followers",
"following_url": "https://api.github.com/users/adamlin120/following{/other_user}",
"gists_url": "https://api.github.com/users/adamlin120/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adamlin120",
"id": 31605305,
"login": "adamlin120",
"node_id": "MDQ6VXNlcjMxNjA1MzA1",
"organizations_url": "https://api.github.com/users/adamlin120/orgs",
"received_events_url": "https://api.github.com/users/adamlin120/received_events",
"repos_url": "https://api.github.com/users/adamlin120/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adamlin120/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamlin120/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adamlin120"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | [] | 2021-03-29T06:34:02Z | 2021-03-31T12:48:01Z | 2021-03-31T12:48:01Z | CONTRIBUTOR | null | null | Hi @yjernite, thank you for adding MultiWoZ 2.2 to the Hugging Face datasets platform. It is very useful!
I spotted an error: the order of dialogue action slot names and values is reversed.
https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.py#L251-L262 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2128/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2128/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2127/comments | https://api.github.com/repos/huggingface/datasets/issues/2127/events | https://github.com/huggingface/datasets/pull/2127 | 843,017,199 | MDExOlB1bGxSZXF1ZXN0NjAyNDYxMzc3 | 2,127 | make documentation more clear to use different cloud storage | {
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/philschmid",
"id": 32632186,
"login": "philschmid",
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"repos_url": "https://api.github.com/users/philschmid/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"type": "User",
"url": "https://api.github.com/users/philschmid"
} | [] | closed | false | null | [] | null | [] | 2021-03-29T06:24:06Z | 2021-03-29T12:16:24Z | 2021-03-29T12:16:24Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2127.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2127",
"merged_at": "2021-03-29T12:16:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2127.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2127"
} | This PR extends the cloud storage documentation to show that you can use a different `fsspec` implementation.
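For context, a minimal sketch of what using an alternative `fsspec` implementation can look like (an illustration under the assumption that `gcsfs` is installed and credentials are configured; the project and bucket names are hypothetical):
```python
import gcsfs
from datasets import load_dataset

# any fsspec-compatible filesystem can be passed through the `fs` argument
fs = gcsfs.GCSFileSystem(project="my-project")  # hypothetical project

dataset = load_dataset("imdb", split="train")
dataset.save_to_disk("gcs://my-bucket/imdb/train", fs=fs)  # hypothetical bucket
```
 | {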
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2127/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2127/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2126/comments | https://api.github.com/repos/huggingface/datasets/issues/2126/events | https://github.com/huggingface/datasets/pull/2126 | 842,779,966 | MDExOlB1bGxSZXF1ZXN0NjAyMjcyMjg4 | 2,126 | Replace legacy torch.Tensor constructor with torch.tensor | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | 2021-03-28T16:57:30Z | 2021-03-29T09:27:14Z | 2021-03-29T09:27:13Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2126.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2126",
"merged_at": "2021-03-29T09:27:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2126.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2126"
} | The title says it all (motivated by [this issue](https://github.com/pytorch/pytorch/issues/53146) in the PyTorch repo).
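For context, a quick illustration of the difference (my own sketch, not part of the PR): the legacy `torch.Tensor(data)` constructor always returns `float32`, while the `torch.tensor(data)` factory infers the dtype from the data.
```python
import torch

legacy = torch.Tensor([1, 2, 3])   # legacy constructor: dtype is torch.float32
modern = torch.tensor([1, 2, 3])   # factory function: dtype inferred as torch.int64
print(legacy.dtype, modern.dtype)  # torch.float32 torch.int64
```
 | {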
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2126/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2126/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2125/comments | https://api.github.com/repos/huggingface/datasets/issues/2125/events | https://github.com/huggingface/datasets/issues/2125 | 842,690,570 | MDU6SXNzdWU4NDI2OTA1NzA= | 2,125 | Is dataset timit_asr broken? | {
"avatar_url": "https://avatars.githubusercontent.com/u/42398050?v=4",
"events_url": "https://api.github.com/users/kosuke-kitahara/events{/privacy}",
"followers_url": "https://api.github.com/users/kosuke-kitahara/followers",
"following_url": "https://api.github.com/users/kosuke-kitahara/following{/other_user}",
"gists_url": "https://api.github.com/users/kosuke-kitahara/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kosuke-kitahara",
"id": 42398050,
"login": "kosuke-kitahara",
"node_id": "MDQ6VXNlcjQyMzk4MDUw",
"organizations_url": "https://api.github.com/users/kosuke-kitahara/orgs",
"received_events_url": "https://api.github.com/users/kosuke-kitahara/received_events",
"repos_url": "https://api.github.com/users/kosuke-kitahara/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kosuke-kitahara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kosuke-kitahara/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kosuke-kitahara"
} | [] | closed | false | null | [] | null | [] | 2021-03-28T08:30:18Z | 2021-03-28T12:29:25Z | 2021-03-28T12:29:25Z | NONE | null | null | Using the `timit_asr` dataset, I saw that all records are the same.
```python
from datasets import load_dataset, load_metric

timit = load_dataset("timit_asr")

from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML


def show_random_elements(dataset, num_examples=10):
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset) - 1)
        while pick in picks:
            pick = random.randint(0, len(dataset) - 1)
        picks.append(pick)
    df = pd.DataFrame(dataset[picks])
    display(HTML(df.to_html()))


show_random_elements(timit['train'].remove_columns(["file", "phonetic_detail", "word_detail", "dialect_region", "id",
                                                    "sentence_type", "speaker_id"]), num_examples=20)
```
`output`
<img width="312" alt="Screen Shot 2021-03-28 at 17 29 04" src="https://user-images.githubusercontent.com/42398050/112746646-21acee80-8feb-11eb-84f3-dbb5d4269724.png">
I double-checked it [here](https://huggingface.co/datasets/viewer/) and ran into the same problem.
<img width="1374" alt="Screen Shot 2021-03-28 at 17 32 07" src="https://user-images.githubusercontent.com/42398050/112746698-9bdd7300-8feb-11eb-97ed-5babead385f4.png">
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2125/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2125/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2124/comments | https://api.github.com/repos/huggingface/datasets/issues/2124/events | https://github.com/huggingface/datasets/issues/2124 | 842,627,729 | MDU6SXNzdWU4NDI2Mjc3Mjk= | 2,124 | Adding ScaNN library to do MIPS? | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [] | open | false | null | [] | null | [] | 2021-03-28T00:07:00Z | 2021-03-29T13:23:43Z | null | NONE | null | null | @lhoestq Hi, I am thinking of adding this new Google library to do MIPS (maximum inner product search), similar to **add_faiss_index**. As the paper suggests, it is really fast when it comes to retrieving the nearest neighbors.
https://github.com/google-research/google-research/tree/master/scann

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2124/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2124/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2123/comments | https://api.github.com/repos/huggingface/datasets/issues/2123/events | https://github.com/huggingface/datasets/issues/2123 | 842,577,285 | MDU6SXNzdWU4NDI1NzcyODU= | 2,123 | Problem downloading GEM wiki_auto_asset_turk dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29705940?v=4",
"events_url": "https://api.github.com/users/mille-s/events{/privacy}",
"followers_url": "https://api.github.com/users/mille-s/followers",
"following_url": "https://api.github.com/users/mille-s/following{/other_user}",
"gists_url": "https://api.github.com/users/mille-s/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mille-s",
"id": 29705940,
"login": "mille-s",
"node_id": "MDQ6VXNlcjI5NzA1OTQw",
"organizations_url": "https://api.github.com/users/mille-s/orgs",
"received_events_url": "https://api.github.com/users/mille-s/received_events",
"repos_url": "https://api.github.com/users/mille-s/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mille-s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mille-s/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mille-s"
} | [] | closed | false | null | [] | null | [] | 2021-03-27T18:41:28Z | 2021-05-12T16:15:18Z | 2021-05-12T16:15:17Z | NONE | null | null | null | @yjernite
### Summary
I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code.
### Steps to reproduce
Code snippet:
```python
from datasets import load_dataset
#dataset = load_dataset('gem', 'web_nlg_en')
dataset = load_dataset('gem', 'wiki_auto_asset_turk')
```
**Expected behavior:**
I expect the dataset to start downloading (download bar appears and progresses toward 100%)
**Actual behavior:**
Instead of seeing the download bar appearing, nothing happens; the following appears in the console as expected, but nothing more:
Downloading: 36.6kB [00:00, 37.2MB/s]
Downloading: 41.7kB [00:00, ?B/s]
Downloading and preparing dataset gem/wiki_auto_asset_turk (download: 121.37 MiB, generated: 145.69 MiB, post-processed: Unknown size, total: 267.07 MiB) to C:\Users\sfmil\.cache\huggingface\datasets\gem\wiki_auto_asset_turk\1.0.0\f252756d7f1b8f019aac71a1623b2950acfe10d25d956668ac4eae4e93c58b8d...
### Is this a regression?
No, it was the first time I was trying to download this dataset (same for the other ones).
### Debug info
- Python version: Python 3.8.2
- OS version: Windows 10 Family | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2123/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2123/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2122/comments | https://api.github.com/repos/huggingface/datasets/issues/2122/events | https://github.com/huggingface/datasets/pull/2122 | 842,194,588 | MDExOlB1bGxSZXF1ZXN0NjAxODE3MjI0 | 2,122 | Fast table queries with interpolation search | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-03-26T18:09:20Z | 2021-08-04T18:11:59Z | 2021-04-06T14:33:01Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2122.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2122",
"merged_at": "2021-04-06T14:33:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2122.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2122"
} | ## Intro
This should fix issue #1803
Currently querying examples in a dataset is O(n) because of the underlying pyarrow ChunkedArrays implementation.
To fix this I implemented interpolation search, which is quite effective since datasets usually satisfy its assumption of evenly distributed chunks (the default chunk size is fixed). A rough sketch of the idea follows below.
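To illustrate, a standalone sketch of interpolation search over cumulative chunk offsets (illustrative only; not this PR's actual code, and the offsets layout is an assumption):
```python
# a rough, illustrative sketch of interpolation search over chunk offsets;
# offsets[k] is the first row index of chunk k, offsets[-1] is the total length
def interpolation_search(offsets, i):
    lo, hi = 0, len(offsets) - 2
    while lo <= hi:
        # guess the chunk index assuming rows are evenly spread across chunks
        span = max(offsets[hi + 1] - offsets[lo], 1)
        guess = lo + (i - offsets[lo]) * (hi - lo) // span
        guess = min(max(guess, lo), hi)
        if offsets[guess] <= i < offsets[guess + 1]:
            return guess
        if i < offsets[guess]:
            hi = guess - 1
        else:
            lo = guess + 1
    raise IndexError(f"row {i} is out of bounds")

offsets = [0, 1000, 2000, 3050, 4050]  # 4 chunks of roughly 1000 rows
assert interpolation_search(offsets, 2500) == 2
```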
## Benchmark
Here is a [benchmark](https://pastebin.com/utEXUqsR) I did on bookcorpus (74M rows):
for the current implementation
```python
>>> python speed.py
Loaded dataset 'bookcorpus', len=74004228, nbytes=4835358766
========================= Querying unshuffled bookcorpus =========================
Avg access time key=1 : 0.018ms
Avg access time key=74004227 : 0.215ms
Avg access time key=range(74003204, 74004228) : 1.416ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 92.532ms
========================== Querying shuffled bookcorpus ==========================
Avg access time key=1 : 0.187ms
Avg access time key=74004227 : 6.642ms
Avg access time key=range(74003204, 74004228) : 90.941ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 3448.456ms
```
for the new one using interpolation search:
```python
>>> python speed.py
Loaded dataset 'bookcorpus', len=74004228, nbytes=4835358766
========================= Querying unshuffled bookcorpus =========================
Avg access time key=1 : 0.076ms
Avg access time key=74004227 : 0.056ms
Avg access time key=range(74003204, 74004228) : 1.807ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 24.028ms
========================== Querying shuffled bookcorpus ==========================
Avg access time key=1 : 0.061ms
Avg access time key=74004227 : 0.058ms
Avg access time key=range(74003204, 74004228) : 22.166ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 42.757ms
```
The RandIter class is just an iterable of 1024 random indices from 0 to 74004228.
Here is also a plot showing the speed improvement depending on the dataset size:

## Implementation details:
- `datasets.table.Table` objects implement interpolation search for the `slice` method
- The interpolation search requires to store the offsets of all the chunks of a table. The offsets are stored when the `Table` is initialized.
- `datasets.table.Table.slice` returns a `datasets.table.Table` using interpolation search
- `datasets.table.Table.fast_slice` returns a `pyarrow.Table` object using interpolation search. This is useful to get a part of a dataset if we don't need the indexing structure for future computations. For example it's used when querying an example as a dictionary.
- Now a `Dataset` object is always backed by a `datasets.table.Table` object. If one passes a `pyarrow.Table` to initialize a `Dataset`, then it's converted to a `datasets.table.Table`
## Checklist:
- [x] implement interpolation search
- [x] use `datasets.table.Table` in `Dataset` objects
- [x] update current tests
- [x] add tests for interpolation search
- [x] comments and docstring
- [x] add the benchmark to the CI
Fix #1803. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2122/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2122/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2121/comments | https://api.github.com/repos/huggingface/datasets/issues/2121/events | https://github.com/huggingface/datasets/pull/2121 | 842,148,633 | MDExOlB1bGxSZXF1ZXN0NjAxNzc4NDc4 | 2,121 | Add Validation For README | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | [] | 2021-03-26T17:02:17Z | 2021-05-10T13:17:18Z | 2021-05-10T09:41:41Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2121.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2121",
"merged_at": "2021-05-10T09:41:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2121.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2121"
} | Hi @lhoestq, @yjernite
This is a simple README parser. All classes specific to different sections can inherit from the `Section` class, and we can define more attributes in each (a rough sketch of the idea follows the example output below).
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
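For reference, a rough sketch of the kind of recursive section parsing described above (hypothetical names and structure, not the PR's actual code):
```python
class Section:
    def __init__(self, name, level):
        self.name = name
        self.level = level          # number of leading '#' characters
        self.attributes = ""        # free text directly under this heading
        self.subsections = []

    def to_dict(self):
        return {
            "name": self.name,
            "attributes": self.attributes.strip(),
            "subsections": [s.to_dict() for s in self.subsections],
        }


def parse_readme(lines):
    root = Section(name="root", level=0)
    stack = [root]                  # current chain of open sections
    for line in lines:
        if line.startswith("#"):
            level = len(line) - len(line.lstrip("#"))
            section = Section(line.strip("# \n"), level)
            while stack[-1].level >= level:
                stack.pop()
            stack[-1].subsections.append(section)
            stack.append(section)
        else:
            stack[-1].attributes += line
    return root
```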
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2121/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2121/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2120/comments | https://api.github.com/repos/huggingface/datasets/issues/2120/events | https://github.com/huggingface/datasets/issues/2120 | 841,954,521 | MDU6SXNzdWU4NDE5NTQ1MjE= | 2,120 | dataset viewer does not work anymore | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | [] | 2021-03-26T13:22:13Z | 2021-03-26T15:52:22Z | 2021-03-26T15:52:22Z | NONE | null | null | null | Hi
I normally use this link to see all datasets and how I can load them:
https://huggingface.co/datasets/viewer/
Now I am getting:
502 Bad Gateway
nginx/1.18.0 (Ubuntu)
Could you bring this webpage back? It was very helpful, @lhoestq.
Thanks for your help. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2120/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2120/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2119/comments | https://api.github.com/repos/huggingface/datasets/issues/2119/events | https://github.com/huggingface/datasets/pull/2119 | 841,567,199 | MDExOlB1bGxSZXF1ZXN0NjAxMjg2MjIy | 2,119 | copy.deepcopy os.environ instead of copy | {
"avatar_url": "https://avatars.githubusercontent.com/u/5506053?v=4",
"events_url": "https://api.github.com/users/NihalHarish/events{/privacy}",
"followers_url": "https://api.github.com/users/NihalHarish/followers",
"following_url": "https://api.github.com/users/NihalHarish/following{/other_user}",
"gists_url": "https://api.github.com/users/NihalHarish/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NihalHarish",
"id": 5506053,
"login": "NihalHarish",
"node_id": "MDQ6VXNlcjU1MDYwNTM=",
"organizations_url": "https://api.github.com/users/NihalHarish/orgs",
"received_events_url": "https://api.github.com/users/NihalHarish/received_events",
"repos_url": "https://api.github.com/users/NihalHarish/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NihalHarish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NihalHarish/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NihalHarish"
} | [] | closed | false | null | [] | null | [] | 2021-03-26T03:58:38Z | 2021-03-26T15:13:52Z | 2021-03-26T15:13:52Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2119.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2119",
"merged_at": "2021-03-26T15:13:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2119.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2119"
} | Fixes: https://github.com/huggingface/datasets/issues/2115
- Bug fix: using `os.environ.copy()` returns a plain dict.
- Using `deepcopy(os.environ)` returns an `_Environ` object.
- Changing the datatype of the `_Environ` object can break code if subsequent libraries perform operations using APIs exclusive to the environ object, like `os.environ.get(key, default=None)` for example (which fails on a plain dict, see #2115).
Testing:
Tested the change on my terminal:
```
>>> import os
>>> from copy import deepcopy
>>> x = deepcopy(os.environ)
>>> y = os.environ
>>> x is y
False
>>> isinstance(x, type(os.environ))
True
>>> z = os.environ.copy()
>>> isinstance(z, type(os.environ))
False
>>> isinstance(z, dict)
True
``` | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2119/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2118/comments | https://api.github.com/repos/huggingface/datasets/issues/2118/events | https://github.com/huggingface/datasets/pull/2118 | 841,563,329 | MDExOlB1bGxSZXF1ZXN0NjAxMjgzMDUx | 2,118 | Remove os.environ.copy in Dataset.map | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | 2021-03-26T03:48:17Z | 2021-03-26T12:03:23Z | 2021-03-26T12:00:05Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2118.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2118",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2118.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2118"
} | Replace `os.environ.copy` with in-place modification
Fixes #2115 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2118/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2118/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2117/comments | https://api.github.com/repos/huggingface/datasets/issues/2117/events | https://github.com/huggingface/datasets/issues/2117 | 841,535,283 | MDU6SXNzdWU4NDE1MzUyODM= | 2,117 | load_metric from local "glue.py" meet error 'NoneType' object is not callable | {
"avatar_url": "https://avatars.githubusercontent.com/u/54012361?v=4",
"events_url": "https://api.github.com/users/Frankie123421/events{/privacy}",
"followers_url": "https://api.github.com/users/Frankie123421/followers",
"following_url": "https://api.github.com/users/Frankie123421/following{/other_user}",
"gists_url": "https://api.github.com/users/Frankie123421/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Frankie123421",
"id": 54012361,
"login": "Frankie123421",
"node_id": "MDQ6VXNlcjU0MDEyMzYx",
"organizations_url": "https://api.github.com/users/Frankie123421/orgs",
"received_events_url": "https://api.github.com/users/Frankie123421/received_events",
"repos_url": "https://api.github.com/users/Frankie123421/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Frankie123421/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Frankie123421/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Frankie123421"
} | [] | closed | false | null | [] | null | [] | 2021-03-26T02:35:22Z | 2021-08-25T21:44:05Z | 2021-03-26T02:40:26Z | NONE | null | null |
```
actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset(path='/home/glue.py', name=actual_task)
metric = load_metric(path='/home/glue.py', name=actual_task)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-7ab77a465d81> in <module>
1 actual_task = "mnli" if task == "mnli-mm" else task
2 dataset = load_dataset(path='/home/jcli/glue.py', name=actual_task)
----> 3 metric = load_metric(path='/home/jcli/glue.py', name=actual_task)
~/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)
508 keep_in_memory=keep_in_memory,
509 experiment_id=experiment_id,
--> 510 **metric_init_kwargs,
511 )
512
TypeError: 'NoneType' object is not callable
```
Please help. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2117/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2117/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2116/comments | https://api.github.com/repos/huggingface/datasets/issues/2116/events | https://github.com/huggingface/datasets/issues/2116 | 841,481,292 | MDU6SXNzdWU4NDE0ODEyOTI= | 2,116 | Creating custom dataset results in error while calling the map() function | {
"avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4",
"events_url": "https://api.github.com/users/GeetDsa/events{/privacy}",
"followers_url": "https://api.github.com/users/GeetDsa/followers",
"following_url": "https://api.github.com/users/GeetDsa/following{/other_user}",
"gists_url": "https://api.github.com/users/GeetDsa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/GeetDsa",
"id": 13940397,
"login": "GeetDsa",
"node_id": "MDQ6VXNlcjEzOTQwMzk3",
"organizations_url": "https://api.github.com/users/GeetDsa/orgs",
"received_events_url": "https://api.github.com/users/GeetDsa/received_events",
"repos_url": "https://api.github.com/users/GeetDsa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/GeetDsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeetDsa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/GeetDsa"
} | [] | closed | false | null | [] | null | [] | 2021-03-26T00:37:46Z | 2021-03-31T14:30:32Z | 2021-03-31T14:30:32Z | NONE | null | null | Calling `map()` from the `datasets` library results in an error when defining a custom dataset.
Reproducible example:
```python
import datasets


class MyDataset(datasets.Dataset):

    def __init__(self, sentences):
        "Initialization"
        self.samples = sentences

    def __len__(self):
        "Denotes the total number of samples"
        return len(self.samples)

    def __getitem__(self, index):
        "Generates one sample of data"
        # Select sample
        # Load data and get label
        samples = self.samples[index]
        return samples


def preprocess_function_train(examples):
    inputs = examples
    labels = [example + tokenizer.eos_token for example in examples]
    inputs = tokenizer(inputs, max_length=30, padding=True, truncation=True)
    labels = tokenizer(labels, max_length=30, padding=True, truncation=True)
    model_inputs = inputs
    model_inputs["labels"] = labels["input_ids"]
    print("about to return")
    return model_inputs


## train["sentence"] is a dataframe column
train_dataset = MyDataset(train['sentence'].values.tolist())
train_dataset = train_dataset.map(
    preprocess_function_train,  # note: must match the function defined above
    batched=True,
    batch_size=32
)
```
Stack trace of error:
```
Traceback (most recent call last):
File "dir/train_generate.py", line 362, in <module>
main()
File "dir/train_generate.py", line 245, in main
train_dataset = train_dataset.map(
File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1244, in map
return self._map_single(
File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 149, in wrapper
unformatted_columns = set(self.column_names) - set(self._format_columns or [])
File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 526, in column_names
return self._data.column_names
AttributeError: 'MyDataset' object has no attribute '_data'
```
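For anyone hitting this: `datasets.Dataset` is not really meant to be subclassed this way, since `_data` is only set up by its own constructor. A minimal sketch of building an in-memory dataset without subclassing (under that assumption, reusing the names from the snippet above):
```python
from datasets import Dataset

# build the dataset from in-memory data instead of subclassing datasets.Dataset
train_dataset = Dataset.from_dict({"sentence": train['sentence'].values.tolist()})
train_dataset = train_dataset.map(preprocess_function_train, batched=True, batch_size=32)
```
 | {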
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2116/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2116/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/2115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2115/comments | https://api.github.com/repos/huggingface/datasets/issues/2115/events | https://github.com/huggingface/datasets/issues/2115 | 841,283,974 | MDU6SXNzdWU4NDEyODM5NzQ= | 2,115 | The datasets.map() implementation modifies the datatype of os.environ object | {
"avatar_url": "https://avatars.githubusercontent.com/u/19983848?v=4",
"events_url": "https://api.github.com/users/leleamol/events{/privacy}",
"followers_url": "https://api.github.com/users/leleamol/followers",
"following_url": "https://api.github.com/users/leleamol/following{/other_user}",
"gists_url": "https://api.github.com/users/leleamol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leleamol",
"id": 19983848,
"login": "leleamol",
"node_id": "MDQ6VXNlcjE5OTgzODQ4",
"organizations_url": "https://api.github.com/users/leleamol/orgs",
"received_events_url": "https://api.github.com/users/leleamol/received_events",
"repos_url": "https://api.github.com/users/leleamol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leleamol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leleamol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leleamol"
} | [] | closed | false | null | [] | null | [] | 2021-03-25T20:29:19Z | 2021-03-26T15:13:52Z | 2021-03-26T15:13:52Z | NONE | null | null | In our testing, we noticed that the `datasets.map()` implementation modifies the datatype of the Python `os.environ` object from `_Environ` to `dict`.
This causes subsequent function calls to fail, as follows:
```
x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None)
TypeError: get() takes no keyword arguments
```
It looks like the following line in the `datasets.map` implementation introduced this behavior.
https://github.com/huggingface/datasets/blob/0cb1ac06acb0df44a1cf4128d03a01865faa2504/src/datasets/arrow_dataset.py#L1421
Here is the test script to reproduce this error.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
import os


def test_train():
    model_checkpoint = "distilgpt2"
    datasets = load_dataset('wikitext', 'wikitext-2-raw-v1')
    tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)
    tokenizer.pad_token = tokenizer.eos_token

    def tokenize_function(examples):
        y = tokenizer(examples['text'], truncation=True, max_length=64)
        return y

    x = os.environ.get("TEST_ENV_VARIABLE_BEFORE_dataset_map", default=None)
    print(f"Testing environment variable: TEST_ENV_VARIABLE_BEFORE_dataset_map {x}")
    print(f"Data type of os.environ before datasets.map = {os.environ.__class__.__name__}")
    datasets.map(tokenize_function, batched=True, num_proc=2, remove_columns=["text"])
    print(f"Data type of os.environ after datasets.map = {os.environ.__class__.__name__}")
    x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None)
    print(f"Testing environment variable: TEST_ENV_VARIABLE_AFTER_dataset_map {x}")


if __name__ == "__main__":
    test_train()
```
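As a side note, a minimal sketch of a save/restore pattern that keeps `os.environ` an `_Environ` object (my own illustration of the in-place alternative; the variable name is hypothetical):
```python
import os

saved = dict(os.environ)            # snapshot of the values only
os.environ["SOME_TEMP_VAR"] = "1"   # hypothetical temporary modification
os.environ.clear()
os.environ.update(saved)            # restore in place, without rebinding os.environ
assert os.environ.__class__.__name__ == "_Environ"
```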
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2115/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2115/timeline | null | completed | true |