url: stringlengths (58–61)
repository_url: stringclasses (1 value)
labels_url: stringlengths (72–75)
comments_url: stringlengths (67–70)
events_url: stringlengths (65–68)
html_url: stringlengths (46–51)
id: int64 (599M–1.5B)
node_id: stringlengths (18–32)
number: int64 (1–5.38k)
title: stringlengths (1–276)
user: dict
labels: list
state: stringclasses (2 values)
locked: bool (1 class)
assignee: dict
assignees: list
milestone: dict
comments: sequence
created_at: stringlengths (20–20)
updated_at: stringlengths (20–20)
closed_at: stringlengths (20–20, nullable)
author_association: stringclasses (3 values)
active_lock_reason: null
draft: bool (2 classes)
pull_request: dict
body: stringlengths (0–228k, nullable)
reactions: dict
timeline_url: stringlengths (67–70)
performed_via_github_app: null
state_reason: stringclasses (3 values)
is_pull_request: bool (1 class)
https://api.github.com/repos/huggingface/datasets/issues/900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/900/comments
https://api.github.com/repos/huggingface/datasets/issues/900/events
https://github.com/huggingface/datasets/issues/900
752,214,066
MDU6SXNzdWU3NTIyMTQwNjY=
900
datasets.load_dataset() custom caching directory bug
{ "avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4", "events_url": "https://api.github.com/users/SapirWeissbuch/events{/privacy}", "followers_url": "https://api.github.com/users/SapirWeissbuch/followers", "following_url": "https://api.github.com/users/SapirWeissbuch/following{/other_user}", "gists_url": "https://api.github.com/users/SapirWeissbuch/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SapirWeissbuch", "id": 44585792, "login": "SapirWeissbuch", "node_id": "MDQ6VXNlcjQ0NTg1Nzky", "organizations_url": "https://api.github.com/users/SapirWeissbuch/orgs", "received_events_url": "https://api.github.com/users/SapirWeissbuch/received_events", "repos_url": "https://api.github.com/users/SapirWeissbuch/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SapirWeissbuch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SapirWeissbuch/subscriptions", "type": "User", "url": "https://api.github.com/users/SapirWeissbuch" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
2020-11-27T12:18:53Z
2020-11-29T22:48:53Z
2020-11-29T22:48:53Z
NONE
null
null
null
Hello, I'm having an issue with loading a dataset with a custom `cache_dir`. Despite specifying the cache directory, the data is still downloaded to `~/.cache`.

## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3

## The code I'm running:
```python
import datasets
from pathlib import Path

validation_dataset = datasets.load_dataset("natural_questions", split="validation[:5%]", cache_dir=Path("./data"))
```

## The output:
* The dataset is downloaded to my home directory's `.cache`
* A new empty directory named `natural_questions` is created in the specified directory `./data`
* `tree data` in the shell outputs:
```
data
└── natural_questions
    └── default
        └── 0.0.2

3 directories, 0 files
```
The console output:
```
Downloading: 8.61kB [00:00, 5.11MB/s]
Downloading: 13.6kB [00:00, 7.89MB/s]
Using custom data configuration default
Downloading and preparing dataset natural_questions/default (download: 41.97 GiB, generated: 92.95 GiB, post-processed: Unknown size, total: 134.92 GiB) to ./data/natural_questions/default/0.0.2/867dbbaf9137c1b83ecb19f5eb80559e1002ea26e702c6b919cfa81a17a8c531...
Downloading: 100%|██████████████████████████████████████████████████| 13.6k/13.6k [00:00<00:00, 1.51MB/s]
Downloading:   7%|███▎      | 6.70G/97.4G [03:46<1:37:05, 15.6MB/s]
```

## Expected behaviour:
The dataset "Natural Questions" should be downloaded to the directory "./data"
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/900/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/900/timeline
null
completed
true
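As context for the report in #900 above, here is a minimal sketch of the two usual ways to point the `datasets` cache somewhere other than `~/.cache`. Whether either works around the bug on `datasets` 1.1.3 is not verified here, and the paths are illustrative.

```python
# A hedged sketch, not the issue's official resolution.
import os

# Option 1: relocate the cache globally before importing datasets.
os.environ["HF_DATASETS_CACHE"] = "./data"

import datasets

# Option 2: pass cache_dir explicitly; a plain string works as well as a Path.
validation_dataset = datasets.load_dataset(
    "natural_questions",
    split="validation[:5%]",
    cache_dir="./data",
)
```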
https://api.github.com/repos/huggingface/datasets/issues/899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/899/comments
https://api.github.com/repos/huggingface/datasets/issues/899/events
https://github.com/huggingface/datasets/pull/899
752,191,227
MDExOlB1bGxSZXF1ZXN0NTI4NTYzNzYz
899
Allow arrow based builder in auto dummy data generation
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-27T11:39:38Z
2020-11-27T13:30:09Z
2020-11-27T13:30:08Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/899.diff", "html_url": "https://github.com/huggingface/datasets/pull/899", "merged_at": "2020-11-27T13:30:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/899.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/899" }
Following #898, I added support for arrow-based builders in the auto dummy data generator.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/899/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/899/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/898/comments
https://api.github.com/repos/huggingface/datasets/issues/898/events
https://github.com/huggingface/datasets/pull/898
752,148,284
MDExOlB1bGxSZXF1ZXN0NTI4NTI4MDY1
898
Adding SQA dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[]
2020-11-27T10:29:18Z
2020-12-15T12:54:40Z
2020-12-15T12:54:19Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/898.diff", "html_url": "https://github.com/huggingface/datasets/pull/898", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/898.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/898" }
As discussed in #880: it seems like automatic dummy-data generation doesn't work if the builder is an `ArrowBasedBuilder`. Do you think you could take a look, @lhoestq?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/898/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/898/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/897/comments
https://api.github.com/repos/huggingface/datasets/issues/897/events
https://github.com/huggingface/datasets/issues/897
752,100,256
MDU6SXNzdWU3NTIxMDAyNTY=
897
Dataset viewer issues
{ "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BramVanroy", "id": 2779410, "login": "BramVanroy", "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "repos_url": "https://api.github.com/users/BramVanroy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "type": "User", "url": "https://api.github.com/users/BramVanroy" }
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
[]
null
[]
2020-11-27T09:14:34Z
2021-10-31T09:12:01Z
2021-10-31T09:12:01Z
CONTRIBUTOR
null
null
null
I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:

- the URL is still under `nlp`; perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user:
```bash
IndexError: list index out of range
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
    exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 316, in <module>
    st.table(style)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method
    return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta
    rv = marshall_element(msg.delta.new_element)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element
    return method(dg, element, *args, **kwargs)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table
    data_frame_proto.marshall_data_frame(data, element.table)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame
    _marshall_styles(proto_df.style, df, styler)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles
    translated_style = styler._translate()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate
    * (len(clabels[0]) - len(hidden_columns))
```
- there seems to be **an encoding issue** in the default view: the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co/nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then a syntax highlighter is used and the special characters are encoded correctly.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/897/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/897/timeline
null
completed
true
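The IndexError in #897 comes from rendering a table with no selected features. As a hedged illustration only (not the nlp-viewer source), a viewer could guard against an empty selection before calling `st.table`; the feature names below are hypothetical.

```python
import pandas as pd
import streamlit as st

all_features = ["premise", "hypothesis", "label"]  # hypothetical feature names
selected = st.multiselect("Features to display", all_features, default=all_features)

if not selected:
    # Show a friendly message instead of letting pandas raise IndexError.
    st.warning("Please select at least one feature to display.")
    st.stop()

df = pd.DataFrame({name: [] for name in selected})
st.table(df)
```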
https://api.github.com/repos/huggingface/datasets/issues/896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/896/comments
https://api.github.com/repos/huggingface/datasets/issues/896/events
https://github.com/huggingface/datasets/pull/896
751,834,265
MDExOlB1bGxSZXF1ZXN0NTI4MjcyMjc0
896
Add template and documentation for dataset card
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[]
closed
false
null
[]
null
[]
2020-11-26T21:30:25Z
2020-11-28T01:10:15Z
2020-11-28T01:10:15Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/896.diff", "html_url": "https://github.com/huggingface/datasets/pull/896", "merged_at": "2020-11-28T01:10:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/896.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/896" }
This PR adds a template for dataset cards, as well as a guide to filling out the template and a completed example for the ELI5 dataset, building on the work of @mcmillanmajora. New pull requests adding datasets should now have a README.md file which serves both to hold the tags we will use to index the datasets and as a data statement. The template is designed to be pretty extensive. The idea is that the person who uploads the dataset should put in all the basic information (at least the Dataset Description section) and whatever else they feel comfortable adding, and leave the `[More Information Needed]` annotation everywhere else as a placeholder. We will then work with @mcmillanmajora to involve the data authors more directly in filling out the remaining information.

Direct links to:
- [Documentation](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/templates/README_guide.md)
- [Empty template](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/templates/README.md)
- [ELI5 example](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/datasets/eli5/README.md)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/896/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/896/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/895/comments
https://api.github.com/repos/huggingface/datasets/issues/895/events
https://github.com/huggingface/datasets/pull/895
751,782,295
MDExOlB1bGxSZXF1ZXN0NTI4MjMyMjU3
895
Better messages regarding split naming
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-26T18:55:46Z
2020-11-27T13:31:00Z
2020-11-27T13:30:59Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/895.diff", "html_url": "https://github.com/huggingface/datasets/pull/895", "merged_at": "2020-11-27T13:30:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/895.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/895" }
I made the error message explicit when a bad split name is used. I also wanted to allow the `-` symbol in split names, but this symbol is already used to name the arrow files `{dataset_name}-{dataset_split}.arrow`, so we should probably keep it this way, i.e. not allow the `-` symbol in split names. Moreover, in the future we might want to use `{dataset_name}-{dataset_split}-{shard_id}_of_{n_shards}.arrow` and reuse the `-` symbol.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/895/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/895/timeline
null
null
true
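To illustrate the naming constraint discussed in #895, here is a small, purely illustrative check in the spirit of the PR; it is not the library's actual validation code.

```python
import re

def check_split_name(name: str) -> None:
    # Reject names that would clash with the "{dataset_name}-{dataset_split}.arrow" scheme.
    if not re.fullmatch(r"\w+(\.\w+)*", name):
        raise ValueError(
            f"Bad split name '{name}': only letters, digits, '_' and '.' are allowed, "
            "since '-' is reserved for arrow file names."
        )

check_split_name("validation")   # passes silently
# check_split_name("my-split")   # would raise ValueError
```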
https://api.github.com/repos/huggingface/datasets/issues/894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/894/comments
https://api.github.com/repos/huggingface/datasets/issues/894/events
https://github.com/huggingface/datasets/pull/894
751,734,905
MDExOlB1bGxSZXF1ZXN0NTI4MTkzNzQy
894
Allow several tags sets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-26T17:04:13Z
2021-05-05T18:24:17Z
2020-11-27T20:15:49Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/894.diff", "html_url": "https://github.com/huggingface/datasets/pull/894", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/894.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/894" }
Hi! Currently we have three dataset cards: snli, cnn_dailymail and allocine. For each of those datasets a set of tags is defined. The set of tags contains fields like `multilinguality`, `task_ids`, `licenses`, etc. For certain datasets like `glue`, there exist several configurations: `sst2`, `mnli`, etc. Therefore we should define one set of tags per configuration. However, the current format used for tags only supports one set of tags per dataset. In this PR I propose a simple change in the YAML format used for tags to allow for several sets of tags. Let me know what you think; @julien-c in particular, let me know if it's good for you, since it's going to be parsed by moon-landing.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/894/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/894/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/893
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/893/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/893/comments
https://api.github.com/repos/huggingface/datasets/issues/893/events
https://github.com/huggingface/datasets/pull/893
751,703,696
MDExOlB1bGxSZXF1ZXN0NTI4MTY4NDgx
893
add metrec: arabic poetry dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}", "followers_url": "https://api.github.com/users/zaidalyafeai/followers", "following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}", "gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zaidalyafeai", "id": 15667714, "login": "zaidalyafeai", "node_id": "MDQ6VXNlcjE1NjY3NzE0", "organizations_url": "https://api.github.com/users/zaidalyafeai/orgs", "received_events_url": "https://api.github.com/users/zaidalyafeai/received_events", "repos_url": "https://api.github.com/users/zaidalyafeai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions", "type": "User", "url": "https://api.github.com/users/zaidalyafeai" }
[]
closed
false
null
[]
null
[]
2020-11-26T16:10:16Z
2020-12-01T16:24:55Z
2020-12-01T15:15:07Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/893.diff", "html_url": "https://github.com/huggingface/datasets/pull/893", "merged_at": "2020-12-01T15:15:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/893.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/893" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/893/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/893/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/892/comments
https://api.github.com/repos/huggingface/datasets/issues/892/events
https://github.com/huggingface/datasets/pull/892
751,658,262
MDExOlB1bGxSZXF1ZXN0NTI4MTMxNTE1
892
Add a few datasets of reference in the documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-26T15:02:39Z
2020-11-27T18:08:45Z
2020-11-27T18:08:44Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/892.diff", "html_url": "https://github.com/huggingface/datasets/pull/892", "merged_at": "2020-11-27T18:08:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/892.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/892" }
I started making a small list of various datasets of reference in the documentation. Since many datasets share a lot in common I think it's good to have a list of datasets scripts to get some inspiration from. Let me know what you think, and if you have ideas of other datasets that we may add to this list, please let me know.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/892/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/892/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/891/comments
https://api.github.com/repos/huggingface/datasets/issues/891/events
https://github.com/huggingface/datasets/pull/891
751,576,869
MDExOlB1bGxSZXF1ZXN0NTI4MDY1MTQ3
891
gitignore .python-version
{ "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patil-suraj", "id": 27137566, "login": "patil-suraj", "node_id": "MDQ6VXNlcjI3MTM3NTY2", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "repos_url": "https://api.github.com/users/patil-suraj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "type": "User", "url": "https://api.github.com/users/patil-suraj" }
[]
closed
false
null
[]
null
[]
2020-11-26T13:05:58Z
2020-11-26T13:28:27Z
2020-11-26T13:28:26Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/891.diff", "html_url": "https://github.com/huggingface/datasets/pull/891", "merged_at": "2020-11-26T13:28:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/891.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/891" }
ignore `.python-version` added by `pyenv`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/891/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/891/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/890/comments
https://api.github.com/repos/huggingface/datasets/issues/890/events
https://github.com/huggingface/datasets/pull/890
751,534,050
MDExOlB1bGxSZXF1ZXN0NTI4MDI5NjA3
890
Add LER
{ "avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4", "events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}", "followers_url": "https://api.github.com/users/JoelNiklaus/followers", "following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}", "gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JoelNiklaus", "id": 3775944, "login": "JoelNiklaus", "node_id": "MDQ6VXNlcjM3NzU5NDQ=", "organizations_url": "https://api.github.com/users/JoelNiklaus/orgs", "received_events_url": "https://api.github.com/users/JoelNiklaus/received_events", "repos_url": "https://api.github.com/users/JoelNiklaus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions", "type": "User", "url": "https://api.github.com/users/JoelNiklaus" }
[]
closed
false
null
[]
null
[]
2020-11-26T11:58:23Z
2020-12-01T13:33:35Z
2020-12-01T13:26:16Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/890.diff", "html_url": "https://github.com/huggingface/datasets/pull/890", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/890.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/890" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/890/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/890/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/889/comments
https://api.github.com/repos/huggingface/datasets/issues/889/events
https://github.com/huggingface/datasets/pull/889
751,115,691
MDExOlB1bGxSZXF1ZXN0NTI3NjkwODE2
889
Optional per-dataset default config name
{ "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joeddav", "id": 9353833, "login": "joeddav", "node_id": "MDQ6VXNlcjkzNTM4MzM=", "organizations_url": "https://api.github.com/users/joeddav/orgs", "received_events_url": "https://api.github.com/users/joeddav/received_events", "repos_url": "https://api.github.com/users/joeddav/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "type": "User", "url": "https://api.github.com/users/joeddav" }
[]
closed
false
null
[]
null
[]
2020-11-25T21:02:30Z
2020-11-30T17:27:33Z
2020-11-30T17:27:27Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/889.diff", "html_url": "https://github.com/huggingface/datasets/pull/889", "merged_at": "2020-11-30T17:27:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/889.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/889" }
This PR adds a `DEFAULT_CONFIG_NAME` class attribute to `DatasetBuilder`. This allows a dataset to have a specified default config name when a dataset has more than one config but the user does not specify it. For example, after defining `DEFAULT_CONFIG_NAME = "combined"` in PolyglotNER, a user can now do the following:
```python
ds = load_dataset("polyglot_ner")
```
which is equivalent to:
```python
ds = load_dataset("polyglot_ner", "combined")
```
In effect (for this particular dataset configuration), this means that if the user doesn't specify a language, they are given the combined dataset including all languages. Since it doesn't always make sense to have a default config, this feature is opt-in. If `DEFAULT_CONFIG_NAME` is not defined and a user does not pass a config for a dataset with multiple configs available, a ValueError is raised like usual. Let me know what you think about this approach @lhoestq @thomwolf and I'll add some documentation and define a default for some of our existing datasets.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/889/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/889/timeline
null
null
true
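To make the opt-in mechanism in #889 concrete, here is a minimal sketch of a builder using the new attribute; the builder name, config names and feature schema are illustrative, not PolyglotNER's real script.

```python
import datasets


class MyDataset(datasets.GeneratorBasedBuilder):
    """Illustrative builder with a default config."""

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="combined", description="All languages merged."),
        datasets.BuilderConfig(name="en", description="English only."),
    ]
    DEFAULT_CONFIG_NAME = "combined"  # picked when the caller passes no config name

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={})]

    def _generate_examples(self):
        yield 0, {"text": "hello"}
```

With this in place, loading the dataset script without a config name should resolve to the "combined" config, as described above.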
https://api.github.com/repos/huggingface/datasets/issues/888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/888/comments
https://api.github.com/repos/huggingface/datasets/issues/888/events
https://github.com/huggingface/datasets/issues/888
750,944,422
MDU6SXNzdWU3NTA5NDQ0MjI=
888
Nested lists are zipped unexpectedly
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmitMY", "id": 5757359, "login": "AmitMY", "node_id": "MDQ6VXNlcjU3NTczNTk=", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "repos_url": "https://api.github.com/users/AmitMY/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "type": "User", "url": "https://api.github.com/users/AmitMY" }
[]
closed
false
null
[]
null
[]
2020-11-25T16:07:46Z
2020-11-25T17:30:39Z
2020-11-25T17:30:39Z
CONTRIBUTOR
null
null
null
I might misunderstand something, but I expect that if I define:
```python
"top": datasets.features.Sequence({
    "middle": datasets.features.Sequence({
        "bottom": datasets.Value("int32")
    })
})
```
and I then create an example:
```python
yield 1, {
    "top": [{
        "middle": [
            {"bottom": 1},
            {"bottom": 2}
        ]
    }]
}
```
and then load my dataset:
```python
train = load_dataset("my dataset")["train"]
```
then I expect to be able to access `data[0]["top"][0]["middle"][0]`.

That is not the case. Here is `data[0]` as JSON:
```json
{"top": {"middle": [{"bottom": [1, 2]}]}}
```
Clearly different from the thing I put in:
```json
{"top": [{"middle": [{"bottom": 1}, {"bottom": 2}]}]}
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/888/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/888/timeline
null
completed
true
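For the nesting behaviour reported in #888, a commonly suggested pattern is to declare the nested feature with a plain Python list rather than `Sequence`, which avoids the dict-of-lists conversion that `Sequence` applies to dictionary sub-features. A hedged sketch, using `encode_example` only to inspect the resulting layout:

```python
import datasets

# A plain Python list in the feature definition (instead of Sequence) is meant
# to keep the list-of-dicts structure when examples are encoded.
features = datasets.Features(
    {
        "top": [
            {
                "middle": [
                    {"bottom": datasets.Value("int32")}
                ]
            }
        ]
    }
)

example = {"top": [{"middle": [{"bottom": 1}, {"bottom": 2}]}]}
encoded = features.encode_example(example)
print(encoded)  # expected: {"top": [{"middle": [{"bottom": 1}, {"bottom": 2}]}]}
```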
https://api.github.com/repos/huggingface/datasets/issues/887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/887/comments
https://api.github.com/repos/huggingface/datasets/issues/887/events
https://github.com/huggingface/datasets/issues/887
750,868,831
MDU6SXNzdWU3NTA4Njg4MzE=
887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmitMY", "id": 5757359, "login": "AmitMY", "node_id": "MDQ6VXNlcjU3NTczNTk=", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "repos_url": "https://api.github.com/users/AmitMY/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "type": "User", "url": "https://api.github.com/users/AmitMY" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[]
2020-11-25T14:32:21Z
2021-09-09T17:03:40Z
null
CONTRIBUTOR
null
null
null
I set up a new dataset with a sequence of arrays (really, I want an array of shape (None, 137, 2), where the first dimension is dynamic):

```python
    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            # This defines the different columns of the dataset and their types
            features=datasets.Features(
                {
                    "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
                }
            ),
            homepage=_HOMEPAGE,
            citation=_CITATION,
        )

    def _generate_examples(self):
        """Yields examples."""
        yield 1, {
            "pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
        }
```

But this doesn't work:
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/887/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/887/timeline
null
null
true
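Relating to #887: newer versions of `datasets` allow a dynamic first dimension in `ArrayXD` features, which matches the desired `(None, 137, 2)` shape without wrapping `Array2D` in a `Sequence`. A hedged sketch, not the confirmed resolution of this issue:

```python
import numpy as np

import datasets

# Dynamic first dimension: (None, 137, 2)
features = datasets.Features(
    {"pose": datasets.Array3D(shape=(None, 137, 2), dtype="float32")}
)

# Build a tiny dataset to check that the feature round-trips; expected to avoid
# the "MakeBuilder: cannot construct builder" error seen with Sequence(Array2D(...)).
ds = datasets.Dataset.from_dict(
    {"pose": [np.zeros(shape=(3, 137, 2), dtype=np.float32)]},
    features=features,
)
print(ds.features)
```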
https://api.github.com/repos/huggingface/datasets/issues/886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/886/comments
https://api.github.com/repos/huggingface/datasets/issues/886/events
https://github.com/huggingface/datasets/pull/886
750,829,314
MDExOlB1bGxSZXF1ZXN0NTI3NDU1MDU5
886
Fix wikipedia custom config
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-25T13:44:12Z
2021-06-25T05:24:16Z
2020-11-25T15:42:13Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/886.diff", "html_url": "https://github.com/huggingface/datasets/pull/886", "merged_at": "2020-11-25T15:42:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/886.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/886" }
It should be possible to use the wikipedia dataset with any `language` and `date`. However it was not working, as noticed in #784. Indeed the custom wikipedia configurations were not enabled for some reason. I fixed that and was able to run
```python
from datasets import load_dataset

load_dataset("./datasets/wikipedia", language="zh", date="20201120", beam_runner='DirectRunner')
```
cc @stvhuang @SamuelCahyawijaya

Fix #784
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/886/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/886/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/885/comments
https://api.github.com/repos/huggingface/datasets/issues/885/events
https://github.com/huggingface/datasets/issues/885
750,789,052
MDU6SXNzdWU3NTA3ODkwNTI=
885
Very slow cold-start
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmitMY", "id": 5757359, "login": "AmitMY", "node_id": "MDQ6VXNlcjU3NTczNTk=", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "repos_url": "https://api.github.com/users/AmitMY/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "type": "User", "url": "https://api.github.com/users/AmitMY" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-25T12:47:58Z
2021-01-13T11:31:25Z
2021-01-13T11:31:25Z
CONTRIBUTOR
null
null
null
Hi, I expect that when importing `datasets`, nothing major happens in the background, so the import should be insignificant. When I load a metric or a dataset, it's fine that it takes time.

The following ranges from 3 to 9 seconds:
```
python -m timeit -n 1 -r 1 'from datasets import load_dataset'
```

edit: sorry for the mis-tag, not sure how I added it.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/885/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/885/timeline
null
completed
true
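While the cold-start reported in #885 is a library concern, a common user-side mitigation is to defer the import into the function that needs it; a small sketch:

```python
def load_validation_split(name: str):
    # Deferred import: modules that merely *might* use datasets don't pay the
    # multi-second import cost at their own import time.
    from datasets import load_dataset

    return load_dataset(name, split="validation")


# CPython's import profiler shows where the time actually goes:
#   python -X importtime -c "import datasets"
```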
https://api.github.com/repos/huggingface/datasets/issues/884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/884/comments
https://api.github.com/repos/huggingface/datasets/issues/884/events
https://github.com/huggingface/datasets/pull/884
749,862,034
MDExOlB1bGxSZXF1ZXN0NTI2NjA5MDc1
884
Auto generate dummy data
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-24T16:31:34Z
2020-11-26T14:18:47Z
2020-11-26T14:18:46Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/884.diff", "html_url": "https://github.com/huggingface/datasets/pull/884", "merged_at": "2020-11-26T14:18:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/884.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/884" }
When adding a new dataset to the library, dummy data creation can take some time. To make things easier I added a command line tool that automatically generates dummy data when possible. The tool only supports certain data file types: txt, csv, tsv, jsonl, json and xml.

Here are some examples:
```
python datasets-cli dummy_data ./datasets/snli --auto_generate
python datasets-cli dummy_data ./datasets/squad --auto_generate --json_field data
python datasets-cli dummy_data ./datasets/iwslt2017 --auto_generate --xml_tag seg --match_text_files "train*" --n_lines 15
# --xml_tag seg => each sample corresponds to a "seg" tag in the xml tree
# --match_text_files "train*" => also match text files that don't have a proper text file extension (no suffix like ".txt" for example)
# --n_lines 15 => some text files have headers so we have to use at least 15 lines
```

and here is the command usage:
```
usage: datasets-cli <command> [<args>] dummy_data [-h] [--auto_generate]
                    [--n_lines N_LINES] [--json_field JSON_FIELD]
                    [--xml_tag XML_TAG] [--match_text_files MATCH_TEXT_FILES]
                    [--keep_uncompressed] [--cache_dir CACHE_DIR]
                    path_to_dataset

positional arguments:
  path_to_dataset       Path to the dataset (example: ./datasets/squad)

optional arguments:
  -h, --help            show this help message and exit
  --auto_generate       Try to automatically generate dummy data
  --n_lines N_LINES     Number of lines or samples to keep when auto-generating dummy data
  --json_field JSON_FIELD
                        Optional, json field to read the data from when auto-generating dummy data.
                        In the json data files, this field must point to a list of samples as json
                        objects (ex: the 'data' field for squad-like files)
  --xml_tag XML_TAG     Optional, xml tag name of the samples inside the xml files when
                        auto-generating dummy data.
  --match_text_files MATCH_TEXT_FILES
                        Optional, a comma separated list of file patterns that looks for
                        line-by-line text files other than *.txt or *.csv.
                        Example: --match_text_files *.label
  --keep_uncompressed   Don't compress the dummy data folders when auto-generating dummy data.
                        Useful for debugging or to do manual adjustments before compressing.
  --cache_dir CACHE_DIR
                        Cache directory to download and cache files when auto-generating dummy data
```

The command generates all the necessary `dummy_data.zip` files (one per config).

How it works:
- it runs the split_generators() method of the dataset script to download the original data files
- when downloading, it records a mapping between the downloaded files and the corresponding expected dummy data file paths
- then for each data file it creates the dummy data file, keeping only the first samples (the strategy depends on the type of file)
- finally it compresses the dummy data folders into dummy_zip files ready for dataset tests

Let me know if that makes sense or if you have ideas to improve this tool! I also added a unit test.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/884/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/884/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/883/comments
https://api.github.com/repos/huggingface/datasets/issues/883/events
https://github.com/huggingface/datasets/issues/883
749,750,801
MDU6SXNzdWU3NDk3NTA4MDE=
883
Downloading/caching only a part of a datasets' dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4", "events_url": "https://api.github.com/users/SapirWeissbuch/events{/privacy}", "followers_url": "https://api.github.com/users/SapirWeissbuch/followers", "following_url": "https://api.github.com/users/SapirWeissbuch/following{/other_user}", "gists_url": "https://api.github.com/users/SapirWeissbuch/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SapirWeissbuch", "id": 44585792, "login": "SapirWeissbuch", "node_id": "MDQ6VXNlcjQ0NTg1Nzky", "organizations_url": "https://api.github.com/users/SapirWeissbuch/orgs", "received_events_url": "https://api.github.com/users/SapirWeissbuch/received_events", "repos_url": "https://api.github.com/users/SapirWeissbuch/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SapirWeissbuch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SapirWeissbuch/subscriptions", "type": "User", "url": "https://api.github.com/users/SapirWeissbuch" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
open
false
null
[]
null
[]
2020-11-24T14:25:18Z
2020-11-27T13:51:55Z
null
NONE
null
null
null
Hi, I want to use the validation data *only* (of Natural Questions). I don't want to have the whole dataset cached on my machine, just the dev set. Is this possible? I can't find a way to do it in the docs. Thank you, Sapir
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/883/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/883/timeline
null
null
true
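For reference on #883: selecting a split at load time looks like the sketch below, but on `datasets` releases of that era the full dataset is still downloaded and prepared before the split is returned, which is exactly the limitation raised in the issue.

```python
from datasets import load_dataset

# Returns only the validation split, but (at the time of the issue) the whole
# dataset is still downloaded and cached first.
dev_only = load_dataset("natural_questions", split="validation")
print(dev_only)

# Later releases added streaming, which avoids caching the full dataset:
# dev_stream = load_dataset("natural_questions", split="validation", streaming=True)
```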
https://api.github.com/repos/huggingface/datasets/issues/882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/882/comments
https://api.github.com/repos/huggingface/datasets/issues/882/events
https://github.com/huggingface/datasets/pull/882
749,662,188
MDExOlB1bGxSZXF1ZXN0NTI2NDQyMjA2
882
Update README.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/32997732?v=4", "events_url": "https://api.github.com/users/vaibhavad/events{/privacy}", "followers_url": "https://api.github.com/users/vaibhavad/followers", "following_url": "https://api.github.com/users/vaibhavad/following{/other_user}", "gists_url": "https://api.github.com/users/vaibhavad/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vaibhavad", "id": 32997732, "login": "vaibhavad", "node_id": "MDQ6VXNlcjMyOTk3NzMy", "organizations_url": "https://api.github.com/users/vaibhavad/orgs", "received_events_url": "https://api.github.com/users/vaibhavad/received_events", "repos_url": "https://api.github.com/users/vaibhavad/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vaibhavad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vaibhavad/subscriptions", "type": "User", "url": "https://api.github.com/users/vaibhavad" }
[]
closed
false
null
[]
null
[]
2020-11-24T12:23:52Z
2021-01-29T10:41:07Z
2021-01-29T10:41:07Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/882.diff", "html_url": "https://github.com/huggingface/datasets/pull/882", "merged_at": "2021-01-29T10:41:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/882.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/882" }
"no label" is "-" in the original dataset but "-1" in Huggingface distribution.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/882/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/882/timeline
null
null
true
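As a hedged illustration of the README note in #882 (the dataset is assumed here to be snli): examples whose original gold label was "-" carry the label -1 in the hub version and are often filtered out before training.

```python
from datasets import load_dataset

# label == -1 marks examples without a gold label in the hub version.
snli = load_dataset("snli", split="train")
labelled = snli.filter(lambda example: example["label"] != -1)
print(len(snli), len(labelled))
```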
https://api.github.com/repos/huggingface/datasets/issues/881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/881/comments
https://api.github.com/repos/huggingface/datasets/issues/881/events
https://github.com/huggingface/datasets/pull/881
749,548,107
MDExOlB1bGxSZXF1ZXN0NTI2MzQ5MDM2
881
Use GCP download url instead of tensorflow custom download for boolq
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-24T09:47:11Z
2020-11-24T10:12:34Z
2020-11-24T10:12:33Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/881.diff", "html_url": "https://github.com/huggingface/datasets/pull/881", "merged_at": "2020-11-24T10:12:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/881.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/881" }
BoolQ is a dataset that used tf.io.gfile.copy to download the files from a GCP bucket. This prevented the dataset from being downloaded a second time because a FileAlreadyExistsError was raised. Even though the error could have been fixed by passing `overwrite=True` to the tf.io.gfile.copy call, I changed the script to use the GCP download URLs with regular downloads instead, which also removes the tensorflow dependency. Fix #875
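For completeness, a minimal sketch of the `overwrite=True` alternative mentioned above (the paths below are placeholders, not the ones used in the boolq script):

```python
import tensorflow as tf

# Placeholder paths for illustration only; the real source/destination
# paths live in the boolq dataset script.
src = "gs://some-bucket/boolq/train.jsonl"
dst = "/tmp/boolq_train.jsonl"

# overwrite=True lets a repeated call succeed instead of raising
# AlreadyExistsError when the destination file already exists.
tf.io.gfile.copy(src, dst, overwrite=True)
```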
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/881/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/881/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/880/comments
https://api.github.com/repos/huggingface/datasets/issues/880/events
https://github.com/huggingface/datasets/issues/880
748,949,606
MDU6SXNzdWU3NDg5NDk2MDY=
880
Add SQA
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-23T16:31:55Z
2020-12-23T13:58:24Z
2020-12-23T13:58:23Z
CONTRIBUTOR
null
null
null
## Adding a Dataset

- **Name:** SQA (Sequential Question Answering) by Microsoft.
- **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total.
- **Paper:** https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/
- **Data:** https://www.microsoft.com/en-us/download/details.aspx?id=54253
- **Motivation:** currently, the [Tapas](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https://github.com/huggingface/transformers/pull/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71).

Note 1: this dataset actually consists of 2 types of files:
1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test)
2) a folder of csv files, which contain the actual tabular data

Note 2: if you download the dataset straight from the download link above, then you will see that the `answer_coordinates` and `answer_text` columns are string lists of string tuples and strings respectively, which is not ideal. It would be better to make them true Python lists of tuples and strings respectively (using `ast.literal_eval`), before uploading them to the HuggingFace hub. A sketch of such a conversion is shown below.

Adding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https://github.com/ppasupat/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https://github.com/wenhuchen/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
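A minimal sketch of the conversion suggested in Note 2, assuming the TSV files use columns named `answer_coordinates` and `answer_text` and that the values are stringified lists as described above (the file name is a placeholder):

```python
import ast
import pandas as pd

# Placeholder file name; the download contains train/dev/test TSVs.
df = pd.read_csv("train.tsv", sep="\t")

# answer_coordinates is stored as a string like "['(0, 1)', '(0, 2)']":
# parse the outer list, then each coordinate string into a real tuple.
df["answer_coordinates"] = df["answer_coordinates"].apply(
    lambda x: [ast.literal_eval(coord) for coord in ast.literal_eval(x)]
)

# answer_text is stored as a string like "['Alice']": parse it into a list of strings.
df["answer_text"] = df["answer_text"].apply(ast.literal_eval)
```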
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/880/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/880/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/879/comments
https://api.github.com/repos/huggingface/datasets/issues/879/events
https://github.com/huggingface/datasets/issues/879
748,848,847
MDU6SXNzdWU3NDg4NDg4NDc=
879
boolq does not load
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[]
2020-11-23T14:28:28Z
2022-10-05T12:23:32Z
2022-10-05T12:23:32Z
CONTRIBUTOR
null
null
null
Hi, I am getting these errors trying to load boolq. Thanks!

```
Traceback (most recent call last):
  File "test.py", line 5, in <module>
    data = AutoTask().get("boolq").get_dataset("train", n_obs=10)
  File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset
    dataset = self.load_dataset(split=split)
  File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset
    return datasets.load_dataset(self.task.name, split=split)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
    downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom
    get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache
    f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/879/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/879/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/878/comments
https://api.github.com/repos/huggingface/datasets/issues/878/events
https://github.com/huggingface/datasets/issues/878
748,621,981
MDU6SXNzdWU3NDg2MjE5ODE=
878
Loading Data From S3 Path in Sagemaker
{ "avatar_url": "https://avatars.githubusercontent.com/u/42795522?v=4", "events_url": "https://api.github.com/users/mahesh1amour/events{/privacy}", "followers_url": "https://api.github.com/users/mahesh1amour/followers", "following_url": "https://api.github.com/users/mahesh1amour/following{/other_user}", "gists_url": "https://api.github.com/users/mahesh1amour/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mahesh1amour", "id": 42795522, "login": "mahesh1amour", "node_id": "MDQ6VXNlcjQyNzk1NTIy", "organizations_url": "https://api.github.com/users/mahesh1amour/orgs", "received_events_url": "https://api.github.com/users/mahesh1amour/received_events", "repos_url": "https://api.github.com/users/mahesh1amour/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mahesh1amour/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mahesh1amour/subscriptions", "type": "User", "url": "https://api.github.com/users/mahesh1amour" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
open
false
null
[]
null
[]
2020-11-23T09:17:22Z
2020-12-23T09:53:08Z
null
NONE
null
null
null
In SageMaker I'm trying to load the dataset from an S3 path as follows:

```python
train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'

data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)
```

I am getting this error:

```
algo-1-7plil_1  |   File "main.py", line 21, in <module>
algo-1-7plil_1  |     datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1  |   File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1  |     **config_kwargs,
algo-1-7plil_1  |   File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1  |     **config_kwargs,
algo-1-7plil_1  |   File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1  |     m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1  |   File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1  |     return os.stat(filename).st_mtime
algo-1-7plil_1  | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv'
```

But when I try the same files with pandas, it is able to load them from S3. Does the datasets library support loading from an S3 path?
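One possible workaround, sketched under the assumptions that the files are plain CSV and that the `s3fs` package is installed so pandas can read `s3://` paths directly (the bucket paths below are placeholders):

```python
import pandas as pd
from datasets import Dataset, DatasetDict

# Read each CSV straight from S3 with pandas (requires s3fs),
# then build in-memory datasets from the dataframes.
splits = {
    "train": "s3://my-bucket/my-prefix/train.csv",
    "validation": "s3://my-bucket/my-prefix/validation.csv",
    "test": "s3://my-bucket/my-prefix/test.csv",
}
dataset = DatasetDict(
    {name: Dataset.from_pandas(pd.read_csv(path)) for name, path in splits.items()}
)
print(dataset)
```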
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/878/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/878/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/877/comments
https://api.github.com/repos/huggingface/datasets/issues/877/events
https://github.com/huggingface/datasets/issues/877
748,234,438
MDU6SXNzdWU3NDgyMzQ0Mzg=
877
DataLoader(datasets) becomes slower and slower within iterations
{ "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "events_url": "https://api.github.com/users/shexuan/events{/privacy}", "followers_url": "https://api.github.com/users/shexuan/followers", "following_url": "https://api.github.com/users/shexuan/following{/other_user}", "gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shexuan", "id": 25664170, "login": "shexuan", "node_id": "MDQ6VXNlcjI1NjY0MTcw", "organizations_url": "https://api.github.com/users/shexuan/orgs", "received_events_url": "https://api.github.com/users/shexuan/received_events", "repos_url": "https://api.github.com/users/shexuan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shexuan/subscriptions", "type": "User", "url": "https://api.github.com/users/shexuan" }
[]
closed
false
null
[]
null
[]
2020-11-22T12:41:10Z
2020-11-29T15:45:12Z
2020-11-29T15:45:12Z
NONE
null
null
null
Hello, when I loop over my dataloader, the loading speed becomes slower and slower!

```python
dataset = load_from_disk(dataset_path)  # around 21,000,000 lines
lineloader = tqdm(DataLoader(dataset, batch_size=1))
for idx, line in enumerate(lineloader):
    # do something with each line
```

In the beginning, the loading speed is around 2000 it/s, but a minute later the speed is much slower, just around 800 it/s. And when I set `num_workers=4` in the DataLoader, the loading speed is much lower, just 130 it/s. Could you please help me with this problem? Thanks a lot!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/877/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/877/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/876/comments
https://api.github.com/repos/huggingface/datasets/issues/876/events
https://github.com/huggingface/datasets/issues/876
748,195,104
MDU6SXNzdWU3NDgxOTUxMDQ=
876
imdb dataset cannot be loaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[]
closed
false
null
[]
null
[]
2020-11-22T08:24:43Z
2021-11-26T11:07:16Z
2020-12-24T17:38:47Z
CONTRIBUTOR
null
null
null
Hi I am trying to load the imdb train dataset `dataset = datasets.load_dataset("imdb", split="train")` getting following errors, thanks for your help ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=32660064, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=26476338, num_examples=20316, dataset_name='imdb')}, {'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] >>> dataset = datasets.load_dataset("imdb", split="train") ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/876/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/876/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/875/comments
https://api.github.com/repos/huggingface/datasets/issues/875/events
https://github.com/huggingface/datasets/issues/875
748,194,311
MDU6SXNzdWU3NDgxOTQzMTE=
875
bug in boolq dataset loading
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[]
closed
false
null
[]
null
[]
2020-11-22T08:18:34Z
2020-11-24T10:12:33Z
2020-11-24T10:12:33Z
CONTRIBUTOR
null
null
null
Hi I am trying to load boolq dataset: ``` import datasets datasets.load_dataset("boolq") ``` I am getting the following errors, thanks for your help ``` >>> import datasets 2020-11-22 09:16:30.070332: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2020-11-22 09:16:30.070389: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. >>> datasets.load_dataset("boolq") cahce dir /idiap/temp/rkarimi/cache_home/datasets cahce dir /idiap/temp/rkarimi/cache_home/datasets Using custom data configuration default Downloading and preparing dataset boolq/default (download: 8.36 MiB, generated: 7.47 MiB, post-processed: Unknown size, total: 15.83 MiB) to /idiap/temp/rkarimi/cache_home/datasets/boolq/default/0.1.0/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11... cahce dir /idiap/temp/rkarimi/cache_home/datasets cahce dir /idiap/temp/rkarimi/cache_home/datasets/downloads Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 149, in download_custom custom_download(url, path) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 516, in copy_v2 compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite) tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/875/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/875/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/874/comments
https://api.github.com/repos/huggingface/datasets/issues/874/events
https://github.com/huggingface/datasets/issues/874
748,193,140
MDU6SXNzdWU3NDgxOTMxNDA=
874
trec dataset unavailable
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[]
closed
false
null
[]
null
[]
2020-11-22T08:09:36Z
2020-11-27T13:56:42Z
2020-11-27T13:56:42Z
CONTRIBUTOR
null
null
null
Hi when I try to load the trec dataset I am getting these errors, thanks for your help `datasets.load_dataset("trec", split="train") ` ``` File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/874/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/874/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/873/comments
https://api.github.com/repos/huggingface/datasets/issues/873/events
https://github.com/huggingface/datasets/issues/873
747,959,523
MDU6SXNzdWU3NDc5NTk1MjM=
873
load_dataset('cnn_dailymail', '3.0.0') gives a 'Not a directory' error
{ "avatar_url": "https://avatars.githubusercontent.com/u/19861874?v=4", "events_url": "https://api.github.com/users/vishal-burman/events{/privacy}", "followers_url": "https://api.github.com/users/vishal-burman/followers", "following_url": "https://api.github.com/users/vishal-burman/following{/other_user}", "gists_url": "https://api.github.com/users/vishal-burman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vishal-burman", "id": 19861874, "login": "vishal-burman", "node_id": "MDQ6VXNlcjE5ODYxODc0", "organizations_url": "https://api.github.com/users/vishal-burman/orgs", "received_events_url": "https://api.github.com/users/vishal-burman/received_events", "repos_url": "https://api.github.com/users/vishal-burman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vishal-burman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vishal-burman/subscriptions", "type": "User", "url": "https://api.github.com/users/vishal-burman" }
[]
closed
false
null
[]
null
[]
2020-11-21T06:30:45Z
2022-05-05T07:19:59Z
2020-11-22T12:18:05Z
NONE
null
null
null
``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0') 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 608 download_config=download_config, 609 download_mode=download_mode, --> 610 ignore_verifications=ignore_verifications, 611 ) 612 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 513 if not downloaded_from_gcs: 514 self._download_and_prepare( --> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 516 ) 517 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 568 split_dict = SplitDict(dataset_name=self.name) 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 571 572 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` I have ran the code on Google Colab
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/873/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/873/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/872/comments
https://api.github.com/repos/huggingface/datasets/issues/872/events
https://github.com/huggingface/datasets/pull/872
747,653,697
MDExOlB1bGxSZXF1ZXN0NTI0ODM4NjEx
872
Add IndicGLUE dataset and Metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4", "events_url": "https://api.github.com/users/sumanthd17/events{/privacy}", "followers_url": "https://api.github.com/users/sumanthd17/followers", "following_url": "https://api.github.com/users/sumanthd17/following{/other_user}", "gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sumanthd17", "id": 28291870, "login": "sumanthd17", "node_id": "MDQ6VXNlcjI4MjkxODcw", "organizations_url": "https://api.github.com/users/sumanthd17/orgs", "received_events_url": "https://api.github.com/users/sumanthd17/received_events", "repos_url": "https://api.github.com/users/sumanthd17/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions", "type": "User", "url": "https://api.github.com/users/sumanthd17" }
[]
closed
false
null
[]
null
[]
2020-11-20T17:09:34Z
2020-11-25T17:01:11Z
2020-11-25T15:26:07Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/872.diff", "html_url": "https://github.com/huggingface/datasets/pull/872", "merged_at": "2020-11-25T15:26:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/872.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/872" }
Added IndicGLUE benchmark for evaluating models on 11 Indian Languages. The descriptions of the tasks and the corresponding paper can be found [here](https://indicnlp.ai4bharat.org/indic-glue/).

- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/872/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/872/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/871/comments
https://api.github.com/repos/huggingface/datasets/issues/871/events
https://github.com/huggingface/datasets/issues/871
747,470,136
MDU6SXNzdWU3NDc0NzAxMzY=
871
terminate called after throwing an instance of 'google::protobuf::FatalException'
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[]
closed
false
null
[]
null
[]
2020-11-20T12:56:24Z
2020-12-12T21:16:32Z
2020-12-12T21:16:32Z
CONTRIBUTOR
null
null
null
Hi, I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo. Could you assist me please? Thanks!

```
100%|██████████| 63/63 [02:47<00:00, 2.18s/it]
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0):
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what():  CHECK failed: (index) >= (0):
run_t5_base_eval.sh: line 19:  5795 Aborted
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/871/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/871/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/870/comments
https://api.github.com/repos/huggingface/datasets/issues/870/events
https://github.com/huggingface/datasets/issues/870
747,021,996
MDU6SXNzdWU3NDcwMjE5OTY=
870
[Feature Request] Add optional parameter in text loading script to preserve linebreaks
{ "avatar_url": "https://avatars.githubusercontent.com/u/31020859?v=4", "events_url": "https://api.github.com/users/jncasey/events{/privacy}", "followers_url": "https://api.github.com/users/jncasey/followers", "following_url": "https://api.github.com/users/jncasey/following{/other_user}", "gists_url": "https://api.github.com/users/jncasey/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jncasey", "id": 31020859, "login": "jncasey", "node_id": "MDQ6VXNlcjMxMDIwODU5", "organizations_url": "https://api.github.com/users/jncasey/orgs", "received_events_url": "https://api.github.com/users/jncasey/received_events", "repos_url": "https://api.github.com/users/jncasey/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jncasey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jncasey/subscriptions", "type": "User", "url": "https://api.github.com/users/jncasey" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[]
2020-11-19T23:51:31Z
2022-06-01T15:25:53Z
2022-06-01T15:25:52Z
NONE
null
null
null
I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data. I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great. But the first time I processed all of my data into a dataset, I hadn't realized the text loader script was processing the source files line-by-line and stripping off the newlines. Once I caught the issue, I made my own data loader by modifying one line in the default text loader (changing `batch = batch.splitlines()` to `batch = batch.splitlines(True)` inside `_generate_tables`). And so I'm all set as far as my project is concerned. But if my use case is more general, it seems like it'd be pretty trivial to add a kwarg to the default text loader called keeplinebreaks or something, which would default to False and get passed to `splitlines()`.
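To illustrate the behavior such a `keeplinebreaks` option would expose, here is plain `str.splitlines` (standard Python, independent of the datasets text script):

```python
text = "Roses are red\nViolets are blue\n"

# Default: newlines are stripped, so line-break information is lost.
print(text.splitlines())      # ['Roses are red', 'Violets are blue']

# With keepends=True the trailing "\n" stays on each line, which is
# what passing keeplinebreaks=True through to splitlines() would give.
print(text.splitlines(True))  # ['Roses are red\n', 'Violets are blue\n']
```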
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/870/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/870/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/869/comments
https://api.github.com/repos/huggingface/datasets/issues/869/events
https://github.com/huggingface/datasets/pull/869
746,495,711
MDExOlB1bGxSZXF1ZXN0NTIzODc3OTkw
869
Update ner datasets infos
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-19T11:28:03Z
2020-11-19T14:14:18Z
2020-11-19T14:14:17Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/869.diff", "html_url": "https://github.com/huggingface/datasets/pull/869", "merged_at": "2020-11-19T14:14:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/869.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/869" }
Update the dataset_infos.json files for the changes made in #850 regarding the NER datasets' feature types (and the change to ClassLabel). I also fixed the NER types of conll2003.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/869/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/869/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/868/comments
https://api.github.com/repos/huggingface/datasets/issues/868/events
https://github.com/huggingface/datasets/pull/868
745,889,882
MDExOlB1bGxSZXF1ZXN0NTIzMzc2MzQ3
868
Consistent metric outputs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "E3165C", "default": false, "description": "", "id": 4190228726, "name": "transfer-to-evaluate", "node_id": "LA_kwDODunzps75wdD2", "url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate" } ]
open
false
null
[]
null
[]
2020-11-18T18:05:59Z
2022-09-23T08:27:37Z
null
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/868.diff", "html_url": "https://github.com/huggingface/datasets/pull/868", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/868.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/868" }
To automate the use of metrics, they should return consistent outputs. In particular, I'm working on adding a conversion of metrics to keras metrics. To achieve this we need two things:
- have each metric return a dictionary of string -> float, since each keras metric should return one float
- define in the metric info the different fields of the output dictionary

In this PR I'm adding these two features. I also fixed a few bugs in some metrics. #867 needs to be merged first.
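A rough sketch of why per-key float outputs help here, using the `load_metric` API available in this repo at the time (the keras-style wrapping is only illustrated by iterating over the keys, not implemented):

```python
from datasets import load_metric

metric = load_metric("accuracy")

# The metric returns a dictionary of string -> float...
scores = metric.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(scores)  # {'accuracy': 0.75}

# ...so a keras-style wrapper can expose each key as its own scalar metric.
for name, value in scores.items():
    print(f"{name}: {value:.4f}")
```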
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/868/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/868/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/867/comments
https://api.github.com/repos/huggingface/datasets/issues/867/events
https://github.com/huggingface/datasets/pull/867
745,773,955
MDExOlB1bGxSZXF1ZXN0NTIzMjc4MjI4
867
Fix some metrics feature types
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-18T15:46:11Z
2020-11-19T17:35:58Z
2020-11-19T17:35:57Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/867.diff", "html_url": "https://github.com/huggingface/datasets/pull/867", "merged_at": "2020-11-19T17:35:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/867.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/867" }
Replace the `int` feature type with `int32`, since `int` is not a pyarrow dtype, in these metrics:
- accuracy
- precision
- recall
- f1

I also added the sklearn citation and used keyword arguments to remove future warnings.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/867/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/867/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/866/comments
https://api.github.com/repos/huggingface/datasets/issues/866/events
https://github.com/huggingface/datasets/issues/866
745,719,222
MDU6SXNzdWU3NDU3MTkyMjI=
866
OSCAR from Inria group
{ "avatar_url": "https://avatars.githubusercontent.com/u/34098722?v=4", "events_url": "https://api.github.com/users/jchwenger/events{/privacy}", "followers_url": "https://api.github.com/users/jchwenger/followers", "following_url": "https://api.github.com/users/jchwenger/following{/other_user}", "gists_url": "https://api.github.com/users/jchwenger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jchwenger", "id": 34098722, "login": "jchwenger", "node_id": "MDQ6VXNlcjM0MDk4NzIy", "organizations_url": "https://api.github.com/users/jchwenger/orgs", "received_events_url": "https://api.github.com/users/jchwenger/received_events", "repos_url": "https://api.github.com/users/jchwenger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jchwenger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jchwenger/subscriptions", "type": "User", "url": "https://api.github.com/users/jchwenger" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-18T14:40:54Z
2020-11-18T15:01:30Z
2020-11-18T15:01:30Z
NONE
null
null
null
## Adding a Dataset

- **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/).
- **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.*
- **Paper:** *[here](https://hal.inria.fr/hal-02148693)*
- **Data:** *[here](https://oscar-corpus.com/)*
- **Motivation:** *useful for unsupervised tasks in separate languages. In an ideal world, your team would be able to obtain the unshuffled version, that could be used to train GPT-2-like models (the shuffled version, I suppose, could be used for translation).*

I am aware that you do offer the "colossal" Common Crawl dataset already, but this has the advantage to be available in many subcorpora for different languages.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/866/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/866/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/865/comments
https://api.github.com/repos/huggingface/datasets/issues/865/events
https://github.com/huggingface/datasets/issues/865
745,430,497
MDU6SXNzdWU3NDU0MzA0OTc=
865
Have Trouble importing `datasets`
{ "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "events_url": "https://api.github.com/users/forest1988/events{/privacy}", "followers_url": "https://api.github.com/users/forest1988/followers", "following_url": "https://api.github.com/users/forest1988/following{/other_user}", "gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/forest1988", "id": 2755894, "login": "forest1988", "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "organizations_url": "https://api.github.com/users/forest1988/orgs", "received_events_url": "https://api.github.com/users/forest1988/received_events", "repos_url": "https://api.github.com/users/forest1988/repos", "site_admin": false, "starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/forest1988/subscriptions", "type": "User", "url": "https://api.github.com/users/forest1988" }
[]
closed
false
null
[]
null
[]
2020-11-18T08:04:41Z
2020-11-18T08:16:35Z
2020-11-18T08:16:35Z
CONTRIBUTOR
null
null
null
I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets. I cloned the newest version of datasets (master branch), and do `pip install -e .`. Then, `import datasets` causes the error below. ``` ~/workspace/Clone/datasets/src/datasets/utils/file_utils.py in <module> 116 sys.path.append(str(HF_MODULES_CACHE)) 117 --> 118 os.makedirs(HF_MODULES_CACHE, exist_ok=True) 119 if not os.path.exists(os.path.join(HF_MODULES_CACHE, "__init__.py")): 120 with open(os.path.join(HF_MODULES_CACHE, "__init__.py"), "w"): ~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/os.py in makedirs(name, mode, exist_ok) 221 return 222 try: --> 223 mkdir(name, mode) 224 except OSError: 225 # Cannot rely on checking for EEXIST, since the operating system FileNotFoundError: [Errno 2] No such file or directory: '<MY_HOME_DIRECTORY>/.cache/huggingface/modules' ``` The error occurs in `os.makedirs` in `file_utils.py`, even though `exist_ok = True` option is set. (I use Python 3.8, so `exist_ok` is expected to work.) I've checked some environment variables, and they are set as below. ``` *** NameError: name 'HF_MODULES_CACHE' is not defined *** NameError: name 'hf_cache_home' is not defined *** NameError: name 'XDG_CACHE_HOME' is not defined ``` Should I set some environment variables before using this library? And, do you have any idea why "No such file or directory" occurs even though the `exist_ok = True` option is set? Thank you in advance.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/865/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/865/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/864/comments
https://api.github.com/repos/huggingface/datasets/issues/864/events
https://github.com/huggingface/datasets/issues/864
745,322,357
MDU6SXNzdWU3NDUzMjIzNTc=
864
Unable to download cnn_dailymail dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/46031058?v=4", "events_url": "https://api.github.com/users/rohitashwa1907/events{/privacy}", "followers_url": "https://api.github.com/users/rohitashwa1907/followers", "following_url": "https://api.github.com/users/rohitashwa1907/following{/other_user}", "gists_url": "https://api.github.com/users/rohitashwa1907/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rohitashwa1907", "id": 46031058, "login": "rohitashwa1907", "node_id": "MDQ6VXNlcjQ2MDMxMDU4", "organizations_url": "https://api.github.com/users/rohitashwa1907/orgs", "received_events_url": "https://api.github.com/users/rohitashwa1907/received_events", "repos_url": "https://api.github.com/users/rohitashwa1907/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rohitashwa1907/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohitashwa1907/subscriptions", "type": "User", "url": "https://api.github.com/users/rohitashwa1907" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
2020-11-18T04:38:02Z
2020-11-20T05:22:11Z
2020-11-20T05:22:10Z
NONE
null
null
null
### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-8-47c39c228935> in <module>() 1 from datasets import load_dataset 2 ----> 3 train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') 4 valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 469 if not downloaded_from_gcs: 470 self._download_and_prepare( --> 471 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 472 ) 473 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 524 split_dict = SplitDict(dataset_name=self.name) 525 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 526 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 527 528 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` Thanks for any suggestions.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/864/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/864/timeline
null
completed
true
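A possible workaround sketch for the `NotADirectoryError` reported in #864 above, assuming the failure comes from a corrupted or incomplete download left in the cache; this is an assumption, not a fix confirmed in the issue, and the `download_mode` value simply forces the archives to be fetched again:

```python
from datasets import load_dataset

# Discard whatever was previously downloaded for this dataset and fetch it again.
# The string value is an assumption; older releases expose the same option as
# datasets.GenerateMode.FORCE_REDOWNLOAD.
train_dataset = load_dataset(
    "cnn_dailymail",
    "3.0.0",
    split="train[:10%]",
    download_mode="force_redownload",
)
print(train_dataset)
```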
https://api.github.com/repos/huggingface/datasets/issues/863
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/863/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/863/comments
https://api.github.com/repos/huggingface/datasets/issues/863/events
https://github.com/huggingface/datasets/pull/863
744,954,534
MDExOlB1bGxSZXF1ZXN0NTIyNTk0Mjg1
863
Add clear_cache parameter in the test command
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-17T17:52:29Z
2020-11-18T14:44:25Z
2020-11-18T14:44:24Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/863.diff", "html_url": "https://github.com/huggingface/datasets/pull/863", "merged_at": "2020-11-18T14:44:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/863.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/863" }
For certain datasets like OSCAR #348 there are lots of different configurations and each one of them can take a lot of disk space. I added a `--clear_cache` flag to the `datasets-cli test` command to be able to clear the cache after each configuration test to avoid filling up the disk. It should enable easier generation of the `dataset_infos.json` file for OSCAR.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/863/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/863/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/862
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/862/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/862/comments
https://api.github.com/repos/huggingface/datasets/issues/862/events
https://github.com/huggingface/datasets/pull/862
744,906,131
MDExOlB1bGxSZXF1ZXN0NTIyNTUzMzY1
862
Update head requests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-17T16:49:06Z
2020-11-18T14:43:53Z
2020-11-18T14:43:50Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/862.diff", "html_url": "https://github.com/huggingface/datasets/pull/862", "merged_at": "2020-11-18T14:43:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/862.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/862" }
GET requests and HEAD requests didn't have the same parameters.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/862/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/862/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/861/comments
https://api.github.com/repos/huggingface/datasets/issues/861/events
https://github.com/huggingface/datasets/issues/861
744,753,458
MDU6SXNzdWU3NDQ3NTM0NTg=
861
Possible Bug: Small training/dataset file creates gigantic output
{ "avatar_url": "https://avatars.githubusercontent.com/u/7240417?v=4", "events_url": "https://api.github.com/users/NebelAI/events{/privacy}", "followers_url": "https://api.github.com/users/NebelAI/followers", "following_url": "https://api.github.com/users/NebelAI/following{/other_user}", "gists_url": "https://api.github.com/users/NebelAI/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NebelAI", "id": 7240417, "login": "NebelAI", "node_id": "MDQ6VXNlcjcyNDA0MTc=", "organizations_url": "https://api.github.com/users/NebelAI/orgs", "received_events_url": "https://api.github.com/users/NebelAI/received_events", "repos_url": "https://api.github.com/users/NebelAI/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NebelAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NebelAI/subscriptions", "type": "User", "url": "https://api.github.com/users/NebelAI" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2020-11-17T13:48:59Z
2021-03-30T14:04:04Z
2021-03-22T12:04:55Z
NONE
null
null
null
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file, but I can't even finish the preprocessing handled by datasets because this tiny 5 GB text file grows to more than 1 TB during processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following command: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/861/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/861/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/860/comments
https://api.github.com/repos/huggingface/datasets/issues/860/events
https://github.com/huggingface/datasets/issues/860
744,750,691
MDU6SXNzdWU3NDQ3NTA2OTE=
860
wmt16 cs-en does not download
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[]
2020-11-17T13:45:35Z
2022-10-05T12:27:00Z
2022-10-05T12:26:59Z
CONTRIBUTOR
null
null
null
Hi, I am trying the wmt16 cs-en pair; thanks for the help. This is perhaps similar to the ro-en issue. split="train", n_obs=data_args.n_train) for task in data_args.task} File "finetune_t5_trainer.py", line 109, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/home/rabeeh/internship/seq2seq/tasks/tasks.py", line 82, in get_dataset dataset = load_dataset("wmt16", self.pair, split=split) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/rabeeh/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/860/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/860/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/859
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/859/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/859/comments
https://api.github.com/repos/huggingface/datasets/issues/859/events
https://github.com/huggingface/datasets/pull/859
743,917,091
MDExOlB1bGxSZXF1ZXN0NTIxNzI4MDM4
859
Integrate file_lock inside the lib for better logging control
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-16T15:13:39Z
2020-11-16T17:06:44Z
2020-11-16T17:06:42Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/859.diff", "html_url": "https://github.com/huggingface/datasets/pull/859", "merged_at": "2020-11-16T17:06:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/859.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/859" }
Previously the locking system of the lib was based on the file_lock package. However, as noticed in #812 there were too many logs printed even when the datasets logging was set to warnings or errors. For example ```python import logging logging.basicConfig(level=logging.INFO) import datasets datasets.set_verbosity_warning() datasets.load_dataset("squad") ``` would still log the file lock events: ``` INFO:filelock:Lock 5737989232 acquired on /Users/quentinlhoest/.cache/huggingface/datasets/44801f118d500eff6114bfc56ab4e6def941f1eb14b70ac1ecc052e15cdac49d.85f43de978b9b25921cb78d7a2f2b350c04acdbaedb9ecb5f7101cd7c0950e68.py.lock INFO:filelock:Lock 5737989232 released on /Users/quentinlhoest/.cache/huggingface/datasets/44801f118d500eff6114bfc56ab4e6def941f1eb14b70ac1ecc052e15cdac49d.85f43de978b9b25921cb78d7a2f2b350c04acdbaedb9ecb5f7101cd7c0950e68.py.lock INFO:filelock:Lock 4393489968 acquired on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock INFO:filelock:Lock 4393489968 released on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock INFO:filelock:Lock 4393490808 acquired on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock Reusing dataset squad (/Users/quentinlhoest/.cache/huggingface/datasets/squad/plain_text/1.0.0/1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41) INFO:filelock:Lock 4393490808 released on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock ``` With the integration of file_lock in the library, the output is much cleaner: ``` Reusing dataset squad (/Users/quentinlhoest/.cache/huggingface/datasets/squad/plain_text/1.0.0/1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41) ``` Since the file_lock package is only a 450-line file I think it's fine to have it inside the lib. Fix #812
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/859/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/859/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/858
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/858/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/858/comments
https://api.github.com/repos/huggingface/datasets/issues/858/events
https://github.com/huggingface/datasets/pull/858
743,904,516
MDExOlB1bGxSZXF1ZXN0NTIxNzE3ODQ4
858
Add SemEval-2010 task 8
{ "avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4", "events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}", "followers_url": "https://api.github.com/users/JoelNiklaus/followers", "following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}", "gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JoelNiklaus", "id": 3775944, "login": "JoelNiklaus", "node_id": "MDQ6VXNlcjM3NzU5NDQ=", "organizations_url": "https://api.github.com/users/JoelNiklaus/orgs", "received_events_url": "https://api.github.com/users/JoelNiklaus/received_events", "repos_url": "https://api.github.com/users/JoelNiklaus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions", "type": "User", "url": "https://api.github.com/users/JoelNiklaus" }
[]
closed
false
null
[]
null
[]
2020-11-16T14:57:57Z
2020-11-26T17:28:55Z
2020-11-26T17:28:55Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/858.diff", "html_url": "https://github.com/huggingface/datasets/pull/858", "merged_at": "2020-11-26T17:28:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/858.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/858" }
Hi, I don't know how to add dummy data, since I create the validation set out of the last 1000 examples of the train set. If you have a suggestion, I am happy to implement it. Cheers, Joel
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/858/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/858/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/857
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/857/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/857/comments
https://api.github.com/repos/huggingface/datasets/issues/857/events
https://github.com/huggingface/datasets/pull/857
743,863,214
MDExOlB1bGxSZXF1ZXN0NTIxNjg0ODIx
857
Use pandas reader in csv
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-16T14:05:45Z
2020-11-19T17:35:40Z
2020-11-19T17:35:38Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/857.diff", "html_url": "https://github.com/huggingface/datasets/pull/857", "merged_at": "2020-11-19T17:35:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/857.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/857" }
The pyarrow CSV reader has issues that the pandas one doesn't (see #836). To fix that I switched to the pandas csv reader. The new reader is compatible with all the pandas parameters for reading csv files. Moreover it reads csv in chunks in order to save RAM, while the pyarrow one loads everything in memory. Fix #836 Fix #794 Breaking: now all the parameters for reading a csv file can be used in the `load_dataset` kwargs when loading csv, and the previous pyarrow objects `pyarrow.csv.ReadOptions`, `pyarrow.csv.ParseOptions` and `pyarrow.csv.ConvertOptions` are not used anymore.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/857/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/857/timeline
null
null
true
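A short illustrative sketch of the behaviour described in #857 above, where pandas `read_csv` parameters are forwarded through `load_dataset` for the csv builder; the file name and the specific parameters are made up for the example:

```python
from datasets import load_dataset

# Any pandas.read_csv keyword argument (sep, quotechar, dtype, ...) can be passed
# directly to load_dataset for the "csv" builder; the file name here is hypothetical.
dataset = load_dataset(
    "csv",
    data_files={"train": "my_train.csv"},
    sep="\t",
    quotechar='"',
)
print(dataset["train"][0])
```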
https://api.github.com/repos/huggingface/datasets/issues/856
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/856/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/856/comments
https://api.github.com/repos/huggingface/datasets/issues/856/events
https://github.com/huggingface/datasets/pull/856
743,799,239
MDExOlB1bGxSZXF1ZXN0NTIxNjMzNTYz
856
Add open book corpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "events_url": "https://api.github.com/users/vblagoje/events{/privacy}", "followers_url": "https://api.github.com/users/vblagoje/followers", "following_url": "https://api.github.com/users/vblagoje/following{/other_user}", "gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vblagoje", "id": 458335, "login": "vblagoje", "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "organizations_url": "https://api.github.com/users/vblagoje/orgs", "received_events_url": "https://api.github.com/users/vblagoje/received_events", "repos_url": "https://api.github.com/users/vblagoje/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions", "type": "User", "url": "https://api.github.com/users/vblagoje" }
[]
closed
false
null
[]
null
[]
2020-11-16T12:30:02Z
2020-11-18T12:03:46Z
2020-11-17T15:22:18Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/856.diff", "html_url": "https://github.com/huggingface/datasets/pull/856", "merged_at": "2020-11-17T15:22:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/856.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/856" }
Adds a book corpus based on Shawn Presser's [work](https://github.com/soskek/bookcorpus/issues/27) @richarddwang, the author of the original BookCorpus dataset, suggested it should be named [OpenBookCorpus](https://github.com/huggingface/datasets/issues/486). I named it BookCorpusOpen so it is easily located alphabetically. But, of course, we can rename it if needed. It contains 17868 dataset items; each item contains two fields: title and text. The title is the name of the book (just the file name) while the text contains the unprocessed book text. Note that bookcorpus is pre-segmented into sentences while this bookcorpus is not. This is intentional (see https://github.com/huggingface/datasets/issues/486) as some users might want to further process the text themselves. @lhoestq and others please review this PR thoroughly. cc @shawwn
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 3, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/856/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/856/timeline
null
null
true
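A minimal usage sketch for the dataset added in #856 above; the dataset identifier `bookcorpusopen` and the field access are assumptions based on the PR description (a "title" field and an unsegmented "text" field per book):

```python
from datasets import load_dataset

# Each record has a "title" (the file name of the book) and an unsegmented "text" field.
books = load_dataset("bookcorpusopen", split="train")
print(books[0]["title"])
print(books[0]["text"][:200])
```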
https://api.github.com/repos/huggingface/datasets/issues/855
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/855/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/855/comments
https://api.github.com/repos/huggingface/datasets/issues/855/events
https://github.com/huggingface/datasets/pull/855
743,690,839
MDExOlB1bGxSZXF1ZXN0NTIxNTQ2Njkx
855
Fix kor nli csv reader
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-16T09:53:41Z
2020-11-16T13:59:14Z
2020-11-16T13:59:12Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/855.diff", "html_url": "https://github.com/huggingface/datasets/pull/855", "merged_at": "2020-11-16T13:59:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/855.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/855" }
The kor_nli dataset had an issue with the csv reader that was not able to parse the lines correctly. Some lines were merged together for some reason. I fixed that by iterating through the lines directly instead of using a csv reader. I also changed the feature names to match the other NLI datasets (i.e. use "premise", "hypothesis", "label" features) Fix #821
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/855/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/855/timeline
null
null
true
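A small sketch for checking the renamed features after the fix in #855 above; the `multi_nli` configuration name is an assumption, any available kor_nli config would do:

```python
from datasets import load_dataset

# After the fix the features should match the other NLI datasets.
kor_nli = load_dataset("kor_nli", "multi_nli", split="train")
print(kor_nli.features)  # expected: premise, hypothesis, label
print(kor_nli[0])
```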
https://api.github.com/repos/huggingface/datasets/issues/854
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/854/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/854/comments
https://api.github.com/repos/huggingface/datasets/issues/854/events
https://github.com/huggingface/datasets/issues/854
743,675,376
MDU6SXNzdWU3NDM2NzUzNzY=
854
wmt16 does not download
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[]
2020-11-16T09:31:51Z
2022-10-05T12:27:42Z
2022-10-05T12:27:42Z
CONTRIBUTOR
null
null
null
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/854/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/854/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/853
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/853/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/853/comments
https://api.github.com/repos/huggingface/datasets/issues/853/events
https://github.com/huggingface/datasets/issues/853
743,426,583
MDU6SXNzdWU3NDM0MjY1ODM=
853
concatenate_datasets support axis=0 or 1 ?
{ "avatar_url": "https://avatars.githubusercontent.com/u/12437751?v=4", "events_url": "https://api.github.com/users/renqingcolin/events{/privacy}", "followers_url": "https://api.github.com/users/renqingcolin/followers", "following_url": "https://api.github.com/users/renqingcolin/following{/other_user}", "gists_url": "https://api.github.com/users/renqingcolin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/renqingcolin", "id": 12437751, "login": "renqingcolin", "node_id": "MDQ6VXNlcjEyNDM3NzUx", "organizations_url": "https://api.github.com/users/renqingcolin/orgs", "received_events_url": "https://api.github.com/users/renqingcolin/received_events", "repos_url": "https://api.github.com/users/renqingcolin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/renqingcolin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/renqingcolin/subscriptions", "type": "User", "url": "https://api.github.com/users/renqingcolin" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "008672", "default": true, "description": "Extra attention is needed", "id": 1935892884, "name": "help wanted", "node_id": "MDU6TGFiZWwxOTM1ODkyODg0", "url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted" }, { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2020-11-16T02:46:23Z
2021-04-19T16:07:18Z
2021-04-19T16:07:18Z
NONE
null
null
null
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/853/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/853/timeline
null
completed
true
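A sketch of the behaviour requested in #853 above; note that `axis=1` support in `concatenate_datasets` was not available at the time of the issue, so this assumes a later version of `datasets`:

```python
from datasets import Dataset, concatenate_datasets

ds_a = Dataset.from_dict({"text": ["a", "b"]})
ds_b = Dataset.from_dict({"text": ["c", "d"]})
more_rows = concatenate_datasets([ds_a, ds_b])  # axis=0 (default): stack rows

ds_labels = Dataset.from_dict({"label": [0, 1]})
more_columns = concatenate_datasets([ds_a, ds_labels], axis=1)  # axis=1: add columns

print(more_rows.num_rows)         # 4
print(more_columns.column_names)  # ['text', 'label']
```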
https://api.github.com/repos/huggingface/datasets/issues/852
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/852/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/852/comments
https://api.github.com/repos/huggingface/datasets/issues/852/events
https://github.com/huggingface/datasets/issues/852
743,396,240
MDU6SXNzdWU3NDMzOTYyNDA=
852
wmt cannot be downloaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-16T01:04:41Z
2020-11-16T09:31:58Z
2020-11-16T09:31:58Z
CONTRIBUTOR
null
null
null
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/852/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/852/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/850
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/850/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/850/comments
https://api.github.com/repos/huggingface/datasets/issues/850/events
https://github.com/huggingface/datasets/pull/850
742,369,419
MDExOlB1bGxSZXF1ZXN0NTIwNTE0MDY3
850
Create ClassLabel for labelling tasks datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
[]
2020-11-13T11:07:22Z
2020-11-16T10:32:05Z
2020-11-16T10:31:58Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/850.diff", "html_url": "https://github.com/huggingface/datasets/pull/850", "merged_at": "2020-11-16T10:31:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/850.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/850" }
This PR adds a specific `ClassLabel` for datasets about labelling tasks such as POS, NER or chunking.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/850/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/850/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/849
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/849/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/849/comments
https://api.github.com/repos/huggingface/datasets/issues/849/events
https://github.com/huggingface/datasets/issues/849
742,263,333
MDU6SXNzdWU3NDIyNjMzMzM=
849
Load amazon dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
[]
closed
false
null
[]
null
[]
2020-11-13T08:34:24Z
2020-11-17T07:22:59Z
2020-11-17T07:22:59Z
CONTRIBUTOR
null
null
null
Hi, I was going through the amazon_us_reviews dataset and found that the example API usage given on the website is different from the API usage needed to actually load the dataset. E.g. the API usage shown on the [website](https://huggingface.co/datasets/amazon_us_reviews): ``` from datasets import load_dataset dataset = load_dataset("amazon_us_reviews") ``` What I had to use instead (the error generated does point me in the right direction though): ``` from datasets import load_dataset dataset = load_dataset("amazon_us_reviews", 'Books_v1_00') ``` Also, there is an issue with formatting: the bullet list in the description is not rendered with new lines. Can I work on it?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/849/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/849/timeline
null
completed
true
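A sketch related to #849 above showing how to discover the required configuration name before loading; it assumes the dataset is still available and that the installed `datasets` version exposes `get_dataset_config_names`:

```python
from datasets import get_dataset_config_names, load_dataset

# List the configuration names first, then pick one explicitly.
configs = get_dataset_config_names("amazon_us_reviews")
print(configs[:5])
dataset = load_dataset("amazon_us_reviews", configs[0])
```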
https://api.github.com/repos/huggingface/datasets/issues/848
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/848/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/848/comments
https://api.github.com/repos/huggingface/datasets/issues/848/events
https://github.com/huggingface/datasets/issues/848
742,240,942
MDU6SXNzdWU3NDIyNDA5NDI=
848
Error when concatenate_datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "events_url": "https://api.github.com/users/shexuan/events{/privacy}", "followers_url": "https://api.github.com/users/shexuan/followers", "following_url": "https://api.github.com/users/shexuan/following{/other_user}", "gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shexuan", "id": 25664170, "login": "shexuan", "node_id": "MDQ6VXNlcjI1NjY0MTcw", "organizations_url": "https://api.github.com/users/shexuan/orgs", "received_events_url": "https://api.github.com/users/shexuan/received_events", "repos_url": "https://api.github.com/users/shexuan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shexuan/subscriptions", "type": "User", "url": "https://api.github.com/users/shexuan" }
[]
closed
false
null
[]
null
[]
2020-11-13T07:56:02Z
2020-11-13T17:40:59Z
2020-11-13T15:55:10Z
NONE
null
null
null
Hello, when I concatenated two datasets loaded from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported the ValueError below: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-38-74fa525512ca> in <module> ----> 1 train_dataset = concatenate_datasets([trn_dataset, test_dataset]) /opt/miniconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py in concatenate_datasets(dsets, info, split) 2547 "However datasets' indices {} come from memory and datasets' indices {} come from disk.".format( 2548 [i for i in range(len(dsets)) if indices_mappings_in_memory[i]], -> 2549 [i for i in range(len(dsets)) if not indices_mappings_in_memory[i]], 2550 ) 2551 ) ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk. ``` But it's curious because both of my datasets were loaded from disk, so I checked the source code in `arrow_dataset.py` around this error: ``` trn_dataset._data_files # output [{'filename': 'data/train_dataset/csv-train.arrow', 'skip': 0, 'take': 593264}] test_dataset._data_files # output [{'filename': 'data/test_dataset/csv-test.arrow', 'skip': 0, 'take': 424383}] print([not dset._data_files for dset in [trn_dataset, test_dataset]]) # [False, False] # And I tested the code the same as arrow_dataset, but nothing happened dsets = [trn_dataset, test_dataset] dsets_in_memory = [not dset._data_files for dset in dsets] if any(dset_in_memory != dsets_in_memory[0] for dset_in_memory in dsets_in_memory): raise ValueError( "Datasets should ALL come from memory, or should ALL come from disk.\n" "However datasets {} come from memory and datasets {} come from disk.".format( [i for i in range(len(dsets)) if dsets_in_memory[i]], [i for i in range(len(dsets)) if not dsets_in_memory[i]], ) ) ``` Any suggestions would be greatly appreciated! Thanks!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/848/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/848/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/847
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/847/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/847/comments
https://api.github.com/repos/huggingface/datasets/issues/847/events
https://github.com/huggingface/datasets/issues/847
742,179,495
MDU6SXNzdWU3NDIxNzk0OTU=
847
multiprocessing in dataset map "can only test a child process"
{ "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/timothyjlaurent", "id": 2000204, "login": "timothyjlaurent", "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "site_admin": false, "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "type": "User", "url": "https://api.github.com/users/timothyjlaurent" }
[]
closed
false
null
[]
null
[]
2020-11-13T06:01:04Z
2022-10-05T12:22:51Z
2022-10-05T12:22:51Z
NONE
null
null
null
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ```
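The failing frames are inside wandb's console redirection, which the tqdm progress bar ends up calling from the spawned worker processes. A workaround sketch, assuming the `WANDB_CONSOLE` environment variable is honored by the installed wandb version (otherwise, simply running the multiprocessed `map` before `wandb.init()` avoids the same code path); `tokenizer_fn` and `text_dataset` are the objects defined in the snippet above:
```python
import os

# Assumption: setting WANDB_CONSOLE=off before wandb.init() disables wandb's console
# redirection, so tqdm writes from the worker processes no longer go through its hooks.
os.environ["WANDB_CONSOLE"] = "off"

ds_tokenized = text_dataset.map(
    tokenizer_fn, batched=True, num_proc=6, remove_columns=["text"]
)
```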
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/847/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/847/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/846
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/846/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/846/comments
https://api.github.com/repos/huggingface/datasets/issues/846/events
https://github.com/huggingface/datasets/issues/846
741,885,174
MDU6SXNzdWU3NDE4ODUxNzQ=
846
Add HoVer multi-hop fact verification dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-12T19:55:46Z
2020-12-10T21:47:33Z
2020-12-10T21:47:33Z
MEMBER
null
null
null
## Adding a Dataset - **Name:** HoVer - **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples - **Paper:** https://arxiv.org/abs/2011.03088 - **Data:** https://hover-nlp.github.io/ - **Motivation:** There are still few multi-hop information extraction benchmarks (HotpotQA, which this dataset was based on, notwithstanding) Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/846/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/846/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/845
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/845/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/845/comments
https://api.github.com/repos/huggingface/datasets/issues/845/events
https://github.com/huggingface/datasets/pull/845
741,841,350
MDExOlB1bGxSZXF1ZXN0NTIwMDg1NDMy
845
amazon description fields as bullets
{ "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joeddav", "id": 9353833, "login": "joeddav", "node_id": "MDQ6VXNlcjkzNTM4MzM=", "organizations_url": "https://api.github.com/users/joeddav/orgs", "received_events_url": "https://api.github.com/users/joeddav/received_events", "repos_url": "https://api.github.com/users/joeddav/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "type": "User", "url": "https://api.github.com/users/joeddav" }
[]
closed
false
null
[]
null
[]
2020-11-12T18:50:41Z
2020-11-12T18:50:54Z
2020-11-12T18:50:54Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/845.diff", "html_url": "https://github.com/huggingface/datasets/pull/845", "merged_at": "2020-11-12T18:50:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/845.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/845" }
One more minor formatting change to the Amazon Reviews description (in addition to #844). Just reformatting the fields to display as a bulleted list in markdown.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/845/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/845/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/844
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/844/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/844/comments
https://api.github.com/repos/huggingface/datasets/issues/844/events
https://github.com/huggingface/datasets/pull/844
741,835,661
MDExOlB1bGxSZXF1ZXN0NTIwMDgwNzM5
844
add newlines to amazon desc
{ "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joeddav", "id": 9353833, "login": "joeddav", "node_id": "MDQ6VXNlcjkzNTM4MzM=", "organizations_url": "https://api.github.com/users/joeddav/orgs", "received_events_url": "https://api.github.com/users/joeddav/received_events", "repos_url": "https://api.github.com/users/joeddav/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "type": "User", "url": "https://api.github.com/users/joeddav" }
[]
closed
false
null
[]
null
[]
2020-11-12T18:41:20Z
2020-11-12T18:42:25Z
2020-11-12T18:42:21Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/844.diff", "html_url": "https://github.com/huggingface/datasets/pull/844", "merged_at": "2020-11-12T18:42:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/844.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/844" }
Just a quick formatting fix to hopefully make it render nicer on Viewer
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/844/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/844/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/843
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/843/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/843/comments
https://api.github.com/repos/huggingface/datasets/issues/843/events
https://github.com/huggingface/datasets/issues/843
741,531,121
MDU6SXNzdWU3NDE1MzExMjE=
843
use_custom_baseline still produces errors for bertscore
{ "avatar_url": "https://avatars.githubusercontent.com/u/37921244?v=4", "events_url": "https://api.github.com/users/penatbater/events{/privacy}", "followers_url": "https://api.github.com/users/penatbater/followers", "following_url": "https://api.github.com/users/penatbater/following{/other_user}", "gists_url": "https://api.github.com/users/penatbater/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/penatbater", "id": 37921244, "login": "penatbater", "node_id": "MDQ6VXNlcjM3OTIxMjQ0", "organizations_url": "https://api.github.com/users/penatbater/orgs", "received_events_url": "https://api.github.com/users/penatbater/received_events", "repos_url": "https://api.github.com/users/penatbater/repos", "site_admin": false, "starred_url": "https://api.github.com/users/penatbater/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/penatbater/subscriptions", "type": "User", "url": "https://api.github.com/users/penatbater" }
[ { "color": "25b21e", "default": false, "description": "A bug in a metric script", "id": 2067393914, "name": "metric bug", "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug" } ]
closed
false
null
[]
null
[]
2020-11-12T11:44:32Z
2021-08-31T10:06:44Z
2021-02-09T14:21:48Z
NONE
null
null
null
`metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/stephen_chan/.cache/huggingface/modules/datasets_modules/metrics/bertscore/361e597a01a41d6cf95d94bbfb01dea16261687abc0c6c74cc9930f80488f363/bertscore.py", line 108, in _compute hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline) TypeError: get_hash() missing 1 required positional argument: 'use_custom_baseline'` Adding 'use_custom_baseline = False' as an argument produces this error `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) TypeError: _compute() got an unexpected keyword argument 'use_custom_baseline'` This is on Ubuntu 18.04, Python 3.6.9, datasets version 1.1.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/843/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/843/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/842
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/842/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/842/comments
https://api.github.com/repos/huggingface/datasets/issues/842/events
https://github.com/huggingface/datasets/issues/842
741,208,428
MDU6SXNzdWU3NDEyMDg0Mjg=
842
How to enable `.map()` pre-processing pipelines to support multi-node parallelism?
{ "avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4", "events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}", "followers_url": "https://api.github.com/users/shangw-nvidia/followers", "following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}", "gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shangw-nvidia", "id": 66387198, "login": "shangw-nvidia", "node_id": "MDQ6VXNlcjY2Mzg3MTk4", "organizations_url": "https://api.github.com/users/shangw-nvidia/orgs", "received_events_url": "https://api.github.com/users/shangw-nvidia/received_events", "repos_url": "https://api.github.com/users/shangw-nvidia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions", "type": "User", "url": "https://api.github.com/users/shangw-nvidia" }
[]
open
false
null
[]
null
[]
2020-11-12T02:04:38Z
2022-10-12T16:10:51Z
null
NONE
null
null
null
Hi, Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training (since more than one node would be available), I'm wondering if it's possible to extend the parallel processing across nodes, instead of only one node running the `.map()` while the other nodes wait for it to finish? Thanks!
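A manual sketch of what multi-node preprocessing could look like with the existing API, assuming each node knows its rank and the world size (the `RANK`/`WORLD_SIZE` variables and the preprocessing function below are placeholders). Each node maps only its own shard, and the processed shards would still have to be recombined or interleaved afterwards:
```python
import os
from datasets import load_dataset

def preprocess_fn(batch):
    # Placeholder preprocessing; replace with the real tokenization/cleaning step.
    return batch

rank = int(os.environ.get("RANK", "0"))               # this node's index
world_size = int(os.environ.get("WORLD_SIZE", "1"))   # total number of nodes

dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

# Each node processes a 1/world_size slice of the dataset in parallel.
shard = dataset.shard(num_shards=world_size, index=rank)
shard = shard.map(preprocess_fn, batched=True, num_proc=6)
```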
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/842/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/842/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/841/comments
https://api.github.com/repos/huggingface/datasets/issues/841/events
https://github.com/huggingface/datasets/issues/841
740,737,448
MDU6SXNzdWU3NDA3Mzc0NDg=
841
Can not reuse datasets already downloaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4", "events_url": "https://api.github.com/users/jc-hou/events{/privacy}", "followers_url": "https://api.github.com/users/jc-hou/followers", "following_url": "https://api.github.com/users/jc-hou/following{/other_user}", "gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jc-hou", "id": 30210529, "login": "jc-hou", "node_id": "MDQ6VXNlcjMwMjEwNTI5", "organizations_url": "https://api.github.com/users/jc-hou/orgs", "received_events_url": "https://api.github.com/users/jc-hou/received_events", "repos_url": "https://api.github.com/users/jc-hou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions", "type": "User", "url": "https://api.github.com/users/jc-hou" }
[]
closed
false
null
[]
null
[]
2020-11-11T12:42:15Z
2020-11-11T18:17:16Z
2020-11-11T18:17:16Z
NONE
null
null
null
Hello, I need to connect to a frontal node (with http proxy, no gpu) before connecting to a gpu node (but no http proxy, so can not use wget so on). I successfully downloaded and reuse the wikipedia datasets in a frontal node. When I connect to the gpu node, I supposed to use the downloaded datasets from cache, but failed and end with time out error. On frontal node: ``` >>> from datasets import load_dataset >>> dataset = load_dataset('wikipedia', '20200501.en') Reusing dataset wikipedia (/linkhome/rech/genini01/uua34ms/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/f92599dfccab29832c442b82870fa8f6983e5b4ebbf5e6e2dcbe894e325339cd) /linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.) return torch._C._cuda_getDeviceCount() > 0 ``` On gpu node: ``` >>> from datasets import load_dataset >>> dataset = load_dataset('wikipedia', '20200501.en') Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 160, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/connection.py", line 84, in create_connection raise err File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/connection.py", line 74, in create_connection sock.connect(sa) TimeoutError: [Errno 110] Connection timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen chunked=chunked, File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 381, in _make_request self._validate_conn(conn) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn conn.connect() File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 309, in connect conn = self._new_conn() File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 172, in _new_conn self, "Failed to establish a new connection: %s" % e urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 727, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File 
"/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/retry.py", line 446, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/load.py", line 590, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/load.py", line 264, in prepare_module head_hf_s3(path, filename=name, dataset=dataset) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3 return requests.head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset)) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/api.py", line 104, in head return request('head', url, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/sessions.py", line 530, in request resp = self.send(prep, **send_kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/sessions.py", line 643, in send r = adapter.send(request, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/adapters.py", line 516, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',)) ``` Any advice?Thanks!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/841/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/841/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/840
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/840/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/840/comments
https://api.github.com/repos/huggingface/datasets/issues/840/events
https://github.com/huggingface/datasets/pull/840
740,632,771
MDExOlB1bGxSZXF1ZXN0NTE5MDg2NDUw
840
Update squad_v2.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/38747614?v=4", "events_url": "https://api.github.com/users/Javier-Jimenez99/events{/privacy}", "followers_url": "https://api.github.com/users/Javier-Jimenez99/followers", "following_url": "https://api.github.com/users/Javier-Jimenez99/following{/other_user}", "gists_url": "https://api.github.com/users/Javier-Jimenez99/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Javier-Jimenez99", "id": 38747614, "login": "Javier-Jimenez99", "node_id": "MDQ6VXNlcjM4NzQ3NjE0", "organizations_url": "https://api.github.com/users/Javier-Jimenez99/orgs", "received_events_url": "https://api.github.com/users/Javier-Jimenez99/received_events", "repos_url": "https://api.github.com/users/Javier-Jimenez99/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Javier-Jimenez99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Javier-Jimenez99/subscriptions", "type": "User", "url": "https://api.github.com/users/Javier-Jimenez99" }
[]
closed
false
null
[]
null
[]
2020-11-11T09:58:41Z
2020-11-11T15:29:34Z
2020-11-11T15:26:35Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/840.diff", "html_url": "https://github.com/huggingface/datasets/pull/840", "merged_at": "2020-11-11T15:26:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/840.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/840" }
Change lines 100 and 102 to prevent overwriting the ```predictions``` variable.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/840/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/840/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/839/comments
https://api.github.com/repos/huggingface/datasets/issues/839/events
https://github.com/huggingface/datasets/issues/839
740,355,270
MDU6SXNzdWU3NDAzNTUyNzA=
839
XSum dataset missing spaces between sentences
{ "avatar_url": "https://avatars.githubusercontent.com/u/10007282?v=4", "events_url": "https://api.github.com/users/loganlebanoff/events{/privacy}", "followers_url": "https://api.github.com/users/loganlebanoff/followers", "following_url": "https://api.github.com/users/loganlebanoff/following{/other_user}", "gists_url": "https://api.github.com/users/loganlebanoff/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/loganlebanoff", "id": 10007282, "login": "loganlebanoff", "node_id": "MDQ6VXNlcjEwMDA3Mjgy", "organizations_url": "https://api.github.com/users/loganlebanoff/orgs", "received_events_url": "https://api.github.com/users/loganlebanoff/received_events", "repos_url": "https://api.github.com/users/loganlebanoff/repos", "site_admin": false, "starred_url": "https://api.github.com/users/loganlebanoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loganlebanoff/subscriptions", "type": "User", "url": "https://api.github.com/users/loganlebanoff" }
[]
open
false
null
[]
null
[]
2020-11-11T00:34:43Z
2020-11-11T00:34:43Z
null
NONE
null
null
null
I noticed that the XSum dataset has no space between sentences. This could lead to worse results for anyone training or testing on it. Here's an example (0th entry in the test set): `The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like this morning 'Oh I think you're nominated'", said Dappy."And I was like 'Oh yeah, which one?' And now we've got nominated for four awards. I mean, wow!"Bandmate Fazer added: "We thought it's best of us to come down and mingle with everyone and say hello to the cameras. And now we find we've got four nominations."The band have two shots at the best song prize, getting the nod for their Tynchy Stryder collaboration Number One, and single Strong Again.Their album Uncle B will also go up against records by the likes of Beyonce and Kanye West.N-Dubz picked up the best newcomer Mobo in 2007, but female member Tulisa said they wouldn't be too disappointed if they didn't win this time around."At the end of the day we're grateful to be where we are in our careers."If it don't happen then it don't happen - live to fight another day and keep on making albums and hits for the fans."Dappy also revealed they could be performing live several times on the night.The group will be doing Number One and also a possible rendition of the War Child single, I Got Soul.The charity song is a re-working of The Killers' All These Things That I've Done and is set to feature artists like Chipmunk, Ironik and Pixie Lott.This year's Mobos will be held outside of London for the first time, in Glasgow on 30 September.N-Dubz said they were looking forward to performing for their Scottish fans and boasted about their recent shows north of the border."We just done Edinburgh the other day," said Dappy."We smashed up an N-Dubz show over there. We done Aberdeen about three or four months ago - we smashed up that show over there! Everywhere we go we smash it up!"`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/839/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/839/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/838
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/838/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/838/comments
https://api.github.com/repos/huggingface/datasets/issues/838/events
https://github.com/huggingface/datasets/pull/838
740,328,382
MDExOlB1bGxSZXF1ZXN0NTE4ODM0NTE5
838
CNN/Dailymail Dataset Card
{ "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mcmillanmajora", "id": 26722925, "login": "mcmillanmajora", "node_id": "MDQ6VXNlcjI2NzIyOTI1", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "type": "User", "url": "https://api.github.com/users/mcmillanmajora" }
[]
closed
false
null
[]
null
[]
2020-11-10T23:56:43Z
2020-11-25T21:09:51Z
2020-11-25T21:09:50Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/838.diff", "html_url": "https://github.com/huggingface/datasets/pull/838", "merged_at": "2020-11-25T21:09:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/838.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/838" }
Link to the card page: https://github.com/mcmillanmajora/datasets/tree/cnn_dailymail_card/datasets/cnn_dailymail One of the questions this dataset brings up is how we want to handle versioning of the cards to mirror versions of the dataset. The different versions of this dataset are used for different tasks (which may not be reflected in the versions that we currently have in the repo?), but it's only the structure that's changing rather than the content in this particular case, at least between versions 2.0.0 and 3.0.0.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/838/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/838/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/837/comments
https://api.github.com/repos/huggingface/datasets/issues/837/events
https://github.com/huggingface/datasets/pull/837
740,250,215
MDExOlB1bGxSZXF1ZXN0NTE4NzcwNDM5
837
AlloCinΓ© dataset card
{ "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mcmillanmajora", "id": 26722925, "login": "mcmillanmajora", "node_id": "MDQ6VXNlcjI2NzIyOTI1", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "type": "User", "url": "https://api.github.com/users/mcmillanmajora" }
[]
closed
false
null
[]
null
[]
2020-11-10T21:19:53Z
2020-11-25T21:56:27Z
2020-11-25T21:56:27Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/837.diff", "html_url": "https://github.com/huggingface/datasets/pull/837", "merged_at": "2020-11-25T21:56:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/837.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/837" }
Link to the card page: https://github.com/mcmillanmajora/datasets/blob/allocine_card/datasets/allocine/README.md There wasn't as much information available for this dataset, so I'm wondering what's the best way to address open questions about the dataset. For example, where did the list of films that the dataset creator used come from? I'm also wondering how best to go about talking about limitations when so little is known about the data.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/837/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/837/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/836
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/836/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/836/comments
https://api.github.com/repos/huggingface/datasets/issues/836/events
https://github.com/huggingface/datasets/issues/836
740,187,613
MDU6SXNzdWU3NDAxODc2MTM=
836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
{ "avatar_url": "https://avatars.githubusercontent.com/u/8919490?v=4", "events_url": "https://api.github.com/users/randubin/events{/privacy}", "followers_url": "https://api.github.com/users/randubin/followers", "following_url": "https://api.github.com/users/randubin/following{/other_user}", "gists_url": "https://api.github.com/users/randubin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/randubin", "id": 8919490, "login": "randubin", "node_id": "MDQ6VXNlcjg5MTk0OTA=", "organizations_url": "https://api.github.com/users/randubin/orgs", "received_events_url": "https://api.github.com/users/randubin/received_events", "repos_url": "https://api.github.com/users/randubin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/randubin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/randubin/subscriptions", "type": "User", "url": "https://api.github.com/users/randubin" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[]
2020-11-10T19:35:40Z
2021-11-24T16:59:19Z
2020-11-19T17:35:38Z
NONE
null
null
null
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas?
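The `ArrowInvalid: straddling object straddles two block boundaries` error comes from pyarrow's CSV reader hitting its default block size on a very large file. A possible workaround sketch, not the library's fix, that sidesteps the pyarrow CSV reader by chunking the file through pandas (the file name and chunk size below are placeholders):
```python
import pandas as pd
from datasets import Dataset, concatenate_datasets

parts = []
for chunk in pd.read_csv("big_file.csv", chunksize=500_000):
    # Each pandas chunk becomes a small in-memory Dataset.
    parts.append(Dataset.from_pandas(chunk, preserve_index=False))

dataset = concatenate_datasets(parts)
```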
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/836/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/836/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/835
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/835/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/835/comments
https://api.github.com/repos/huggingface/datasets/issues/835/events
https://github.com/huggingface/datasets/issues/835
740,102,210
MDU6SXNzdWU3NDAxMDIyMTA=
835
Wikipedia postprocessing
{ "avatar_url": "https://avatars.githubusercontent.com/u/13353204?v=4", "events_url": "https://api.github.com/users/bminixhofer/events{/privacy}", "followers_url": "https://api.github.com/users/bminixhofer/followers", "following_url": "https://api.github.com/users/bminixhofer/following{/other_user}", "gists_url": "https://api.github.com/users/bminixhofer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bminixhofer", "id": 13353204, "login": "bminixhofer", "node_id": "MDQ6VXNlcjEzMzUzMjA0", "organizations_url": "https://api.github.com/users/bminixhofer/orgs", "received_events_url": "https://api.github.com/users/bminixhofer/received_events", "repos_url": "https://api.github.com/users/bminixhofer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bminixhofer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bminixhofer/subscriptions", "type": "User", "url": "https://api.github.com/users/bminixhofer" }
[]
closed
false
null
[]
null
[]
2020-11-10T17:26:38Z
2020-11-10T18:23:20Z
2020-11-10T17:49:21Z
NONE
null
null
null
Hi, thanks for this library! Running this code: ```py import datasets wikipedia = datasets.load_dataset("wikipedia", "20200501.de") print(wikipedia['train']['text'][0]) ``` I get: ``` mini|Ricardo Flores MagΓ³n mini|Mexikanische RevolutionΓ€re, MagΓ³n in der Mitte anfΓΌhrend, gegen die Diktatur von Porfirio Diaz, Ausschnitt des GemΓ€lde β€žTierra y Libertadβ€œ von Idelfonso Carrara (?) von 1930. Ricardo Flores MagΓ³n (* 16. September 1874 in San Antonio EloxochitlΓ‘n im mexikanischen Bundesstaat Oaxaca; † 22. November 1922 im BundesgefΓ€ngnis Leavenworth im US-amerikanischen Bundesstaat Kansas) war als Journalist, Gewerkschafter und Literat ein fΓΌhrender anarchistischer Theoretiker und Aktivist, der die revolutionΓ€re mexikanische Bewegung radikal beeinflusste. MagΓ³n war GrΓΌnder der Partido Liberal Mexicano und Mitglied der Industrial Workers of the World. Politische Biografie Journalistisch und politisch kΓ€mpfte er und sein Bruder sehr kompromisslos gegen die Diktatur Porfirio Diaz. Philosophisch und politisch orientiert an radikal anarchistischen Idealen und den Erfahrungen seiner indigenen Vorfahren bei der gemeinschaftlichen Bewirtschaftung des Gemeindelandes, machte er die Forderung β€žLand und Freiheitβ€œ (Tierra y Libertad) populΓ€r. Besonders Francisco Villa und Emiliano Zapata griffen die Forderung Land und Freiheit auf. Seine Philosophie hatte großen Einfluss auf die Landarbeiter. 1904 floh er in die USA und grΓΌndete 1906 die Partido Liberal Mexicano. Im Exil lernte er u. a. Emma Goldman kennen. Er verbrachte die meiste Zeit seines Lebens in GefΓ€ngnissen und im Exil und wurde 1918 in den USA wegen β€žBehinderung der Kriegsanstrengungenβ€œ zu zwanzig Jahren GefΓ€ngnis verurteilt. Zu seinem Tod gibt es drei verschiedene Theorien. Offiziell starb er an Herzversagen. Librado Rivera, der die Leiche mit eigenen Augen gesehen hat, geht davon aus, dass MagΓ³n von einem Mitgefangenen erdrosselt wurde. Die staatstreue Gewerkschaftszeitung CROM verΓΆffentlichte 1923 einen Beitrag, nachdem MagΓ³n von einem GefΓ€ngniswΓ€rter erschlagen wurde. mini|Die BrΓΌder Ricardo (links) und Enrique Flores MagΓ³n (rechts) vor dem Los Angeles County Jail, 1917 [...] ``` so some Markup like `mini|` is still left. Should I run another parser on this text before feeding it to an ML model or is this a known imperfection of parsing Wiki markup? Apologies if this has been asked before.
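Until the parser handles these cases, a small heuristic cleanup sketch may help; the `mini|` marker and the regex below are assumptions based only on the sample above, not an exhaustive list of leftover markup:
```python
import re

def strip_image_markers(text: str) -> str:
    # Drop leftover thumbnail/image lines such as "mini|..." (German Wikipedia's "thumb|").
    cleaned = re.sub(r"^\s*mini\|.*$", "", text, flags=re.MULTILINE)
    # Collapse the blank gaps left behind.
    return re.sub(r"\n{3,}", "\n\n", cleaned).strip()

article = wikipedia["train"]["text"][0]  # `wikipedia` as loaded in the snippet above
print(strip_image_markers(article)[:500])
```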
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/835/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/835/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/834/comments
https://api.github.com/repos/huggingface/datasets/issues/834/events
https://github.com/huggingface/datasets/issues/834
740,082,890
MDU6SXNzdWU3NDAwODI4OTA=
834
[GEM] add WikiLingua cross-lingual abstractive summarization dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-10T17:00:43Z
2021-04-15T12:04:09Z
2021-04-15T12:01:38Z
MEMBER
null
null
null
## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article. - **Paper:** https://arxiv.org/pdf/2010.03093.pdf - **Data:** https://github.com/esdurmus/Wikilingua - **Motivation:** Included in the GEM shared task. Multilingual. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/834/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/834/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/833/comments
https://api.github.com/repos/huggingface/datasets/issues/833/events
https://github.com/huggingface/datasets/issues/833
740,079,692
MDU6SXNzdWU3NDAwNzk2OTI=
833
[GEM] add ASSET text simplification dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-10T16:56:30Z
2020-12-03T13:38:15Z
2020-12-03T13:38:15Z
MEMBER
null
null
null
## Adding a Dataset - **Name:** ASSET - **Description:** ASSET is a crowdsourced multi-reference corpus for assessing sentence simplification in English where each simplification was produced by executing several rewriting transformations. - **Paper:** https://www.aclweb.org/anthology/2020.acl-main.424.pdf - **Data:** https://github.com/facebookresearch/asset - **Motivation:** Included in the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/833/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/833/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/832
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/832/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/832/comments
https://api.github.com/repos/huggingface/datasets/issues/832/events
https://github.com/huggingface/datasets/issues/832
740,077,228
MDU6SXNzdWU3NDAwNzcyMjg=
832
[GEM] add WikiAuto text simplification dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-10T16:53:23Z
2020-12-03T13:38:08Z
2020-12-03T13:38:08Z
MEMBER
null
null
null
## Adding a Dataset - **Name:** WikiAuto - **Description:** Sentences in English Wikipedia and their corresponding sentences in Simple English Wikipedia that are written with simpler grammar and word choices. A lot of lexical and syntactic paraphrasing. - **Paper:** https://www.aclweb.org/anthology/2020.acl-main.709.pdf - **Data:** https://github.com/chaojiang06/wiki-auto - **Motivation:** Included in the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/832/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/832/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/831
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/831/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/831/comments
https://api.github.com/repos/huggingface/datasets/issues/831/events
https://github.com/huggingface/datasets/issues/831
740,071,697
MDU6SXNzdWU3NDAwNzE2OTc=
831
[GEM] Add WebNLG dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-10T16:46:48Z
2020-12-03T13:38:01Z
2020-12-03T13:38:01Z
MEMBER
null
null
null
## Adding a Dataset - **Name:** WebNLG - **Description:** WebNLG consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples (16,095 data inputs and 42,873 data-text pairs). The data is available in English and Russian - **Paper:** https://www.aclweb.org/anthology/P17-1017.pdf - **Data:** https://webnlg-challenge.loria.fr/download/ - **Motivation:** Included in the GEM shared task, multilingual Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/831/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/831/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/830
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/830/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/830/comments
https://api.github.com/repos/huggingface/datasets/issues/830/events
https://github.com/huggingface/datasets/issues/830
740,065,376
MDU6SXNzdWU3NDAwNjUzNzY=
830
[GEM] add ToTTo Table-to-text dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-10T16:38:34Z
2020-12-10T13:06:02Z
2020-12-10T13:06:01Z
MEMBER
null
null
null
## Adding a Dataset - **Name:** ToTTo - **Description:** ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description. - **Paper:** https://arxiv.org/abs/2004.14373 - **Data:** https://github.com/google-research-datasets/totto - **Motivation:** Included in the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/830/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/830/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/829/comments
https://api.github.com/repos/huggingface/datasets/issues/829/events
https://github.com/huggingface/datasets/issues/829
740,061,699
MDU6SXNzdWU3NDAwNjE2OTk=
829
[GEM] add Schema-Guided Dialogue
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-10T16:33:44Z
2020-12-03T13:37:50Z
2020-12-03T13:37:50Z
MEMBER
null
null
null
## Adding a Dataset - **Name:** The Schema-Guided Dialogue Dataset - **Description:** The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 domains, ranging from banks and events to media, calendar, travel, and weather. - **Paper:** https://arxiv.org/pdf/2002.01359.pdf https://arxiv.org/pdf/2004.15006.pdf - **Data:** https://github.com/google-research-datasets/dstc8-schema-guided-dialogue - **Motivation:** Included in the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/829/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/829/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/828
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/828/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/828/comments
https://api.github.com/repos/huggingface/datasets/issues/828/events
https://github.com/huggingface/datasets/pull/828
740,008,683
MDExOlB1bGxSZXF1ZXN0NTE4NTcwMjY3
828
Add writer_batch_size attribute to GeneratorBasedBuilder
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-10T15:28:19Z
2020-11-10T16:27:36Z
2020-11-10T16:27:36Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/828.diff", "html_url": "https://github.com/huggingface/datasets/pull/828", "merged_at": "2020-11-10T16:27:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/828.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/828" }
As specified in #741, one would need to specify a custom ArrowWriter batch size to avoid filling the RAM. Indeed, the default buffer size is 10,000 examples, but for multimodal datasets that contain images or videos we may want to reduce that.
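Below is a minimal sketch of how a dataset script might take advantage of such an attribute; the builder name, features, file layout, and the exact attribute name (taken from this PR's title) are illustrative assumptions, not the library's confirmed API.

```python
# Illustrative sketch only: a GeneratorBasedBuilder that lowers the writer
# batch size so large multimodal examples are flushed to Arrow more often
# instead of accumulating 10,000 examples in RAM.
# The dataset name, features and file layout are hypothetical.
import datasets


class MyImageCaptions(datasets.GeneratorBasedBuilder):
    writer_batch_size = 100  # assumed attribute name, based on the PR title

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "image_path": datasets.Value("string"),
                    "caption": datasets.Value("string"),
                }
            )
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"path": "train.tsv"}
            )
        ]

    def _generate_examples(self, path):
        with open(path, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                image_path, caption = line.rstrip("\n").split("\t")
                yield idx, {"image_path": image_path, "caption": caption}
```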
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/828/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/828/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/827/comments
https://api.github.com/repos/huggingface/datasets/issues/827/events
https://github.com/huggingface/datasets/issues/827
739,983,024
MDU6SXNzdWU3Mzk5ODMwMjQ=
827
[GEM] MultiWOZ dialogue dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-10T14:57:50Z
2022-10-05T12:31:13Z
2022-10-05T12:31:13Z
MEMBER
null
null
null
## Adding a Dataset - **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz) - **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – there are no annotations from the user side. - **Paper:** https://arxiv.org/pdf/2007.12720.pdf - **Data:** https://github.com/budzianowski/multiwoz - **Motivation:** Will likely be part of the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/827/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/827/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/826/comments
https://api.github.com/repos/huggingface/datasets/issues/826/events
https://github.com/huggingface/datasets/issues/826
739,976,716
MDU6SXNzdWU3Mzk5NzY3MTY=
826
[GEM] Add E2E dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-10T14:50:40Z
2020-12-03T13:37:57Z
2020-12-03T13:37:57Z
MEMBER
null
null
null
## Adding a Dataset - **Name:** E2E NLG dataset (for end-to-end natural language generation) - **Description:** a dataset for training end-to-end, data-driven natural language generation systems in the restaurant domain; the dataset consists of 5,751 dialogue-act Meaning Representations (structured data) and 8.1 reference free-text utterances per dialogue act on average - **Paper:** https://arxiv.org/pdf/1706.09254.pdf https://arxiv.org/abs/1901.07931 - **Data:** http://www.macs.hw.ac.uk/InteractionLab/E2E/#data - **Motivation:** This dataset will likely be included in the GEM shared task. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/826/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/826/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/825
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/825/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/825/comments
https://api.github.com/repos/huggingface/datasets/issues/825/events
https://github.com/huggingface/datasets/pull/825
739,925,960
MDExOlB1bGxSZXF1ZXN0NTE4NTAyNjgx
825
Add accuracy, precision, recall and F1 metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
[]
2020-11-10T13:50:35Z
2020-11-11T19:23:48Z
2020-11-11T19:23:43Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/825.diff", "html_url": "https://github.com/huggingface/datasets/pull/825", "merged_at": "2020-11-11T19:23:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/825.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/825" }
This PR adds several single metrics, namely: - Accuracy - Precision - Recall - F1 They all use the sklearn metrics of the same name under the hood. They allow several useful features when training a multilabel/multiclass model: - getting a macro/micro/per-label/weighted/binary/per-sample score - scoring only the selected labels (usually what we call the positive labels) and ignoring the negative ones. For example, in the case of a Named Entity Recognition task, the positive labels are (`PERSON`, `LOCATION` or `ORGANIZATION`) and the negative one is `O`.
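A short usage sketch of how these metrics could be called once merged; the metric names follow the PR description and the toy labels are made up, so treat this as an assumption rather than the final API.

```python
# Hedged usage sketch: compute accuracy and a macro-averaged F1 restricted to
# the "positive" classes, relying on the fact that the metrics wrap sklearn
# and forward keyword arguments such as `average` and `labels` to it.
from datasets import load_metric

predictions = [0, 2, 1, 2, 2]  # made-up model outputs
references = [0, 1, 1, 2, 2]   # made-up gold labels

accuracy = load_metric("accuracy").compute(
    predictions=predictions, references=references
)
f1 = load_metric("f1").compute(
    predictions=predictions,
    references=references,
    average="macro",   # macro/micro/weighted/... as in sklearn
    labels=[1, 2],     # score only the selected (positive) labels
)
print(accuracy, f1)
```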
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/825/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/825/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/824
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/824/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/824/comments
https://api.github.com/repos/huggingface/datasets/issues/824/events
https://github.com/huggingface/datasets/issues/824
739,896,526
MDU6SXNzdWU3Mzk4OTY1MjY=
824
Discussion using datasets in offline mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/77193?v=4", "events_url": "https://api.github.com/users/mandubian/events{/privacy}", "followers_url": "https://api.github.com/users/mandubian/followers", "following_url": "https://api.github.com/users/mandubian/following{/other_user}", "gists_url": "https://api.github.com/users/mandubian/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mandubian", "id": 77193, "login": "mandubian", "node_id": "MDQ6VXNlcjc3MTkz", "organizations_url": "https://api.github.com/users/mandubian/orgs", "received_events_url": "https://api.github.com/users/mandubian/received_events", "repos_url": "https://api.github.com/users/mandubian/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mandubian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mandubian/subscriptions", "type": "User", "url": "https://api.github.com/users/mandubian" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
closed
false
null
[]
null
[]
2020-11-10T13:10:51Z
2022-02-15T10:32:36Z
2022-02-15T10:32:36Z
NONE
null
null
null
`datasets.load_dataset("csv", ...)` breaks if you have no connection (there is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I created this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some points to open the discussion: - if you want to prepare your code/datasets on your machine (with an internet connection) but run it on another offline machine (without an internet connection), it won't work as is, even if you have all files locally on this machine. - AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run the same code without modification when the files are available locally. - I've also been considering the requirement of downloading Python code and executing it on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable; downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once, so that you can review it if you want and then be sure you use this one everywhere and not a version downloaded from the internet. WDYT? (thanks)
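To make the second bullet concrete, here is a small sketch of the manual workaround described above; the file paths are hypothetical and only illustrate the idea of pointing `load_dataset` at a local copy of the processing script.

```python
# Sketch of the offline workaround mentioned above (paths are hypothetical).
from datasets import load_dataset

# On a machine with internet access, this downloads the "csv" processing script:
# dataset = load_dataset("csv", data_files={"train": "my_data.csv"})

# On the offline machine, point load_dataset at a local copy of csv.py instead:
dataset = load_dataset(
    "/opt/local_datasets_scripts/csv.py",        # hypothetical local script
    data_files={"train": "/data/my_data.csv"},   # hypothetical data file
)
```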
{ "+1": 7, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 7, "url": "https://api.github.com/repos/huggingface/datasets/issues/824/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/824/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/823
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/823/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/823/comments
https://api.github.com/repos/huggingface/datasets/issues/823/events
https://github.com/huggingface/datasets/issues/823
739,815,763
MDU6SXNzdWU3Mzk4MTU3NjM=
823
how processing in batch works in datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehkarimimahabadi", "id": 73364383, "login": "rabeehkarimimahabadi", "node_id": "MDQ6VXNlcjczMzY0Mzgz", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehkarimimahabadi" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-10T11:11:17Z
2020-11-10T13:11:10Z
2020-11-10T13:11:09Z
NONE
null
null
null
Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented tokenizer: Callable = NotImplemented max_source_length: str = NotImplemented max_target_length: str = NotImplemented # TODO: should not be a task item, but cannot see other ways. tpu_num_cores: int = None # The arguments set are for all tasks and needs to be kept common. def __init__(self, config): self.max_source_length = config['max_source_length'] self.max_target_length = config['max_target_length'] self.tokenizer = config['tokenizer'] self.tpu_num_cores = config['tpu_num_cores'] def _encode(self, batch) -> Dict[str, torch.Tensor]: batch_encoding = self.tokenizer.prepare_seq2seq_batch( [x["src_texts"] for x in batch], tgt_texts=[x["tgt_texts"] for x in batch], max_length=self.max_source_length, max_target_length=self.max_target_length, padding="max_length" if self.tpu_num_cores is not None else "longest", # TPU hack return_tensors="pt" ) return batch_encoding.data def data_split(self, split): return self.split_to_data_split[split] def get_dataset(self, split, n_obs=None): split = self.data_split(split) if n_obs is not None: split = split+"[:{}]".format(n_obs) dataset = load_dataset(self.task_name, split=split) dataset = dataset.map(self.preprocessor, remove_columns=dataset.column_names) dataset = dataset.map(lambda batch: self._encode(batch), batched=True) dataset.set_format(type="torch", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label']) return dataset ``` I call it like `AutoTask.get(task, train_dataset_config).get_dataset(split="train", n_obs=data_args.n_train) ` This gives the following error, to me because the data inside the dataset = dataset.map(lambda batch: self._encode(batch), batched=True) is not processed in batch, could you tell me how I can process dataset in batch inside my function? thanks File "finetune_multitask_trainer.py", line 192, in main if training_args.do_train else None File "finetune_multitask_trainer.py", line 191, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in get_dataset dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1236, in map update_data = does_function_return_dict(test_inputs, test_indices) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1207, in does_function_return_dict function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in <lambda> dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in _encode [x["src_texts"] for x in batch], File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in <listcomp> [x["src_texts"] for x in batch], TypeError: string indices must be integers
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/823/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/823/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/822
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/822/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/822/comments
https://api.github.com/repos/huggingface/datasets/issues/822/events
https://github.com/huggingface/datasets/issues/822
739,579,314
MDU6SXNzdWU3Mzk1NzkzMTQ=
822
datasets freezes
{ "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehkarimimahabadi", "id": 73364383, "login": "rabeehkarimimahabadi", "node_id": "MDQ6VXNlcjczMzY0Mzgz", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehkarimimahabadi" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
open
false
null
[]
null
[]
2020-11-10T05:10:19Z
2020-11-12T23:23:03Z
null
NONE
null
null
null
Hi, I want to load these two datasets and convert them to Dataset format in torch, but the code freezes for me. Could you have a look please? Thanks. dataset1 = load_dataset("squad", split="train[:10]") dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question']) dataset2 = load_dataset("imdb", split="train[:10]") dataset2 = dataset2.set_format(type="torch", columns=["text", "label"]) print(len(dataset1))
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/822/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/822/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/821
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/821/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/821/comments
https://api.github.com/repos/huggingface/datasets/issues/821/events
https://github.com/huggingface/datasets/issues/821
739,506,859
MDU6SXNzdWU3Mzk1MDY4NTk=
821
`kor_nli` dataset isn't being loaded properly
{ "avatar_url": "https://avatars.githubusercontent.com/u/30492059?v=4", "events_url": "https://api.github.com/users/sackoh/events{/privacy}", "followers_url": "https://api.github.com/users/sackoh/followers", "following_url": "https://api.github.com/users/sackoh/following{/other_user}", "gists_url": "https://api.github.com/users/sackoh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sackoh", "id": 30492059, "login": "sackoh", "node_id": "MDQ6VXNlcjMwNDkyMDU5", "organizations_url": "https://api.github.com/users/sackoh/orgs", "received_events_url": "https://api.github.com/users/sackoh/received_events", "repos_url": "https://api.github.com/users/sackoh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sackoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sackoh/subscriptions", "type": "User", "url": "https://api.github.com/users/sackoh" }
[]
closed
false
null
[]
null
[]
2020-11-10T02:04:12Z
2020-11-16T13:59:12Z
2020-11-16T13:59:12Z
NONE
null
null
null
There are two issues from `kor_nli` dataset 1. csv.DictReader failed to split features by tab - Should not exist `None` value in label feature, but there it is. ```python kor_nli_train['train'].unique('gold_label') # ['neutral', 'entailment', 'contradiction', None] ``` - I found a reason why there is `None` values in label feature as following code ```python from datasets import load_dataset kor_nli_train = load_dataset('kor_nli', 'multi_nli') for idx, example in enumerate(kor_nli_train['train']): if example['gold_label'] is None: print(idx, example) break # 16835 {'gold_label': None, 'sentence1': 'κ·ΈλŠ” μ „μŸ 전에 κ°€λ²Όμš΄ λ²…μŠ€ν‚¨ 암말을 κ°€μ§€κ³  달리기 μœ„ν•΄ 우유처럼 ν•˜μ–€ μŠ€ν„°λ“œλ₯Ό λ„£μ—ˆλ‹€.\tμ „μŸ 전에 닀인쒅 μ—¬μ„±λ“€κ³Ό ν•¨κ»˜ μžˆλŠ” 백인 λ‚¨μžκ°€ μžˆμ—ˆλ‹€.\tentailment\nμŠ¬λ¦Όμ€ 재빨리 μ˜·μ„ μž…μ—ˆκ³ , μˆœκ°„μ μœΌλ‘œ λ―Έμ§€κ·Όν•œ 물을 뿌릴 수 μžˆλŠ” μ•„μΉ¨ 세탁물을 기꺼이 κ°€λ‘μ—ˆλ‹€.\tμŠ¬λ¦Όμ€ 직μž₯에 λŠ¦μ—ˆλ‹€.\tneutral\nλ‰΄μš•μ—μ„œ κ·Έ 식사λ₯Ό ν•΄λ΄€λŠ”λ°, κ±°κΈ°μ„œ μ†Œκ³ κΈ°μ˜ λ©‹μ§„ μ†Œκ³ κΈ° 뢀뢄을 μš”λ¦¬ν•˜κ³  λ°”λ² νλ‘œ λ§Œλ“  널빀지 같은 κ±Έ κ°€μ Έμ™”λŠ”λ°, 정말 λŒ€λ‹¨ν•΄.\t그듀이 κ±°κΈ°μ„œ μš”λ¦¬ν•˜λŠ” μ‡ κ³ κΈ°λŠ” μ—­κ²Ήλ‹€. κ±°κΈ°μ„œ μ ˆλŒ€ λ¨Ήμ§€ 마라.\tcontradiction\nνŒλ§€μ›μ˜ μ£½μŒμ—μ„œ λΈŒλΌμ΄μ–Έ λ°λ„€νžˆ... 크리슀 켈리\t크리슀 μΌˆλ¦¬λŠ” μ„ΈμΌμ¦ˆλ§¨μ˜ μ£½μŒμ„ μ–ΈκΈ‰ν•˜μ§€ μ•ŠλŠ”λ‹€.\tcontradiction\nκ·ΈλŸ¬λŠ” λ™μ•ˆ μš”λ¦¬μ‚¬λŠ” κ·Έλƒ₯ ν™”κ°€ 났어.\tμŠ€νŠœκ°€ λ“λŠ” λ™μ•ˆ μš”λ¦¬μ‚¬λŠ” ν™”κ°€ 났닀.\tneutral\nλ§ˆμ§€λ§‰ 둜마의 맹곡격 μ „λ‚  λ°€, 900λͺ… μ΄μƒμ˜ μœ λŒ€μΈ μˆ˜λΉ„μˆ˜λ“€μ΄ λ‘œλ§ˆμΈλ“€μ—κ²Œ 그듀을 μ‚¬λ‘œμž‘λŠ” 승리λ₯Ό μ£ΌκΈ° λ³΄λ‹€λŠ” λŒ€λŸ‰ μžμ‚΄μ„ μ €μ§ˆλ €λ‹€.\tλ‘œλ§ˆμΈλ“€μ΄ κ·Έλ“€μ˜ ν¬νšμ— μŠΉλ¦¬ν•˜λ„λ‘ 내버렀두기 λ³΄λ‹€λŠ” 900λͺ…μ˜ μœ λŒ€μΈ μˆ˜λΉ„μˆ˜λ“€μ΄ μžμ‚΄ν–ˆλ‹€.\tentailment\nμ•žμœΌλ‘œ λ°œμ‚¬ν•˜λΌ.\tλ°œμ‚¬.\tneutral\n그리고 당신은 우리 땅이 에이컀에 μžˆλ‹€λŠ” 것을 μ•Œκ³  μžˆλ‹€. 
우리 μ‚¬λžŒλ“€μ€ μ–΄λ–€ 것이 μ–Όλ§ˆλ‚˜ λ§Žμ€μ§€ μ΄ν•΄ν•˜μ§€ λͺ»ν•  것이닀.\tλͺ¨λ“  μ‚¬λžŒλ“€μ€ 우리의 μΈ‘μ • μ‹œμŠ€ν…œμ΄ μ–΄λ–»κ²Œ μž‘λ™ν•˜λŠ”μ§€ μ•Œκ³  μ΄ν•΄ν•©λ‹ˆλ‹€.\tcontradiction\n주미게슀\tJumiygesλŠ” λ„μ‹œμ˜ 이름이닀.\tneutral\nμ‚¬λžŒμ€ 자기 민쑱을 λŒλ΄μ•Ό ν•œλ‹€...\tμ‚¬λžŒμ€ 쑰ꡭ에 곡감해야 ν•œλ‹€.\tentailment\nλ˜ν•œ PDD 63은 정뢀와 업계가 컴퓨터 기반 곡격에 λŒ€ν•΄ κ²½κ³ ν•˜κ³  λ°©μ–΄ν•  μ€€λΉ„λ₯Ό 더 μž˜ν•  수 μžˆλ„λ‘ μ‹œμŠ€ν…œ μ·¨μ•½μ„±, μœ„ν˜‘, μΉ¨μž… 및 이상에 λŒ€ν•œ 정보λ₯Ό κ³΅μœ ν•˜λŠ” λ©”μ»€λ‹ˆμ¦˜μ„ μˆ˜λ¦½ν•˜λŠ” 것이 μ€‘μš”ν•˜λ‹€λŠ” 것을 μΈμ‹ν–ˆμŠ΅λ‹ˆλ‹€.\t정보 전솑 ν”„λ‘œν† μ½œμ„ λ§Œλ“œλŠ” 것은 μ€‘μš”ν•˜λ‹€.\tentailment\n카페 링 ν”Όμ•„μž 델라 λ ˆν“ŒλΈ”λ¦¬μΉ΄ λ°”λ‘œ 남μͺ½μ—λŠ” ν”Όλ Œμ²΄κ°€ μ•Œλ €μ§„ 짚 μ œν’ˆ λ•Œλ¬Έμ— ν•œλ•Œ 슀트둜 λ§ˆμΌ“μ΄λΌκ³  뢈렸던 16μ„ΈκΈ° λ‘œμ§€μ•„μΈ λ©”λ₯΄μΉ΄ν†  λˆ„μ˜€λ³΄(Mercato Nuovo)κ°€ μžˆλ‹€.\tν”Όμ•„μž 델라 λ ˆν“ŒλΈ”λ¦¬μΉ΄μ—λŠ” μΉ΄νŽ˜κ°€ 많이 μžˆλ‹€.\tentailment\nμš°λ¦¬κ°€ μ—¬κΈ° μžˆλŠ” ν•œ 트린판이 뭘 μ£Όμ› λŠ”μ§€ μ‚΄νŽ΄λ΄μ•Όκ² μ–΄\tμš°λ¦¬λŠ” 트린판이 무엇을 μ£Όμ› λŠ”μ§€ λ³΄λŠ” 데 μ‹œκ°„μ„ λ‚­λΉ„ν•˜μ§€ μ•Šμ„ 것이닀.\tcontradiction\nκ·ΈλŸ¬λ‚˜ 켈트쑱의 문화적 κΈ°λ°˜μ„ κ°€μ§„ μ•„μΌλžœλ“œ κ΅νšŒλŠ” 유럽의 μ‹ ν₯ 기독ꡐ μ„Έκ³„μ™€λŠ” λ‹€λ₯΄κ²Œ λ°œμ „ν–ˆκ³  κ²°κ΅­ λ‘œλ§ˆμ™€ μ€‘μ•™μ§‘κΆŒμ  ν–‰μ •μœΌλ‘œ λŒ€μ²΄λ˜μ—ˆλ‹€.\tμ•„μΌλžœλ“œ κ΅νšŒμ—λŠ” 켈트쑱의 κΈ°μ§€κ°€ μžˆμ—ˆλ‹€.\tentailment\nκΈ€μŽ„, λ„Œ μ„ νƒμ˜ μ—¬μ§€κ°€ μ—†μ–΄\tκΈ€μŽ„, λ„ˆμ—κ² λ§Žμ€ μ„ νƒκΆŒμ΄ μžˆμ–΄.\tcontradiction\n사싀, 곡식적인 보μž₯은 μ—†λ‹€.\tλ‚΄κ°€ μ‚° 물건에 λŒ€ν•œ 보증이 μ—†μ—ˆλ‹€.\tneutral\n덜 ν™œκΈ°μ°¨κΈ΄ ν•˜μ§€λ§Œ, μ•ˆμ‹œμ™€ λ₯΄ λΆ€λ₯΄μ ―의 μ‚¬λž‘μŠ€λŸ¬μš΄ ν˜Έμˆ˜μ—μ„œλ„ 삢은 λ˜‘κ°™μ΄ μƒμΎŒν•˜λ‹€.\tμ•ˆμ‹œμ™€ λ₯΄ λΆ€λ₯΄κ²Ÿμ—μ„œλŠ” ν˜Έμˆ˜μ—μ„œμ˜ ν™œλ™μ΄ μ„œλ‘λ₯΄κ³  λ°”μœ λΆ„μœ„κΈ°λ₯Ό μ—°μΆœν•œλ‹€.\tcontradiction\n그의 μ—¬ν–‰ μ†Œμ‹μ΄ 이미 νΌμ‘Œλ‹€λ©΄ 곡격 μ†Œμ‹λ„ νΌμ‘Œμ„ ν…Œμ§€λ§Œ λ§ˆμ„μ—μ„œλŠ” μ „ν˜€ κ³΅ν™©μ˜ κΈ°λ―Έκ°€ 보이지 μ•Šμ•˜λ‹€.\tκ·ΈλŠ” μ™œ λ§ˆμ„μ΄ λ‹Ήν™©ν•˜μ§€ μ•Šμ•˜λŠ”μ§€ μ•Œ 수 μ—†μ—ˆλ‹€.\tneutral\nκ³Όκ±°μ—λŠ” 죽음의 μœ„ν˜‘μ΄ ν† μ§€μ˜ 판맀λ₯Ό λ§‰λŠ” 데 거의 도움이 λ˜μ§€ μ•Šμ•˜λ‹€.\tν† μ§€ νŒλ§€λŠ” μ–΄λ– ν•œ μœ„ν˜‘λ„ κ΅ν™˜ν•˜μ§€ μ•Šκ³  이루어진닀.\tcontradiction\nμ–΄λŠ μ‹œμ μ— 이λ₯΄λŸ¬ λ‚˜λŠ” μ§€κΈˆ λ‹€κ°€μ˜€λŠ” μƒˆλ‘œμš΄ 것듀과 λ‚˜μ˜€λŠ” λ§Žμ€ μƒˆλ‘œμš΄ 것듀이 λ‚΄κ°€ λŠ™μ–΄κ°€κ³  μžˆλ‹€κ³  λ§ν•˜λŠ” μ‹œλŒ€λ‘œ μ ‘μ–΄λ“€κ³  μžˆλ‹€.\tλ‚˜λŠ” μ—¬μ „νžˆ λ‚΄κ°€ λ³΄λŠ” λͺ¨λ“  μƒˆλ‘œμš΄ 것을 μ‚¬λž‘ν•œλ‹€.\tcontradiction\nλ‰΄μŠ€μœ„ν¬λŠ” λ¬Όλ¦¬ν•™μžλ“€μ΄ κ²½κΈ°μž₯ ν–‰μ‚¬μ—μ„œ κ³ μ†λ„λ‘œμ˜ μžλ™μ°¨ ꡐ톡과 λ³΄ν–‰μž ꡐ톡을 κ°œμ„ ν•˜κΈ° μœ„ν•΄ μƒˆλ–Όμ˜ μ›€μ§μž„μ„ μ—°κ΅¬ν•˜κ³  μžˆλ‹€κ³  λ§ν•œλ‹€.\tκ³ μ†λ„λ‘œμ˜ μžλ™μ°¨ ꡐ톡 흐름을 κ°œμ„ ν•˜λŠ” 것은 λ¬Όλ¦¬ν•™μžλ“€μ΄ μƒˆλ–Όλ₯Ό μ—°κ΅¬ν•˜λŠ” 이유 쀑 ν•˜λ‚˜μ΄λ‹€.\tentailment\nμ–Όλ§ˆλ‚˜ λ‹€λ₯Έκ°€? 
κ·ΈλŠ” μž μ‹œ 말을 λ©ˆμΆ”μ—ˆλ‹€κ°€ 말을 μ΄μ—ˆλ‹€.\tκ·ΈλŠ” κ·Έ μ†Œλ…€κ°€ 어디에 μžˆλŠ”μ§€ μ•Œκ³  μ‹Άμ—ˆλ‹€.\tentailment\nκΈ€μŽ„, κ·Έμ—κ²Œ λ„ˆλ¬΄ λ§Žμ€ 것을 μ£Όμ§€λ§ˆ.\tκ·ΈλŠ” 훨씬 더 λ§Žμ€ 것을 μš”κ΅¬ν•  것이닀.\tneutral\n아무리 그의 μ°½μž‘λ¬Όμ΄ μ™„λ²½ν•΄ 보인닀고 해도, 그듀을 λ―ΏλŠ” 것은 μ•„λ§ˆλ„ 쒋은 생각이 아닐 것이닀.\'\tλ„μžκΈ°λ₯Ό 잘 λ§Œλ“ λ‹€κ³  ν•΄μ„œ λˆ„κ΅°κ°€λ₯Ό λ―ΏλŠ” 것은 μ•„λ§ˆ μ’‹μ§€ μ•Šμ„ 것이닀.\tneutral\nλ²„μŠ€ν‹€λ§ κ·Έλž€ λΉ„μ•„(Bustling Gran Via)λŠ” ν˜Έν…”, 상점, κ·Ήμž₯, λ‚˜μ΄νŠΈν΄λŸ½, 카페 등이 μ–΄μš°λŸ¬μ Έ μ‚°μ±…κ³Ό μ°½κ°€λ₯Ό λ³Ό 수 μžˆλ‹€.\tGran ViaλŠ” ν˜Έν…”, 상점, κ·Ήμž₯, λ‚˜μ΄νŠΈν΄λŸ½, 카페의 λ²ˆν™”ν•œ 쑰합이닀.\tentailment\nμ •λΆ€ μΈμ‡„μ†Œ\tκ·Έ 사무싀은 μ›Œμ‹±ν„΄μ— μœ„μΉ˜ν•΄ μžˆλ‹€.\tneutral\nμ‹€μ œ λ¬Έν™” μ „μŸμ΄ μ–΄λ”” μžˆλŠ”μ§€ μ•Œκ³  μ‹Άλ‹€λ©΄ 학원을 μžŠμ–΄λ²„λ¦¬κ³  μ‹€λ¦¬μ½˜ 밸리와 λ ˆλ“œλͺ¬λ“œλ₯Ό 생각해 보라.\tμ‹€μ œ λ¬Έν™” μ „μŸμ€ λ ˆλ“œλͺ¬λ“œμ—μ„œ μΌμ–΄λ‚œλ‹€.\tentailment\n그리고 νŽ˜λ‹ˆμ‹€λ¦°μ„ μ£Όμ§€ μ•ŠκΈ° μœ„ν•΄ μΉ¨λŒ€ μœ„μ— μ˜¬λ €λ†¨μ–΄\tκ·Έλ…€μ˜ λ°©μ—λŠ” νŽ˜λ‹ˆμ‹€λ¦°μ΄ μ—†λ‹€λŠ” μ§•ν›„κ°€ μ „ν˜€ μ—†μ—ˆλ‹€.\tcontradiction\nL.A.의 μ•Όμ™Έ μ‹œμž₯을 ν™œλ³΄ν•˜λŠ” 것은 λ§›μžˆκ³  μ €λ ΄ν•œ 그루브λ₯Ό 작고, 끝이 μ—†λŠ” 햇빛을 즐기고, μ‹ μ„ ν•œ 농산물, 꽃, ν–₯, 그리고 κ°€μ ― κ°ˆλ‘œμ–΄λ₯Ό κ΅¬μž…ν•˜λ©΄μ„œ ν˜„μ§€μΈλ“€κ³Ό μ–΄μšΈλ¦΄ 수 μžˆλŠ” ν›Œλ₯­ν•œ 방법이닀.\tLA의 μ•Όμ™Έ μ‹œμž₯을 λŒμ•„λ‹€λ‹ˆλŠ” 것은 μ‹œκ°„ λ‚­λΉ„λ‹€.\tcontradiction\nμ•ˆλ‚˜λŠ” λ°–μœΌλ‘œ λ‚˜μ™€ μ•ˆλ„μ˜ ν•œμˆ¨μ„ λ‚΄μ‰¬μ—ˆλ‹€. 단 ν•œ 번, 그리고 λ§ˆλ¦¬ν›„μ•„μ‰¬ λ§›μ˜ 술둜 λλ‚΄μžλŠ” 결심이 λ’€μ„žμ—¬ μžˆμ—ˆλ‹€.\tμ•ˆλ‚˜λŠ” μ•ˆμ‹¬ν•˜κ³  λ§ˆλ¦¬ν›„μ•„μ‰¬ λ§›μ˜ μˆ μ„ λ‹€ λ§ˆμ‹œκΈ°λ‘œ κ²°μ‹¬ν–ˆλ‹€.\tentailment\n5 월에 VajpayeeλŠ” ν•΅ μ‹€ν—˜μ˜ 성곡적인 μ™„λ£Œλ₯Ό λ°œν‘œν–ˆλŠ”λ°, 인도인듀은 주ꢌ의 ν‘œμ‹œλ‘œ μ„ μ „ν–ˆμ§€λ§Œ 이웃 ꡭ가와 μ„œκ΅¬μ™€μ˜ 인도 관계λ₯Ό λ³΅μž‘ν•˜κ²Œ λ§Œλ“€ 수 μžˆμŠ΅λ‹ˆλ‹€.\tμΈλ„λŠ” 성곡적인 ν•΅μ‹€ν—˜μ„ ν•œ 적이 μ—†λ‹€.\tcontradiction\nν”ŒλΌλ…Έ μ›μ—μ„œ 보톡 μ–Όλ§ˆλ‚˜ λ§Žμ€ 것을 κ°€μ§€κ³  μžˆλŠ”κ°€?\tμ € μ‚¬λžŒλ“€ 쀑에 ν”ŒλΌλ…Έ 원에 κ°€λ³Έ μ‚¬λžŒ μžˆμ–΄?\tcontradiction\nκ·Έκ²ƒμ˜ 전체적인 ν˜•νƒœμ˜ μš°μ•„ν•¨μ€ μš΄ν•˜ κ±΄λ„ˆνŽΈμ—μ„œ κ°€μž₯ 잘 λ³Ό 수 μžˆλ‹€. 
μ™œλƒν•˜λ©΄, λ‘œλ§ˆμ— μžˆλŠ” μ„± λ² λ“œλ‘œμ²˜λŸΌ, 돔은 κΈΈμ­‰ν•œ λ³Έλ‹Ή λ’€λ‘œ 더 κ°€κΉŒμš΄ 곳에 사라지기 λ•Œλ¬Έμ΄λ‹€.\tμ„± λ² λ“œλ‘œμ˜ κΈΈμ­‰ν•œ 본당은 돔을 κ°€λ¦°λ‹€.\tentailment\n당신은 μˆ˜ν‹΄μ΄ 살에 강박적인 기쁨을 κ°€μ§€κ³  λˆ„λ“œλ₯Ό 그릴 것이라고 μƒκ°ν•˜κ² μ§€λ§Œ, μ•„λ‹ˆμ˜€; κ·ΈλŠ” 그의 λͺ¨λ“  κ²½λ ₯μ—μ„œ 단 ν•œ μ λ§Œμ„ κ·Έλ Έκ³ , 그것은 μ‚¬μ†Œν•œ 그림이닀.\tκ·ΈλŠ” 그것이 κ·Έλ₯Ό λΆˆνŽΈν•˜κ²Œ λ§Œλ“€μ—ˆκΈ° λ•Œλ¬Έμ— ν•˜λ‚˜λ§Œ κ·Έλ Έλ‹€.\tneutral\n이 인상적인 풍경은 μ›λž˜ λ‚˜ν¬ 레온이 루브λ₯΄ λ°•λ¬Όκ΄€μ˜ μΉ¨μ‹€μ—μ„œ λ³Ό 수 μžˆλ„λ‘ κ³„νšλ˜μ—ˆλŠ”λ°, κ·Έ λ‹Ήμ‹œ κΆμ „μ΄μ—ˆμŠ΅λ‹ˆλ‹€.\tλ‚˜ν΄λ ˆμ˜Ήμ€ 그의 λͺ¨λ“  ꢁ전에 μžˆλŠ” 그의 μΉ¨μ‹€μ—μ„œ λ³΄λŠ” κ²½μΉ˜μ— λ§Žμ€ 관심을 κ°€μ‘Œλ‹€.\tneutral\nκ·ΈλŠ” μš°λ¦¬μ—κ²Œ λ¬Έ μ—΄μ‡ λ₯Ό κ±΄λ„€μ£Όκ³ λŠ” κΈ‰νžˆ 떠났닀.\tκ·ΈλŠ” κΈ΄μž₯ν•΄μ„œ μš°λ¦¬μ—κ²Œ μ—΄μ‡ λ₯Ό 빨리 μ£Όμ—ˆλ‹€.\tneutral\nμœ„μ›νšŒλŠ” λ˜ν•œ μ΅œμ’… κ·œμΉ™μ„ OMB에 μ œμΆœν–ˆλ‹€.\tμœ„μ›νšŒλŠ” λ˜ν•œ 이 κ·œμΉ™μ„ λ‹€λ₯Έ 그룹에 μ œμΆœν–ˆμ§€λ§Œ μ΅œμ’… κ·œμΉ™μ€ OMBκ°€ ν‰κ°€ν•˜κΈ° μœ„ν•œ 것이 μ—ˆμŠ΅λ‹ˆλ‹€.\tneutral\nμ •μ›κ°€κ²Œμ— 가보면 μ˜¬λ¦¬λΉ„μ•„μ˜ 볡제 ν™”ν•©λ¬Ό 같은 μœ μΎŒν•œ 이름을 κ°€μ§„ μ œν’ˆλ“€μ„ 찾을 수 μžˆμ„ κ²λ‹ˆλ‹€.이 μ œν’ˆμ΄ 뿌리λ₯Ό 내리도둝 돕기 μœ„ν•΄ 촬영의 μ ˆλ‹¨λœ 끝에 λ©ν¬μŠ›μ„ ν•˜λŠ” 호λ₯΄λͺ¬μ˜ ν˜Όν•©λ¬Όμ΄μ£ .\t정원 κ°€κΎΈκΈ° κ°€κ²Œμ˜ μ œν’ˆλ“€μ€ μ’…μ’… κ·Έλ“€μ˜ λͺ©μ μ„ μ„€λͺ…ν•˜κΈ° μœ„ν•΄ κΈ°μˆ μ μœΌλ‘œλ‚˜ κ³Όν•™μ μœΌλ‘œ νŒŒμƒλœ 이름(μ˜¬λ¦¬λΉ„μ•„μ˜ 볡제 ν™”ν•©λ¬Όμ²˜λŸΌ)을 λΆ€μ—¬λ°›λŠ”λ‹€.\tneutral\nμŠ€νƒ€λŠ” μŠ€ν‹Έ μžμ‹ μ΄λ‚˜ μ™œ κ·Έλ…€μ˜ 이야기λ₯Ό λ°”κΎΈμ—ˆλŠ”μ§€μ— 훨씬 더 관심이 μžˆμ„ 것이닀.\tμŠ€ν‹Έμ˜ μ΄μ•ΌκΈ°λŠ” μ‘°κΈˆλ„ λ³€ν•˜μ§€ μ•Šμ•˜λ‹€.\tcontradiction\nλ‚¨νŽΈκ³Όμ˜ λ§ˆμ§€λ§‰ λŒ€κ²°λ‘œ λ§₯ν‹°μ–΄λŠ” λ…ΈλΌμ˜ 변신을 λ„ˆλ¬΄λ‚˜ λŠ₯μˆ™ν•˜κ²Œ μ˜ˆκ³ ν•΄ μ™”κΈ° λ•Œλ¬Έμ—, κ·Έλ…€μ—κ²ŒλŠ” λ‹Ήν™©μŠ€λŸ¬μšΈ μ •λ„λ‘œ κ°‘μž‘μŠ€λŸ¬μš΄ κ²ƒμ²˜λŸΌ λ³΄μ΄μ§€λ§Œ, μš°λ¦¬μ—κ²ŒλŠ” κ°μ •μ μœΌλ‘œ λΆˆκ°€ν”Όν•΄ 보인닀.\tλ…ΈλΌμ˜ 변신은 λΆ„λͺ…ν•˜κ³  ν•„μ—°μ μ΄μ—ˆλ‹€.\tcontradiction\nμ΄μ§‘νŠΈ μ΅œλ‚¨λ‹¨ λ„μ‹œμΈ μ•„μŠ€μ™„μ€ 였랜 역사λ₯Ό 톡해 μ€‘μš”ν•œ 역할을 ν•΄μ™”λ‹€.\tμ•„μŠ€μ™„μ€ μ΄μ§‘νŠΈ κ΅­κ²½ λ°”λ‘œ μœ„μ— μœ„μΉ˜ν•΄ μžˆμŠ΅λ‹ˆλ‹€.\tneutral\nκ·ΈλŸ¬λ‚˜ 훨씬 더 μš°μ•„ν•œ 건좕적 ν„°μΉ˜λŠ” μ‹ μ„±ν•œ 좀인 Bharatanatyamμ—μ„œ μˆ˜ν–‰λœ 108 κ°€μ§€ κΈ°λ³Έ 포즈λ₯Ό μ‹œλ°” νŒ¨λ„μ—μ„œ λ³Ό 수 μžˆμŠ΅λ‹ˆλ‹€.\tνŒ¨λ„μ— λŒ€ν•œ μ‹œλ°”μ˜ λ¬˜μ‚¬λŠ” 일반적인 λͺ¨ν‹°λΈŒλ‹€.\tneutral\nν˜Έν™”λ‘­κ²Œ 심어진 계단식 정원은 μ΄νƒˆλ¦¬μ•„ ν˜•μ‹μ˜ κ°€μž₯ ν›Œλ₯­ν•œ 앙상블 쀑 ν•˜λ‚˜μž…λ‹ˆλ‹€.\tμ•„λ¦„λ‹€μš΄ 정원과 ν¬κ·€ν•œ 꽃꽂이 λͺ¨λ‘ μ΄νƒˆλ¦¬μ•„μ˜ ν˜•μ‹μ μΈ μŠ€νƒ€μΌμ„ 보여쀀닀.\tneutral\n음, 그랬으면 μ’‹μ•˜μ„ 텐데\tλ‚˜λŠ” 그것을 λ‹€λ₯΄κ²Œ ν•  기회λ₯Ό λͺΉμ‹œ κ°ˆλ§ν•œλ‹€.\tentailment\nνν—ˆκ°€ 된 μ„±μ˜ κΈ°μŠ­μ— 자리작고 μžˆλŠ” 예쁜 쀑세 λ„μ‹œ μΌ€μ΄μ„œμŠ€λ²„κ·ΈλŠ” 노벨 평화상 μˆ˜μƒμž μ•Œλ²„νŠΈ μŠˆλ°”μ΄μ²˜(1875λ…„)의 μΆœμƒμ§€λ‘œ 널리 μ•Œλ €μ Έ μžˆλ‹€.\tμ•Œλ²„νŠΈ μŠˆλ°”μ΄μ²˜λŠ” λ‘˜ λ‹€ μΌ€μ΄μ„œμŠ€λ²„κ·Έ λ§ˆμ„μ— μžˆμ—ˆλ‹€.\tentailment\nκ³ κ°λ„λŠ” λ¬Έμ œκ°€ μžˆλŠ” λŒ€λΆ€λΆ„μ˜ ν™˜μžλ“€μ΄ 발견될 것을 보μž₯ν•œλ‹€.\tμž₯λΉ„ λ―Όκ°λ„λŠ” 문제 탐지와 관련이 μ—†μŠ΅λ‹ˆλ‹€.\tcontradiction\nμ˜€λŠ˜μ€ ν™•μ‹€νžˆ λ°˜λ°”μ§€ 같은 λ‚ μ΄μ—ˆμ–΄\t였늘 사무싀에 μžˆλŠ” λͺ¨λ“  μ‚¬λžŒλ“€μ€ λ°˜λ°”μ§€λ₯Ό μž…μ—ˆλ‹€.\tneutral\nλͺ»μƒκΈ΄ ν„±μ‹œλ„λ₯Ό μž…κ³ .\t그것은 뢄홍색과 μ£Όν™©μƒ‰μž…λ‹ˆλ‹€.\tneutral\n이주 노동 μˆ˜μš©μ†Œ 였 마이 κ°“ 그듀은 νŒμ§€ μƒμžμ— μ‚°λ‹€.\t노동 μˆ˜μš©μ†Œμ—λŠ” νŒμ§€ μƒμžμ— μ‚¬λŠ” 이주 λ…Έλ™μžλ“€μ˜ 사진이 μžˆλ‹€.\tneutral\n그래, κ·Έκ°€ μ „ 세계λ₯Ό μ—¬ν–‰ν•œ 후에 그런 κ±°μ•Ό\t그것은 μ‚¬λžŒλ“€μ˜ 세계 여행을 λ”°λ₯Έλ‹€.\tentailment\nκ±΄λ„ˆνŽΈμ— 크고 큰 μ°Έλ‚˜λ¬΄ λͺ‡ 그루가 μžˆλ‹€.\tμš°λ¦¬λŠ” μ—¬κΈ° μ˜€ν¬λ‚˜ μ–΄λ–€ μ’…λ₯˜μ˜ λ―Έκ΅­ λ‚˜λ¬΄λ„ μ—†λ‹€.\tcontradiction\nFort-de-Franceμ—μ„œ μΆœλ°œν•˜λŠ” μžλ™μ°¨λ‚˜ μ—¬κ°μ„ μœΌλ‘œ, 당신은 μ•ˆμ„Έ ? 
λ°”λ‹€ 포도가 κ·ΈλŠ˜μ„ μ œκ³΅ν•˜λŠ” μΎŒμ ν•œ κ°ˆμƒ‰ λͺ¨λž˜ ν•΄λ³€κ³Ό 피크닉 ν…Œμ΄λΈ”, 어린이 λ―Έλ„λŸΌν‹€, 식당이 μžˆλŠ” μ•ˆλŠμ— 도착할 수 μžˆλ‹€.\tν”„λž‘μŠ€ μš”μƒˆμ—μ„œ μžλ™μ°¨λ‚˜ 페리λ₯Ό 타고 μ•ˆμ„Έλ‘œ 갈 수 μžˆλ‹€.\tentailment\n그리고 그것은 μ•¨λΌλ°°λ§ˆμ£Όκ°€ μ˜ˆμƒν–ˆλ˜ λŒ€λ‘œ μ˜ˆμ‚°μ—μ„œ 50만 λ‹¬λŸ¬λ₯Ό μ‚­κ°ν•˜μ§€ μ•Šμ„ κ²ƒμ΄λΌλŠ” 것을 μ˜λ―Έν•œλ‹€.\tμ•¨λΌλ°°λ§ˆ μ£ΌλŠ” μ˜ˆμ‚° 삭감을 ν•˜μ§€ μ•Šμ•˜λ‹€. μ™œλƒν•˜λ©΄ κ·Έλ ‡κ²Œ ν•˜λŠ” 것에 λŒ€ν•œ 초기 정당성이 μ •λ°€ 쑰사에 λ§žμ„œμ§€ μ•Šμ•˜κΈ° λ•Œλ¬Έμ΄λ‹€.\tneutral\nμ•Œμ•˜μ–΄ λ¨Όμ € μ–΄ .. μ–΄ .. λ…ΈμΈμ΄λ‚˜ 가쑱을 μš”μ–‘μ›μ— λ³΄λ‚΄λŠ” 것에 λŒ€ν•΄ μ–΄λ–»κ²Œ μƒκ°ν•˜λ‹ˆ?\t가쑱을 μš”μ–‘μ›μ— λ³΄λ‚΄μ„œ μ‚¬λŠ” 것에 λŒ€ν•΄ μ–΄λ–»κ²Œ μƒκ°ν•˜λŠ”μ§€ μ•Œ ν•„μš”κ°€ μ—†λ‹€.\tcontradiction\nλ‚˜λ¨Έμ§€λŠ” λ„ˆμ—κ²Œ 달렸어.\tλ‚˜λ¨Έμ§€λŠ” λ„ˆμ—κ²Œ λ‹¬λ Έμ§€λ§Œ μ‹œκ°„μ΄ λ§Žμ§€ μ•Šλ‹€.\tneutral\n음-흠, 3월에 햇볕에 νƒ€λŠ” 것에 λŒ€ν•΄ κ±±μ •ν•˜λ©΄ μ•ˆ λœλ‹€λŠ” 것을 μ•Œκ³  μžˆλŠ” 3월이야.\t3월은 κ·Έλ ‡κ²Œ λ₯μ§€ μ•Šλ‹€.\tneutral\n그리고 μ–΄, 그런 μž‘μ€ κ²ƒλ“€λ‘œ λ‹€μ‹œ μ‹œμž‘ν•΄λ΄. 아직 훨씬 μ‹Έ. μ–΄, κ·Έ νŠΉλ³„ν•œ λͺ¨λΈ μ°¨λŠ” 150λ‹¬λŸ¬μ•Ό.\tκ·Έ λͺ¨ν˜•μ°¨λŠ” 4천 λ‹¬λŸ¬κ°€ λ“ λ‹€.\tcontradiction\n내일 λŒμ•„κ°€μ•Ό ν•œλ‹€λ©΄, 칼이 λ§ν–ˆλ‹€.\tλŒμ•„κ°ˆ 수 μ—†μ–΄. μ˜€λŠ˜μ€ μ•ˆ 돼. 내일은 μ•ˆ 돼. μ ˆλŒ€ μ•ˆ 돼." 칼이 λ§ν–ˆλ‹€.', 'sentence2': 'contradiction'} ``` 2. (Optional) Preferred to change the name of the features for the compatibility with `run_glue.py` in πŸ€— Transformers - `kor_nli` dataset has same data structure of multi_nli, xnli - Changing the name of features and the feature type of 'gold_label' to ClassLabel might be helpful ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, features=datasets.Features( { "premise": datasets.Value("string"), "hypothesis": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["entailment", "neutral", "contradiction"]), } ), ``` If you don't mind, I would like to fix this. Thanks!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/821/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/821/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/820
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/820/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/820/comments
https://api.github.com/repos/huggingface/datasets/issues/820/events
https://github.com/huggingface/datasets/pull/820
739,387,617
MDExOlB1bGxSZXF1ZXN0NTE4MDYwMjQ0
820
Update quail dataset to v1.3
{ "avatar_url": "https://avatars.githubusercontent.com/u/4889636?v=4", "events_url": "https://api.github.com/users/ngdodd/events{/privacy}", "followers_url": "https://api.github.com/users/ngdodd/followers", "following_url": "https://api.github.com/users/ngdodd/following{/other_user}", "gists_url": "https://api.github.com/users/ngdodd/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ngdodd", "id": 4889636, "login": "ngdodd", "node_id": "MDQ6VXNlcjQ4ODk2MzY=", "organizations_url": "https://api.github.com/users/ngdodd/orgs", "received_events_url": "https://api.github.com/users/ngdodd/received_events", "repos_url": "https://api.github.com/users/ngdodd/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ngdodd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ngdodd/subscriptions", "type": "User", "url": "https://api.github.com/users/ngdodd" }
[]
closed
false
null
[]
null
[]
2020-11-09T21:49:26Z
2020-11-10T09:06:35Z
2020-11-10T09:06:35Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/820.diff", "html_url": "https://github.com/huggingface/datasets/pull/820", "merged_at": "2020-11-10T09:06:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/820.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/820" }
Updated quail to the most recent version to address the problem originally discussed [here](https://github.com/huggingface/datasets/issues/806).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/820/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/820/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/819
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/819/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/819/comments
https://api.github.com/repos/huggingface/datasets/issues/819/events
https://github.com/huggingface/datasets/pull/819
739,250,624
MDExOlB1bGxSZXF1ZXN0NTE3OTQ2MjYy
819
Make save function use deterministic global vars order
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-09T18:12:03Z
2021-11-30T13:34:09Z
2020-11-11T15:20:51Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/819.diff", "html_url": "https://github.com/huggingface/datasets/pull/819", "merged_at": "2020-11-11T15:20:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/819.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/819" }
The `dumps` function needs to be deterministic for the caching mechanism. However, in #816 I noticed that one of dill's methods to recursively check the globals of a function may return the globals in different orders each time it's used. To fix that, I sort the globals by key in the `globs` dictionary. I had to add a rectified `save_function` to the saving functions registry of the Pickler to make it work. This should fix #816
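For illustration, here is a minimal sketch of the underlying idea (sorting the globals before they are serialized so the dump, and hence the cache fingerprint, is stable); it is not the actual `datasets` implementation, and the hashing scheme is an assumption.

```python
# Minimal sketch, not the datasets implementation: fingerprint a function by
# dumping its code together with its globals in sorted (deterministic) order.
import hashlib

import dill


def deterministic_function_fingerprint(func):
    globs = dill.detect.globalvars(func)                      # key order may vary
    ordered = {name: globs[name] for name in sorted(globs)}   # fixed order
    payload = dill.dumps((func.__code__, ordered))
    return hashlib.sha256(payload).hexdigest()
```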
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/819/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/819/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/818
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/818/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/818/comments
https://api.github.com/repos/huggingface/datasets/issues/818/events
https://github.com/huggingface/datasets/pull/818
739,173,861
MDExOlB1bGxSZXF1ZXN0NTE3ODgzMzk0
818
Fix type hints pickling in python 3.6
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-09T16:27:47Z
2020-11-10T09:07:03Z
2020-11-10T09:07:02Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/818.diff", "html_url": "https://github.com/huggingface/datasets/pull/818", "merged_at": "2020-11-10T09:07:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/818.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/818" }
Type hints can't be properly pickled in python 3.6. This was causing errors in the `run_mlm.py` script from `transformers` with python 3.6. However, Cloudpickle proposed a [fix](https://github.com/cloudpipe/cloudpickle/pull/318/files) to make it work anyway. The idea is just to implement the pickling/unpickling of parameterized type hints. There is one detail though: since in python 3.6 we can't use `isinstance` on type hints, we can't use the pickle saving functions registry directly. Therefore we just wrap the `save_global` method of the Pickler. This should fix https://github.com/huggingface/transformers/issues/8212 for python 3.6 and make `run_mlm.py` support python 3.6. cc @sgugger
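As a rough illustration of the trick (not the exact cloudpickle or `datasets` code), a parameterized hint can be reduced to its unsubscripted origin plus its arguments and rebuilt by re-subscripting; the helper names below are made up, and the `__origin__` behaviour shown is the python 3.6 one, where it returns the typing alias itself.

```python
# Hedged sketch of pickling parameterized type hints on python 3.6:
# List[int] -> (typing.List, (int,)) -> typing.List[(int,)] == List[int].
from typing import List


def reduce_type_hint(hint):
    # Decompose the hint into pieces that pickle can handle natively.
    return rebuild_type_hint, (hint.__origin__, hint.__args__)


def rebuild_type_hint(origin, args):
    # On python 3.6 `origin` is the typing alias (e.g. typing.List), so
    # re-subscripting it with the saved args recreates the original hint.
    return origin[args]


rebuild, payload = reduce_type_hint(List[int])
restored_hint = rebuild(*payload)
```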
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/818/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/818/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/817
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/817/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/817/comments
https://api.github.com/repos/huggingface/datasets/issues/817/events
https://github.com/huggingface/datasets/issues/817
739,145,369
MDU6SXNzdWU3MzkxNDUzNjk=
817
Add MRQA dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VictorSanh", "id": 16107619, "login": "VictorSanh", "node_id": "MDQ6VXNlcjE2MTA3NjE5", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "repos_url": "https://api.github.com/users/VictorSanh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "type": "User", "url": "https://api.github.com/users/VictorSanh" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-09T15:52:19Z
2020-12-04T15:44:42Z
2020-12-04T15:44:41Z
MEMBER
null
null
null
## Adding a Dataset - **Name:** MRQA - **Description:** Collection of different (subsets of) QA datasets all converted to the same format to evaluate out-of-domain generalization (the datasets come from different domains, distributions, etc.). Some datasets are used for training and others are used for evaluation. This dataset was collected as part of the MRQA 2019 shared task. - **Paper:** https://arxiv.org/abs/1910.09753 - **Data:** https://github.com/mrqa/MRQA-Shared-Task-2019 - **Motivation:** Out-of-domain generalization is becoming (has become) a de-facto evaluation for NLU systems. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/817/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/817/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/816/comments
https://api.github.com/repos/huggingface/datasets/issues/816/events
https://github.com/huggingface/datasets/issues/816
739,102,686
MDU6SXNzdWU3MzkxMDI2ODY=
816
[Caching] Dill globalvars() output order is not deterministic and can cause cache issues.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-11-09T15:01:20Z
2020-11-11T15:20:50Z
2020-11-11T15:20:50Z
MEMBER
null
null
null
Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However, the order of the keys in this dict is not deterministic and can cause caching issues. To fix that, one could register an implementation of dill's `save_function` in the `datasets` pickler that sorts the global keys before dumping a function.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/816/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/816/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/815
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/815/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/815/comments
https://api.github.com/repos/huggingface/datasets/issues/815/events
https://github.com/huggingface/datasets/issues/815
738,842,092
MDU6SXNzdWU3Mzg4NDIwOTI=
815
Is dataset iterative or not?
{ "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehkarimimahabadi", "id": 73364383, "login": "rabeehkarimimahabadi", "node_id": "MDQ6VXNlcjczMzY0Mzgz", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehkarimimahabadi" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-09T09:11:48Z
2020-11-10T10:50:03Z
2020-11-10T10:50:03Z
NONE
null
null
null
Hi, I want to use your library for large-scale training, but I am not sure whether this is implemented as iterative datasets or not. Could you provide me with an example of how I can use datasets as iterative datasets? Thanks
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/815/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/815/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/814/comments
https://api.github.com/repos/huggingface/datasets/issues/814/events
https://github.com/huggingface/datasets/issues/814
738,500,443
MDU6SXNzdWU3Mzg1MDA0NDM=
814
Joining multiple datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehkarimimahabadi", "id": 73364383, "login": "rabeehkarimimahabadi", "node_id": "MDQ6VXNlcjczMzY0Mzgz", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehkarimimahabadi" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-08T16:19:30Z
2020-11-08T19:38:48Z
2020-11-08T19:38:48Z
NONE
null
null
null
Hi, I have multiple iterative datasets from your library with different sizes, and I want to join them in a way that each dataset is sampled equally often (so smaller datasets are sampled more, larger ones less). Could you tell me how to implement this in pytorch? Thanks
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/814/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/814/timeline
null
completed
true
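Issue 814 above asks how to mix several datasets of different sizes so that each is sampled with equal probability. A minimal sketch, assuming a `datasets` version that provides `interleave_datasets` (added after the 1.1.x releases discussed here); the dataset names are placeholders:

```python
from datasets import load_dataset, interleave_datasets

ds_a = load_dataset("imdb", split="train")
ds_b = load_dataset("ag_news", split="train")

# Equal sampling probability regardless of dataset size, so the smaller
# dataset is visited proportionally more often than its share of rows.
mixed = interleave_datasets([ds_a, ds_b], probabilities=[0.5, 0.5], seed=42)

for example in mixed:
    ...  # training step
```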
https://api.github.com/repos/huggingface/datasets/issues/813
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/813/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/813/comments
https://api.github.com/repos/huggingface/datasets/issues/813/events
https://github.com/huggingface/datasets/issues/813
738,489,852
MDU6SXNzdWU3Mzg0ODk4NTI=
813
How to implement DistributedSampler with datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehkarimimahabadi", "id": 73364383, "login": "rabeehkarimimahabadi", "node_id": "MDQ6VXNlcjczMzY0Mzgz", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehkarimimahabadi" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-08T15:27:11Z
2022-10-05T12:54:23Z
2022-10-05T12:54:23Z
NONE
null
null
null
Hi, I am using your datasets to define my dataloaders, and I am training finetune_trainer.py from the huggingface repo on them. I need a DistributedSampler to be able to train the models on TPUs and distribute the load across the TPU cores. Could you tell me how I can implement a distributed sampler when using datasets, given that the datasets are iterative? To give you more context, I have multiple datasets and I need to write a sampler for this case. Thanks.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/813/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/813/timeline
null
completed
true
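Regarding issue 813 above: a map-style `datasets.Dataset` supports `len()` and integer indexing, so PyTorch's stock `DistributedSampler` can be used with it directly. A minimal sketch, assuming a map-style (non-streaming) dataset and that the distributed/TPU setup is initialized elsewhere; the dataset, columns, and fixed `num_replicas`/`rank` values are only illustrative (on TPU they would normally come from the XLA runtime):

```python
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from datasets import load_dataset

dataset = load_dataset("glue", "mrpc", split="train")
dataset.set_format(type="torch", columns=["label"])  # keep only tensor-friendly columns for this sketch

# num_replicas / rank would come from the distributed setup in practice.
sampler = DistributedSampler(dataset, num_replicas=8, rank=0, shuffle=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for epoch in range(3):
    sampler.set_epoch(epoch)  # reshuffle differently each epoch
    for batch in loader:
        ...  # training step
```

For a truly iterative dataset there is no random access, so the usual alternative is to shard the stream by rank inside the iterator instead of using a sampler.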
https://api.github.com/repos/huggingface/datasets/issues/812
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/812/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/812/comments
https://api.github.com/repos/huggingface/datasets/issues/812/events
https://github.com/huggingface/datasets/issues/812
738,340,217
MDU6SXNzdWU3MzgzNDAyMTc=
812
Too much logging
{ "avatar_url": "https://avatars.githubusercontent.com/u/6183050?v=4", "events_url": "https://api.github.com/users/dspoka/events{/privacy}", "followers_url": "https://api.github.com/users/dspoka/followers", "following_url": "https://api.github.com/users/dspoka/following{/other_user}", "gists_url": "https://api.github.com/users/dspoka/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dspoka", "id": 6183050, "login": "dspoka", "node_id": "MDQ6VXNlcjYxODMwNTA=", "organizations_url": "https://api.github.com/users/dspoka/orgs", "received_events_url": "https://api.github.com/users/dspoka/received_events", "repos_url": "https://api.github.com/users/dspoka/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dspoka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dspoka/subscriptions", "type": "User", "url": "https://api.github.com/users/dspoka" }
[]
closed
false
null
[]
null
[]
2020-11-07T23:56:30Z
2021-01-26T14:31:34Z
2020-11-16T17:06:42Z
NONE
null
null
null
I'm doing this at the beginning of my script: `from datasets.utils import logging as datasets_logging` `datasets_logging.set_verbosity_warning()` but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock [2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock I am using datasets version 1.1.2.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/812/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/812/timeline
null
completed
true
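The lock messages in issue 812 above are emitted by the third-party `filelock` package's own logger (the log lines show `[filelock][INFO]`), so lowering the `datasets` verbosity alone does not silence them. A minimal sketch of one possible workaround is to raise that logger's level as well:

```python
import logging

from datasets.utils import logging as datasets_logging

datasets_logging.set_verbosity_warning()                  # datasets' own messages
logging.getLogger("filelock").setLevel(logging.WARNING)   # hide lock acquire/release INFO lines
```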
https://api.github.com/repos/huggingface/datasets/issues/811
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/811/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/811/comments
https://api.github.com/repos/huggingface/datasets/issues/811/events
https://github.com/huggingface/datasets/issues/811
738,280,132
MDU6SXNzdWU3MzgyODAxMzI=
811
nlp viewer error
{ "avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4", "events_url": "https://api.github.com/users/jc-hou/events{/privacy}", "followers_url": "https://api.github.com/users/jc-hou/followers", "following_url": "https://api.github.com/users/jc-hou/following{/other_user}", "gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jc-hou", "id": 30210529, "login": "jc-hou", "node_id": "MDQ6VXNlcjMwMjEwNTI5", "organizations_url": "https://api.github.com/users/jc-hou/orgs", "received_events_url": "https://api.github.com/users/jc-hou/received_events", "repos_url": "https://api.github.com/users/jc-hou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions", "type": "User", "url": "https://api.github.com/users/jc-hou" }
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
[]
null
[]
2020-11-07T17:08:58Z
2022-02-15T10:51:44Z
2022-02-14T15:24:20Z
NONE
null
null
null
Hello, when I select amazon_us_reviews in the nlp viewer, it shows an error. https://huggingface.co/nlp/viewer/?dataset=amazon_us_reviews ![image](https://user-images.githubusercontent.com/30210529/98447334-4aa81200-2124-11eb-9dca-82c3ab34ccc2.png)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/811/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/811/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/810
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/810/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/810/comments
https://api.github.com/repos/huggingface/datasets/issues/810/events
https://github.com/huggingface/datasets/pull/810
737,878,370
MDExOlB1bGxSZXF1ZXN0NTE2ODQzMzQ3
810
Fix seqeval metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgugger", "id": 35901082, "login": "sgugger", "node_id": "MDQ6VXNlcjM1OTAxMDgy", "organizations_url": "https://api.github.com/users/sgugger/orgs", "received_events_url": "https://api.github.com/users/sgugger/received_events", "repos_url": "https://api.github.com/users/sgugger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "type": "User", "url": "https://api.github.com/users/sgugger" }
[]
closed
false
null
[]
null
[]
2020-11-06T16:11:43Z
2020-11-09T14:04:29Z
2020-11-09T14:04:28Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/810.diff", "html_url": "https://github.com/huggingface/datasets/pull/810", "merged_at": "2020-11-09T14:04:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/810.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/810" }
The current seqeval metric returns the following error when computed: ``` ~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/78a944d83252b5a16c9a2e49f057f4c6e02f18cc03349257025a8c9aea6524d8/seqeval.py in _compute(self, predictions, references, suffix) 102 scores = {} 103 for type_name, score in report.items(): --> 104 scores[type_name]["precision"] = score["precision"] 105 scores[type_name]["recall"] = score["recall"] 106 scores[type_name]["f1"] = score["f1-score"] KeyError: 'LOC' ``` This is because the current code basically tries to do: ``` scores = {} scores["LOC"]["precision"] = some_value ``` which does not work in python. This PR fixes that while keeping the previous nested structure of results, with the same keys.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/810/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/810/timeline
null
null
true
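The bug described in PR 810 above is that assigning into `scores[type_name]["precision"]` fails with a `KeyError` when the inner dict `scores[type_name]` has not been created yet. A minimal sketch of the kind of fix the PR describes (not the exact patched code), keeping the same nested result structure:

```python
def nest_seqeval_report(report):
    """Turn seqeval's classification_report(..., output_dict=True) output into a
    nested {entity_type: {metric: value}} dict, creating the inner dict before
    assigning into it, which is what the original code missed."""
    scores = {}
    for type_name, score in report.items():
        scores[type_name] = {
            "precision": score["precision"],
            "recall": score["recall"],
            "f1": score["f1-score"],
        }
    return scores
```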
https://api.github.com/repos/huggingface/datasets/issues/809
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/809/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/809/comments
https://api.github.com/repos/huggingface/datasets/issues/809/events
https://github.com/huggingface/datasets/issues/809
737,832,701
MDU6SXNzdWU3Mzc4MzI3MDE=
809
Add Google Taskmaster dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2020-11-06T15:10:41Z
2021-04-20T13:09:26Z
2021-04-20T13:09:26Z
MEMBER
null
null
null
## Adding a Dataset - **Name:** Taskmaster - **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations) - **Paper:** https://arxiv.org/abs/1909.05358 - **Data:** https://github.com/google-research-datasets/Taskmaster - **Motivation:** One of few annotated datasets of this size for goal-oriented dialogue Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/809/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/809/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/808
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/808/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/808/comments
https://api.github.com/repos/huggingface/datasets/issues/808/events
https://github.com/huggingface/datasets/pull/808
737,638,942
MDExOlB1bGxSZXF1ZXN0NTE2NjQ0NDc0
808
dataset(dgs): initial dataset loading script
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmitMY", "id": 5757359, "login": "AmitMY", "node_id": "MDQ6VXNlcjU3NTczNTk=", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "repos_url": "https://api.github.com/users/AmitMY/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "type": "User", "url": "https://api.github.com/users/AmitMY" }
[]
closed
false
null
[]
null
[]
2020-11-06T10:14:43Z
2021-03-23T06:18:55Z
2021-03-23T06:18:55Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/808.diff", "html_url": "https://github.com/huggingface/datasets/pull/808", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/808.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/808" }
When trying to create dummy data I get: > Dataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data. I am not sure how to manually create the dummy_data (what exactly it should contain). Also note, this library says: > ImportError: To be able to use this dataset, you need to install the following dependencies['pympi'] using 'pip install pympi' for instance' when in fact you need to run `pip install pympi-ling`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/808/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/808/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/807
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/807/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/807/comments
https://api.github.com/repos/huggingface/datasets/issues/807/events
https://github.com/huggingface/datasets/issues/807
737,509,954
MDU6SXNzdWU3Mzc1MDk5NTQ=
807
load_dataset for LOCAL CSV files report CONNECTION ERROR
{ "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "events_url": "https://api.github.com/users/shexuan/events{/privacy}", "followers_url": "https://api.github.com/users/shexuan/followers", "following_url": "https://api.github.com/users/shexuan/following{/other_user}", "gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shexuan", "id": 25664170, "login": "shexuan", "node_id": "MDQ6VXNlcjI1NjY0MTcw", "organizations_url": "https://api.github.com/users/shexuan/orgs", "received_events_url": "https://api.github.com/users/shexuan/received_events", "repos_url": "https://api.github.com/users/shexuan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shexuan/subscriptions", "type": "User", "url": "https://api.github.com/users/shexuan" }
[]
closed
false
null
[]
null
[]
2020-11-06T06:33:04Z
2021-01-11T01:30:27Z
2020-11-14T05:30:34Z
NONE
null
null
null
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 73 
sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/807/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/807/timeline
null
completed
true
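In issue 807 above the failure is not about the local CSV itself: in datasets 1.1.2, `load_dataset('csv', ...)` first tries to download the generic `csv.py` loading script from GitHub, which fails on a machine without internet access. A possible workaround (a sketch, not the officially documented fix for that version) is to point `load_dataset` at a local copy of that script; the path below is a placeholder:

```python
from datasets import load_dataset

# A local copy of datasets/csv/csv.py, e.g. checked out from the
# huggingface/datasets repository at the matching version tag.
dataset = load_dataset(
    "/path/to/local/datasets/csv/csv.py",  # placeholder path
    data_files="./test.csv",
    delimiter=",",
)
```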
https://api.github.com/repos/huggingface/datasets/issues/806
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/806/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/806/comments
https://api.github.com/repos/huggingface/datasets/issues/806/events
https://github.com/huggingface/datasets/issues/806
737,215,430
MDU6SXNzdWU3MzcyMTU0MzA=
806
Quail dataset urls are out of date
{ "avatar_url": "https://avatars.githubusercontent.com/u/4889636?v=4", "events_url": "https://api.github.com/users/ngdodd/events{/privacy}", "followers_url": "https://api.github.com/users/ngdodd/followers", "following_url": "https://api.github.com/users/ngdodd/following{/other_user}", "gists_url": "https://api.github.com/users/ngdodd/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ngdodd", "id": 4889636, "login": "ngdodd", "node_id": "MDQ6VXNlcjQ4ODk2MzY=", "organizations_url": "https://api.github.com/users/ngdodd/orgs", "received_events_url": "https://api.github.com/users/ngdodd/received_events", "repos_url": "https://api.github.com/users/ngdodd/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ngdodd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ngdodd/subscriptions", "type": "User", "url": "https://api.github.com/users/ngdodd" }
[]
closed
false
null
[]
null
[]
2020-11-05T19:40:19Z
2020-11-10T14:02:51Z
2020-11-10T14:02:51Z
CONTRIBUTOR
null
null
null
<h3>Code</h3> ``` from datasets import load_dataset quail = load_dataset('quail') ``` <h3>Error</h3> ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml ``` As per [quail v1.3 commit](https://github.com/text-machine-lab/quail/commit/506501cfa34d9ec6c042d31026ba6fea6bcec8ff) it looks like the location and suggested ordering has changed. In [https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58](https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58) the quail v1.2 datasets are being pointed to, which don't exist anymore.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/806/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/806/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/805
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/805/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/805/comments
https://api.github.com/repos/huggingface/datasets/issues/805/events
https://github.com/huggingface/datasets/issues/805
737,019,360
MDU6SXNzdWU3MzcwMTkzNjA=
805
On loading a metric from datasets, I get the following error
{ "avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4", "events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}", "followers_url": "https://api.github.com/users/laibamehnaz/followers", "following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}", "gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/laibamehnaz", "id": 36405283, "login": "laibamehnaz", "node_id": "MDQ6VXNlcjM2NDA1Mjgz", "organizations_url": "https://api.github.com/users/laibamehnaz/orgs", "received_events_url": "https://api.github.com/users/laibamehnaz/received_events", "repos_url": "https://api.github.com/users/laibamehnaz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions", "type": "User", "url": "https://api.github.com/users/laibamehnaz" }
[]
closed
false
null
[]
null
[]
2020-11-05T15:14:38Z
2022-02-14T15:32:59Z
2022-02-14T15:32:59Z
NONE
null
null
null
`from datasets import load_metric` `metric = load_metric('bleurt')` Traceback (excerpt, lines 210-212): `class _ArrayXDExtensionType(pa.PyExtensionType):` ... `ndims: int = None` ... `AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'` Any help will be appreciated. Thank you.
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/805/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/805/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/804/comments
https://api.github.com/repos/huggingface/datasets/issues/804/events
https://github.com/huggingface/datasets/issues/804
736,858,507
MDU6SXNzdWU3MzY4NTg1MDc=
804
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')
{ "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PaulLerner", "id": 25532159, "login": "PaulLerner", "node_id": "MDQ6VXNlcjI1NTMyMTU5", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "repos_url": "https://api.github.com/users/PaulLerner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "type": "User", "url": "https://api.github.com/users/PaulLerner" }
[]
closed
false
null
[]
null
[]
2020-11-05T11:38:01Z
2020-11-09T14:14:59Z
2020-11-09T14:14:58Z
CONTRIBUTOR
null
null
null
# The issue It's all in the title, it appears to be fine on the train and validation sets. Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ? # How to reproduce ```py from datasets import load_dataset kilt_tasks = load_dataset("kilt_tasks") trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext') # both in "kilt_tasks" In [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']]) Out[18]: False # and "trivia_qa" In [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']]) Out[13]: True # appears to be fine on the train and validation sets. In [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']]) Out[14]: False In [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']]) Out[15]: False In [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']]) Out[16]: True In [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']]) Out[17]: True ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/804/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/804/timeline
null
completed
true
https://api.github.com/repos/huggingface/datasets/issues/803
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/803/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/803/comments
https://api.github.com/repos/huggingface/datasets/issues/803/events
https://github.com/huggingface/datasets/pull/803
736,818,917
MDExOlB1bGxSZXF1ZXN0NTE1OTY1ODE2
803
fix: typos in tutorial to map KILT and TriviaQA
{ "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PaulLerner", "id": 25532159, "login": "PaulLerner", "node_id": "MDQ6VXNlcjI1NTMyMTU5", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "repos_url": "https://api.github.com/users/PaulLerner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "type": "User", "url": "https://api.github.com/users/PaulLerner" }
[]
closed
false
null
[]
null
[]
2020-11-05T10:42:00Z
2020-11-10T09:08:07Z
2020-11-10T09:08:07Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/803.diff", "html_url": "https://github.com/huggingface/datasets/pull/803", "merged_at": "2020-11-10T09:08:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/803.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/803" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/803/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/803/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/802
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/802/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/802/comments
https://api.github.com/repos/huggingface/datasets/issues/802/events
https://github.com/huggingface/datasets/pull/802
736,296,343
MDExOlB1bGxSZXF1ZXN0NTE1NTM1MDI0
802
Add XGlue
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[]
2020-11-04T17:29:54Z
2022-04-28T08:15:36Z
2020-12-01T15:58:27Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/802.diff", "html_url": "https://github.com/huggingface/datasets/pull/802", "merged_at": "2020-12-01T15:58:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/802.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/802" }
Dataset is ready to merge. An important feature of this dataset is that for each config the train data is in English, while dev and test data are in multiple languages. Therefore, @lhoestq and I decided offline that we will give the dataset the following API, *e.g.* for ```python load_dataset("xglue", "ner") # would give the splits 'train', 'validation.en', 'test.en', 'validation.es', 'test.es', ... ``` => therefore one can load a single language test via ```python load_dataset("xglue", "ner", split="test.es") ``` Close #749.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/802/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/802/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/801
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/801/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/801/comments
https://api.github.com/repos/huggingface/datasets/issues/801/events
https://github.com/huggingface/datasets/issues/801
735,790,876
MDU6SXNzdWU3MzU3OTA4NzY=
801
How to join two datasets?
{ "avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4", "events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}", "followers_url": "https://api.github.com/users/shangw-nvidia/followers", "following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}", "gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shangw-nvidia", "id": 66387198, "login": "shangw-nvidia", "node_id": "MDQ6VXNlcjY2Mzg3MTk4", "organizations_url": "https://api.github.com/users/shangw-nvidia/orgs", "received_events_url": "https://api.github.com/users/shangw-nvidia/received_events", "repos_url": "https://api.github.com/users/shangw-nvidia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions", "type": "User", "url": "https://api.github.com/users/shangw-nvidia" }
[]
closed
false
null
[]
null
[]
2020-11-04T03:53:11Z
2020-12-23T14:02:58Z
2020-12-23T14:02:58Z
NONE
null
null
null
Hi, I'm wondering if it's possible to join two (preprocessed) datasets that have the same number of rows but different labels? I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way, using `.map()`, to create a pair where the second sentence is **not** the sentence that follows the first one (i.e., it comes from a different article). Thanks!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/801/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/801/timeline
null
completed
true
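For issue 801 above, one way to pair each sentence with a sentence drawn from a different row (and hence, generally, a different article) is to shuffle the row indices and look the partner up with `with_indices=True` inside `.map()`. A minimal sketch under the assumption that the processed dataset has a `"text"` column; the dataset, column, and variable names are illustrative:

```python
import random

from datasets import load_dataset

dataset = load_dataset("ag_news", split="train[:1000]")  # placeholder dataset

# A random permutation of row indices, used to pick the "other" sentence.
rng = random.Random(42)
partner_idx = list(range(len(dataset)))
rng.shuffle(partner_idx)

def add_pair(example, idx):
    # Pair the current text with the text at a shuffled position,
    # so the second sentence usually comes from a different row/article.
    example["paired_text"] = dataset[partner_idx[idx]]["text"]
    return example

paired = dataset.map(add_pair, with_indices=True)
```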
https://api.github.com/repos/huggingface/datasets/issues/800
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/800/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/800/comments
https://api.github.com/repos/huggingface/datasets/issues/800/events
https://github.com/huggingface/datasets/pull/800
735,772,775
MDExOlB1bGxSZXF1ZXN0NTE1MTAyMjc3
800
Update loading_metrics.rst
{ "avatar_url": "https://avatars.githubusercontent.com/u/5400513?v=4", "events_url": "https://api.github.com/users/ayushidalmia/events{/privacy}", "followers_url": "https://api.github.com/users/ayushidalmia/followers", "following_url": "https://api.github.com/users/ayushidalmia/following{/other_user}", "gists_url": "https://api.github.com/users/ayushidalmia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ayushidalmia", "id": 5400513, "login": "ayushidalmia", "node_id": "MDQ6VXNlcjU0MDA1MTM=", "organizations_url": "https://api.github.com/users/ayushidalmia/orgs", "received_events_url": "https://api.github.com/users/ayushidalmia/received_events", "repos_url": "https://api.github.com/users/ayushidalmia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ayushidalmia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayushidalmia/subscriptions", "type": "User", "url": "https://api.github.com/users/ayushidalmia" }
[]
closed
false
null
[]
null
[]
2020-11-04T02:57:11Z
2020-11-11T15:28:32Z
2020-11-11T15:28:32Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/800.diff", "html_url": "https://github.com/huggingface/datasets/pull/800", "merged_at": "2020-11-11T15:28:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/800.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/800" }
Minor bug
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/800/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/800/timeline
null
null
true