url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64) | updated_at (int64) | closed_at (int64) | author_association (string) | active_lock_reason (null) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | draft (bool) | pull_request (dict) | is_pull_request (bool) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/902/comments | https://api.github.com/repos/huggingface/datasets/issues/902/events | https://github.com/huggingface/datasets/pull/902 | 752,345,739 | MDExOlB1bGxSZXF1ZXN0NTI4Njg3NTYw | 902 | Follow cache_dir parameter to gcs downloader | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,492,926,000 | 1,606,690,134,000 | 1,606,690,133,000 | MEMBER | null | As noticed in #900 the cache_dir parameter was not followed to the downloader in the case of an already processed dataset hosted on our google storage (one of them is natural questions).
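For context, a sketch of the user-facing call this affects (the cache path here is just illustrative):
```python
import datasets

# With this change, the files fetched from the GCS mirror of the already
# processed dataset should end up under the user-provided cache_dir
# instead of the default ~/.cache location.
ds = datasets.load_dataset("natural_questions", cache_dir="./data")
```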
Fix #900 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/902/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/902",
"html_url": "https://github.com/huggingface/datasets/pull/902",
"diff_url": "https://github.com/huggingface/datasets/pull/902.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/902.patch",
"merged_at": 1606690133000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/901/comments | https://api.github.com/repos/huggingface/datasets/issues/901/events | https://github.com/huggingface/datasets/pull/901 | 752,233,851 | MDExOlB1bGxSZXF1ZXN0NTI4NTk3NDU5 | 901 | Addition of Nl2Bash Dataset | {
"login": "reshinthadithyan",
"id": 36307201,
"node_id": "MDQ6VXNlcjM2MzA3MjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/36307201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reshinthadithyan",
"html_url": "https://github.com/reshinthadithyan",
"followers_url": "https://api.github.com/users/reshinthadithyan/followers",
"following_url": "https://api.github.com/users/reshinthadithyan/following{/other_user}",
"gists_url": "https://api.github.com/users/reshinthadithyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reshinthadithyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reshinthadithyan/subscriptions",
"organizations_url": "https://api.github.com/users/reshinthadithyan/orgs",
"repos_url": "https://api.github.com/users/reshinthadithyan/repos",
"events_url": "https://api.github.com/users/reshinthadithyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/reshinthadithyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Hello, thanks. I had a talk with the dataset authors, found out that the data now is obsolete and they'll get a stable version soon. So temporality closing the PR.\r\n Although I have a question, What should _id_ be in the return statement? Should that be something like a start index (or) the type of split will do? Thanks. ",
"@reshinthadithyan we should hold off on this for a couple of weeks till NeurIPS concludes. The [NLC2CMD](http://nlc2cmd.us-east.mybluemix.net/) data will be out then; which includes a cleaner version of this NL2Bash data. The older data is sort of obsolete now. ",
"Ah nvm you already commented ๐ "
] | 1,606,481,635,000 | 1,606,673,365,000 | 1,606,673,331,000 | CONTRIBUTOR | null | ## Overview
The NL2Bash data contains over 10,000 instances of Linux shell commands and their corresponding natural language descriptions provided by experts, from the Tellina system. The dataset features 100+ commonly used shell utilities.
## Footnotes
This dataset marks the first machine-learning-on-source-code dataset in the `datasets` module. It should be very useful, as a lot of research in this direction involves Transformer-based models.
Thanks.
### Reference Links
> Paper Link = https://arxiv.org/pdf/1802.08979.pdf
> Github Link = https://github.com/TellinaTool/nl2bash
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/901/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/901",
"html_url": "https://github.com/huggingface/datasets/pull/901",
"diff_url": "https://github.com/huggingface/datasets/pull/901.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/901.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/900 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/900/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/900/comments | https://api.github.com/repos/huggingface/datasets/issues/900/events | https://github.com/huggingface/datasets/issues/900 | 752,214,066 | MDU6SXNzdWU3NTIyMTQwNjY= | 900 | datasets.load_dataset() custom caching directory bug | {
"login": "SapirWeissbuch",
"id": 44585792,
"node_id": "MDQ6VXNlcjQ0NTg1Nzky",
"avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SapirWeissbuch",
"html_url": "https://github.com/SapirWeissbuch",
"followers_url": "https://api.github.com/users/SapirWeissbuch/followers",
"following_url": "https://api.github.com/users/SapirWeissbuch/following{/other_user}",
"gists_url": "https://api.github.com/users/SapirWeissbuch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SapirWeissbuch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SapirWeissbuch/subscriptions",
"organizations_url": "https://api.github.com/users/SapirWeissbuch/orgs",
"repos_url": "https://api.github.com/users/SapirWeissbuch/repos",
"events_url": "https://api.github.com/users/SapirWeissbuch/events{/privacy}",
"received_events_url": "https://api.github.com/users/SapirWeissbuch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting ! I'm looking into it."
] | 1,606,479,533,000 | 1,606,690,133,000 | 1,606,690,133,000 | NONE | null | Hello,
I'm having an issue loading a dataset with a custom `cache_dir`. Despite specifying the output dir, it is still downloaded to `~/.cache`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```python
import datasets
from pathlib import Path
validation_dataset = datasets.load_dataset("natural_questions", split="validation[:5%]", cache_dir=Path("./data"))
```
## The output:
* The dataset is downloaded to my home directory's `.cache`
* A new empty directory named `natural_questions` is created in the specified directory `./data`
* `tree data` in the shell outputs:
```
data
└── natural_questions
    └── default
        └── 0.0.2
3 directories, 0 files
```
The output:
```
Downloading: 8.61kB [00:00, 5.11MB/s]
Downloading: 13.6kB [00:00, 7.89MB/s]
Using custom data configuration default
Downloading and preparing dataset natural_questions/default (download: 41.97 GiB, generated: 92.95 GiB, post-processed: Unknown size, total: 134.92 GiB) to ./data/natural_questions/default/0.0.2/867dbbaf9137c1b83ecb19f5eb80559e1002ea26e702c6b919cfa81a17a8c531...
Downloading: 100%|██████████| 13.6k/13.6k [00:00<00:00, 1.51MB/s]
Downloading:   7%|███       | 6.70G/97.4G [03:46<1:37:05, 15.6MB/s]
```
## Expected behaviour:
The dataset "Natural Questions" should be downloaded to the directory "./data"
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/900/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/899/comments | https://api.github.com/repos/huggingface/datasets/issues/899/events | https://github.com/huggingface/datasets/pull/899 | 752,191,227 | MDExOlB1bGxSZXF1ZXN0NTI4NTYzNzYz | 899 | Allow arrow based builder in auto dummy data generation | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,477,178,000 | 1,606,483,809,000 | 1,606,483,808,000 | MEMBER | null | Following #898 I added support for arrow based builder for the auto dummy data generator | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/899/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/899",
"html_url": "https://github.com/huggingface/datasets/pull/899",
"diff_url": "https://github.com/huggingface/datasets/pull/899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/899.patch",
"merged_at": 1606483808000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/898/comments | https://api.github.com/repos/huggingface/datasets/issues/898/events | https://github.com/huggingface/datasets/pull/898 | 752,148,284 | MDExOlB1bGxSZXF1ZXN0NTI4NTI4MDY1 | 898 | Adding SQA dataset | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"This dataset seems to have around 1000 configs. Therefore when creating the dummy data we end up with hundreds of MB of dummy data which we don't want to add in the repo.\r\nLet's make this PR on hold for now and find a solution after the sprint of next week",
"Closing in favor of #1566 "
] | 1,606,472,958,000 | 1,608,036,880,000 | 1,608,036,859,000 | MEMBER | null | As discussed in #880
Seems like automatic dummy-data generation doesn't work if the builder is a `ArrowBasedBuilder`, do you think you could take a look @lhoestq ? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/898/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/898",
"html_url": "https://github.com/huggingface/datasets/pull/898",
"diff_url": "https://github.com/huggingface/datasets/pull/898.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/898.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/897/comments | https://api.github.com/repos/huggingface/datasets/issues/897/events | https://github.com/huggingface/datasets/issues/897 | 752,100,256 | MDU6SXNzdWU3NTIxMDAyNTY= | 897 | Dataset viewer issues | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Thanks for reporting !\r\ncc @srush for the empty feature list issue and the encoding issue\r\ncc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ?",
"Ok, I redirected on our side to a new url. โ ๏ธ @srush: if you update the Streamlit config too to `/datasets/viewer`, let me know because I'll need to change our nginx config at the same time",
"9",
"โโ โโโโ โโโโ โโ ",
"โโ โโโโ โโโโ โโ "
] | 1,606,468,474,000 | 1,635,671,521,000 | 1,635,671,521,000 | CONTRIBUTOR | null | I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user
```bash
IndexError: list index out of range
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 316, in <module>
st.table(style)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method
return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta
rv = marshall_element(msg.delta.new_element)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element
return method(dg, element, *args, **kwargs)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table
data_frame_proto.marshall_data_frame(data, element.table)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame
_marshall_styles(proto_df.style, df, styler)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles
translated_style = styler._translate()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate
* (len(clabels[0]) - len(hidden_columns))
```
- there seems to be **an encoding issue** in the default view: the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co/nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then a syntax highlighter is used and the special characters are displayed correctly.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/897/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/896 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/896/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/896/comments | https://api.github.com/repos/huggingface/datasets/issues/896/events | https://github.com/huggingface/datasets/pull/896 | 751,834,265 | MDExOlB1bGxSZXF1ZXN0NTI4MjcyMjc0 | 896 | Add template and documentation for dataset card | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,426,225,000 | 1,606,525,815,000 | 1,606,525,815,000 | MEMBER | null | This PR adds a template for dataset cards, as well as a guide to filling out the template and a completed example for the ELI5 dataset, building on the work of @mcmillanmajora
New pull requests adding datasets should now have a README.md file which serves both to hold the tags we will have to index the datasets and as a data statement.
The template is designed to be pretty extensive. The idea is that the person who uploads the dataset should put in all the basic information (at least the Dataset Description section) and whatever else they feel comfortable adding and leave the `[More Information Needed]` annotation everywhere else as a placeholder.
We will then work with @mcmillanmajora to involve the data authors more directly in filling out the remaining information.
Direct links to:
- [Documentation](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/templates/README_guide.md)
- [Empty template](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/templates/README.md)
- [ELI5 example](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/datasets/eli5/README.md) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/896/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/896",
"html_url": "https://github.com/huggingface/datasets/pull/896",
"diff_url": "https://github.com/huggingface/datasets/pull/896.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/896.patch",
"merged_at": 1606525814000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/895/comments | https://api.github.com/repos/huggingface/datasets/issues/895/events | https://github.com/huggingface/datasets/pull/895 | 751,782,295 | MDExOlB1bGxSZXF1ZXN0NTI4MjMyMjU3 | 895 | Better messages regarding split naming | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,416,946,000 | 1,606,483,860,000 | 1,606,483,859,000 | MEMBER | null | I made explicit the error message when a bad split name is used.
Also I wanted to allow the `-` symbol for split names but actually this symbol is used to name the arrow files `{dataset_name}-{dataset_split}.arrow` so we should probably keep it this way, i.e. not allowing the `-` symbol in split names. Moreover in the future we might want to use `{dataset_name}-{dataset_split}-{shard_id}_of_{n_shards}.arrow` and reuse the `-` symbol. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/895/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/895",
"html_url": "https://github.com/huggingface/datasets/pull/895",
"diff_url": "https://github.com/huggingface/datasets/pull/895.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/895.patch",
"merged_at": 1606483859000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/894/comments | https://api.github.com/repos/huggingface/datasets/issues/894/events | https://github.com/huggingface/datasets/pull/894 | 751,734,905 | MDExOlB1bGxSZXF1ZXN0NTI4MTkzNzQy | 894 | Allow several tags sets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Closing since we don't need to update the tags of those three datasets (for each one of them there is only one tag set)"
] | 1,606,410,253,000 | 1,620,239,057,000 | 1,606,508,149,000 | MEMBER | null | Hi !
Currently we have three dataset cards: snli, cnn_dailymail and allocine.
For each one of those datasets a set of tags is defined. The set of tags contains fields like `multilinguality`, `task_ids`, `licenses`, etc.
For certain datasets, `glue` for example, there exist several configurations: `sst2`, `mnli`, etc. Therefore we should define one set of tags per configuration. However, the current format used for tags only supports one set of tags per dataset.
In this PR I propose a simple change in the yaml format used for tags to allow for several sets of tags.
Let me know what you think, especially @julien-c let me know if it's good for you since it's going to be parsed by moon-landing | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/894/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/894/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/894",
"html_url": "https://github.com/huggingface/datasets/pull/894",
"diff_url": "https://github.com/huggingface/datasets/pull/894.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/894.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/893/comments | https://api.github.com/repos/huggingface/datasets/issues/893/events | https://github.com/huggingface/datasets/pull/893 | 751,703,696 | MDExOlB1bGxSZXF1ZXN0NTI4MTY4NDgx | 893 | add metrec: arabic poetry dataset | {
"login": "zaidalyafeai",
"id": 15667714,
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zaidalyafeai",
"html_url": "https://github.com/zaidalyafeai",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions",
"organizations_url": "https://api.github.com/users/zaidalyafeai/orgs",
"repos_url": "https://api.github.com/users/zaidalyafeai/repos",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"received_events_url": "https://api.github.com/users/zaidalyafeai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"@lhoestq removed prints and added the dataset card. ",
"@lhoestq, I want to add other datasets as well. I am not sure if it is possible to do so with the same branch. ",
"Hi @zaidalyafeai, really excited to get more Arabic coverage in the lib, thanks for your contribution!\r\n\r\nCouple of last comments:\r\n- this PR seems to modify some files that are unrelated to your dataset. Could you rebase from master? It should take care of that.\r\n- The dataset card is a good start! Can you describe the task in a few words and add more information in the Data Structure part, including listing and describing the fields? Also, if you don't know how to fill out a paragraph, or if you have some information but think more would be beneficial, please leave `[More Information Needed]` instead of `[N/A]`",
"> Hi @zaidalyafeai, really excited to get more Arabic coverage in the lib, thanks for your contribution!\r\n> \r\n> Couple of last comments:\r\n> \r\n> * this PR seems to modify some files that are unrelated to your dataset. Could you rebase from master? It should take care of that.\r\n> * The dataset card is a good start! Can you describe the task in a few words and add more information in the Data Structure part, including listing and describing the fields? Also, if you don't know how to fill out a paragraph, or if you have some information but think more would be beneficial, please leave `[More Information Needed]` instead of `[N/A]`\r\n\r\nI have no idea how some other files changed. I tried to rebase and push but this created some errors. I had to run the command \r\n`git push -u --force origin add-metrec-dataset` which might cause some problems. ",
"Feel free to create another branch/another PR without all the other changes",
"@yjernite can you explain which other files are changed because of the PR ? https://github.com/huggingface/datasets/pull/893/files only shows files related to the dataset. ",
"Right ! github is nice with us today :)",
"Looks like this one is ready to merge, thanks @zaidalyafeai !",
"@lhoestq thanks for the merge. I am not a GitHub geek. I already have another dataset to add. I'm not sure how to add another given my forked repo. Do I follow the same steps with a different checkout name ?",
"If you've followed the instructions in here : https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#start-by-preparing-your-environment\r\n\r\n(especially point 2. and the command `git remote add upstream ....`)\r\n\r\nThen you can try\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit checkout -b add-<my-new-dataset-name>\r\n```"
] | 1,606,407,016,000 | 1,606,839,895,000 | 1,606,835,707,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/893/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/893",
"html_url": "https://github.com/huggingface/datasets/pull/893",
"diff_url": "https://github.com/huggingface/datasets/pull/893.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/893.patch",
"merged_at": 1606835707000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/892/comments | https://api.github.com/repos/huggingface/datasets/issues/892/events | https://github.com/huggingface/datasets/pull/892 | 751,658,262 | MDExOlB1bGxSZXF1ZXN0NTI4MTMxNTE1 | 892 | Add a few datasets of reference in the documentation | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Looks good to me. Do we also support TSV in this helper (explain if it should be text or CSV) and in the dummy-data creator?",
"snli is basically based on tsv files (but named as .txt) and it is in the list of datasets of reference.\r\nThe dummy data creator supports tsv",
"merging this one.\r\nIf you think of other datasets of reference to add we can still add them later"
] | 1,606,402,959,000 | 1,606,500,525,000 | 1,606,500,524,000 | MEMBER | null | I started making a small list of various datasets of reference in the documentation.
Since many datasets share a lot in common, I think it's good to have a list of dataset scripts to get some inspiration from.
Let me know what you think, and if you have ideas of other datasets that we may add to this list, please let me know. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/892/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/892",
"html_url": "https://github.com/huggingface/datasets/pull/892",
"diff_url": "https://github.com/huggingface/datasets/pull/892.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/892.patch",
"merged_at": 1606500524000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/891/comments | https://api.github.com/repos/huggingface/datasets/issues/891/events | https://github.com/huggingface/datasets/pull/891 | 751,576,869 | MDExOlB1bGxSZXF1ZXN0NTI4MDY1MTQ3 | 891 | gitignore .python-version | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,395,958,000 | 1,606,397,307,000 | 1,606,397,306,000 | MEMBER | null | ignore `.python-version` added by `pyenv` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/891/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/891",
"html_url": "https://github.com/huggingface/datasets/pull/891",
"diff_url": "https://github.com/huggingface/datasets/pull/891.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/891.patch",
"merged_at": 1606397306000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/890 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/890/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/890/comments | https://api.github.com/repos/huggingface/datasets/issues/890/events | https://github.com/huggingface/datasets/pull/890 | 751,534,050 | MDExOlB1bGxSZXF1ZXN0NTI4MDI5NjA3 | 890 | Add LER | {
"login": "JoelNiklaus",
"id": 3775944,
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoelNiklaus",
"html_url": "https://github.com/JoelNiklaus",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Thanks for the comments. I addressed them and pushed again.\r\nWhen I run \"make quality\" I get the following error but I don't know how to resolve it or what the problem ist respectively:\r\nwould reformat /Users/joelniklaus/NextCloud/PhDJoelNiklaus/Code/datasets/datasets/ler/ler.py\r\nOh no! ๐ฅ ๐ ๐ฅ\r\n1 file would be reformatted, 257 files would be left unchanged.\r\nmake: *** [quality] Error 1\r\n",
"Awesome thanks :)\r\nTo automatically format the python files you can run `make style`",
"I did that now. But still getting the following error:\r\nblack --check --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! โจ ๐ฐ โจ\r\n258 files would be left unchanged.\r\nisort --check-only tests src benchmarks datasets metrics\r\nflake8 tests src benchmarks datasets metrics\r\ndatasets/ler/ler.py:46:96: W291 trailing whitespace\r\ndatasets/ler/ler.py:47:68: W291 trailing whitespace\r\ndatasets/ler/ler.py:48:102: W291 trailing whitespace\r\ndatasets/ler/ler.py:49:112: W291 trailing whitespace\r\ndatasets/ler/ler.py:50:92: W291 trailing whitespace\r\ndatasets/ler/ler.py:51:116: W291 trailing whitespace\r\ndatasets/ler/ler.py:52:84: W291 trailing whitespace\r\nmake: *** [quality] Error 1\r\n\r\nHowever: When I look at the file I don't see any trailing whitespace",
"maybe a bug with flake8 ? could you try to update it ? which version do you have ?",
"This is my flake8 version: 3.7.9 (mccabe: 0.6.1, pycodestyle: 2.5.0, pyflakes: 2.1.1) CPython 3.8.5 on Darwin\r\n",
"Now I updated to: 3.8.4 (mccabe: 0.6.1, pycodestyle: 2.6.0, pyflakes: 2.2.0) CPython 3.8.5 on Darwin\r\n\r\nAnd now I even get additional errors:\r\nblack --check --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! โจ ๐ฐ โจ\r\n258 files would be left unchanged.\r\nisort --check-only tests src benchmarks datasets metrics\r\nflake8 tests src benchmarks datasets metrics\r\ndatasets/polyglot_ner/polyglot_ner.py:123:64: F541 f-string is missing placeholders\r\ndatasets/ler/ler.py:46:96: W291 trailing whitespace\r\ndatasets/ler/ler.py:47:68: W291 trailing whitespace\r\ndatasets/ler/ler.py:48:102: W291 trailing whitespace\r\ndatasets/ler/ler.py:49:112: W291 trailing whitespace\r\ndatasets/ler/ler.py:50:92: W291 trailing whitespace\r\ndatasets/ler/ler.py:51:116: W291 trailing whitespace\r\ndatasets/ler/ler.py:52:84: W291 trailing whitespace\r\ndatasets/math_dataset/math_dataset.py:233:25: E741 ambiguous variable name 'l'\r\nmetrics/coval/coval.py:236:31: F541 f-string is missing placeholders\r\nmake: *** [quality] Error 1\r\n\r\nI do this on macOS Catalina 10.15.7 in case this matters",
"Code quality test now passes, thanks :) \r\n\r\nTo fix the other tests failing I think you can just rebase from master.\r\nAlso make sure that the dummy data test passes with\r\n```python\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_ler\r\n```",
"I will close this PR because abishek did the same better (https://github.com/huggingface/datasets/pull/944)",
"Sorry you had to close your PR ! It looks like this week's sprint doesn't always make it easy to see what's being added/what's already added. \r\nThank you for contributing to the library. You did a great job on adding LER so feel free to add other ones that you would like to see in the library, it will be a pleasure to review"
] | 1,606,391,903,000 | 1,606,829,615,000 | 1,606,829,176,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/890/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/890",
"html_url": "https://github.com/huggingface/datasets/pull/890",
"diff_url": "https://github.com/huggingface/datasets/pull/890.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/890.patch",
"merged_at": null
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/889 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/889/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/889/comments | https://api.github.com/repos/huggingface/datasets/issues/889/events | https://github.com/huggingface/datasets/pull/889 | 751,115,691 | MDExOlB1bGxSZXF1ZXN0NTI3NjkwODE2 | 889 | Optional per-dataset default config name | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"I like the idea ! And the approach is right imo\r\n\r\nNote that by changing this we will have to add a way for users to get the config lists of a dataset. In the current user workflow, the user could see the list of the config when the missing config error is raised but now it won't be the case because of the default config.",
"Maybe let's add a test in the test_builder.py test script ?",
"@lhoestq Okay great, I added a test as well as two new inspect functions: `get_dataset_config_names` and `get_dataset_infos` (the latter is something I've been wanting anyway). As a quick hack, you can also just pass a random config name (e.g. an empty string) to `load_dataset` to get the config names in the error msg as before. Also added a couple paragraphs to the adding new datasets doc.\r\n\r\nI'll send a separate PR incorporating this in existing datasets so we can get this merged before our sprint on Monday.\r\n\r\nAny ideas on the failing tests? I'm having trouble making sense of it. **Edit**: nvm, it was master."
] | 1,606,338,150,000 | 1,606,757,253,000 | 1,606,757,247,000 | CONTRIBUTOR | null | This PR adds a `DEFAULT_CONFIG_NAME` class attribute to `DatasetBuilder`. This allows a dataset to have a specified default config name when a dataset has more than one config but the user does not specify it. For example, after defining `DEFAULT_CONFIG_NAME = "combined"` in PolyglotNER, a user can now do the following:
```python
ds = load_dataset("polyglot_ner")
```
which is equivalent to,
```python
ds = load_dataset("polyglot_ner", "combined")
```
In effect (for this particular dataset configuration), this means that if the user doesn't specify a language, they are given the combined dataset including all languages.
Since it doesn't always make sense to have a default config, this feature is opt-in. If `DEFAULT_CONFIG_NAME` is not defined and a user does not pass a config for a dataset with multiple configs available, a ValueError is raised like usual.
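A minimal sketch of what opting in looks like in a dataset script (the builder below is illustrative and omits the usual `_info`/`_split_generators`/`_generate_examples` methods):
```python
import datasets


class PolyglotNer(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="combined", description="all languages combined"),
        datasets.BuilderConfig(name="en", description="English only"),
    ]
    # Used when load_dataset("polyglot_ner") is called without a config name.
    DEFAULT_CONFIG_NAME = "combined"
```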
Let me know what you think about this approach @lhoestq @thomwolf and I'll add some documentation and define a default for some of our existing datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/889/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/889",
"html_url": "https://github.com/huggingface/datasets/pull/889",
"diff_url": "https://github.com/huggingface/datasets/pull/889.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/889.patch",
"merged_at": 1606757247000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/888 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/888/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/888/comments | https://api.github.com/repos/huggingface/datasets/issues/888/events | https://github.com/huggingface/datasets/issues/888 | 750,944,422 | MDU6SXNzdWU3NTA5NDQ0MjI= | 888 | Nested lists are zipped unexpectedly | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Yes following the Tensorflow Datasets convention, objects with type `Sequence of a Dict` are actually stored as a `dictionary of lists`.\r\nSee the [documentation](https://huggingface.co/docs/datasets/features.html?highlight=features) for more details",
"Thanks.\r\nThis is a bit (very) confusing, but I guess if its intended, I'll just work with it as if its how my data was originally structured :) \r\n"
] | 1,606,320,466,000 | 1,606,325,439,000 | 1,606,325,439,000 | CONTRIBUTOR | null | I might misunderstand something, but I expect that if I define:
```python
"top": datasets.features.Sequence({
"middle": datasets.features.Sequence({
"bottom": datasets.Value("int32")
})
})
```
And I then create an example:
```python
yield 1, {
"top": [{
"middle": [
{"bottom": 1},
{"bottom": 2}
]
}]
}
```
I then load my dataset:
```python
train = load_dataset("my dataset")["train"]
```
and expect to be able to access `data[0]["top"][0]["middle"][0]`.
That is not the case. Here is `data[0]` as JSON:
```json
{"top": {"middle": [{"bottom": [1, 2]}]}}
```
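(As an aside: with this zipped, dict-of-lists layout the nested values can still be reached, just not with the access pattern I expected. A minimal sketch, assuming `data[0]` matches the JSON just shown; this is illustrative only, not the access pattern I originally intended:)
```python
# Hypothetical access pattern for the zipped (dict-of-lists) layout shown above
row = {"top": {"middle": [{"bottom": [1, 2]}]}}
values = row["top"]["middle"][0]["bottom"]
print(values)  # [1, 2]
```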
This is clearly different from what I input, which was:
```json
{"top": [{"middle": [{"bottom": 1},{"bottom": 2}]}]}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/888/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/887/comments | https://api.github.com/repos/huggingface/datasets/issues/887/events | https://github.com/huggingface/datasets/issues/887 | 750,868,831 | MDU6SXNzdWU3NTA4Njg4MzE= | 887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Yes right now `ArrayXD` can only be used as a column feature type, not a subtype.\r\nWith the current Arrow limitations I don't think we'll be able to make it work as a subtype, however it should be possible to allow dimensions of dynamic sizes (`Array3D(shape=(None, 137, 2), dtype=\"float32\")` for example since the [underlying arrow type](https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L236) allows dynamic sizes.\r\n\r\nFor now I'd suggest the use of nested `Sequence` types. Once we have the dynamic sizes you can update the dataset.\r\nWhat do you think ?",
"> Yes right now ArrayXD can only be used as a column feature type, not a subtype. \r\n\r\nMeaning it can't be nested under `Sequence`?\r\nIf so, for now I'll just make it a python list and make it with the nested `Sequence` type you suggested.",
"Yea unfortunately..\r\nThat's a current limitation with Arrow ExtensionTypes that can't be used in the default Arrow Array objects.\r\nWe already have an ExtensionArray that allows us to use them as column types but not for subtypes.\r\nMaybe we can extend it, I haven't experimented with that yet",
"Cool\r\nSo please consider this issue as a feature request for:\r\n```\r\nArray3D(shape=(None, 137, 2), dtype=\"float32\")\r\n```\r\n\r\nits a way to represent videos, poses, and other cool sequences",
"@lhoestq well, so sequence of sequences doesn't work either...\r\n\r\n```\r\npyarrow.lib.ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648\r\n```\r\n\r\n\r\n",
"Working with Arrow can be quite fun sometimes.\r\nYou can fix this issue by trying to reduce the writer batch size (same trick than the one used to reduce the RAM usage in https://github.com/huggingface/datasets/issues/741).\r\n\r\nLet me know if it works.\r\nI haven't investigated yet on https://github.com/huggingface/datasets/issues/741 since I was preparing this week's sprint to add datasets but this is in my priority list for early next week.",
"The batch size fix doesn't work... not for #741 and not for this dataset I'm trying (DGS corpus)\r\nLoading the DGS corpus takes 400GB of RAM, which is fine with me as my machine is large enough\r\n",
"Sorry it doesn't work. Will let you know once I fixed it",
"Hi @lhoestq , any update on dynamic sized arrays?\r\n(`Array3D(shape=(None, 137, 2), dtype=\"float32\")`)",
"Not yet, I've been pretty busy with the dataset sprint lately but this is something that's been asked several times already. So I'll definitely work on this as soon as I'm done with the sprint and with the RAM issue you reported.",
"Hi @lhoestq,\r\nAny chance you have some updates on the supporting `ArrayXD` as a subtype or support of dynamic sized arrays?\r\n\r\ne.g.:\r\n`datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype=\"float32\"))`\r\n`Array3D(shape=(None, 137, 2), dtype=\"float32\")`",
"Hi ! We haven't worked in this lately and it's not in our very short-term roadmap since it requires a bit a work to make it work with arrow. Though this will definitely be added at one point.",
"@lhoestq, thanks for the update.\r\n\r\nI actually tried to modify some piece of code to make it work. Can you please tell if I missing anything here?\r\nI think that for vast majority of cases it's enough to make first dimension of the array dynamic i.e. `shape=(None, 100, 100)`. For that, it's enough to modify class [ArrayExtensionArray](https://github.com/huggingface/datasets/blob/9ca24250ea44e7611c4dabd01ecf9415a7f0be6c/src/datasets/features.py#L397) to output list of arrays of different sizes instead of list of arrays of same sizes (current version)\r\nBelow are my modifications of this class.\r\n\r\n```\r\nclass ArrayExtensionArray(pa.ExtensionArray):\r\n def __array__(self):\r\n zero_copy_only = _is_zero_copy_only(self.storage.type)\r\n return self.to_numpy(zero_copy_only=zero_copy_only)\r\n\r\n def __getitem__(self, i):\r\n return self.storage[i]\r\n\r\n def to_numpy(self, zero_copy_only=True):\r\n storage: pa.ListArray = self.storage\r\n size = 1\r\n for i in range(self.type.ndims):\r\n size *= self.type.shape[i]\r\n storage = storage.flatten()\r\n numpy_arr = storage.to_numpy(zero_copy_only=zero_copy_only)\r\n numpy_arr = numpy_arr.reshape(len(self), *self.type.shape)\r\n return numpy_arr\r\n\r\n def to_list_of_numpy(self, zero_copy_only=True):\r\n storage: pa.ListArray = self.storage\r\n shape = self.type.shape\r\n arrays = []\r\n for dim in range(1, self.type.ndims):\r\n assert shape[dim] is not None, f\"Support only dynamic size on first dimension. Got: {shape}\"\r\n\r\n first_dim_offsets = np.array([off.as_py() for off in storage.offsets])\r\n for i in range(len(storage)):\r\n storage_el = storage[i:i+1]\r\n first_dim = first_dim_offsets[i+1] - first_dim_offsets[i]\r\n # flatten storage\r\n for dim in range(self.type.ndims):\r\n storage_el = storage_el.flatten()\r\n\r\n numpy_arr = storage_el.to_numpy(zero_copy_only=zero_copy_only)\r\n arrays.append(numpy_arr.reshape(first_dim, *shape[1:]))\r\n\r\n return arrays\r\n\r\n def to_pylist(self):\r\n zero_copy_only = _is_zero_copy_only(self.storage.type)\r\n if self.type.shape[0] is None:\r\n return self.to_list_of_numpy(zero_copy_only=zero_copy_only)\r\n else:\r\n return self.to_numpy(zero_copy_only=zero_copy_only).tolist()\r\n```\r\n\r\nI ran few tests and it works as expected. Let me know what you think.",
"Thanks for diving into this !\r\n\r\nIndeed focusing on making the first dimensions dynamic make total sense (and users could still re-order their dimensions to match this constraint).\r\nYour code looks great :) I think it can even be extended to support several dynamic dimensions if we want to.\r\n\r\nFeel free to open a PR to include these changes, then we can update our test suite to make sure it works in all use cases.\r\nIn particular I think we might need a few tweaks to allow it to be converted to pandas (though I haven't tested yet):\r\n\r\n```python\r\nfrom datasets import Dataset, Features, Array3D\r\n\r\n# this works\r\nmatrix = [[1, 0], [0, 1]]\r\nfeatures = Features({\"a\": Array3D(dtype=\"int32\", shape=(1, 2, 2))})\r\nd = Dataset.from_dict({\"a\": [[matrix], [matrix]]})\r\nprint(d.to_pandas())\r\n\r\n# this should work as well\r\nmatrix = [[1, 0], [0, 1]]\r\nfeatures = Features({\"a\": Array3D(dtype=\"int32\", shape=(None, 2, 2))})\r\nd = Dataset.from_dict({\"a\": [[matrix], [matrix] * 2]})\r\nprint(d.to_pandas())\r\n```\r\n\r\nI'll be happy to help you on this :)"
] | 1,606,314,741,000 | 1,631,207,020,000 | null | CONTRIBUTOR | null | I set up a new dataset with a sequence of arrays (really, I want an array of shape (None, 137, 2), where the first dimension is dynamic).
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/887/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/886/comments | https://api.github.com/repos/huggingface/datasets/issues/886/events | https://github.com/huggingface/datasets/pull/886 | 750,829,314 | MDExOlB1bGxSZXF1ZXN0NTI3NDU1MDU5 | 886 | Fix wikipedia custom config | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"I think this issue is still not resolve yet. Please check my comment in the following issue, thanks.\r\n[#577](https://github.com/huggingface/datasets/issues/577#issuecomment-868122769)"
] | 1,606,311,852,000 | 1,624,598,656,000 | 1,606,318,933,000 | MEMBER | null | It should be possible to use the wikipedia dataset with any `language` and `date`.
However it was not working, as noticed in #784: the custom wikipedia configurations were not enabled for some reason.
I fixed that and was able to run
```python
from datasets import load_dataset
load_dataset("./datasets/wikipedia", language="zh", date="20201120", beam_runner='DirectRunner')
```
cc @stvhuang @SamuelCahyawijaya
Fix #784 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/886/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/886/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/886",
"html_url": "https://github.com/huggingface/datasets/pull/886",
"diff_url": "https://github.com/huggingface/datasets/pull/886.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/886.patch",
"merged_at": 1606318933000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/885 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/885/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/885/comments | https://api.github.com/repos/huggingface/datasets/issues/885/events | https://github.com/huggingface/datasets/issues/885 | 750,789,052 | MDU6SXNzdWU3NTA3ODkwNTI= | 885 | Very slow cold-start | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Good point!",
"Yes indeed. We can probably improve that by using lazy imports",
"#1690 added fast start-up of the library "
] | 1,606,308,478,000 | 1,610,537,485,000 | 1,610,537,485,000 | CONTRIBUTOR | null | Hi,
I expect that importing `datasets` does nothing major in the background, so the import time should be insignificant.
When I load a metric or a dataset, it's fine that it takes time.
The following ranges from 3 to 9 seconds:
```
python -m timeit -n 1 -r 1 'from datasets import load_dataset'
```
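For context, a lazy-import approach (as mentioned in the comments) would look roughly like the sketch below. This only illustrates the general pattern and is an assumption on my part, not the actual change that later landed in #1690:
```python
# Hypothetical sketch: defer heavy imports until first use instead of importing
# them at `import datasets` time, so the top-level import stays cheap.
_pyarrow = None

def _get_pyarrow():
    global _pyarrow
    if _pyarrow is None:
        import pyarrow  # heavy import happens on first call, not at package import
        _pyarrow = pyarrow
    return _pyarrow
```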
edit:
sorry for the mis-tag, not sure how I added it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/885/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/885/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/884/comments | https://api.github.com/repos/huggingface/datasets/issues/884/events | https://github.com/huggingface/datasets/pull/884 | 749,862,034 | MDExOlB1bGxSZXF1ZXN0NTI2NjA5MDc1 | 884 | Auto generate dummy data | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"I took your comments into account.\r\nAlso now after compressing the dummy_data.zip file it runs a dummy data test (=make sure each split has at least 1 example using the dummy data)",
"I just tested the tool with some datasets and found out that it's not working for datasets that download files using `download_and_extract(file_url)` (where file_url is a `str`). That's because in that case the dummy_data.zip is not a folder but a single zipped file.\r\n\r\nI think we have to fix that or we can have unexpected behavior when a scripts calls `download_and_extract(file_url)` several times, since it would always point to the same dummy data file.\r\n\r\nSo I decided to change that to have a folder containing the dummy files instead but it breaks around 90 tests so I need to update 90 dummy data files to follow this scheme. I'll probably fix them tomorrow morning.\r\n\r\nWhat do you guys think ? Also cc @patrickvonplaten to make sure I understand things correctly",
"Ok I changed to use the dummy_data.zip content to be a folder even for single url calls to `dl_manager.download_and_extract`. Therefore the automatic dummy data generation tool works for most datasets now.\r\n\r\nTo avoid having to change all the old dummy_data.zip files I added backward compatiblity. \r\n\r\nThe only test failing is `tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_xcopa`\r\nIt is expected to fail since I had modify its dummy data structure that was wrong. It was causing issue with backward compatibility. It will be fixed as soon as this PR is merged"
] | 1,606,235,494,000 | 1,606,400,327,000 | 1,606,400,326,000 | MEMBER | null | When adding a new dataset to the library, dummy data creation can take some time.
To make things easier I added a command line tool that automatically generates dummy data when possible.
The tool only supports certain data file types: txt, csv, tsv, jsonl, json and xml.
Here are some examples:
```
python datasets-cli dummy_data ./datasets/snli --auto_generate
python datasets-cli dummy_data ./datasets/squad --auto_generate --json_field data
python datasets-cli dummy_data ./datasets/iwslt2017 --auto_generate --xml_tag seg --match_text_files "train*" --n_lines 15
# --xml_tag seg => each sample corresponds to a "seg" tag in the xml tree
# --match_text_files "train*" => also match text files that don't have a proper text file extension (no suffix like ".txt" for example)
# --n_lines 15 => some text files have headers so we have to use at least 15 lines
```
and here is the command usage:
```
usage: datasets-cli <command> [<args>] dummy_data [-h] [--auto_generate]
[--n_lines N_LINES]
[--json_field JSON_FIELD]
[--xml_tag XML_TAG]
[--match_text_files MATCH_TEXT_FILES]
[--keep_uncompressed]
[--cache_dir CACHE_DIR]
path_to_dataset
positional arguments:
path_to_dataset Path to the dataset (example: ./datasets/squad)
optional arguments:
-h, --help show this help message and exit
--auto_generate Try to automatically generate dummy data
--n_lines N_LINES Number of lines or samples to keep when auto-
generating dummy data
--json_field JSON_FIELD
Optional, json field to read the data from when auto-
generating dummy data. In the json data files, this
field must point to a list of samples as json objects
(ex: the 'data' field for squad-like files)
--xml_tag XML_TAG Optional, xml tag name of the samples inside the xml
files when auto-generating dummy data.
--match_text_files MATCH_TEXT_FILES
Optional, a comma separated list of file patterns that
looks for line-by-line text files other than *.txt or
*.csv. Example: --match_text_files *.label
--keep_uncompressed Don't compress the dummy data folders when auto-
generating dummy data. Useful for debugging for to do
manual adjustements before compressing.
--cache_dir CACHE_DIR
Cache directory to download and cache files when auto-
generating dummy data
```
The command generates all the necessary `dummy_data.zip` files (one per config).
How it works:
- it runs the split_generators() method of the dataset script to download the original data files
- when downloading, it records a mapping between the downloaded files and the corresponding expected dummy data file paths
- then for each data file it creates the dummy data file keeping only the first samples (the strategy depends on the type of file)
- finally it compresses the dummy data folders into dummy_data.zip files ready for dataset tests
Let me know if that makes sense or if you have ideas to improve this tool!
I also added a unit test. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/884/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/884",
"html_url": "https://github.com/huggingface/datasets/pull/884",
"diff_url": "https://github.com/huggingface/datasets/pull/884.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/884.patch",
"merged_at": 1606400326000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/883/comments | https://api.github.com/repos/huggingface/datasets/issues/883/events | https://github.com/huggingface/datasets/issues/883 | 749,750,801 | MDU6SXNzdWU3NDk3NTA4MDE= | 883 | Downloading/caching only a part of a datasets' dataset. | {
"login": "SapirWeissbuch",
"id": 44585792,
"node_id": "MDQ6VXNlcjQ0NTg1Nzky",
"avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SapirWeissbuch",
"html_url": "https://github.com/SapirWeissbuch",
"followers_url": "https://api.github.com/users/SapirWeissbuch/followers",
"following_url": "https://api.github.com/users/SapirWeissbuch/following{/other_user}",
"gists_url": "https://api.github.com/users/SapirWeissbuch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SapirWeissbuch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SapirWeissbuch/subscriptions",
"organizations_url": "https://api.github.com/users/SapirWeissbuch/orgs",
"repos_url": "https://api.github.com/users/SapirWeissbuch/repos",
"events_url": "https://api.github.com/users/SapirWeissbuch/events{/privacy}",
"received_events_url": "https://api.github.com/users/SapirWeissbuch/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Not at the moment but we could likely support this feature.",
"?",
"I think it would be a very helpful feature, because sometimes one only wants to evaluate models on the dev set, and the whole training data may be many times bigger.\r\nThis makes the task impossible with limited memory resources."
] | 1,606,227,918,000 | 1,606,485,115,000 | null | NONE | null | Hi,
I want to use the validation data *only* (of Natural Questions).
I don't want to have the whole dataset cached on my machine, just the dev set.
Is this possible? I can't find a way to do it in the docs.
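For context, what I currently run is roughly the sketch below. As far as I can tell, selecting a split this way still downloads and caches the full dataset, which is exactly what I would like to avoid:
```python
from datasets import load_dataset

# Only the validation split is requested, but (as far as I understand)
# the full Natural Questions dataset is still downloaded and cached.
nq_dev = load_dataset("natural_questions", split="validation")
```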
Thank you,
Sapir | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/883/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/882/comments | https://api.github.com/repos/huggingface/datasets/issues/882/events | https://github.com/huggingface/datasets/pull/882 | 749,662,188 | MDExOlB1bGxSZXF1ZXN0NTI2NDQyMjA2 | 882 | Update README.md | {
"login": "vaibhavad",
"id": 32997732,
"node_id": "MDQ6VXNlcjMyOTk3NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/32997732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vaibhavad",
"html_url": "https://github.com/vaibhavad",
"followers_url": "https://api.github.com/users/vaibhavad/followers",
"following_url": "https://api.github.com/users/vaibhavad/following{/other_user}",
"gists_url": "https://api.github.com/users/vaibhavad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vaibhavad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vaibhavad/subscriptions",
"organizations_url": "https://api.github.com/users/vaibhavad/orgs",
"repos_url": "https://api.github.com/users/vaibhavad/repos",
"events_url": "https://api.github.com/users/vaibhavad/events{/privacy}",
"received_events_url": "https://api.github.com/users/vaibhavad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,220,632,000 | 1,611,916,867,000 | 1,611,916,867,000 | CONTRIBUTOR | null | "no label" is "-" in the original dataset but "-1" in the Hugging Face distribution. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/882/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/882",
"html_url": "https://github.com/huggingface/datasets/pull/882",
"diff_url": "https://github.com/huggingface/datasets/pull/882.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/882.patch",
"merged_at": 1611916866000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/881/comments | https://api.github.com/repos/huggingface/datasets/issues/881/events | https://github.com/huggingface/datasets/pull/881 | 749,548,107 | MDExOlB1bGxSZXF1ZXN0NTI2MzQ5MDM2 | 881 | Use GCP download url instead of tensorflow custom download for boolq | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,211,231,000 | 1,606,212,754,000 | 1,606,212,753,000 | MEMBER | null | BoolQ is a dataset that used tf.io.gfile.copy to download the file from a GCP bucket.
It prevented the dataset from being downloaded twice because of a FileAlreadyExistsError.
Even though the error could be fixed by providing `overwrite=True` to the tf.io.gfile.copy call, I changed the script to use GCP download URLs with regular downloads instead, removing the tensorflow dependency.
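For reference, the alternative fix mentioned above would have looked roughly like the sketch below (illustrative only; it is not the change made in this PR, which drops the tensorflow dependency entirely, and the paths are assumptions):
```python
import tensorflow as tf

# Hypothetical sketch: overwrite the previously copied file instead of raising
# FileAlreadyExistsError on a second download. Paths below are illustrative.
gcs_url = "gs://boolq/train.jsonl"
local_path = "/tmp/boolq_train.jsonl"
tf.io.gfile.copy(gcs_url, local_path, overwrite=True)
```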
Fix #875 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/881/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/881",
"html_url": "https://github.com/huggingface/datasets/pull/881",
"diff_url": "https://github.com/huggingface/datasets/pull/881.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/881.patch",
"merged_at": 1606212753000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/880/comments | https://api.github.com/repos/huggingface/datasets/issues/880/events | https://github.com/huggingface/datasets/issues/880 | 748,949,606 | MDU6SXNzdWU3NDg5NDk2MDY= | 880 | Add SQA | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Iโll take this one to test the workflow for the sprint next week cc @yjernite @lhoestq ",
"@thomwolf here's a slightly adapted version of the code from the [official Tapas repository](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils.py) that is used to turn the `answer_coordinates` and `answer_texts` columns into true Python lists of tuples/strings:\r\n\r\n```\r\nimport pandas as pd\r\nimport ast\r\n\r\ndata = pd.read_csv(\"/content/sqa_data/random-split-1-dev.tsv\", sep='\\t')\r\n\r\ndef _parse_answer_coordinates(answer_coordinate_str):\r\n \"\"\"Parses the answer_coordinates of a question.\r\n Args:\r\n answer_coordinate_str: A string representation of a Python list of tuple\r\n strings.\r\n For example: \"['(1, 4)','(1, 3)', ...]\"\r\n \"\"\"\r\n\r\n try:\r\n answer_coordinates = []\r\n # make a list of strings\r\n coords = ast.literal_eval(answer_coordinate_str)\r\n # parse each string as a tuple\r\n for row_index, column_index in sorted(\r\n ast.literal_eval(coord) for coord in coords):\r\n answer_coordinates.append((row_index, column_index))\r\n except SyntaxError:\r\n raise ValueError('Unable to evaluate %s' % answer_coordinate_str)\r\n \r\n return answer_coordinates\r\n\r\n\r\ndef _parse_answer_text(answer_text):\r\n \"\"\"Populates the answer_texts field of `answer` by parsing `answer_text`.\r\n Args:\r\n answer_text: A string representation of a Python list of strings.\r\n For example: \"[u'test', u'hello', ...]\"\r\n \"\"\"\r\n try:\r\n answer = []\r\n for value in ast.literal_eval(answer_text):\r\n answer.append(value)\r\n except SyntaxError:\r\n raise ValueError('Unable to evaluate %s' % answer_text)\r\n\r\n return answer\r\n\r\ndata['answer_coordinates'] = data['answer_coordinates'].apply(lambda coords_str: _parse_answer_coordinates(coords_str))\r\ndata['answer_text'] = data['answer_text'].apply(lambda txt: _parse_answer_text(txt))\r\n```\r\n\r\nHere I'm using Pandas to read in one of the TSV files (the dev set). \r\n\r\n",
"Closing since SQA was added in #1566 "
] | 1,606,149,115,000 | 1,608,731,904,000 | 1,608,731,903,000 | NONE | null | ## Adding a Dataset
- **Name:** SQA (Sequential Question Answering) by Microsoft.
- **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total.
- **Paper:** https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/
- **Data:** https://www.microsoft.com/en-us/download/details.aspx?id=54253
- **Motivation:** currently, the [Tapas](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https://github.com/huggingface/transformers/pull/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71).
Note 1: this dataset actually consists of 2 types of files:
1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test)
2) a folder of csv files, which contain the actual tabular data
Note 2: if you download the dataset straight from the download link above, then you will see that the `answer_coordinates` and `answer_text` columns are string lists of string tuples and strings respectively, which is not ideal. It would be better to make them true Python lists of tuples and strings respectively (using `ast.literal_eval`), before uploading them to the HuggingFace hub.
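For illustration, the conversion mentioned in Note 2 could be a short sketch like the one below (the raw example values follow the format documented in the official Tapas code quoted in the comments; the variable names are my own):
```python
import ast

# Raw values as they appear in the SQA TSV files (illustrative examples)
raw_coords = "['(1, 4)', '(1, 3)']"
raw_text = "[u'test', u'hello']"

answer_coordinates = [ast.literal_eval(c) for c in ast.literal_eval(raw_coords)]
answer_text = ast.literal_eval(raw_text)
print(answer_coordinates)  # [(1, 4), (1, 3)]
print(answer_text)         # ['test', 'hello']
```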
Adding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https://github.com/ppasupat/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https://github.com/wenhuchen/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format.
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/880/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/879/comments | https://api.github.com/repos/huggingface/datasets/issues/879/events | https://github.com/huggingface/datasets/issues/879 | 748,848,847 | MDU6SXNzdWU3NDg4NDg4NDc= | 879 | boolq does not load | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Hi ! It runs on my side without issues. I tried\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"boolq\")\r\n```\r\n\r\nWhat version of datasets and tensorflow are your runnning ?\r\nAlso if you manage to get a minimal reproducible script (on google colab for example) that would be useful.",
"hey\ni do the exact same commands. for me it fails i guess might be issues with\ncaching maybe?\nthanks\nbest\nrabeeh\n\nOn Tue, Nov 24, 2020, 10:24 AM Quentin Lhoest <[email protected]>\nwrote:\n\n> Hi ! It runs on my side without issues. I tried\n>\n> from datasets import load_datasetload_dataset(\"boolq\")\n>\n> What version of datasets and tensorflow are your runnning ?\n> Also if you manage to get a minimal reproducible script (on google colab\n> for example) that would be useful.\n>\n> โ\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/879#issuecomment-732769114>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGGDR2FUMRKZTIY5CTSRN3VXANCNFSM4T7R3U6A>\n> .\n>\n",
"Could you check if it works on the master branch ?\r\nYou can use `load_dataset(\"boolq\", script_version=\"master\")` to do so.\r\nWe did some changes recently in boolq to remove the TF dependency and we changed the way the data files are downloaded in https://github.com/huggingface/datasets/pull/881"
] | 1,606,141,708,000 | 1,606,485,071,000 | null | CONTRIBUTOR | null | Hi
I am getting the following errors when trying to load boolq. Thanks for your help!
```
Traceback (most recent call last):
File "test.py", line 5, in <module>
data = AutoTask().get("boolq").get_dataset("train", n_obs=10)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset
dataset = self.load_dataset(split=split)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset
return datasets.load_dataset(self.task.name, split=split)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom
get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache
f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/879/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/878/comments | https://api.github.com/repos/huggingface/datasets/issues/878/events | https://github.com/huggingface/datasets/issues/878 | 748,621,981 | MDU6SXNzdWU3NDg2MjE5ODE= | 878 | Loading Data From S3 Path in Sagemaker | {
"login": "mahesh1amour",
"id": 42795522,
"node_id": "MDQ6VXNlcjQyNzk1NTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/42795522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mahesh1amour",
"html_url": "https://github.com/mahesh1amour",
"followers_url": "https://api.github.com/users/mahesh1amour/followers",
"following_url": "https://api.github.com/users/mahesh1amour/following{/other_user}",
"gists_url": "https://api.github.com/users/mahesh1amour/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mahesh1amour/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mahesh1amour/subscriptions",
"organizations_url": "https://api.github.com/users/mahesh1amour/orgs",
"repos_url": "https://api.github.com/users/mahesh1amour/repos",
"events_url": "https://api.github.com/users/mahesh1amour/events{/privacy}",
"received_events_url": "https://api.github.com/users/mahesh1amour/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"This would be a neat feature",
"> neat feature\r\n\r\nI dint get these clearly, can you please elaborate like how to work on these ",
"It could maybe work almost out of the box just by using `cached_path` in the text/csv/json scripts, no?",
"Thanks thomwolf and julien-c\r\n\r\nI'm still confusion on what you guys said, \r\n\r\nI have solved the problem as follows:\r\n\r\n1. read the csv file using pandas from s3 \r\n2. Convert to dictionary key as column name and values as list column data\r\n3. convert it to Dataset using \r\n`from datasets import Dataset`\r\n`train_dataset = Dataset.from_dict(train_dict)`",
"We were brainstorming around your use-case.\r\n\r\nLet's keep the issue open for now, I think this is an interesting question to think about.",
"> We were brainstorming around your use-case.\r\n> \r\n> Let's keep the issue open for now, I think this is an interesting question to think about.\r\n\r\nSure thomwolf, Thanks for your concern ",
"I agree it would be cool to have that feature. Also that's good to know that pandas supports this.\r\nFor the moment I'd suggest to first download the files locally as thom suggested and then load the dataset by providing paths to the local files",
"Don't get\n",
"Any updates on this issue?\r\nI face a similar issue. I have many parquet files in S3 and I would like to train on them. \r\nTo be honest I even face issues with only getting the last layer embedding out of them.",
"Hi dorlavie, \r\nYou can find one solution that i have mentioned above, that can help you. \r\nAnd there is one more solution also which is downloading files locally\r\n",
"> Hi dorlavie,\r\n> You can find one solution that i have mentioned above, that can help you.\r\n> And there is one more solution also which is downloading files locally\r\n\r\nmahesh1amour, thanks for the fast reply\r\n\r\nUnfortunately, in my case I can not read with pandas. The dataset is too big (50GB). \r\nIn addition, due to security concerns I am not allowed to save the data locally",
"@dorlavie could use `boto3` to download the data to your local machine and then load it with `dataset`\r\n\r\nboto3 example [documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-example-download-file.html)\r\n```python\r\nimport boto3\r\n\r\ns3 = boto3.client('s3')\r\ns3.download_file('BUCKET_NAME', 'OBJECT_NAME', 'FILE_NAME')\r\n```\r\n\r\ndatasets example [documentation](https://huggingface.co/docs/datasets/loading_datasets.html)\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files=['my_file_1.csv', 'my_file_2.csv', 'my_file_3.csv'])\r\n```\r\n",
"Thanks @philschmid for the suggestion.\r\nAs I mentioned in the previous comment, due to security issues I can not save the data locally.\r\nI need to read it from S3 and process it directly.\r\n\r\nI guess that many other people try to train / fit those models on huge datasets (e.g entire Wiki), what is the best practice in those cases?",
"If I understand correctly you're not allowed to write data on disk that you downloaded from S3 for example ?\r\nOr is it the use of the `boto3` library that is not allowed in your case ?",
"@lhoestq yes you are correct.\r\nI am not allowed to save the \"raw text\" locally - The \"raw text\" must be saved only on S3.\r\nI am allowed to save the output of any model locally. \r\nIt doesn't matter how I do it boto3/pandas/pyarrow, it is forbidden",
"@dorlavie are you using sagemaker for training too? Then you could use S3 URI, for example `s3://my-bucket/my-training-data` and pass it within the `.fit()` function when you start the sagemaker training job. Sagemaker would then download the data from s3 into the training runtime and you could load it from disk\r\n\r\n**sagemaker start training job**\r\n```python\r\npytorch_estimator.fit({'train':'s3://my-bucket/my-training-data','eval':'s3://my-bucket/my-evaluation-data'})\r\n```\r\n\r\n**in the train.py script**\r\n```python\r\nfrom datasets import load_from_disk\r\n\r\ntrain_dataset = load_from_disk(os.environ['SM_CHANNEL_TRAIN'])\r\n```\r\n\r\nI have created an example of how to use transformers and datasets with sagemaker. \r\nhttps://github.com/philschmid/huggingface-sagemaker-example/tree/main/03_huggingface_sagemaker_trainer_with_data_from_s3\r\n\r\nThe example contains a jupyter notebook `sagemaker-example.ipynb` and an `src/` folder. The sagemaker-example is a jupyter notebook that is used to create the training job on AWS Sagemaker. The `src/` folder contains the `train.py`, our training script, and `requirements.txt` for additional dependencies.\r\n\r\n"
] | 1,606,123,042,000 | 1,608,717,188,000 | null | NONE | null | In Sagemaker I'm trying to load the dataset from an S3 path as follows:
```python
from datasets import load_dataset

train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)
```
I am getting the following error:
```
algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv
```
But when I try with pandas, it is able to load from S3.
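For comparison, this is roughly what works for me with pandas (a sketch; it relies on `s3fs` being installed and AWS credentials being available to the instance):
```python
import pandas as pd

# pandas can read the same S3 object directly (it delegates to s3fs under the hood)
train_df = pd.read_csv("s3://xxxxxxxxxx/xxxxxxxxxx/train.csv")
print(train_df.shape)
```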
Does the datasets library support loading from an S3 path? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/878/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/877/comments | https://api.github.com/repos/huggingface/datasets/issues/877/events | https://github.com/huggingface/datasets/issues/877 | 748,234,438 | MDU6SXNzdWU3NDgyMzQ0Mzg= | 877 | DataLoader(datasets) become more and more slowly within iterations | {
"login": "shexuan",
"id": 25664170,
"node_id": "MDQ6VXNlcjI1NjY0MTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shexuan",
"html_url": "https://github.com/shexuan",
"followers_url": "https://api.github.com/users/shexuan/followers",
"following_url": "https://api.github.com/users/shexuan/following{/other_user}",
"gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shexuan/subscriptions",
"organizations_url": "https://api.github.com/users/shexuan/orgs",
"repos_url": "https://api.github.com/users/shexuan/repos",
"events_url": "https://api.github.com/users/shexuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/shexuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Hi ! Thanks for reporting.\r\nDo you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\nIt would be nice to know whether it comes from the dataloader or not",
"> Hi ! Thanks for reporting.\r\n> Do you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\n> It would be nice to know whether it comes from the dataloader or not\r\n\r\nI did not iter data from raw dataset, maybe I will test later. Now I iter all files directly from `open(file)`, around 20000it/s."
] | 1,606,048,870,000 | 1,606,664,712,000 | 1,606,664,712,000 | NONE | null | Hello, when I loop over my DataLoader, the loading speed becomes slower and slower!
```
from datasets import load_from_disk
from torch.utils.data import DataLoader
from tqdm import tqdm

dataset = load_from_disk(dataset_path)  # around 21,000,000 lines
lineloader = tqdm(DataLoader(dataset, batch_size=1))
for idx, line in enumerate(lineloader):
# do some thing for each line
```
In the beginning, the loading speed is around 2000it/s, but about a minute later it is much slower, just around 800it/s.
And when I set `num_workers=4` in DataLoader, the loading speed is much lower, just 130it/s.
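For reference, iterating the raw dataset object directly (without a DataLoader) would look like the sketch below; I have not benchmarked this exact snippet, but it should help isolate whether the slowdown comes from the DataLoader or from the dataset itself:
```python
from datasets import load_from_disk
from tqdm import tqdm

dataset = load_from_disk(dataset_path)  # same dataset_path as above
for line in tqdm(dataset):
    pass  # the same per-line work as above would go here
```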
Could you please help me with this problem?
Thanks a lot! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/877/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/876/comments | https://api.github.com/repos/huggingface/datasets/issues/876/events | https://github.com/huggingface/datasets/issues/876 | 748,195,104 | MDU6SXNzdWU3NDgxOTUxMDQ= | 876 | imdb dataset cannot be loaded | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"It looks like there was an issue while building the imdb dataset.\r\nCould you provide more information about your OS and the version of python and `datasets` ?\r\n\r\nAlso could you try again with \r\n```python\r\ndataset = datasets.load_dataset(\"imdb\", split=\"train\", download_mode=\"force_redownload\")\r\n```\r\nto make sure it's not a corrupted file issue ?",
"I was using version 1.1.2 and this resolved with version 1.1.3, thanks. ",
"Hello,\r\nI have the same pb with 1.8.0",
"Hi ! I just tried in 1.8.0 and it worked fine. Can you try again ? Maybe the dataset host had some issues that are fixed now",
"Hello,\r\nIt works fine now :) !\r\nThanks !"
] | 1,606,033,483,000 | 1,637,924,836,000 | 1,608,831,527,000 | CONTRIBUTOR | null | Hi
I am trying to load the imdb train dataset
`dataset = datasets.load_dataset("imdb", split="train")`
but I am getting the following errors; thanks for your help:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=32660064, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=26476338, num_examples=20316, dataset_name='imdb')}, {'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]
>>> dataset = datasets.load_dataset("imdb", split="train")
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/876/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/875/comments | https://api.github.com/repos/huggingface/datasets/issues/875/events | https://github.com/huggingface/datasets/issues/875 | 748,194,311 | MDU6SXNzdWU3NDgxOTQzMTE= | 875 | bug in boolq dataset loading | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"I just opened a PR to fix this.\r\nThanks for reporting !"
] | 1,606,033,114,000 | 1,606,212,753,000 | 1,606,212,753,000 | CONTRIBUTOR | null | Hi
I am trying to load the boolq dataset:
```
import datasets
datasets.load_dataset("boolq")
```
I am getting the following errors; thanks for your help:
```
>>> import datasets
2020-11-22 09:16:30.070332: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2020-11-22 09:16:30.070389: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
>>> datasets.load_dataset("boolq")
cahce dir /idiap/temp/rkarimi/cache_home/datasets
cahce dir /idiap/temp/rkarimi/cache_home/datasets
Using custom data configuration default
Downloading and preparing dataset boolq/default (download: 8.36 MiB, generated: 7.47 MiB, post-processed: Unknown size, total: 15.83 MiB) to /idiap/temp/rkarimi/cache_home/datasets/boolq/default/0.1.0/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11...
cahce dir /idiap/temp/rkarimi/cache_home/datasets
cahce dir /idiap/temp/rkarimi/cache_home/datasets/downloads
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 149, in download_custom
custom_download(url, path)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 516, in copy_v2
compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite)
tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/875/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/874/comments | https://api.github.com/repos/huggingface/datasets/issues/874/events | https://github.com/huggingface/datasets/issues/874 | 748,193,140 | MDU6SXNzdWU3NDgxOTMxNDA= | 874 | trec dataset unavailable | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"This was fixed in #740 \r\nCould you try to update `datasets` and try again ?",
"This has been fixed in datasets 1.1.3"
] | 1,606,032,576,000 | 1,606,485,402,000 | 1,606,485,402,000 | CONTRIBUTOR | null | Hi
when I try to load the trec dataset I am getting these errors; thanks for your help:
`datasets.load_dataset("trec", split="train")`
```
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators
dl_files = dl_manager.download_and_extract(_URLs)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/874/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/873/comments | https://api.github.com/repos/huggingface/datasets/issues/873/events | https://github.com/huggingface/datasets/issues/873 | 747,959,523 | MDU6SXNzdWU3NDc5NTk1MjM= | 873 | load_dataset('cnn_dailymail', '3.0.0') gives a 'Not a directory' error | {
"login": "vishal-burman",
"id": 19861874,
"node_id": "MDQ6VXNlcjE5ODYxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/19861874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishal-burman",
"html_url": "https://github.com/vishal-burman",
"followers_url": "https://api.github.com/users/vishal-burman/followers",
"following_url": "https://api.github.com/users/vishal-burman/following{/other_user}",
"gists_url": "https://api.github.com/users/vishal-burman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishal-burman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishal-burman/subscriptions",
"organizations_url": "https://api.github.com/users/vishal-burman/orgs",
"repos_url": "https://api.github.com/users/vishal-burman/repos",
"events_url": "https://api.github.com/users/vishal-burman/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishal-burman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"I get the same error. It was fixed some days ago, but again it appears",
"Hi @mrm8488 it's working again today without any fix so I am closing this issue.",
"I see the issue happening again today - \r\n\r\n[nltk_data] Downloading package stopwords to /root/nltk_data...\r\n[nltk_data] Package stopwords is already up-to-date!\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...\r\n\r\n---------------------------------------------------------------------------\r\n\r\nNotADirectoryError Traceback (most recent call last)\r\n\r\n<ipython-input-9-cd4bf8bea840> in <module>()\r\n 22 \r\n 23 \r\n---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')\r\n 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')\r\n 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')\r\n\r\n5 frames\r\n\r\n/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)\r\n 132 else:\r\n 133 logging.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 134 files = sorted(os.listdir(top_dir))\r\n 135 \r\n 136 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n\r\nCan someone please take a look ?",
"Sometimes happens. Try in a while",
"It is working now, thank you. "
] | 1,605,940,245,000 | 1,606,993,455,000 | 1,606,047,485,000 | NONE | null | ```
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
Stack trace:
```
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-6-2e06a8332652> in <module>()
1 from datasets import load_dataset
----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0')
5 frames
/usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
608 download_config=download_config,
609 download_mode=download_mode,
--> 610 ignore_verifications=ignore_verifications,
611 )
612
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
513 if not downloaded_from_gcs:
514 self._download_and_prepare(
--> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
516 )
517 # Sync info
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
568 split_dict = SplitDict(dataset_name=self.name)
569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
571
572 # Checksums verification
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager)
252 def _split_generators(self, dl_manager):
253 dl_paths = dl_manager.download_and_extract(_DL_URLS)
--> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN)
255 # Generate shared vocabulary
256
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split)
153 else:
154 logging.fatal("Unsupported split: %s", split)
--> 155 cnn = _find_files(dl_paths, "cnn", urls)
156 dm = _find_files(dl_paths, "dm", urls)
157 return cnn + dm
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
```
I ran the code on Google Colab. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/873/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/873/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/872/comments | https://api.github.com/repos/huggingface/datasets/issues/872/events | https://github.com/huggingface/datasets/pull/872 | 747,653,697 | MDExOlB1bGxSZXF1ZXN0NTI0ODM4NjEx | 872 | Add IndicGLUE dataset and Metrics | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/users/sumanthd17/followers",
"following_url": "https://api.github.com/users/sumanthd17/following{/other_user}",
"gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions",
"organizations_url": "https://api.github.com/users/sumanthd17/orgs",
"repos_url": "https://api.github.com/users/sumanthd17/repos",
"events_url": "https://api.github.com/users/sumanthd17/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumanthd17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"thanks ! merging now"
] | 1,605,892,174,000 | 1,606,323,671,000 | 1,606,317,967,000 | CONTRIBUTOR | null | Added IndicGLUE benchmark for evaluating models on 11 Indian Languages. The descriptions of the tasks and the corresponding paper can be found [here](https://indicnlp.ai4bharat.org/indic-glue/)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/872/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/872",
"html_url": "https://github.com/huggingface/datasets/pull/872",
"diff_url": "https://github.com/huggingface/datasets/pull/872.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/872.patch",
"merged_at": 1606317967000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/871/comments | https://api.github.com/repos/huggingface/datasets/issues/871/events | https://github.com/huggingface/datasets/issues/871 | 747,470,136 | MDU6SXNzdWU3NDc0NzAxMzY= | 871 | terminate called after throwing an instance of 'google::protobuf::FatalException' | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Loading the iwslt2017-en-nl config of iwslt2017 works fine on my side. \r\nMaybe you can open an issue on transformers as well ? And also add more details about your environment (OS, python version, version of transformers and datasets etc.)",
"closing now, figured out this is because the max length of decoder was set smaller than the input_dimensions. thanks "
] | 1,605,876,984,000 | 1,607,807,792,000 | 1,607,807,792,000 | CONTRIBUTOR | null | Hi
I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo. Could you assist me, please? Thanks.
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 63/63 [02:47<00:00, 2.18s/it][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: (index) >= (0):
run_t5_base_eval.sh: line 19: 5795 Aborted | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/871/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/870/comments | https://api.github.com/repos/huggingface/datasets/issues/870/events | https://github.com/huggingface/datasets/issues/870 | 747,021,996 | MDU6SXNzdWU3NDcwMjE5OTY= | 870 | [Feature Request] Add optional parameter in text loading script to preserve linebreaks | {
"login": "jncasey",
"id": 31020859,
"node_id": "MDQ6VXNlcjMxMDIwODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/31020859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jncasey",
"html_url": "https://github.com/jncasey",
"followers_url": "https://api.github.com/users/jncasey/followers",
"following_url": "https://api.github.com/users/jncasey/following{/other_user}",
"gists_url": "https://api.github.com/users/jncasey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jncasey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jncasey/subscriptions",
"organizations_url": "https://api.github.com/users/jncasey/orgs",
"repos_url": "https://api.github.com/users/jncasey/repos",
"events_url": "https://api.github.com/users/jncasey/events{/privacy}",
"received_events_url": "https://api.github.com/users/jncasey/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Hi ! Thanks for your message.\r\nIndeed it's a free feature we can add and that can be useful.\r\nIf you want to contribute, feel free to open a PR to add it to the text dataset script :)"
] | 1,605,829,891,000 | 1,606,484,891,000 | null | NONE | null | I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data.
I recently switched over to using the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great.
But the first time I processed all of my data into a dataset, I hadn't realized the text loader script was processing the source files line-by-line and stripping off the newlines.
Once I caught the issue, I made my own data loader by modifying one line in the default text loader (changing `batch = batch.splitlines()` to `batch = batch.splitlines(True)` inside `_generate_tables`). And so I'm all set as far as my project is concerned.
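In context, the change looks roughly like this (a heavily simplified sketch of the loader's `_generate_tables`, not its exact code; `keep_linebreaks` is only the name I'd suggest):
```
import pyarrow as pa

def _generate_tables(files, keep_linebreaks=False):  # simplified sketch, not the real signature
    for file_idx, file in enumerate(files):
        with open(file, encoding="utf-8") as f:
            batch = f.read()
        # splitlines(True) keeps the trailing "\n" on each line
        lines = batch.splitlines(keep_linebreaks)
        yield file_idx, pa.Table.from_arrays([pa.array(lines)], names=["text"])
```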
But if my use case is more general, it seems like it'd be pretty trivial to add a kwarg to the default text loader called keeplinebreaks or something, which would default to False and get passed to `splitlines()`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/870/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/869/comments | https://api.github.com/repos/huggingface/datasets/issues/869/events | https://github.com/huggingface/datasets/pull/869 | 746,495,711 | MDExOlB1bGxSZXF1ZXN0NTIzODc3OTkw | 869 | Update ner datasets infos | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
":+1: Thanks for fixing it!"
] | 1,605,785,283,000 | 1,605,795,258,000 | 1,605,795,257,000 | MEMBER | null | Update the dataset_infos.json files for changes made in #850 regarding the ner datasets feature types (and the change to ClassLabel)
I also fixed the ner types of conll2003 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/869/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/869",
"html_url": "https://github.com/huggingface/datasets/pull/869",
"diff_url": "https://github.com/huggingface/datasets/pull/869.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/869.patch",
"merged_at": 1605795257000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/868/comments | https://api.github.com/repos/huggingface/datasets/issues/868/events | https://github.com/huggingface/datasets/pull/868 | 745,889,882 | MDExOlB1bGxSZXF1ZXN0NTIzMzc2MzQ3 | 868 | Consistent metric outputs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"I keep this PR in stand-by for next week's datasets sprint. If the next release is 2.0.0 then we can include it given that it's breaking for many metrics"
] | 1,605,722,759,000 | 1,606,411,947,000 | null | MEMBER | null | To automate the use of metrics, they should return consistent outputs.
In particular I'm working on adding a conversion of metrics to keras metrics.
To achieve this we need two things:
- have each metric return a dictionary of string -> float, since each keras metric should return one float (see the schematic example after this list)
- define in the metric info the different fields of the output dictionary
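Concretely, the first point means every `compute()` call should return a flat dictionary of named float scores, along the lines of this schematic example (not any particular metric's exact output):
```
from datasets import load_metric

metric = load_metric("f1")
scores = metric.compute(predictions=[0, 1, 1], references=[0, 1, 0])
# e.g. {"f1": 0.666...}
assert all(isinstance(v, float) for v in scores.values())
```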
In this PR I'm adding these two features.
I also fixed a few bugs in some metrics
#867 needs to be merged first | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/868/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/868",
"html_url": "https://github.com/huggingface/datasets/pull/868",
"diff_url": "https://github.com/huggingface/datasets/pull/868.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/868.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/867/comments | https://api.github.com/repos/huggingface/datasets/issues/867/events | https://github.com/huggingface/datasets/pull/867 | 745,773,955 | MDExOlB1bGxSZXF1ZXN0NTIzMjc4MjI4 | 867 | Fix some metrics feature types | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,605,714,371,000 | 1,605,807,358,000 | 1,605,807,357,000 | MEMBER | null | Replace the `int` feature type with `int32`, since `int` is not a pyarrow dtype, in these metrics (see the sketch after the list):
- accuracy
- precision
- recall
- f1
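For illustration, the feature definition these metrics rely on now looks something like this (a sketch of the idea, not the exact diff):
```
import datasets

features = datasets.Features(
    {
        "predictions": datasets.Value("int32"),
        "references": datasets.Value("int32"),
    }
)
```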
I also added the sklearn citation and used keyword arguments to remove future warnings | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/867/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/867",
"html_url": "https://github.com/huggingface/datasets/pull/867",
"diff_url": "https://github.com/huggingface/datasets/pull/867.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/867.patch",
"merged_at": 1605807357000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/866/comments | https://api.github.com/repos/huggingface/datasets/issues/866/events | https://github.com/huggingface/datasets/issues/866 | 745,719,222 | MDU6SXNzdWU3NDU3MTkyMjI= | 866 | OSCAR from Inria group | {
"login": "jchwenger",
"id": 34098722,
"node_id": "MDQ6VXNlcjM0MDk4NzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/34098722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jchwenger",
"html_url": "https://github.com/jchwenger",
"followers_url": "https://api.github.com/users/jchwenger/followers",
"following_url": "https://api.github.com/users/jchwenger/following{/other_user}",
"gists_url": "https://api.github.com/users/jchwenger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jchwenger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jchwenger/subscriptions",
"organizations_url": "https://api.github.com/users/jchwenger/orgs",
"repos_url": "https://api.github.com/users/jchwenger/repos",
"events_url": "https://api.github.com/users/jchwenger/events{/privacy}",
"received_events_url": "https://api.github.com/users/jchwenger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"PR is already open here : #348 \r\nThe only thing remaining is to compute the metadata of each subdataset (one per language + shuffled/unshuffled).\r\nAs soon as #863 is merged we can start computing them. This will take a bit of time though",
"Grand, thanks for this!"
] | 1,605,710,454,000 | 1,605,711,690,000 | 1,605,711,690,000 | NONE | null | ## Adding a Dataset
- **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/).
- **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.*
- **Paper:** *[here](https://hal.inria.fr/hal-02148693)*
- **Data:** *[here](https://oscar-corpus.com/)*
- **Motivation:** *useful for unsupervised tasks in separate languages. In an ideal world, your team would be able to obtain the unshuffled version, that could be used to train GPT-2-like models (the shuffled version, I suppose, could be used for translation).*
I am aware that you do offer the "colossal" Common Crawl dataset already, but this one has the advantage of being available in many subcorpora for different languages.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/866/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/865 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/865/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/865/comments | https://api.github.com/repos/huggingface/datasets/issues/865/events | https://github.com/huggingface/datasets/issues/865 | 745,430,497 | MDU6SXNzdWU3NDU0MzA0OTc= | 865 | Have Trouble importing `datasets` | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"I'm sorry, this was a problem with my environment.\r\nNow that I have identified the cause of environmental dependency, I would like to fix it and try it.\r\nExcuse me for making a noise."
] | 1,605,686,681,000 | 1,605,687,395,000 | 1,605,687,395,000 | CONTRIBUTOR | null | I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets.
I cloned the newest version of datasets (master branch) and ran `pip install -e .`.
Then, `import datasets` causes the error below.
```
~/workspace/Clone/datasets/src/datasets/utils/file_utils.py in <module>
116 sys.path.append(str(HF_MODULES_CACHE))
117
--> 118 os.makedirs(HF_MODULES_CACHE, exist_ok=True)
119 if not os.path.exists(os.path.join(HF_MODULES_CACHE, "__init__.py")):
120 with open(os.path.join(HF_MODULES_CACHE, "__init__.py"), "w"):
~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/os.py in makedirs(name, mode, exist_ok)
221 return
222 try:
--> 223 mkdir(name, mode)
224 except OSError:
225 # Cannot rely on checking for EEXIST, since the operating system
FileNotFoundError: [Errno 2] No such file or directory: '<MY_HOME_DIRECTORY>/.cache/huggingface/modules'
```
The error occurs in `os.makedirs` in `file_utils.py`, even though the `exist_ok=True` option is set.
(I use Python 3.8, so `exist_ok` is expected to work.)
I've checked some environment variables, and they are set as below.
```
*** NameError: name 'HF_MODULES_CACHE' is not defined
*** NameError: name 'hf_cache_home' is not defined
*** NameError: name 'XDG_CACHE_HOME' is not defined
```
Should I set some environment variables before using this library?
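For instance, would something like the following be the expected setup? This is only a guess on my part; I assume `HF_HOME` is the variable that controls the cache root:
```
import os

# hypothetical workaround: point the Hugging Face cache at a directory that exists
os.environ["HF_HOME"] = os.path.expanduser("~/.cache/huggingface")
os.makedirs(os.path.join(os.environ["HF_HOME"], "modules"), exist_ok=True)

import datasets  # imported only after the variable is set
```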
And, do you have any idea why "No such file or directory" occurs even though the `exist_ok = True` option is set?
Thank you in advance. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/865/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/864 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/864/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/864/comments | https://api.github.com/repos/huggingface/datasets/issues/864/events | https://github.com/huggingface/datasets/issues/864 | 745,322,357 | MDU6SXNzdWU3NDUzMjIzNTc= | 864 | Unable to download cnn_dailymail dataset | {
"login": "rohitashwa1907",
"id": 46031058,
"node_id": "MDQ6VXNlcjQ2MDMxMDU4",
"avatar_url": "https://avatars.githubusercontent.com/u/46031058?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rohitashwa1907",
"html_url": "https://github.com/rohitashwa1907",
"followers_url": "https://api.github.com/users/rohitashwa1907/followers",
"following_url": "https://api.github.com/users/rohitashwa1907/following{/other_user}",
"gists_url": "https://api.github.com/users/rohitashwa1907/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rohitashwa1907/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rohitashwa1907/subscriptions",
"organizations_url": "https://api.github.com/users/rohitashwa1907/orgs",
"repos_url": "https://api.github.com/users/rohitashwa1907/repos",
"events_url": "https://api.github.com/users/rohitashwa1907/events{/privacy}",
"received_events_url": "https://api.github.com/users/rohitashwa1907/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Same error here!\r\n",
"Same here! My kaggle notebook stopped working like yesterday. It's strange because I have fixed version of datasets==1.1.2",
"I'm looking at it right now",
"I couldn't reproduce unfortunately. I tried\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"cnn_dailymail\", \"3.0.0\", download_mode=\"force_redownload\")\r\n```\r\nand it worked fine on both my env (python 3.7.2) and colab (python 3.6.9)\r\n\r\nMaybe there was an issue with the google drive download link of the dataset ?\r\nAre you still having the issue ? If so could your give me more info about your python and requests version ?",
"No, It's working fine now. Very strange. Here are my python and request versions\r\n\r\nrequests 2.24.0\r\nPython 3.8.2",
"It's working as expected. Closing the issue \r\n\r\nThanks everybody."
] | 1,605,674,282,000 | 1,605,849,731,000 | 1,605,849,730,000 | NONE | null | ### Script to reproduce the error
```
from datasets import load_dataset
train_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train[:10%]")
valid_dataset = load_dataset("cnn_dailymail", "3.0.0", split="validation[:5%]")
```
### Error
```
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-8-47c39c228935> in <module>()
1 from datasets import load_dataset
2
----> 3 train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%')
4 valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]")
5 frames
/usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
609 download_config=download_config,
610 download_mode=download_mode,
--> 611 ignore_verifications=ignore_verifications,
612 )
613
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
469 if not downloaded_from_gcs:
470 self._download_and_prepare(
--> 471 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
472 )
473 # Sync info
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
524 split_dict = SplitDict(dataset_name=self.name)
525 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 526 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
527
528 # Checksums verification
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager)
252 def _split_generators(self, dl_manager):
253 dl_paths = dl_manager.download_and_extract(_DL_URLS)
--> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN)
255 # Generate shared vocabulary
256
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split)
153 else:
154 logging.fatal("Unsupported split: %s", split)
--> 155 cnn = _find_files(dl_paths, "cnn", urls)
156 dm = _find_files(dl_paths, "dm", urls)
157 return cnn + dm
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
```
Thanks for any suggestions. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/864/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/863/comments | https://api.github.com/repos/huggingface/datasets/issues/863/events | https://github.com/huggingface/datasets/pull/863 | 744,954,534 | MDExOlB1bGxSZXF1ZXN0NTIyNTk0Mjg1 | 863 | Add clear_cache parameter in the test command | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,605,635,549,000 | 1,605,710,665,000 | 1,605,710,664,000 | MEMBER | null | For certain datasets like OSCAR #348 there are lots of different configurations and each one of them can take a lot of disk space.
I added a `--clear_cache` flag to the `datasets-cli test` command to be able to clear the cache after each configuration test and avoid filling up the disk. It should make it easier to generate the `dataset_infos.json` file for OSCAR. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/863/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/863",
"html_url": "https://github.com/huggingface/datasets/pull/863",
"diff_url": "https://github.com/huggingface/datasets/pull/863.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/863.patch",
"merged_at": 1605710664000
} | true |
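A minimal usage sketch for the flag added in the PR above. The `--clear_cache` flag is the one described in the PR body; the dataset path and the `--save_infos`/`--all_configs` options are assumptions based on the usual `datasets-cli test` invocation, not taken from the PR itself.

```python
import subprocess

# Hypothetical invocation: test every OSCAR configuration, save the generated
# dataset_infos.json, and clear the cache between configurations so the disk
# does not fill up. Adjust the script path to wherever the dataset script lives.
subprocess.run(
    [
        "datasets-cli", "test", "./datasets/oscar",
        "--save_infos", "--all_configs", "--clear_cache",
    ],
    check=True,
)
```

The same command can of course be run directly from a shell; the `subprocess` wrapper only keeps the example self-contained in Python.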
https://api.github.com/repos/huggingface/datasets/issues/862 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/862/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/862/comments | https://api.github.com/repos/huggingface/datasets/issues/862/events | https://github.com/huggingface/datasets/pull/862 | 744,906,131 | MDExOlB1bGxSZXF1ZXN0NTIyNTUzMzY1 | 862 | Update head requests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,605,631,746,000 | 1,605,710,633,000 | 1,605,710,630,000 | MEMBER | null | Get requests and Head requests didn't have the same parameters. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/862/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/862",
"html_url": "https://github.com/huggingface/datasets/pull/862",
"diff_url": "https://github.com/huggingface/datasets/pull/862.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/862.patch",
"merged_at": 1605710630000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/861/comments | https://api.github.com/repos/huggingface/datasets/issues/861/events | https://github.com/huggingface/datasets/issues/861 | 744,753,458 | MDU6SXNzdWU3NDQ3NTM0NTg= | 861 | Possible Bug: Small training/dataset file creates gigantic output | {
"login": "NebelAI",
"id": 7240417,
"node_id": "MDQ6VXNlcjcyNDA0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7240417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NebelAI",
"html_url": "https://github.com/NebelAI",
"followers_url": "https://api.github.com/users/NebelAI/followers",
"following_url": "https://api.github.com/users/NebelAI/following{/other_user}",
"gists_url": "https://api.github.com/users/NebelAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NebelAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NebelAI/subscriptions",
"organizations_url": "https://api.github.com/users/NebelAI/orgs",
"repos_url": "https://api.github.com/users/NebelAI/repos",
"events_url": "https://api.github.com/users/NebelAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/NebelAI/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"The preprocessing tokenizes the input text. Tokenization outputs `input_ids`, `attention_mask`, `token_type_ids` and `special_tokens_mask`. All those are of length`max_seq_length` because of padding. Therefore for each sample it generate 4 *`max_seq_length` integers. Currently they're all saved as int64. This is why the tokenization takes so much space.\r\n\r\nI'm sure we can optimize that though\r\nWhat do you think @sgugger ?",
"First I think we should disable padding in the dataset processing and let the data collator do it.\r\n\r\nThen I'm wondering if you need attention_mask and token_type_ids at this point ?\r\n\r\nFinally we can also specify the output feature types at this line https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py#L280 to use more optimized integer precisions for the output. Maybe something like:\r\n- input_ids: uint16 or uint32\r\n- token_type_ids: uint8 or bool\r\n- attention_mask: bool\r\n- special_tokens_mask: bool\r\n\r\nAlso IMO these changes are all on the `transformers` side. Maybe we should discuss on the `transformers` repo",
"> First I think we should disable padding in the dataset processing and let the data collator do it.\r\n\r\nNo, you can't do that on TPUs as dynamic shapes will result in a very slow training. The script can however be tweaked to use the `PaddingDataCollator` with a fixed max length instead of dynamic batching.\r\n\r\nFor the other optimizations, they can be done by changing the script directly for each user's use case. Not sure we can find something that is general enough to be in transformers or the examples script.",
"Oh yes right..\r\nDo you think that a lazy map feature on the `datasets` side could help to avoid storing padded tokenized texts then ?",
"I think I can do the tweak mentioned above with the data collator as short fix (but fully focused on v4 right now so that will be for later this week, beginning of next week :-) ).\r\nIf it doesn't hurt performance to tokenize on the fly, that would clearly be the long-term solution however!",
"> Hey guys,\r\n> \r\n> I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely.\r\n> \r\n> I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug?\r\n> \r\n> I've used the following CMD:\r\n> `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`\r\n\r\nIt's actually because of the parameter 'preprocessing_num_worker' when using TPU. \r\nI am also planning to have my model trained on the google TPU with a 11gb text corpus. With x8 cores enabled, each TPU core has its own dataset. When not using distributed training, the preprocessed file is about 77gb. On the opposite, if enable xla, the file produced will easily consume all my free space(more than 220gb, I think it will be, in the end, around 600gb ). \r\nSo I think that's maybe where the problem came from. \r\n\r\nIs there any possibility that all of the cores share the same preprocess dataset?\r\n\r\n@sgugger @RammMaschine ",
"Hi @NebelAI, we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs."
] | 1,605,620,939,000 | 1,617,113,044,000 | 1,616,414,695,000 | NONE | null | Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file, but I can't even finish the preprocessing handled by datasets because this tiny 5 GB text file grows to more than 1 TB during processing. My system ran out of space and crashed prematurely.
I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug?
I've used the following CMD:
`python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/861/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
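The maintainers' comments on the issue above suggest that the blow-up comes from storing four padded `int64` columns per example, and that narrower integer types would shrink the cache considerably. The sketch below is a hedged illustration of that suggestion only, not the fix that later landed in the example script: the file name `dataset_full.txt` comes from the issue, while the model name, sequence length, and the idea of passing narrower `features` to `map` are assumptions for illustration.

```python
from datasets import Features, Sequence, Value, load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
max_seq_length = 128

raw = load_dataset("text", data_files={"train": "dataset_full.txt"})["train"]

def tokenize(batch):
    return tokenizer(
        batch["text"],
        padding="max_length",
        truncation=True,
        max_length=max_seq_length,
        return_special_tokens_mask=True,
    )

# Downcast the tokenizer outputs: uint16 is enough for a ~30k-token vocabulary,
# and the three mask columns only ever hold 0 or 1, so uint8 suffices.
compact_features = Features(
    {
        "input_ids": Sequence(Value("uint16")),
        "token_type_ids": Sequence(Value("uint8")),
        "attention_mask": Sequence(Value("uint8")),
        "special_tokens_mask": Sequence(Value("uint8")),
    }
)

tokenized = raw.map(
    tokenize,
    batched=True,
    remove_columns=["text"],
    features=compact_features,
)
print(tokenized)
```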
https://api.github.com/repos/huggingface/datasets/issues/860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/860/comments | https://api.github.com/repos/huggingface/datasets/issues/860/events | https://github.com/huggingface/datasets/issues/860 | 744,750,691 | MDU6SXNzdWU3NDQ3NTA2OTE= | 860 | wmt16 cs-en does not download | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,605,620,735,000 | 1,606,484,824,000 | null | CONTRIBUTOR | null | Hi
I am trying the wmt16 cs-en pair; this looks similar to the ro-en issue. Thanks for the help.
split="train", n_obs=data_args.n_train) for task in data_args.task}
File "finetune_t5_trainer.py", line 109, in <dictcomp>
split="train", n_obs=data_args.n_train) for task in data_args.task}
File "/home/rabeeh/internship/seq2seq/tasks/tasks.py", line 82, in get_dataset
dataset = load_dataset("wmt16", self.pair, split=split)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/rabeeh/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/860/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/859/comments | https://api.github.com/repos/huggingface/datasets/issues/859/events | https://github.com/huggingface/datasets/pull/859 | 743,917,091 | MDExOlB1bGxSZXF1ZXN0NTIxNzI4MDM4 | 859 | Integrate file_lock inside the lib for better logging control | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,605,539,619,000 | 1,605,546,404,000 | 1,605,546,402,000 | MEMBER | null | Previously the locking system of the lib was based on the file_lock package. However as noticed in #812 there were too many logs printed even when the datasets logging was set to warnings or errors.
For example
```python
import logging
logging.basicConfig(level=logging.INFO)
import datasets
datasets.set_verbosity_warning()
datasets.load_dataset("squad")
```
would still log the file lock events:
```
INFO:filelock:Lock 5737989232 acquired on /Users/quentinlhoest/.cache/huggingface/datasets/44801f118d500eff6114bfc56ab4e6def941f1eb14b70ac1ecc052e15cdac49d.85f43de978b9b25921cb78d7a2f2b350c04acdbaedb9ecb5f7101cd7c0950e68.py.lock
INFO:filelock:Lock 5737989232 released on /Users/quentinlhoest/.cache/huggingface/datasets/44801f118d500eff6114bfc56ab4e6def941f1eb14b70ac1ecc052e15cdac49d.85f43de978b9b25921cb78d7a2f2b350c04acdbaedb9ecb5f7101cd7c0950e68.py.lock
INFO:filelock:Lock 4393489968 acquired on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock
INFO:filelock:Lock 4393489968 released on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock
INFO:filelock:Lock 4393490808 acquired on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock
Reusing dataset squad (/Users/quentinlhoest/.cache/huggingface/datasets/squad/plain_text/1.0.0/1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41)
INFO:filelock:Lock 4393490808 released on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock
```
With the integration of file_lock in the library, the output is much cleaner:
```
Reusing dataset squad (/Users/quentinlhoest/.cache/huggingface/datasets/squad/plain_text/1.0.0/1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41)
```
Since the file_lock package is only a 450-line file, I think it's fine to have it inside the lib.
Fix #812 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/859/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/859/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/859",
"html_url": "https://github.com/huggingface/datasets/pull/859",
"diff_url": "https://github.com/huggingface/datasets/pull/859.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/859.patch",
"merged_at": 1605546402000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/858 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/858/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/858/comments | https://api.github.com/repos/huggingface/datasets/issues/858/events | https://github.com/huggingface/datasets/pull/858 | 743,904,516 | MDExOlB1bGxSZXF1ZXN0NTIxNzE3ODQ4 | 858 | Add SemEval-2010 task 8 | {
"login": "JoelNiklaus",
"id": 3775944,
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoelNiklaus",
"html_url": "https://github.com/JoelNiklaus",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Added dummy data and encoding to open(). Now everything should be fine, hopefully :)"
] | 1,605,538,677,000 | 1,606,411,735,000 | 1,606,411,735,000 | CONTRIBUTOR | null | Hi,
I don't know how to add dummy data, since I create the validation set out of the last 1000 examples of the train set. If you have a suggestion, I am happy to implement it.
Cheers,
Joel | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/858/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/858",
"html_url": "https://github.com/huggingface/datasets/pull/858",
"diff_url": "https://github.com/huggingface/datasets/pull/858.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/858.patch",
"merged_at": 1606411735000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/857/comments | https://api.github.com/repos/huggingface/datasets/issues/857/events | https://github.com/huggingface/datasets/pull/857 | 743,863,214 | MDExOlB1bGxSZXF1ZXN0NTIxNjg0ODIx | 857 | Use pandas reader in csv | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,605,535,545,000 | 1,605,807,340,000 | 1,605,807,338,000 | MEMBER | null | The pyarrow CSV reader has issues that the pandas one doesn't (see #836 ).
To fix that I switched to the pandas csv reader.
The new reader is compatible with all the pandas parameters to read csv files.
Moreover it reads the csv in chunks in order to save RAM, while the pyarrow one loads everything in memory.
Fix #836
Fix #794
Breaking: all the parameters for reading a csv file can now be used in the `load_dataset` kwargs when loading csv (see the usage sketch after this record), and the previous pyarrow objects `pyarrow.csv.ReadOptions`, `pyarrow.csv.ParseOptions` and `pyarrow.csv.ConvertOptions` are not used anymore. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/857/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/857",
"html_url": "https://github.com/huggingface/datasets/pull/857",
"diff_url": "https://github.com/huggingface/datasets/pull/857.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/857.patch",
"merged_at": 1605807338000
} | true |
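A short sketch of what the new behavior described in the PR above allows. The file name is a placeholder, and the exact set of forwarded keyword arguments is assumed to mirror `pandas.read_csv`, as the PR body states, rather than checked against a specific release.

```python
import csv

from datasets import load_dataset

# "train.tsv" is a placeholder path. sep, quoting and encoding are plain
# pandas.read_csv parameters that the new reader is expected to forward.
dataset = load_dataset(
    "csv",
    data_files={"train": "train.tsv"},
    sep="\t",
    quoting=csv.QUOTE_NONE,
    encoding="utf-8",
)
print(dataset["train"].features)
```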
https://api.github.com/repos/huggingface/datasets/issues/856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/856/comments | https://api.github.com/repos/huggingface/datasets/issues/856/events | https://github.com/huggingface/datasets/pull/856 | 743,799,239 | MDExOlB1bGxSZXF1ZXN0NTIxNjMzNTYz | 856 | Add open book corpus | {
"login": "vblagoje",
"id": 458335,
"node_id": "MDQ6VXNlcjQ1ODMzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vblagoje",
"html_url": "https://github.com/vblagoje",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions",
"organizations_url": "https://api.github.com/users/vblagoje/orgs",
"repos_url": "https://api.github.com/users/vblagoje/repos",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"received_events_url": "https://api.github.com/users/vblagoje/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"@lhoestq I fixed issues except for the dummy_data zip file. But I think I know why is it happening. So when unzipping dummy_data.zip it gets save in /tmp directory where glob doesn't pick it up. For regular downloads, the archive gets unzipped in ~/.cache/huggingface. Could that be a reason?",
"Nice thanks :)\r\n\r\nWhen testing with the dummy data, the `download_manager.download_and_extract()` call returns the path to the unzipped dummy_data.zip archive. Therefore glob should be able to find your dummy .epub.txt file",
"@lhoestq I understand but for some reason, it is not happening. I added logs to see where dummy_data.zip gets unzipped in /tmp but I suppose when the test process finishes that tmp is gone. I also tried to glob anything in _generate_examples from that directory using /* instead of **/*.epub.txt and nothing is being returned. Always an empty array. ",
"Ok weird ! I can take a look tomorrow if you want",
"Please do, I will take a fresh look as well. ",
"In _generate_examples_ I wrote the following:\r\n```\r\nglob_target = os.path.join(directory, \"**/*.epub.txt\")\r\nprint(f\"Glob target {glob_target }\")\r\n```\r\n\r\nAnd here is the test failure:\r\n\r\n\r\n========================================================================================== FAILURES ===========================================================================================\r\n________________________________________________________________ LocalDatasetTest.test_load_dataset_all_configs_bookcorpusopen ________________________________________________________________\r\n\r\nself = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_bookcorpusopen>, dataset_name = 'bookcorpusopen'\r\n\r\n @slow\r\n def test_load_dataset_all_configs(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests/test_dataset_common.py:232: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests/test_dataset_common.py:193: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n------------------------------------------------------------------------------------ Captured stdout call -------------------------------------------------------------------------------------\r\nDownloading and preparing dataset book_corpus_open/plain_text (download: 1.00 MiB, generated: 1.00 MiB, post-processed: Unknown size, total: 2.00 MiB) to /var/folders/y_/6k6zhblx0k9dsdz5nd_z9x5c0000gp/T/tmpmuu0_ln2/book_corpus_open/plain_text/1.0.0...\r\nGlob target /var/folders/y_/6k6zhblx0k9dsdz5nd_z9x5c0000gp/T/tmpm6tpvb3f/extracted/d953b414cceb4fe3985eeaf68aec2f4435f166b2edf66863d805e3825b7d336b/dummy_data/**/*.epub.txt\r\nDataset book_corpus_open downloaded and prepared to /var/folders/y_/6k6zhblx0k9dsdz5nd_z9x5c0000gp/T/tmpmuu0_ln2/book_corpus_open/plain_text/1.0.0. Subsequent calls will reuse this data.\r\n------------------------------------------------------------------------------------ Captured stderr call -------------------------------------------------------------------------------------\r\n \r\n",
"And when I do os.listdir on the given directory I get:\r\n\r\n glob_target = os.path.join(directory, \"**/*.epub.txt\")\r\n print(f\"Glob target {glob_target }\")\r\n> print(os.listdir(path=directory))\r\nE FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/y_/6k6zhblx0k9dsdz5nd_z9x5c0000gp/T/tmpbu_aom5q/extracted/d953b414cceb4fe3985eeaf68aec2f4435f166b2edf66863d805e3825b7d336b/dummy_data'\r\n",
"Thanks for the info, I'm looking at it right now",
"Ok found the issue !\r\n\r\nThe dummy_data.zip file must be an archive of a folder named dummy_data. Currently the dummy_data.zip is an archive of a folder named book1. In order to have a valid dummy_data.zip file you must first take the dummy book1 folder, place it inside a folder named dummy_data and then compress the dummy_data folder to get dummy_data.zip",
"Excellent, I am on it @lhoestq ",
"> Awesome thank you so much for adding it :)\r\n\r\nYou're welcome, ok all tests are green now! I needed it asap as well. Thanks for your help @lhoestq .",
"I just wanted to say thank you to everyone involved in making this happen! I was certain that I would have to add bookcorpusnew myself, but then @vblagoje came along and did it, and @lhoestq gave some great support in a timely fashion.\r\n\r\nBy the way @vblagoje, are you on Twitter? I'm https://twitter.com/theshawwn if you'd like to DM and say hello. Once again, thanks for doing this!\r\n\r\nI'll mention over at https://github.com/soskek/bookcorpus/issues/27 that this was merged.",
"Thank you Shawn. You did all the heavy lifting ;-)",
"@vblagoje Would you be interested in adding books3 as well? https://twitter.com/theshawwn/status/1320282149329784833\r\n\r\nHuggingface is interested and asked me to add it, but I had a bit of trouble during setup (https://github.com/huggingface/datasets/issues/790) and never got around to it. At this point you have much more experience than I do with the datasets lib.\r\n\r\nIt *seems* like it might simply be a matter of copy-pasting this PR, changing books1 to books3, and possibly trimming off the leading paths -- each book is at e.g. the-eye/Books/Bibliotok/J/Jurassic Park.epub.txt, which is rather lengthy compared to just the filename -- but the full path is probably fine, so feel free to do the least amount of work that gets the job done. Otherwise I suppose I'll get around to it eventually; thanks again!",
"@shawwn I'll take a look as soon as I clear my work queue. TBH, I would likely work on making sure HF datasets has all the datasets used to train https://github.com/alexa/bort/ and these are: Wikipedia, Wiktionary, OpenWebText (Gokaslan and Cohen, 2019), UrbanDictionary, Onel Billion Words (Chelba et al., 2014), the news subset of Common Crawl (Nagel, 2016)10, and Bookcorpus. cc @lhoestq "
] | 1,605,529,802,000 | 1,605,701,026,000 | 1,605,626,538,000 | CONTRIBUTOR | null | Adds book corpus based on Shawn Presser's [work](https://github.com/soskek/bookcorpus/issues/27). @richarddwang, the author of the original BookCorpus dataset, suggested it should be named [OpenBookCorpus](https://github.com/huggingface/datasets/issues/486). I named it BookCorpusOpen so it can be easily located alphabetically. But, of course, we can rename it if needed.
It contains 17868 dataset items; each item contains two fields: title and text. The title is the name of the book (just the file name) while the text contains the unprocessed book text. Note that bookcorpus is pre-segmented into sentences while this bookcorpus is not. This is intentional (see https://github.com/huggingface/datasets/issues/486) as some users might want to further process the text themselves. A loading sketch follows this record.
@lhoestq and others please review this PR thoroughly. cc @shawwn | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/856/reactions",
"total_count": 5,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/856/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/856",
"html_url": "https://github.com/huggingface/datasets/pull/856",
"diff_url": "https://github.com/huggingface/datasets/pull/856.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/856.patch",
"merged_at": 1605626537000
} | true |
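A minimal loading sketch for the dataset added in the PR above. The dataset name `bookcorpusopen` is taken from the test name in the review thread and the two fields from the PR description; the `train` split name is an assumption.

```python
from datasets import load_dataset

# Each record exposes the two fields described in the PR: the book "title"
# (the file name) and the raw, unsegmented "text".
books = load_dataset("bookcorpusopen", split="train")

print(books[0]["title"])
print(books[0]["text"][:500])  # first 500 characters of the raw book text
```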
https://api.github.com/repos/huggingface/datasets/issues/855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/855/comments | https://api.github.com/repos/huggingface/datasets/issues/855/events | https://github.com/huggingface/datasets/pull/855 | 743,690,839 | MDExOlB1bGxSZXF1ZXN0NTIxNTQ2Njkx | 855 | Fix kor nli csv reader | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,605,520,421,000 | 1,605,535,154,000 | 1,605,535,152,000 | MEMBER | null | The kor_nli dataset had an issue with the csv reader that was not able to parse the lines correctly. Some lines were merged together for some reason.
I fixed that by iterating through the lines directly instead of using a csv reader.
I also changed the feature names to match the other NLI datasets (i.e. use "premise", "hypothesis", "label" features)
Fix #821 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/855/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/855/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/855",
"html_url": "https://github.com/huggingface/datasets/pull/855",
"diff_url": "https://github.com/huggingface/datasets/pull/855.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/855.patch",
"merged_at": 1605535152000
} | true |
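A sketch of what loading looks like after the fix above. The feature names come from the PR description; the configuration name `multi_nli` is assumed, as the PR does not list the available KorNLI configurations.

```python
from datasets import load_dataset

# "multi_nli" is an assumed configuration name; after the fix each example
# uses the same field names as the other NLI datasets.
kor_nli = load_dataset("kor_nli", "multi_nli", split="train")

example = kor_nli[0]
print(example["premise"])
print(example["hypothesis"])
print(example["label"])
```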
https://api.github.com/repos/huggingface/datasets/issues/854 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/854/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/854/comments | https://api.github.com/repos/huggingface/datasets/issues/854/events | https://github.com/huggingface/datasets/issues/854 | 743,675,376 | MDU6SXNzdWU3NDM2NzUzNzY= | 854 | wmt16 does not download | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Hi,I also posted it to the forum, but this is a bug, perhaps it needs to be reported here? thanks ",
"It looks like the official OPUS server for WMT16 doesn't provide the data files anymore (503 error).\r\nI searched a bit and couldn't find a mirror except maybe http://nlp.ffzg.hr/resources/corpora/setimes/ (the data are a cleaned version of the original ones though)\r\nShould we consider replacing the old urls with these ones even though it's not the exact same data ?",
"The data storage is down at the moment. Sorry. Hopefully, it will come back soon. Apologies for the inconvenience ...",
"Dear great huggingface team, this is not working yet, I really appreciate some temporary fix on this, I need this for my project and this is time sensitive and I will be grateful for your help on this. ",
"We have reached out to the OPUS team which is currently working on making the data available again. Cc @jorgtied ",
"thank you @thomwolf and HuggingFace team for the help. ",
"OPUS is still down - hopefully back tomorrow.",
"Hi, this is still down, I would be really grateful if you could ping them one more time. thank you so much. ",
"Hi\r\nI am trying with multiple setting of wmt datasets and all failed so far, I need to have at least one dataset working for testing somecodes, and this is really time sensitive, I greatly appreciate letting me know of one translation datasets currently working. thanks ",
"It is still down, unfortunately. I'm sorry for that. It should come up again later today or tomorrow at the latest if no additional complications will happen.",
"Hi all, \r\nI pulled a request that fix this issue by replacing urls. \r\n\r\nhttps://github.com/huggingface/datasets/pull/1901\r\n\r\nThanks!\r\n",
"It's still down for the wmt."
] | 1,605,519,111,000 | 1,614,222,909,000 | null | CONTRIBUTOR | null | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/854/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/853/comments | https://api.github.com/repos/huggingface/datasets/issues/853/events | https://github.com/huggingface/datasets/issues/853 | 743,426,583 | MDU6SXNzdWU3NDM0MjY1ODM= | 853 | concatenate_datasets support axis=0 or 1? | {
"login": "renqingcolin",
"id": 12437751,
"node_id": "MDQ6VXNlcjEyNDM3NzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/12437751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/renqingcolin",
"html_url": "https://github.com/renqingcolin",
"followers_url": "https://api.github.com/users/renqingcolin/followers",
"following_url": "https://api.github.com/users/renqingcolin/following{/other_user}",
"gists_url": "https://api.github.com/users/renqingcolin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/renqingcolin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/renqingcolin/subscriptions",
"organizations_url": "https://api.github.com/users/renqingcolin/orgs",
"repos_url": "https://api.github.com/users/renqingcolin/repos",
"events_url": "https://api.github.com/users/renqingcolin/events{/privacy}",
"received_events_url": "https://api.github.com/users/renqingcolin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892884,
"node_id": "MDU6TGFiZWwxOTM1ODkyODg0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted",
"name": "help wanted",
"color": "008672",
"default": true,
"description": "Extra attention is needed"
},
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is concatenate the columns.\r\nCurrently to add more columns to a dataset, one must use `map`.\r\nWhat you can do is somehting like this:\r\n```python\r\n# suppose you have datasets d1, d2, d3\r\ndef add_columns(example, index):\r\n example.update(d2[index])\r\n example.update(d3[index])\r\n return example\r\n\r\nfull_dataset = d1.map(add_columns, with_indices=True)\r\n```",
"Closing this one, feel free to re-open if you have other questions about this issue",
"That's not really difficult to add, though, no?\r\nI think it can be done without copy.\r\nMaybe let's add it to the roadmap?",
"Actually it's doable but requires to update the `Dataset._data_files` schema to support this.\r\nI'm re-opening this since we may want to add this in the future",
"Hi @lhoestq, I would love to help and add this feature if still needed. My plan is to add an axis variable in the `concatenate_datasets` function in `arrow_dataset.py` and when that is set to 1 concatenate columns instead of rows. ",
"Hi ! I would love to see this feature implemented as well :) Thank you for proposing your help !\r\n\r\nHere is a few things about the current implementation:\r\n- A dataset object is a wrapper of one `pyarrow.Table` that contains the data\r\n- Pyarrow offers an API that allows to transform Table objects. For example there are functions like `concat_tables`, `Table.rename_columns`, `Table.add_column` etc.\r\n\r\nTherefore adding columns from another dataset is possible thanks to the pyarrow API and in particular `Table.add_column` :) \r\n\r\nHowever this breaks some features we have regarding pickle. A dataset object can be pickled and unpickled without loading all the data in memory. It is useful for multiprocessing for example. Pickling a dataset object is possible thanks to the `Dataset._data_files` which defines the list of arrow files that will be used to form the final Table (basically all the data from each files are concatenated on axis 0).\r\n\r\nTherefore to be able to add columns to a Dataset and still be able to work with it in a multiprocessing setup, we need to extend this last aspect to be able to reconstruct a Table object from multiple arrow files that are combined in both axis 0 and 1. Currently this reconstruction mechanism only supports axis 0.\r\n\r\nI'm sure we can figure something out that enables users to add columns from another dataset while keeping the multiprocessing support.",
"@lhoestq, we have two Pull Requests to implement:\r\n- Dataset.add_item: #1870\r\n- Dataset.add_column: #2145\r\nwhich add a single row or column, repectively.\r\n\r\nThe request here is to implement the concatenation of *multiple* rows/columns. Am I right?\r\n\r\nWe should agree on the API:\r\n- `concatenate_datasets` with `axis`?\r\n- other Dataset method name?",
"For the API, I like `concatenate_datasets` with `axis` personally :)\r\nFrom a list of `Dataset` objects, it would concatenate them to a new `Dataset` object backed by a `ConcatenationTable`, that is the concatenation of the tables of each input dataset. The concatenation is either on axis=0 (append rows) or on axis=1 (append columns).\r\n\r\nRegarding what we need to implement:\r\nThe axis=0 is already supported and is the current behavior of `concatenate_datasets`.\r\nAlso `add_item` is not needed to implement axis=1 (though it's an awesome addition to this library).\r\n\r\nTo implement axis=1, we either need `add_column` or a `ConcatenationTable` constructor to concatenate tables horizontally.\r\nI have a preference for using a `ConcatenationTable` constructor because this way we can end up with a `ConcatenationTable` with only 1 additional block per table, while `add_column` would add 1 block per new column.\r\n\r\nMaybe we can simply have an equivalent of `ConcatenationTable.from_tables` but for axis=1 ?\r\n`axis` could also be an argument of `ConcatenationTable.from_tables`",
"@lhoestq I think I guessed your suggestions in advance... ๐ #2151",
"Cool ! Sorry I missed this one ^^\r\nI'm taking a look ;)"
] | 1,605,494,783,000 | 1,618,848,438,000 | 1,618,848,438,000 | NONE | null | I want to achieve the following result

| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/853/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/853/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
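To make the axis-1 concatenation discussed in the comments above concrete, here is a minimal sketch. It assumes a version of `datasets` recent enough for `concatenate_datasets` to accept an `axis` argument (added after this discussion, via #2151); with older releases only row-wise concatenation is available.

```
from datasets import Dataset, concatenate_datasets

# Two datasets with the same number of rows but different columns
ds_text = Dataset.from_dict({"text": ["hello", "world"]})
ds_label = Dataset.from_dict({"label": [0, 1]})

# axis=0 appends rows (the historical behavior), axis=1 appends columns
ds_columns = concatenate_datasets([ds_text, ds_label], axis=1)
print(ds_columns.column_names)  # ['text', 'label']
print(ds_columns[0])            # {'text': 'hello', 'label': 0}
```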
https://api.github.com/repos/huggingface/datasets/issues/852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/852/comments | https://api.github.com/repos/huggingface/datasets/issues/852/events | https://github.com/huggingface/datasets/issues/852 | 743,396,240 | MDU6SXNzdWU3NDMzOTYyNDA= | 852 | wmt cannot be downloaded | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,605,488,681,000 | 1,605,519,118,000 | 1,605,519,118,000 | CONTRIBUTOR | null | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/852/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
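The ConnectionError above is a plain network failure while fetching one of the source files. A quick, hedged way to check whether the problem is the OPUS mirror itself rather than `datasets` is to probe the URL from the traceback directly; the URL below is copied from the error message and is not a guaranteed-stable endpoint.

```
import requests

url = "http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz"
try:
    # A HEAD request is enough to see whether the mirror is reachable
    resp = requests.head(url, allow_redirects=True, timeout=30)
    print(url, "->", resp.status_code)
except requests.RequestException as err:
    print("Mirror unreachable:", err)
```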
https://api.github.com/repos/huggingface/datasets/issues/851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/851/comments | https://api.github.com/repos/huggingface/datasets/issues/851/events | https://github.com/huggingface/datasets/issues/851 | 743,343,278 | MDU6SXNzdWU3NDMzNDMyNzg= | 851 | Add support for other languages for rouge | {
"login": "alexyalunin",
"id": 23011284,
"node_id": "MDQ6VXNlcjIzMDExMjg0",
"avatar_url": "https://avatars.githubusercontent.com/u/23011284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexyalunin",
"html_url": "https://github.com/alexyalunin",
"followers_url": "https://api.github.com/users/alexyalunin/followers",
"following_url": "https://api.github.com/users/alexyalunin/following{/other_user}",
"gists_url": "https://api.github.com/users/alexyalunin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexyalunin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexyalunin/subscriptions",
"organizations_url": "https://api.github.com/users/alexyalunin/orgs",
"repos_url": "https://api.github.com/users/alexyalunin/repos",
"events_url": "https://api.github.com/users/alexyalunin/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexyalunin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067400959,
"node_id": "MDU6TGFiZWwyMDY3NDAwOTU5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion",
"name": "Metric discussion",
"color": "d722e8",
"default": false,
"description": "Discussions on the metrics"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"@alexyalunin \r\n\r\nI did something similar for others languages.\r\n\r\n[Repo: rouge-metric](https://github.com/m3hrdadfi/rouge-metric)"
] | 1,605,473,865,000 | 1,622,970,472,000 | null | NONE | null | I calculate rouge with
```
from datasets import load_metric
rouge = load_metric("rouge")
rouge_output = rouge.compute(predictions=['тест тест привет'], references=['тест тест пока'], rouge_types=[
"rouge2"])["rouge2"].mid
print(rouge_output)
```
the result is
`Score(precision=0.0, recall=0.0, fmeasure=0.0)`
It seems like the `rouge_score` library that this metric uses filters out all characters that are not lowercase Latin letters or digits
in `rouge_scorer/tokenize.py` with `text = re.sub(r"[^a-z0-9]+", " ", six.ensure_str(text))`.
Please add support for other languages. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/851/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/851/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
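To see why the ROUGE score comes back as zero for non-Latin text, here is a small sketch that reproduces the filtering step quoted above (`re.sub(r"[^a-z0-9]+", " ", ...)`) on Cyrillic input; after the substitution nothing is left to match, so precision, recall and F-measure are all 0. The helper name is just for illustration.

```
import re

def rouge_style_tokenize(text):
    # Same filtering idea as rouge_score's tokenize.py: keep only a-z and 0-9
    text = re.sub(r"[^a-z0-9]+", " ", text.lower())
    return text.split()

print(rouge_style_tokenize("test test hello"))   # ['test', 'test', 'hello']
print(rouge_style_tokenize("тест тест привет"))  # [] -> every n-gram overlap is empty
```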
https://api.github.com/repos/huggingface/datasets/issues/850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/850/comments | https://api.github.com/repos/huggingface/datasets/issues/850/events | https://github.com/huggingface/datasets/pull/850 | 742,369,419 | MDExOlB1bGxSZXF1ZXN0NTIwNTE0MDY3 | 850 | Create ClassLabel for labelling tasks datasets | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"@lhoestq Better?"
] | 1,605,265,642,000 | 1,605,522,725,000 | 1,605,522,718,000 | CONTRIBUTOR | null | This PR adds a specific `ClassLabel` for the datasets that are about a labelling task such as POS, NER or Chunking. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/850/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/850",
"html_url": "https://github.com/huggingface/datasets/pull/850",
"diff_url": "https://github.com/huggingface/datasets/pull/850.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/850.patch",
"merged_at": 1605522718000
} | true |
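As a minimal illustration of the kind of labelling-task features this PR is about, here is a sketch of how a token-classification dataset can declare its tag column with a `ClassLabel` wrapped in a `Sequence`; the tag names below are an example, not the ones used in the PR.

```
from datasets import ClassLabel, Dataset, Features, Sequence, Value

features = Features({
    "tokens": Sequence(Value("string")),
    "ner_tags": Sequence(ClassLabel(names=["O", "B-PER", "I-PER", "B-LOC", "I-LOC"])),
})

ds = Dataset.from_dict(
    {"tokens": [["John", "lives", "in", "Paris"]], "ner_tags": [[1, 0, 0, 3]]},
    features=features,
)
print(ds.features["ner_tags"].feature.int2str(3))  # 'B-LOC'
```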
https://api.github.com/repos/huggingface/datasets/issues/849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/849/comments | https://api.github.com/repos/huggingface/datasets/issues/849/events | https://github.com/huggingface/datasets/issues/849 | 742,263,333 | MDU6SXNzdWU3NDIyNjMzMzM= | 849 | Load amazon dataset | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Thanks for reporting !\r\nWe plan to show information about the different configs of the datasets on the website, with the corresponding `load_dataset` calls.\r\n\r\nAlso I think the bullet points formatting has been fixed"
] | 1,605,256,464,000 | 1,605,597,779,000 | 1,605,597,779,000 | CONTRIBUTOR | null | Hi,
I was going through the amazon_us_reviews dataset and found that the example API usage given on the website is different from the API usage needed to actually load the dataset.
E.g. the API usage shown on the [website](https://huggingface.co/datasets/amazon_us_reviews):
```
from datasets import load_dataset
dataset = load_dataset("amazon_us_reviews")
```
How it actually works when I tried it (the error generated does point me in the right direction though):
```
from datasets import load_dataset
dataset = load_dataset("amazon_us_reviews", 'Books_v1_00')
```
Also, there is some issue with formatting, as the bullet list in the description is not rendered with new lines. Can I work on it? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/849/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
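Following up on the issue above, a hedged sketch of how to discover that `amazon_us_reviews` needs a config name: recent versions of `datasets` expose `get_dataset_config_names` (it may not exist in very old releases), and passing one of the returned names to `load_dataset` avoids the error. Note that the upstream availability of this dataset may have changed since the issue was filed.

```
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("amazon_us_reviews")
print(configs[:5])  # the exact names and order may differ

# Pick an explicit config, as the error message suggests
dataset = load_dataset("amazon_us_reviews", "Books_v1_00", split="train")
```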
https://api.github.com/repos/huggingface/datasets/issues/848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/848/comments | https://api.github.com/repos/huggingface/datasets/issues/848/events | https://github.com/huggingface/datasets/issues/848 | 742,240,942 | MDU6SXNzdWU3NDIyNDA5NDI= | 848 | Error when concatenate_datasets | {
"login": "shexuan",
"id": 25664170,
"node_id": "MDQ6VXNlcjI1NjY0MTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shexuan",
"html_url": "https://github.com/shexuan",
"followers_url": "https://api.github.com/users/shexuan/followers",
"following_url": "https://api.github.com/users/shexuan/following{/other_user}",
"gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shexuan/subscriptions",
"organizations_url": "https://api.github.com/users/shexuan/orgs",
"repos_url": "https://api.github.com/users/shexuan/repos",
"events_url": "https://api.github.com/users/shexuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/shexuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory.\r\n\r\nThe indices mapping correspond to a mapping on top of the data table that is used to re-order/select a sample of the original data table. For example if you do `dataset.train_test_split`, then the resulting train and test datasets will have both an indices mapping to tell which examples are in train and which ones in test.\r\n\r\nBefore saving your datasets on disk, you should call `dataset.flatten_indices()` to remove the indices mapping. It should fix your issue. Under the hood it will create a new data table using the indices mapping. The new data table is going to be a subset of the old one (for example taking only the test set examples), and since the indices mapping will be gone you'll be able to concatenate your datasets.\r\n",
"> As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory.\r\n> \r\n> The indices mapping correspond to a mapping on top of the data table that is used to re-order/select a sample of the original data table. For example if you do `dataset.train_test_split`, then the resulting train and test datasets will have both an indices mapping to tell which examples are in train and which ones in test.\r\n> \r\n> Before saving your datasets on disk, you should call `dataset.flatten_indices()` to remove the indices mapping. It should fix your issue. Under the hood it will create a new data table using the indices mapping. The new data table is going to be a subset of the old one (for example taking only the test set examples), and since the indices mapping will be gone you'll be able to concatenate your datasets.\r\n\r\n`dataset.flatten_indices()` solved my problem, thanks so much!",
"@lhoestq we can add a mention of `dataset.flatten_indices()` in the error message (no rush, just put it on your TODO list or I can do it when I come at it)",
"Yup I agree ! And in the docs as well"
] | 1,605,254,162,000 | 1,605,289,259,000 | 1,605,282,910,000 | NONE | null | Hello, when I concatenate two dataset loading from disk, I encountered a problem:
```
test_dataset = load_from_disk('data/test_dataset')
trn_dataset = load_from_disk('data/train_dataset')
train_dataset = concatenate_datasets([trn_dataset, test_dataset])
```
And it reported the ValueError below:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-38-74fa525512ca> in <module>
----> 1 train_dataset = concatenate_datasets([trn_dataset, test_dataset])
/opt/miniconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py in concatenate_datasets(dsets, info, split)
2547 "However datasets' indices {} come from memory and datasets' indices {} come from disk.".format(
2548 [i for i in range(len(dsets)) if indices_mappings_in_memory[i]],
-> 2549 [i for i in range(len(dsets)) if not indices_mappings_in_memory[i]],
2550 )
2551 )
ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk.
However datasets' indices [1] come from memory and datasets' indices [0] come from disk.
```
But it's curious both of my datasets loading from disk, so I check the source code in `arrow_dataset.py` about the Error:
```
trn_dataset._data_files
# output
[{'filename': 'data/train_dataset/csv-train.arrow', 'skip': 0, 'take': 593264}]
test_dataset._data_files
# output
[{'filename': 'data/test_dataset/csv-test.arrow', 'skip': 0, 'take': 424383}]
print([not dset._data_files for dset in [trn_dataset, test_dataset]])
# [False, False]
# And I tested the code the same as arrow_dataset, but nothing happened
dsets = [trn_dataset, test_dataset]
dsets_in_memory = [not dset._data_files for dset in dsets]
if any(dset_in_memory != dsets_in_memory[0] for dset_in_memory in dsets_in_memory):
raise ValueError(
"Datasets should ALL come from memory, or should ALL come from disk.\n"
"However datasets {} come from memory and datasets {} come from disk.".format(
[i for i in range(len(dsets)) if dsets_in_memory[i]],
[i for i in range(len(dsets)) if not dsets_in_memory[i]],
)
)
```
Any suggestions would be greatly appreciated!
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/848/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
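Putting the fix from the comments above into a small end-to-end sketch: calling `flatten_indices()` before `save_to_disk` materializes the indices mapping, so the reloaded splits can be concatenated. The file paths are placeholders.

```
from datasets import Dataset, concatenate_datasets, load_from_disk

ds = Dataset.from_dict({"x": list(range(10))})
splits = ds.train_test_split(test_size=0.2)  # creates indices mappings

# Remove the indices mappings before saving
splits["train"].flatten_indices().save_to_disk("data/train_dataset")
splits["test"].flatten_indices().save_to_disk("data/test_dataset")

train = load_from_disk("data/train_dataset")
test = load_from_disk("data/test_dataset")
full = concatenate_datasets([train, test])
print(len(full))  # 10
```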
https://api.github.com/repos/huggingface/datasets/issues/847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/847/comments | https://api.github.com/repos/huggingface/datasets/issues/847/events | https://github.com/huggingface/datasets/issues/847 | 742,179,495 | MDU6SXNzdWU3NDIxNzk0OTU= | 847 | multiprocessing in dataset map "can only test a child process" | {
"login": "timothyjlaurent",
"id": 2000204,
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timothyjlaurent",
"html_url": "https://github.com/timothyjlaurent",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"It looks like an issue with wandb/tqdm here.\r\nWe're using the `multiprocess` library instead of the `multiprocessing` builtin python package to support various types of mapping functions. Maybe there's some sort of incompatibility.\r\n\r\nCould you make a minimal script to reproduce or a google colab ?",
"hi facing the same issue here - \r\n\r\n`AssertionError: Caught AssertionError in DataLoader worker process 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 996, in emit\r\n stream.write(msg)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py\", line 100, in new_write\r\n cb(name, data)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/wandb_run.py\", line 723, in _console_callback\r\n self._backend.interface.publish_output(name, data)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 153, in publish_output\r\n self._publish_output(o)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 158, in _publish_output\r\n self._publish(rec)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 456, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"/usr/lib/python3.6/multiprocessing/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py\", line 198, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"<ipython-input-8-a4d9a08d114e>\", line 20, in __getitem__\r\n return_token_type_ids=True\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py\", line 2405, in encode_plus\r\n **kwargs,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py\", line 2125, in _get_padding_truncation_strategies\r\n \"Truncation was not explicitly activated but `max_length` is provided a specific value, \"\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1320, in warning\r\n self._log(WARNING, msg, args, **kwargs)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1444, in _log\r\n self.handle(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1454, in handle\r\n self.callHandlers(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1516, in callHandlers\r\n hdlr.handle(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 865, in handle\r\n self.emit(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1000, in emit\r\n self.handleError(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 917, in handleError\r\n sys.stderr.write('--- Logging error ---\\n')\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py\", line 100, in new_write\r\n cb(name, data)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/wandb_run.py\", line 723, in _console_callback\r\n self._backend.interface.publish_output(name, data)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 153, in publish_output\r\n self._publish_output(o)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 158, in _publish_output\r\n 
self._publish(rec)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 456, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"/usr/lib/python3.6/multiprocessing/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process`\r\n",
"It looks like this warning : \r\n\"Truncation was not explicitly activated but max_length is provided a specific value, \"\r\nis not handled well by wandb.\r\n\r\nThe error occurs when calling the tokenizer.\r\nMaybe you can try to specify `truncation=True` when calling the tokenizer to remove the warning ?\r\nOtherwise I don't know why wandb would fail on a warning. Maybe one of its logging handlers have some issues with the logging of tokenizers. Maybe @n1t0 knows more about this ?",
"I'm having a similar issue but when I try to do multiprocessing with the `DataLoader`\r\n\r\nCode to reproduce:\r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:1%]')\r\nbook_corpus = book_corpus.map(encode, batched=True, num_proc=20, load_from_cache_file=True, batch_size=5000)\r\nbook_corpus.set_format(type='torch', columns=['text', \"input_ids\", \"attention_mask\", \"token_type_ids\"])\r\n\r\nfrom transformers import DataCollatorForWholeWordMask\r\nfrom transformers import Trainer, TrainingArguments\r\n\r\ndata_collator = DataCollatorForWholeWordMask(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15)\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./mobile_linear_att_8L_128_128_03layerdrop_shared\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=64,\r\n save_steps=50,\r\n save_total_limit=2,\r\n logging_first_step=True,\r\n warmup_steps=100,\r\n logging_steps=50,\r\n gradient_accumulation_steps=1,\r\n fp16=True,\r\n **dataloader_num_workers=10**,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=book_corpus,\r\n tokenizer=tokenizer)\r\n\r\ntrainer.train()\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n<timed eval> in <module>\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path, trial)\r\n 869 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control)\r\n 870 \r\n--> 871 for step, inputs in enumerate(epoch_iterator):\r\n 872 \r\n 873 # Skip past any already trained steps if resuming training\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self)\r\n 433 if self._sampler_iter is None:\r\n 434 self._reset()\r\n--> 435 data = self._next_data()\r\n 436 self._num_yielded += 1\r\n 437 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _next_data(self)\r\n 1083 else:\r\n 1084 del self._task_info[idx]\r\n-> 1085 return self._process_data(data)\r\n 1086 \r\n 1087 def _try_put_index(self):\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _process_data(self, data)\r\n 1109 self._try_put_index()\r\n 1110 if isinstance(data, ExceptionWrapper):\r\n-> 1111 data.reraise()\r\n 1112 return data\r\n 1113 \r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/_utils.py in reraise(self)\r\n 426 # have message field\r\n 427 raise self.exc_type(message=msg)\r\n--> 428 raise self.exc_type(msg)\r\n 429 \r\n 430 \r\n\r\nAssertionError: Caught AssertionError in DataLoader worker process 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py\", line 198, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File 
\"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1087, in __getitem__\r\n format_kwargs=self._format_kwargs,\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1074, in _getitem\r\n format_kwargs=format_kwargs,\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 890, in _convert_outputs\r\n v = map_nested(command, v, **map_nested_kwargs)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/utils/py_utils.py\", line 225, in map_nested\r\n return function(data_struct)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 851, in command\r\n return torch.tensor(x, **format_kwargs)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/warnings.py\", line 101, in _showwarnmsg\r\n _showwarnmsg_impl(msg)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/warnings.py\", line 30, in _showwarnmsg_impl\r\n file.write(text)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py\", line 100, in new_write\r\n cb(name, data)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/wandb_run.py\", line 723, in _console_callback\r\n self._backend.interface.publish_output(name, data)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py\", line 153, in publish_output\r\n self._publish_output(o)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py\", line 158, in _publish_output\r\n self._publish(rec)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py\", line 456, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/multiprocessing/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process\r\n```\r\n\r\nAs a workaround I have commented line 456 and 457 in `/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py`",
"Isn't it more the pytorch warning on the use of non-writable memory for tensor that trigger this here @lhoestq? (since it seems to be a warning triggered in `torch.tensor()`",
"Yep this time this is a warning from pytorch that causes wandb to not work properly.\r\nCould this by a wandb issue ?",
"Hi @timothyjlaurent @gaceladri \r\nIf you're running `transformers` from `master` you can try setting the env var `WAND_DISABLE=true` (from https://github.com/huggingface/transformers/pull/9896) and try again ?\r\nThis issue might be related to https://github.com/huggingface/transformers/issues/9623 ",
"I have commented the lines that cause my code break. I'm now seeing my reports on Wandb and my code does not break. I am training now, so I will check probably in 6 hours. I suppose that setting wandb disable will work as well."
] | 1,605,247,264,000 | 1,612,198,408,000 | null | NONE | null | Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])
```
```
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper
out = func(self, *args, **kwargs)
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single
for i in pbar:
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__
for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__
self.close()
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close
super(tqdm_notebook, self).close(*args, **kwargs)
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close
fp_write('')
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write
self.fp.write(_unicode(s))
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write
cb(name, data)
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback
self._backend.interface.publish_output(name, data)
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output
self._publish_output(o)
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output
self._publish(rec)
File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish
if self._process and not self._process.is_alive():
File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive
assert self._parent_pid == os.getpid(), 'can only test a child process'
AssertionError: can only test a child process
"""
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/847/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/847/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
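A hedged sketch of the workarounds mentioned in the comments above: passing `truncation=True` (and a `max_length`) to the tokenizer so the warning that trips wandb's log redirection is never emitted, and optionally disabling wandb through the `WANDB_DISABLED` environment variable read by `transformers`. The model name, dataset and column names below are placeholders, not the reporter's actual setup.

```
import os

# Optional: keep wandb out of the picture entirely (transformers reads this env var)
os.environ["WANDB_DISABLED"] = "true"

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = load_dataset("imdb", split="train[:1%]")

def tokenize_fn(batch):
    # Explicit truncation avoids the "Truncation was not explicitly activated..." warning
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize_fn, batched=True, num_proc=2, remove_columns=["text"])
print(tokenized)
```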
https://api.github.com/repos/huggingface/datasets/issues/846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/846/comments | https://api.github.com/repos/huggingface/datasets/issues/846/events | https://github.com/huggingface/datasets/issues/846 | 741,885,174 | MDU6SXNzdWU3NDE4ODUxNzQ= | 846 | Add HoVer multi-hop fact verification dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Hi @yjernite I'm new but wanted to contribute. Has anyone already taken this problem and do you think it is suitable for newbies?",
"Hi @tenjjin! This dataset is still up for grabs! Here's the link with the guide to add it. You should play around with the library first (download and look at a few datasets), then follow the steps here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md",
"Closed by #1399 "
] | 1,605,210,946,000 | 1,607,636,853,000 | 1,607,636,853,000 | MEMBER | null | ## Adding a Dataset
- **Name:** HoVer
- **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples
- **Paper:** https://arxiv.org/abs/2011.03088
- **Data:** https://hover-nlp.github.io/
- **Motivation:** There are still few multi-hop information extraction benchmarks (HotpotQA, which this dataset was based off, notwithstanding)
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/846/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
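Since the thread above notes the dataset was eventually added (closed by #1399), here is a hedged sketch of loading it. It assumes the `hover` loading script is available for your installed version of `datasets`; depending on the version you may also need to pass `trust_remote_code=True`.

```
from datasets import load_dataset

# Assumes the HoVer loading script added in #1399 is available
hover = load_dataset("hover", split="train")
print(hover[0].keys())
```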
https://api.github.com/repos/huggingface/datasets/issues/845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/845/comments | https://api.github.com/repos/huggingface/datasets/issues/845/events | https://github.com/huggingface/datasets/pull/845 | 741,841,350 | MDExOlB1bGxSZXF1ZXN0NTIwMDg1NDMy | 845 | amazon description fields as bullets | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,605,207,041,000 | 1,605,207,054,000 | 1,605,207,054,000 | CONTRIBUTOR | null | One more minor formatting change to amazon reviews's description (in addition to #844). Just reformatting the fields to display as a bulleted list in markdown. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/845/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/845",
"html_url": "https://github.com/huggingface/datasets/pull/845",
"diff_url": "https://github.com/huggingface/datasets/pull/845.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/845.patch",
"merged_at": 1605207054000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/844/comments | https://api.github.com/repos/huggingface/datasets/issues/844/events | https://github.com/huggingface/datasets/pull/844 | 741,835,661 | MDExOlB1bGxSZXF1ZXN0NTIwMDgwNzM5 | 844 | add newlines to amazon desc | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,605,206,480,000 | 1,605,206,545,000 | 1,605,206,541,000 | CONTRIBUTOR | null | Just a quick formatting fix to hopefully make it render nicer on Viewer | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/844/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/844",
"html_url": "https://github.com/huggingface/datasets/pull/844",
"diff_url": "https://github.com/huggingface/datasets/pull/844.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/844.patch",
"merged_at": 1605206541000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/843/comments | https://api.github.com/repos/huggingface/datasets/issues/843/events | https://github.com/huggingface/datasets/issues/843 | 741,531,121 | MDU6SXNzdWU3NDE1MzExMjE= | 843 | use_custom_baseline still produces errors for bertscore | {
"login": "penatbater",
"id": 37921244,
"node_id": "MDQ6VXNlcjM3OTIxMjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/37921244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penatbater",
"html_url": "https://github.com/penatbater",
"followers_url": "https://api.github.com/users/penatbater/followers",
"following_url": "https://api.github.com/users/penatbater/following{/other_user}",
"gists_url": "https://api.github.com/users/penatbater/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penatbater/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penatbater/subscriptions",
"organizations_url": "https://api.github.com/users/penatbater/orgs",
"repos_url": "https://api.github.com/users/penatbater/repos",
"events_url": "https://api.github.com/users/penatbater/events{/privacy}",
"received_events_url": "https://api.github.com/users/penatbater/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067393914,
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug",
"name": "metric bug",
"color": "25b21e",
"default": false,
"description": "A bug in a metric script"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Thanks for reporting ! That's a bug indeed\r\nIf you want to contribute, feel free to fix this issue and open a PR :)",
"This error is because of a mismatch between `datasets` and `bert_score`. With `datasets=1.1.2` and `bert_score>=0.3.6` it works ok. So `pip install -U bert_score` should fix the problem. ",
"Thanks for the heads up @pvl and for the PR as well :)",
"Hello everyone,\r\n\r\nI think the problem is not solved: \r\n\r\n```\r\nfrom datasets import load_metric\r\nmetric=load_metric('bertscore')\r\nmetric.compute(\r\n predictions=predictions,\r\n references=references,\r\n lang='fr',\r\n rescale_with_baseline=True\r\n)\r\nTypeError: get_hash() missing 2 required positional arguments: 'use_custom_baseline' and 'use_fast_tokenizer'\r\n```\r\nThis code is produced using `Python 3.6.9 datasets==1.1.2 and bert_score==0.3.10`",
"Hi ! This has been fixed by https://github.com/huggingface/datasets/pull/2770, we'll do a new release soon to make the fix available :)\r\n\r\nIn the meantime please use an older version of `bert_score`"
] | 1,605,181,472,000 | 1,630,404,404,000 | 1,612,880,508,000 | NONE | null | `metric = load_metric('bertscore')`
`a1 = "random sentences"`
`b1 = "random sentences"`
`metric.compute(predictions = [a1], references = [b1], lang = 'en')`
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/stephen_chan/.cache/huggingface/modules/datasets_modules/metrics/bertscore/361e597a01a41d6cf95d94bbfb01dea16261687abc0c6c74cc9930f80488f363/bertscore.py", line 108, in _compute
hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline)
TypeError: get_hash() missing 1 required positional argument: 'use_custom_baseline'`
Adding 'use_custom_baseline = False' as an argument produces this error
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
TypeError: _compute() got an unexpected keyword argument 'use_custom_baseline'`
This is on Ubuntu 18.04, Python 3.6.9, datasets version 1.1.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/843/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
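The comments above trace the `get_hash()` error to a version mismatch between `datasets` and `bert_score`. Below is a small, hedged sanity check to run before computing the metric; the exact version pair that works for you may differ from the ones quoted in the thread.

```
import bert_score
import datasets
from datasets import load_metric

print("datasets:", datasets.__version__, "| bert_score:", bert_score.__version__)

metric = load_metric("bertscore")
result = metric.compute(
    predictions=["random sentences"],
    references=["random sentences"],
    lang="en",
)
print(result["f1"])
```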
https://api.github.com/repos/huggingface/datasets/issues/842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/842/comments | https://api.github.com/repos/huggingface/datasets/issues/842/events | https://github.com/huggingface/datasets/issues/842 | 741,208,428 | MDU6SXNzdWU3NDEyMDg0Mjg= | 842 | How to enable `.map()` pre-processing pipelines to support multi-node parallelism? | {
"login": "shangw-nvidia",
"id": 66387198,
"node_id": "MDQ6VXNlcjY2Mzg3MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shangw-nvidia",
"html_url": "https://github.com/shangw-nvidia",
"followers_url": "https://api.github.com/users/shangw-nvidia/followers",
"following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}",
"gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions",
"organizations_url": "https://api.github.com/users/shangw-nvidia/orgs",
"repos_url": "https://api.github.com/users/shangw-nvidia/repos",
"events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}",
"received_events_url": "https://api.github.com/users/shangw-nvidia/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Right now multiprocessing only runs on single node.\r\n\r\nHowever it's probably possible to extend it to support multi nodes. Indeed we're using the `multiprocess` library from the `pathos` project to do multiprocessing in `datasets`, and `pathos` is made to support parallelism on several nodes. More info about pathos [on the pathos repo](https://github.com/uqfoundation/pathos).\r\n\r\nIf you're familiar with pathos or if you want to give it a try, it could be a nice addition to the library :)"
] | 1,605,146,678,000 | 1,605,223,707,000 | null | NONE | null | Hi,
Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training (since more than one node would be available), I'm wondering if it's possible to extend the parallel processing across nodes, instead of only one node running the `.map()` while the other nodes are waiting for it to finish?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/842/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
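While `datasets` itself only parallelizes `.map()` within one machine, a common workaround (not a built-in multi-node feature) is to let each node preprocess its own shard and merge the results afterwards. The `RANK`/`WORLD_SIZE` values below are placeholders for whatever your job launcher provides.

```
import os
from datasets import load_dataset

# Placeholders: in practice these come from your launcher (e.g. torchrun env vars)
rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))

dataset = load_dataset("imdb", split="train")

# Each node works on a disjoint shard, with local multiprocessing inside the node
shard = dataset.shard(num_shards=world_size, index=rank, contiguous=True)
processed = shard.map(lambda ex: {"n_chars": len(ex["text"])}, num_proc=4)

processed.save_to_disk(f"processed/shard_{rank}")  # concatenate the shards later
```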
https://api.github.com/repos/huggingface/datasets/issues/841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/841/comments | https://api.github.com/repos/huggingface/datasets/issues/841/events | https://github.com/huggingface/datasets/issues/841 | 740,737,448 | MDU6SXNzdWU3NDA3Mzc0NDg= | 841 | Can not reuse datasets already downloaded | {
"login": "jc-hou",
"id": 30210529,
"node_id": "MDQ6VXNlcjMwMjEwNTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jc-hou",
"html_url": "https://github.com/jc-hou",
"followers_url": "https://api.github.com/users/jc-hou/followers",
"following_url": "https://api.github.com/users/jc-hou/following{/other_user}",
"gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions",
"organizations_url": "https://api.github.com/users/jc-hou/orgs",
"repos_url": "https://api.github.com/users/jc-hou/repos",
"events_url": "https://api.github.com/users/jc-hou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jc-hou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"It seems the process needs '/datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py'\r\nWhere and how to assign this ```wikipedia.py``` after I manually download it ?",
"\r\ndownload the ```wikipedia.py``` at the working directory and go with ```dataset = load_dataset('wikipedia.py', '20200501.en')``` works."
] | 1,605,098,535,000 | 1,605,118,636,000 | 1,605,118,636,000 | NONE | null | Hello,
I need to connect to a frontal node (with an HTTP proxy, no GPU) before connecting to a GPU node (with no HTTP proxy, so I cannot use wget and so on).
I successfully downloaded and reused the wikipedia dataset on the frontal node.
When I connect to the GPU node, I am supposed to reuse the downloaded dataset from the cache, but it fails and ends with a timeout error.
On frontal node:
```
>>> from datasets import load_dataset
>>> dataset = load_dataset('wikipedia', '20200501.en')
Reusing dataset wikipedia (/linkhome/rech/genini01/uua34ms/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/f92599dfccab29832c442b82870fa8f6983e5b4ebbf5e6e2dcbe894e325339cd)
/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
```
On gpu node:
```
>>> from datasets import load_dataset
>>> dataset = load_dataset('wikipedia', '20200501.en')
Traceback (most recent call last):
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 160, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/connection.py", line 84, in create_connection
raise err
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/connection.py", line 74, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 381, in _make_request
self._validate_conn(conn)
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn
conn.connect()
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 309, in connect
conn = self._new_conn()
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 172, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 727, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/retry.py", line 446, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/load.py", line 590, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/load.py", line 264, in prepare_module
head_hf_s3(path, filename=name, dataset=dataset)
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3
return requests.head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset))
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/api.py", line 104, in head
return request('head', url, **kwargs)
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',))
```
Any advice? Thanks!
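(For reference, the workaround from the comments above, written out as a snippet: copy `wikipedia.py` from the frontal node into the working directory on the GPU node and load the dataset from the local script, so no HTTP request is needed. Newer versions of `datasets` also offer an offline mode via the `HF_DATASETS_OFFLINE=1` environment variable.)
```python
from datasets import load_dataset

# Assumes `wikipedia.py` was fetched on the frontal node (which has internet access)
# and copied next to this script on the GPU node; the cached data is then reused as-is.
dataset = load_dataset("wikipedia.py", "20200501.en")
```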
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/841/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/840/comments | https://api.github.com/repos/huggingface/datasets/issues/840/events | https://github.com/huggingface/datasets/pull/840 | 740,632,771 | MDExOlB1bGxSZXF1ZXN0NTE5MDg2NDUw | 840 | Update squad_v2.py | {
"login": "Javier-Jimenez99",
"id": 38747614,
"node_id": "MDQ6VXNlcjM4NzQ3NjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/38747614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Javier-Jimenez99",
"html_url": "https://github.com/Javier-Jimenez99",
"followers_url": "https://api.github.com/users/Javier-Jimenez99/followers",
"following_url": "https://api.github.com/users/Javier-Jimenez99/following{/other_user}",
"gists_url": "https://api.github.com/users/Javier-Jimenez99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Javier-Jimenez99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Javier-Jimenez99/subscriptions",
"organizations_url": "https://api.github.com/users/Javier-Jimenez99/orgs",
"repos_url": "https://api.github.com/users/Javier-Jimenez99/repos",
"events_url": "https://api.github.com/users/Javier-Jimenez99/events{/privacy}",
"received_events_url": "https://api.github.com/users/Javier-Jimenez99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"With this change all the checks are passed.",
"Good"
] | 1,605,088,721,000 | 1,605,108,574,000 | 1,605,108,395,000 | CONTRIBUTOR | null | Change lines 100 and 102 to prevent overwriting ```predictions``` variable. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/840/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/840",
"html_url": "https://github.com/huggingface/datasets/pull/840",
"diff_url": "https://github.com/huggingface/datasets/pull/840.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/840.patch",
"merged_at": 1605108395000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/839/comments | https://api.github.com/repos/huggingface/datasets/issues/839/events | https://github.com/huggingface/datasets/issues/839 | 740,355,270 | MDU6SXNzdWU3NDAzNTUyNzA= | 839 | XSum dataset missing spaces between sentences | {
"login": "loganlebanoff",
"id": 10007282,
"node_id": "MDQ6VXNlcjEwMDA3Mjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/10007282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loganlebanoff",
"html_url": "https://github.com/loganlebanoff",
"followers_url": "https://api.github.com/users/loganlebanoff/followers",
"following_url": "https://api.github.com/users/loganlebanoff/following{/other_user}",
"gists_url": "https://api.github.com/users/loganlebanoff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loganlebanoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loganlebanoff/subscriptions",
"organizations_url": "https://api.github.com/users/loganlebanoff/orgs",
"repos_url": "https://api.github.com/users/loganlebanoff/repos",
"events_url": "https://api.github.com/users/loganlebanoff/events{/privacy}",
"received_events_url": "https://api.github.com/users/loganlebanoff/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,605,054,883,000 | 1,605,054,883,000 | null | NONE | null | I noticed that the XSum dataset has no space between sentences. This could lead to worse results for anyone training or testing on it. Here's an example (0th entry in the test set):
`The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like this morning 'Oh I think you're nominated'", said Dappy."And I was like 'Oh yeah, which one?' And now we've got nominated for four awards. I mean, wow!"Bandmate Fazer added: "We thought it's best of us to come down and mingle with everyone and say hello to the cameras. And now we find we've got four nominations."The band have two shots at the best song prize, getting the nod for their Tynchy Stryder collaboration Number One, and single Strong Again.Their album Uncle B will also go up against records by the likes of Beyonce and Kanye West.N-Dubz picked up the best newcomer Mobo in 2007, but female member Tulisa said they wouldn't be too disappointed if they didn't win this time around."At the end of the day we're grateful to be where we are in our careers."If it don't happen then it don't happen - live to fight another day and keep on making albums and hits for the fans."Dappy also revealed they could be performing live several times on the night.The group will be doing Number One and also a possible rendition of the War Child single, I Got Soul.The charity song is a re-working of The Killers' All These Things That I've Done and is set to feature artists like Chipmunk, Ironik and Pixie Lott.This year's Mobos will be held outside of London for the first time, in Glasgow on 30 September.N-Dubz said they were looking forward to performing for their Scottish fans and boasted about their recent shows north of the border."We just done Edinburgh the other day," said Dappy."We smashed up an N-Dubz show over there. We done Aberdeen about three or four months ago - we smashed up that show over there! Everywhere we go we smash it up!"` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/839/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/838/comments | https://api.github.com/repos/huggingface/datasets/issues/838/events | https://github.com/huggingface/datasets/pull/838 | 740,328,382 | MDExOlB1bGxSZXF1ZXN0NTE4ODM0NTE5 | 838 | CNN/Dailymail Dataset Card | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,605,052,603,000 | 1,606,338,591,000 | 1,606,338,590,000 | CONTRIBUTOR | null | Link to the card page: https://github.com/mcmillanmajora/datasets/tree/cnn_dailymail_card/datasets/cnn_dailymail
One of the questions this dataset brings up is how we want to handle versioning of the cards to mirror versions of the dataset. The different versions of this dataset are used for different tasks (which may not be reflected in the versions that we currently have in the repo?), but it's only the structure that's changing rather than the content in this particular case, at least between versions 2.0.0 and 3.0.0. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/838/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/838",
"html_url": "https://github.com/huggingface/datasets/pull/838",
"diff_url": "https://github.com/huggingface/datasets/pull/838.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/838.patch",
"merged_at": 1606338590000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/837/comments | https://api.github.com/repos/huggingface/datasets/issues/837/events | https://github.com/huggingface/datasets/pull/837 | 740,250,215 | MDExOlB1bGxSZXF1ZXN0NTE4NzcwNDM5 | 837 | AlloCiné dataset card | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,605,043,193,000 | 1,606,341,387,000 | 1,606,341,387,000 | CONTRIBUTOR | null | Link to the card page: https://github.com/mcmillanmajora/datasets/blob/allocine_card/datasets/allocine/README.md
There wasn't as much information available for this dataset, so I'm wondering what's the best way to address open questions about the dataset. For example, where did the list of films that the dataset creator used come from?
I'm also wondering how best to go about talking about limitations when so little is known about the data. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/837/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/837",
"html_url": "https://github.com/huggingface/datasets/pull/837",
"diff_url": "https://github.com/huggingface/datasets/pull/837.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/837.patch",
"merged_at": 1606341387000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/836/comments | https://api.github.com/repos/huggingface/datasets/issues/836/events | https://github.com/huggingface/datasets/issues/836 | 740,187,613 | MDU6SXNzdWU3NDAxODc2MTM= | 836 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas | {
"login": "randubin",
"id": 8919490,
"node_id": "MDQ6VXNlcjg5MTk0OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8919490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/randubin",
"html_url": "https://github.com/randubin",
"followers_url": "https://api.github.com/users/randubin/followers",
"following_url": "https://api.github.com/users/randubin/following{/other_user}",
"gists_url": "https://api.github.com/users/randubin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/randubin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/randubin/subscriptions",
"organizations_url": "https://api.github.com/users/randubin/orgs",
"repos_url": "https://api.github.com/users/randubin/repos",
"events_url": "https://api.github.com/users/randubin/events{/privacy}",
"received_events_url": "https://api.github.com/users/randubin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Which version of pyarrow do you have ? Could you try to update pyarrow and try again ?",
"Thanks for the fast response. I have the latest version '2.0.0' (I tried to update)\r\nI am working with Python 3.8.5",
"I think that the issue is similar to this one:https://issues.apache.org/jira/browse/ARROW-9612\r\nThe problem is in arrow when the column data contains long strings.\r\nAny ideas on how to bypass this?",
"We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py).\r\n\r\n\r\nIn the meantime you can specify yourself the `ReadOptions` config like this:\r\n```python\r\nimport pyarrow.csv as pac # PyArrow is installed with `datasets`\r\n\r\nread_options = pac.ReadOptions(block_size=1e9) # try to find the right value for your use-case\r\ndataset = load_dataset('csv', data_files=files, read_options=read_options)\r\n```\r\n",
"This did help to load the data. But the problem now is that I get:\r\nArrowInvalid: CSV parse error: Expected 5 columns, got 187\r\n\r\nIt seems that this change the parsing so I changed the table to tab-separated and tried to load it directly from pyarrow\r\nBut I got a similar error, again it loaded fine in pandas so I am not sure what to do.\r\n\r\n\r\n\r\n",
"Got almost the same error loading a ~5GB TSV file, first got the same error as OP, then tried giving it my own ReadOptions and also got the same CSV parse error.",
"> We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py).\r\n> \r\n> In the meantime you can specify yourself the `ReadOptions` config like this:\r\n> \r\n> ```python\r\n> import pyarrow.csv as pac # PyArrow is installed with `datasets`\r\n> \r\n> read_options = pac.ReadOptions(block_size=1e9) # try to find the right value for your use-case\r\n> dataset = load_dataset('csv', data_files=files, read_options=read_options)\r\n> ```\r\n\r\nThis did not work for me, I got\r\n`TypeError: __init__() got an unexpected keyword argument 'read_options'`",
"Hi ! Yes because of issues with PyArrow's CSV reader we switched to using the Pandas CSV reader. In particular the `read_options` argument is not supported anymore, but you can pass any parameter of Pandas' `read_csv` function (see the list here in [Pandas documentation](https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html))"
] | 1,605,036,940,000 | 1,637,773,159,000 | 1,605,807,338,000 | NONE | null | Hi All
I am trying to load a custom dataset, starting with a single file to make sure the file loads correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4...
I am getting this error:
6a4ac4/csv.py in _generate_tables(self, files)
78 def _generate_tables(self, files):
79 for i, file in enumerate(files):
---> 80 pa_table = pac.read_csv(
81 file,
82 read_options=self.config.pa_read_options,
~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv()
~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
**ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)**
The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need.
There is no issue reading the file with pandas. Any idea what could be the issue?
When I am running a different CSV, I do not get this line:
(download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size)
Any ideas?
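(Putting the workaround from the comments above into one runnable snippet; the block size value is a guess that needs to be tuned to the longest row in the file, and the `read_options` keyword only applies to versions of the csv script that still use PyArrow's reader, since later versions switched to pandas and accept `pandas.read_csv` keyword arguments instead.)
```python
import pyarrow.csv as pac
from datasets import load_dataset

files = ["my_large_file.csv"]  # placeholder path

# Raise PyArrow's CSV block size so a single long row no longer straddles two read blocks,
# which is what triggers the ArrowInvalid error above.
read_options = pac.ReadOptions(block_size=1 << 30)  # 1 GiB; tune for your data
dataset = load_dataset("csv", data_files=files, read_options=read_options)
```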
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/836/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/835/comments | https://api.github.com/repos/huggingface/datasets/issues/835/events | https://github.com/huggingface/datasets/issues/835 | 740,102,210 | MDU6SXNzdWU3NDAxMDIyMTA= | 835 | Wikipedia postprocessing | {
"login": "bminixhofer",
"id": 13353204,
"node_id": "MDQ6VXNlcjEzMzUzMjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/13353204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bminixhofer",
"html_url": "https://github.com/bminixhofer",
"followers_url": "https://api.github.com/users/bminixhofer/followers",
"following_url": "https://api.github.com/users/bminixhofer/following{/other_user}",
"gists_url": "https://api.github.com/users/bminixhofer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bminixhofer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bminixhofer/subscriptions",
"organizations_url": "https://api.github.com/users/bminixhofer/orgs",
"repos_url": "https://api.github.com/users/bminixhofer/repos",
"events_url": "https://api.github.com/users/bminixhofer/events{/privacy}",
"received_events_url": "https://api.github.com/users/bminixhofer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi @bminixhofer ! Parsing WikiMedia is notoriously difficult: this processing used [mwparserfromhell](https://github.com/earwig/mwparserfromhell) which is pretty good but not perfect.\r\n\r\nAs an alternative, you can also use the Wiki40b dataset which was pre-processed using an un-released Google internal tool",
"Ok, thanks! I'll try the Wiki40b dataset.",
"If anyone else is concerned about this, `wiki40b` does indeed seem very well cleaned."
] | 1,605,029,198,000 | 1,605,032,600,000 | 1,605,030,561,000 | NONE | null | Hi, thanks for this library!
Running this code:
```py
import datasets
wikipedia = datasets.load_dataset("wikipedia", "20200501.de")
print(wikipedia['train']['text'][0])
```
I get:
```
mini|Ricardo Flores Magón
mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfirio Diaz, Ausschnitt des Gemälde „Tierra y Libertad“ von Idelfonso Carrara (?) von 1930.
Ricardo Flores Magón (* 16. September 1874 in San Antonio Eloxochitlán im mexikanischen Bundesstaat Oaxaca; † 22. November 1922 im Bundesgefängnis Leavenworth im US-amerikanischen Bundesstaat Kansas) war als Journalist, Gewerkschafter und Literat ein führender anarchistischer Theoretiker und Aktivist, der die revolutionäre mexikanische Bewegung radikal beeinflusste. Magón war Gründer der Partido Liberal Mexicano und Mitglied der Industrial Workers of the World.
Politische Biografie
Journalistisch und politisch kämpfte er und sein Bruder sehr kompromisslos gegen die Diktatur Porfirio Diaz. Philosophisch und politisch orientiert an radikal anarchistischen Idealen und den Erfahrungen seiner indigenen Vorfahren bei der gemeinschaftlichen Bewirtschaftung des Gemeindelandes, machte er die Forderung „Land und Freiheit“ (Tierra y Libertad) populär. Besonders Francisco Villa und Emiliano Zapata griffen die Forderung Land und Freiheit auf. Seine Philosophie hatte großen Einfluss auf die Landarbeiter. 1904 floh er in die USA und gründete 1906 die Partido Liberal Mexicano. Im Exil lernte er u. a. Emma Goldman kennen. Er verbrachte die meiste Zeit seines Lebens in Gefängnissen und im Exil und wurde 1918 in den USA wegen „Behinderung der Kriegsanstrengungen“ zu zwanzig Jahren Gefängnis verurteilt. Zu seinem Tod gibt es drei verschiedene Theorien. Offiziell starb er an Herzversagen. Librado Rivera, der die Leiche mit eigenen Augen gesehen hat, geht davon aus, dass Magón von einem Mitgefangenen erdrosselt wurde. Die staatstreue Gewerkschaftszeitung CROM veröffentlichte 1923 einen Beitrag, nachdem Magón von einem Gefängniswärter erschlagen wurde.
mini|Die Brüder Ricardo (links) und Enrique Flores Magón (rechts) vor dem Los Angeles County Jail, 1917
[...]
```
so some markup like `mini|` is still left. Should I run another parser on this text before feeding it to an ML model, or is this a known imperfection of parsing Wiki markup?
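(As suggested in the comments above, the pre-cleaned Wiki40B dump is one alternative; a rough heuristic for dropping the leftover caption lines is also sketched below, where the `mini|` regex is only a guess based on the example above.)
```python
import re
from datasets import load_dataset

# Cleaner alternative mentioned in the comments: Wiki40B ("de" config for German).
wiki40b_de = load_dataset("wiki40b", "de", split="train")

# Rough cleanup of the raw dump: drop lines that are leftover image/caption markup.
def strip_caption_lines(example):
    lines = example["text"].splitlines()
    kept = [line for line in lines if not re.match(r"^\s*mini\|", line)]
    return {"text": "\n".join(kept)}

wikipedia_de = load_dataset("wikipedia", "20200501.de", split="train")
cleaned = wikipedia_de.map(strip_caption_lines)
```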
Apologies if this has been asked before. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/835/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/834/comments | https://api.github.com/repos/huggingface/datasets/issues/834/events | https://github.com/huggingface/datasets/issues/834 | 740,082,890 | MDU6SXNzdWU3NDAwODI4OTA= | 834 | [GEM] add WikiLingua cross-lingual abstractive summarization dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hey @yjernite. This is a very interesting dataset. Would love to work on adding it but I see that the link to the data is to a gdrive folder. Can I just confirm wether dlmanager can handle gdrive urls or would this have to be a manual dl?",
"Hi @KMFODA ! A version of WikiLingua is actually already accessible in the [GEM dataset](https://huggingface.co/datasets/gem)\r\n\r\nYou can use it for example to load the French to English translation with:\r\n```python\r\nfrom datasets import load_dataset\r\nwikilingua = load_dataset(\"gem\", \"wiki_lingua_french_fr\")\r\n```\r\n\r\nClosed by https://github.com/huggingface/datasets/pull/1807"
] | 1,605,027,643,000 | 1,618,488,249,000 | 1,618,488,098,000 | MEMBER | null | ## Adding a Dataset
- **Name:** WikiLingua
- **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article.
- **Paper:** https://arxiv.org/pdf/2010.03093.pdf
- **Data:** https://github.com/esdurmus/Wikilingua
- **Motivation:** Included in the GEM shared task. Multilingual.
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/834/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/833 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/833/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/833/comments | https://api.github.com/repos/huggingface/datasets/issues/833/events | https://github.com/huggingface/datasets/issues/833 | 740,079,692 | MDU6SXNzdWU3NDAwNzk2OTI= | 833 | [GEM] add ASSET text simplification dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,605,027,390,000 | 1,607,002,695,000 | 1,607,002,695,000 | MEMBER | null | ## Adding a Dataset
- **Name:** ASSET
- **Description:** ASSET is a crowdsourced multi-reference corpus for assessing sentence simplification in English, where each simplification was produced by executing several rewriting transformations.
- **Paper:** https://www.aclweb.org/anthology/2020.acl-main.424.pdf
- **Data:** https://github.com/facebookresearch/asset
- **Motivation:** Included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/833/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/832/comments | https://api.github.com/repos/huggingface/datasets/issues/832/events | https://github.com/huggingface/datasets/issues/832 | 740,077,228 | MDU6SXNzdWU3NDAwNzcyMjg= | 832 | [GEM] add WikiAuto text simplification dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,605,027,203,000 | 1,607,002,688,000 | 1,607,002,688,000 | MEMBER | null | ## Adding a Dataset
- **Name:** WikiAuto
- **Description:** Sentences in English Wikipedia and their corresponding sentences in Simple English Wikipedia that are written with simpler grammar and word choices. A lot of lexical and syntactic paraphrasing.
- **Paper:** https://www.aclweb.org/anthology/2020.acl-main.709.pdf
- **Data:** https://github.com/chaojiang06/wiki-auto
- **Motivation:** Included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/832/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/831/comments | https://api.github.com/repos/huggingface/datasets/issues/831/events | https://github.com/huggingface/datasets/issues/831 | 740,071,697 | MDU6SXNzdWU3NDAwNzE2OTc= | 831 | [GEM] Add WebNLG dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,605,026,808,000 | 1,607,002,681,000 | 1,607,002,681,000 | MEMBER | null | ## Adding a Dataset
- **Name:** WebNLG
- **Description:** WebNLG consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples (16,095 data inputs and 42,873 data-text pairs). The data is available in English and Russian
- **Paper:** https://www.aclweb.org/anthology/P17-1017.pdf
- **Data:** https://webnlg-challenge.loria.fr/download/
- **Motivation:** Included in the GEM shared task, multilingual
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/831/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/830/comments | https://api.github.com/repos/huggingface/datasets/issues/830/events | https://github.com/huggingface/datasets/issues/830 | 740,065,376 | MDU6SXNzdWU3NDAwNjUzNzY= | 830 | [GEM] add ToTTo Table-to-text dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"closed via #1098 "
] | 1,605,026,314,000 | 1,607,605,562,000 | 1,607,605,561,000 | MEMBER | null | ## Adding a Dataset
- **Name:** ToTTo
- **Description:** ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.
- **Paper:** https://arxiv.org/abs/2004.14373
- **Data:** https://github.com/google-research-datasets/totto
- **Motivation:** Included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/830/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/829 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/829/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/829/comments | https://api.github.com/repos/huggingface/datasets/issues/829/events | https://github.com/huggingface/datasets/issues/829 | 740,061,699 | MDU6SXNzdWU3NDAwNjE2OTk= | 829 | [GEM] add Schema-Guided Dialogue | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,605,026,024,000 | 1,607,002,670,000 | 1,607,002,670,000 | MEMBER | null | ## Adding a Dataset
- **Name:** The Schema-Guided Dialogue Dataset
- **Description:** The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 domains, ranging from banks and events to media, calendar, travel, and weather.
- **Paper:** https://arxiv.org/pdf/2002.01359.pdf https://arxiv.org/pdf/2004.15006.pdf
- **Data:** https://github.com/google-research-datasets/dstc8-schema-guided-dialogue
- **Motivation:** Included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/829/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/828/comments | https://api.github.com/repos/huggingface/datasets/issues/828/events | https://github.com/huggingface/datasets/pull/828 | 740,008,683 | MDExOlB1bGxSZXF1ZXN0NTE4NTcwMjY3 | 828 | Add writer_batch_size attribute to GeneratorBasedBuilder | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,605,022,099,000 | 1,605,025,656,000 | 1,605,025,656,000 | MEMBER | null | As specified in #741, one would need to specify a custom ArrowWriter batch size to avoid filling the RAM. Indeed, the default buffer size is 10 000 examples, but for multimodal datasets that contain images or videos we may want to reduce that. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/828/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/828",
"html_url": "https://github.com/huggingface/datasets/pull/828",
"diff_url": "https://github.com/huggingface/datasets/pull/828.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/828.patch",
"merged_at": 1605025655000
} | true |
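A note on #828 above: a dataset script that yields large multimodal examples could lower the writer batch size roughly as sketched below. This is only an illustration; the `DEFAULT_WRITER_BATCH_SIZE` attribute name follows the current `datasets` API and is assumed here to be how the feature is exposed.
```python
import datasets

class ToyImageCaptions(datasets.GeneratorBasedBuilder):
    # Flush Arrow record batches every 100 examples instead of the default 10 000,
    # so large image examples do not accumulate in RAM before being written to disk.
    DEFAULT_WRITER_BATCH_SIZE = 100

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"image_path": datasets.Value("string"), "caption": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"rows": [("img_0.png", "a placeholder caption")]},
            )
        ]

    def _generate_examples(self, rows):
        for idx, (image_path, caption) in enumerate(rows):
            yield idx, {"image_path": image_path, "caption": caption}
```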
https://api.github.com/repos/huggingface/datasets/issues/827 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/827/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/827/comments | https://api.github.com/repos/huggingface/datasets/issues/827/events | https://github.com/huggingface/datasets/issues/827 | 739,983,024 | MDU6SXNzdWU3Mzk5ODMwMjQ= | 827 | [GEM] MultiWOZ dialogue dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi @yjernite can I help in adding this dataset? \r\n\r\nI am excited about this because this will be my first contribution to the datasets library as well as to hugginface."
] | 1,605,020,270,000 | 1,607,780,550,000 | null | MEMBER | null | ## Adding a Dataset
- **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz)
- **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts; there are no annotations from the user side.
- **Paper:** https://arxiv.org/pdf/2007.12720.pdf
- **Data:** https://github.com/budzianowski/multiwoz
- **Motivation:** Will likely be part of the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/827/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/826/comments | https://api.github.com/repos/huggingface/datasets/issues/826/events | https://github.com/huggingface/datasets/issues/826 | 739,976,716 | MDU6SXNzdWU3Mzk5NzY3MTY= | 826 | [GEM] Add E2E dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,605,019,840,000 | 1,607,002,677,000 | 1,607,002,677,000 | MEMBER | null | ## Adding a Dataset
- **Name:** E2E NLG dataset (for End-to-end natural language generation)
- **Description:** a dataset for training end-to-end, data-driven natural language generation systems in the restaurant domain. The dataset consists of 5,751 dialogue-act Meaning Representations (structured data) and 8.1 reference free-text utterances per dialogue-act on average.
- **Paper:** https://arxiv.org/pdf/1706.09254.pdf https://arxiv.org/abs/1901.07931
- **Data:** http://www.macs.hw.ac.uk/InteractionLab/E2E/#data
- **Motivation:** This dataset will likely be included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/826/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/825/comments | https://api.github.com/repos/huggingface/datasets/issues/825/events | https://github.com/huggingface/datasets/pull/825 | 739,925,960 | MDExOlB1bGxSZXF1ZXN0NTE4NTAyNjgx | 825 | Add accuracy, precision, recall and F1 metrics | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,605,016,235,000 | 1,605,122,628,000 | 1,605,122,623,000 | CONTRIBUTOR | null | This PR adds several single metrics, namely:
- Accuracy
- Precision
- Recall
- F1
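A minimal usage sketch (the metric names match this PR; the `average` keyword is an assumption based on the macro/micro/per-label options described below):

```python
from datasets import load_metric

f1_metric = load_metric("f1")
# toy multiclass example scored with macro averaging
results = f1_metric.compute(predictions=[0, 1, 1, 2], references=[0, 1, 2, 2], average="macro")
print(results)
```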
They all use the sklearn metrics of the same name under the hood. They allow several useful features when training a multilabel/multiclass model:
- have a macro/micro/per label/weighted/binary/per sample score
- score only the selected labels (usually what we call the positive labels) and ignore the negative ones. For example in case of a Named Entity Recognition task, positive labels are (`PERSON`, `LOCATION` or `ORGANIZATION`) and the negative one is `O`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/825/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/825/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/825",
"html_url": "https://github.com/huggingface/datasets/pull/825",
"diff_url": "https://github.com/huggingface/datasets/pull/825.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/825.patch",
"merged_at": 1605122623000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/824/comments | https://api.github.com/repos/huggingface/datasets/issues/824/events | https://github.com/huggingface/datasets/issues/824 | 739,896,526 | MDU6SXNzdWU3Mzk4OTY1MjY= | 824 | Discussion using datasets in offline mode | {
"login": "mandubian",
"id": 77193,
"node_id": "MDQ6VXNlcjc3MTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/77193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mandubian",
"html_url": "https://github.com/mandubian",
"followers_url": "https://api.github.com/users/mandubian/followers",
"following_url": "https://api.github.com/users/mandubian/following{/other_user}",
"gists_url": "https://api.github.com/users/mandubian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mandubian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mandubian/subscriptions",
"organizations_url": "https://api.github.com/users/mandubian/orgs",
"repos_url": "https://api.github.com/users/mandubian/repos",
"events_url": "https://api.github.com/users/mandubian/events{/privacy}",
"received_events_url": "https://api.github.com/users/mandubian/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"No comments ?",
"I think it would be very cool. I'm currently working on a cluster from Compute Canada, and I have internet access only when I'm not in the nodes where I run the scripts. So I was expecting to be able to use the wmt14 dataset until I realized I needed internet connection even if I downloaded the data already. I'm going to try option 2 you mention for now though! Thanks ;)",
"Requiring online connection is a deal breaker in some cases unfortunately so it'd be great if offline mode is added similar to how `transformers` loads models offline fine.\r\n\r\n@mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets`. Could you please elaborate on how that should look like?",
"here is my way to load a dataset offline, but it **requires** an online machine\r\n1. (online machine)\r\n```\r\nimport datasets\r\ndata = datasets.load_dataset(...)\r\ndata.save_to_disk(/YOUR/DATASET/DIR)\r\n```\r\n2. copy the dir from online to the offline machine\r\n3. (offline machine)\r\n```\r\nimport datasets\r\ndata = datasets.load_from_disk(/SAVED/DATA/DIR)\r\n```\r\n\r\nHTH.",
"> here is my way to load a dataset offline, but it **requires** an online machine\n> \n> 1. (online machine)\n> \n> ```\n> \n> import datasets\n> \n> data = datasets.load_dataset(...)\n> \n> data.save_to_disk(/YOUR/DATASET/DIR)\n> \n> ```\n> \n> 2. copy the dir from online to the offline machine\n> \n> 3. (offline machine)\n> \n> ```\n> \n> import datasets\n> \n> data = datasets.load_from_disk(/SAVED/DATA/DIR)\n> \n> ```\n> \n> \n> \n> HTH.\n\n",
"I opened a PR that allows to reload modules that have already been loaded once even if there's no internet.\r\n\r\nLet me know if you know other ways that can make the offline mode experience better. I'd be happy to add them :) \r\n\r\nI already note the \"freeze\" modules option, to prevent local modules updates. It would be a cool feature.\r\n\r\n----------\r\n\r\n> @mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets`. Could you please elaborate on how that should look like?\r\n\r\nIndeed `load_dataset` allows to load remote dataset script (squad, glue, etc.) but also you own local ones.\r\nFor example if you have a dataset script at `./my_dataset/my_dataset.py` then you can do\r\n```python\r\nload_dataset(\"./my_dataset\")\r\n```\r\nand the dataset script will generate your dataset once and for all.\r\n\r\n----------\r\n\r\nAbout I'm looking into having `csv`, `json`, `text`, `pandas` dataset builders already included in the `datasets` package, so that they are available offline by default, as opposed to the other datasets that require the script to be downloaded.\r\ncf #1724 ",
"The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :)\r\nYou can now use them offline\r\n```python\r\ndatasets = load_dataset('text', data_files=data_files)\r\n```\r\n\r\nWe'll do a new release soon"
] | 1,605,013,851,000 | 1,611,151,504,000 | null | NONE | null | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I'm creating this ticket to discuss this a bit and gather what you have in mind, or other propositions.
Here are some points to open discussion:
- if you want to prepare your code/datasets on your machine (which has an internet connection) but run it on another, offline machine (which has no internet connection), it won't work as is, even if you have all files locally on this machine.
- AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)` (a minimal sketch of this workaround is shown after this list). But it would be much better if you could run the same code without modification when the files are available locally.
- I've also been considering the requirement of downloading Python code and executing it on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable; downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once, so that you can review it if you want and then be sure you use this one everywhere and not a version downloaded from the internet.
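A minimal sketch of the local-script workaround mentioned above (paths are placeholders, assuming the processing script has been copied to the offline machine):

```python
from datasets import load_dataset

# online machine: the "csv" processing script is downloaded and cached automatically
dataset = load_dataset("csv", data_files={"train": "train.csv"})

# offline machine: point load_dataset at a local copy of the script so nothing is downloaded
dataset = load_dataset("./local_scripts/csv.py", data_files={"train": "train.csv"})
```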
WDYT? (thks)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/824/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/824/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/823/comments | https://api.github.com/repos/huggingface/datasets/issues/823/events | https://github.com/huggingface/datasets/issues/823 | 739,815,763 | MDU6SXNzdWU3Mzk4MTU3NjM= | 823 | how processing in batch works in datasets | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi I donโt think this is a request for a dataset like you labeled it.\r\n\r\nI also think this would be better suited for the forum at https://discuss.huggingface.co. we try to keep the issue for the repo for bug reports and new features/dataset requests and have usage questions discussed on the forum. Thanks.",
"Hi Thomas,\nwhat I do not get from documentation is that why when you set batched=True,\nthis is processed in batch, while data is not divided to batched\nbeforehand, basically this is a question on the documentation and I do not\nget the batched=True, but sure, if you think this is more appropriate in\nforum I will post it there.\nthanks\nBest\nRabeeh\n\nOn Tue, Nov 10, 2020 at 12:21 PM Thomas Wolf <[email protected]>\nwrote:\n\n> Hi I donโt think this is a request for a dataset like you labeled it.\n>\n> I also think this would be better suited for the forum at\n> https://discuss.huggingface.co. we try to keep the issue for the repo for\n> bug reports and new features/dataset requests and have usage questions\n> discussed on the forum. Thanks.\n>\n> โ\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/823#issuecomment-724639476>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHH4FIPFHVVUHANAE4F3SPEO2JANCNFSM4TQQVEXQ>\n> .\n>\n",
"Yes the forum is perfect for that. You can post in the `datasets` section.\r\nThanks a lot!"
] | 1,605,006,677,000 | 1,605,013,870,000 | 1,605,013,869,000 | NONE | null | Hi,
I need to process my datasets before they are passed to the dataloader in batches;
here is my code:
```
class AbstractTask(ABC):
    task_name: str = NotImplemented
    preprocessor: Callable = NotImplemented
    split_to_data_split: Mapping[str, str] = NotImplemented
    tokenizer: Callable = NotImplemented
    max_source_length: str = NotImplemented
    max_target_length: str = NotImplemented
    # TODO: should not be a task item, but cannot see other ways.
    tpu_num_cores: int = None

    # The arguments set are for all tasks and needs to be kept common.
    def __init__(self, config):
        self.max_source_length = config['max_source_length']
        self.max_target_length = config['max_target_length']
        self.tokenizer = config['tokenizer']
        self.tpu_num_cores = config['tpu_num_cores']

    def _encode(self, batch) -> Dict[str, torch.Tensor]:
        batch_encoding = self.tokenizer.prepare_seq2seq_batch(
            [x["src_texts"] for x in batch],
            tgt_texts=[x["tgt_texts"] for x in batch],
            max_length=self.max_source_length,
            max_target_length=self.max_target_length,
            padding="max_length" if self.tpu_num_cores is not None else "longest",  # TPU hack
            return_tensors="pt"
        )
        return batch_encoding.data

    def data_split(self, split):
        return self.split_to_data_split[split]

    def get_dataset(self, split, n_obs=None):
        split = self.data_split(split)
        if n_obs is not None:
            split = split+"[:{}]".format(n_obs)
        dataset = load_dataset(self.task_name, split=split)
        dataset = dataset.map(self.preprocessor, remove_columns=dataset.column_names)
        dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
        dataset.set_format(type="torch", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])
        return dataset
```
I call it like this:
`AutoTask.get(task, train_dataset_config).get_dataset(split="train", n_obs=data_args.n_train)`
This gives the following error. I think this is because the data inside `dataset = dataset.map(lambda batch: self._encode(batch), batched=True)` is not processed in batches. Could you tell me how I can process the dataset in batches inside my function? Thanks.
File "finetune_multitask_trainer.py", line 192, in main
if training_args.do_train else None
File "finetune_multitask_trainer.py", line 191, in <dictcomp>
split="train", n_obs=data_args.n_train) for task in data_args.task}
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in get_dataset
dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1236, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1207, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in <lambda>
dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in _encode
[x["src_texts"] for x in batch],
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in <listcomp>
[x["src_texts"] for x in batch],
TypeError: string indices must be integers
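With `batched=True` the function passed to `map` receives a dict of lists (one list per column) rather than a list of row dicts, which is why `[x["src_texts"] for x in batch]` fails with `string indices must be integers`. A sketch of a batch-style `_encode`, assuming the same attributes as the class above:

```python
def _encode(self, batch):
    # `batch` is e.g. {"src_texts": [...], "tgt_texts": [...]} when batched=True
    batch_encoding = self.tokenizer.prepare_seq2seq_batch(
        batch["src_texts"],
        tgt_texts=batch["tgt_texts"],
        max_length=self.max_source_length,
        max_target_length=self.max_target_length,
        padding="max_length" if self.tpu_num_cores is not None else "longest",
    )
    # return plain python lists; set_format(type="torch") converts the columns later
    return batch_encoding.data
```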
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/823/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/822 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/822/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/822/comments | https://api.github.com/repos/huggingface/datasets/issues/822/events | https://github.com/huggingface/datasets/issues/822 | 739,579,314 | MDU6SXNzdWU3Mzk1NzkzMTQ= | 822 | datasets freezes | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Pytorch is unable to convert strings to tensors unfortunately.\r\nYou can use `set_format(type=\"torch\")` on columns that can be converted to tensors, such as token ids.\r\n\r\nThis makes me think that we should probably raise an error or at least a warning when one tries to create pytorch tensors out of text columns"
] | 1,604,985,019,000 | 1,605,223,383,000 | null | NONE | null | Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks
dataset1 = load_dataset("squad", split="train[:10]")
dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question'])
dataset2 = load_dataset("imdb", split="train[:10]")
dataset2 = dataset2.set_format(type="torch", columns=["text", "label"])
print(len(dataset1))
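Following the reply above, a sketch that only applies the torch format to columns that can be converted to tensors (string columns such as `text`, `context` or `question` are left out; note that `set_format` modifies the dataset in place):

```python
from datasets import load_dataset

dataset2 = load_dataset("imdb", split="train[:10]")
dataset2.set_format(type="torch", columns=["label"])  # "text" stays a plain string column
print(len(dataset2), dataset2[0]["label"])
```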
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/822/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/821 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/821/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/821/comments | https://api.github.com/repos/huggingface/datasets/issues/821/events | https://github.com/huggingface/datasets/issues/821 | 739,506,859 | MDU6SXNzdWU3Mzk1MDY4NTk= | 821 | `kor_nli` dataset doesn't being loaded properly | {
"login": "sackoh",
"id": 30492059,
"node_id": "MDQ6VXNlcjMwNDkyMDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/30492059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sackoh",
"html_url": "https://github.com/sackoh",
"followers_url": "https://api.github.com/users/sackoh/followers",
"following_url": "https://api.github.com/users/sackoh/following{/other_user}",
"gists_url": "https://api.github.com/users/sackoh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sackoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sackoh/subscriptions",
"organizations_url": "https://api.github.com/users/sackoh/orgs",
"repos_url": "https://api.github.com/users/sackoh/repos",
"events_url": "https://api.github.com/users/sackoh/events{/privacy}",
"received_events_url": "https://api.github.com/users/sackoh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,973,852,000 | 1,605,535,152,000 | 1,605,535,152,000 | NONE | null | There are two issues from `kor_nli` dataset
1. csv.DictReader failed to split features by tab
- There should not be any `None` values in the label feature, but there are.
```python
kor_nli_train['train'].unique('gold_label')
# ['neutral', 'entailment', 'contradiction', None]
```
- I found the reason why there are `None` values in the label feature with the following code:
```python
from datasets import load_dataset
kor_nli_train = load_dataset('kor_nli', 'multi_nli')
for idx, example in enumerate(kor_nli_train['train']):
    if example['gold_label'] is None:
        print(idx, example)
        break
    # 16835 {'gold_label': None, 'sentence1': '[dozens of tab- and newline-separated Korean premise/hypothesis/label examples collapsed into this single field]', 'sentence2': 'contradiction'}
```
2. (Optional) It would be preferable to change the names of the features for compatibility with `run_glue.py` in 🤗 Transformers
- The `kor_nli` dataset has the same data structure as multi_nli and xnli
- Changing the names of the features, and the feature type of 'gold_label' to `ClassLabel`, might be helpful
```python
    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "premise": datasets.Value("string"),
                    "hypothesis": datasets.Value("string"),
                    "label": datasets.features.ClassLabel(names=["entailment", "neutral", "contradiction"]),
                }
            ),
```
If you don't mind, I would like to fix this.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/821/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/820/comments | https://api.github.com/repos/huggingface/datasets/issues/820/events | https://github.com/huggingface/datasets/pull/820 | 739,387,617 | MDExOlB1bGxSZXF1ZXN0NTE4MDYwMjQ0 | 820 | Update quail dataset to v1.3 | {
"login": "ngdodd",
"id": 4889636,
"node_id": "MDQ6VXNlcjQ4ODk2MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4889636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ngdodd",
"html_url": "https://github.com/ngdodd",
"followers_url": "https://api.github.com/users/ngdodd/followers",
"following_url": "https://api.github.com/users/ngdodd/following{/other_user}",
"gists_url": "https://api.github.com/users/ngdodd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ngdodd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ngdodd/subscriptions",
"organizations_url": "https://api.github.com/users/ngdodd/orgs",
"repos_url": "https://api.github.com/users/ngdodd/repos",
"events_url": "https://api.github.com/users/ngdodd/events{/privacy}",
"received_events_url": "https://api.github.com/users/ngdodd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,958,566,000 | 1,604,999,195,000 | 1,604,999,195,000 | CONTRIBUTOR | null | Updated quail to most recent version, to address the problem originally discussed [here](https://github.com/huggingface/datasets/issues/806). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/820/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/820",
"html_url": "https://github.com/huggingface/datasets/pull/820",
"diff_url": "https://github.com/huggingface/datasets/pull/820.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/820.patch",
"merged_at": 1604999195000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/819/comments | https://api.github.com/repos/huggingface/datasets/issues/819/events | https://github.com/huggingface/datasets/pull/819 | 739,250,624 | MDExOlB1bGxSZXF1ZXN0NTE3OTQ2MjYy | 819 | Make save function use deterministic global vars order | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Sorry, asking for help here, but the dill thread stop around 2013. Is it possible to use dill deterministically? I tried to monkeypatch the solution presented here into dill, but I suppose it requires forking their project.",
"Hi ! What we did was to subclass `dill`'s Pickler to fix the non-deterministic behaviors, and it's been working fine. A fork should also do the job"
] | 1,604,945,523,000 | 1,638,279,249,000 | 1,605,108,051,000 | MEMBER | null | The `dumps` function needs to be deterministic for the caching mechanism.
However, in #816 I noticed that one of dill's methods to recursively check the globals of a function may return the globals in different orders each time it's used. To fix that, I sort the globals by key in the `globs` dictionary.
I had to add a rectified `save_function` to the saving functions registry of the Pickler to make it work.
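A simplified sketch of the idea (not the exact code added in this PR): sort the globals that dill collects for a function before they are dumped, so the recursive dump, and therefore the fingerprint, is stable across runs.

```python
import dill
from dill.detect import globalvars

a = []
func = lambda: len(a)

globs = globalvars(func)                    # globals the function needs; key order may vary between runs
stable_globs = dict(sorted(globs.items()))  # deterministic key order before dumping
print(list(stable_globs))
```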
This should fix #816 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/819/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/819",
"html_url": "https://github.com/huggingface/datasets/pull/819",
"diff_url": "https://github.com/huggingface/datasets/pull/819.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/819.patch",
"merged_at": 1605108050000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/818/comments | https://api.github.com/repos/huggingface/datasets/issues/818/events | https://github.com/huggingface/datasets/pull/818 | 739,173,861 | MDExOlB1bGxSZXF1ZXN0NTE3ODgzMzk0 | 818 | Fix type hints pickling in python 3.6 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,939,267,000 | 1,604,999,223,000 | 1,604,999,222,000 | MEMBER | null | Type hints can't be properly pickled in python 3.6. This was causing errors in the `run_mlm.py` script from `transformers` with python 3.6.
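For instance, this minimal snippet illustrates the symptom (a sketch, not taken from the linked issue):

```python
import pickle
from typing import List

# Fails on python 3.6 because parameterized type hints can't be pickled there; works on 3.7+.
pickle.dumps(List[int])
```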
However, cloudpickle proposed a [fix](https://github.com/cloudpipe/cloudpickle/pull/318/files) to make it work anyway.
The idea is just to implement the pickling/unpickling of parameterized type hints. There is one detail though: since in python 3.6 we can't use `isinstance` on type hints, we can't use the pickle saving functions registry directly. Therefore we just wrap the `save_global` method of the Pickler.
This should fix https://github.com/huggingface/transformers/issues/8212 for python 3.6 and make `run_mlm.py` support python 3.6
cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/818/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/818/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/818",
"html_url": "https://github.com/huggingface/datasets/pull/818",
"diff_url": "https://github.com/huggingface/datasets/pull/818.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/818.patch",
"merged_at": 1604999221000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/817/comments | https://api.github.com/repos/huggingface/datasets/issues/817/events | https://github.com/huggingface/datasets/issues/817 | 739,145,369 | MDU6SXNzdWU3MzkxNDUzNjk= | 817 | Add MRQA dataset | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Done! cf #1117 and #1022"
] | 1,604,937,139,000 | 1,607,096,682,000 | 1,607,096,681,000 | MEMBER | null | ## Adding a Dataset
- **Name:** MRQA
- **Description:** Collection of different (subsets of) QA datasets all converted to the same format to evaluate out-of-domain generalization (the datasets come from different domains, distributions, etc.). Some datasets are used for training and others are used for evaluation. This dataset was collected as part of MRQA 2019's shared task
- **Paper:** https://arxiv.org/abs/1910.09753
- **Data:** https://github.com/mrqa/MRQA-Shared-Task-2019
- **Motivation:** Out-of-domain generalization is becoming (has become) a de facto evaluation for NLU systems
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/817/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/817/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/816/comments | https://api.github.com/repos/huggingface/datasets/issues/816/events | https://github.com/huggingface/datasets/issues/816 | 739,102,686 | MDU6SXNzdWU3MzkxMDI2ODY= | 816 | [Caching] Dill globalvars() output order is not deterministic and can cause cache issues. | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"To show the issue:\r\n```\r\npython -c \"from datasets.fingerprint import Hasher; a=[]; func = lambda : len(a); print(Hasher.hash(func))\"\r\n```\r\ndoesn't always return the same ouput since `globs` is a dictionary with \"a\" and \"len\" as keys but sometimes not in the same order"
] | 1,604,934,080,000 | 1,605,108,050,000 | 1,605,108,050,000 | MEMBER | null | Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not deterministic and can cause caching issues.
To fix that one could register an implementation of dill's `save_function` in the `datasets` pickler that sorts the globals keys before dumping a function. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/816/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/815/comments | https://api.github.com/repos/huggingface/datasets/issues/815/events | https://github.com/huggingface/datasets/issues/815 | 738,842,092 | MDU6SXNzdWU3Mzg4NDIwOTI= | 815 | Is dataset iterative or not? | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hello !\r\nCould you give more details ?\r\n\r\nIf you mean iter through one dataset then yes, `Dataset` object does implement the `__iter__` method so you can use \r\n```python\r\nfor example in dataset:\r\n # do something\r\n```\r\n\r\nIf you want to iter through several datasets you can first concatenate them\r\n```python\r\nfrom datasets import concatenate_datasets\r\n\r\nnew_dataset = concatenate_datasets([dataset1, dataset2])\r\n```\r\nLet me know if this helps !",
"Hi Huggingface/Datasets team,\nI want to use the datasets inside Seq2SeqDataset here\nhttps://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py\nand there I need to return back each line from the datasets and I am not\nsure how to access each line and implement this?\nIt seems it also has get_item attribute? so I was not sure if this is\niterative dataset? or if this is non-iterable datasets?\nthanks.\n\n\n\nOn Mon, Nov 9, 2020 at 10:18 AM Quentin Lhoest <[email protected]>\nwrote:\n\n> Hello !\n> Could you give more details ?\n>\n> If you mean iter through one dataset then yes, Dataset object does\n> implement the __iter__ method so you can use\n>\n> for example in dataset:\n> # do something\n>\n> If you want to iter through several datasets you can first concatenate them\n>\n> from datasets import concatenate_datasets\n> new_dataset = concatenate_datasets([dataset1, dataset2])\n>\n> Let me know if this helps !\n>\n> โ\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/815#issuecomment-723881199>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHHYRLSSYW6NZN2HYDBTSO6XV5ANCNFSM4TPB7OWA>\n> .\n>\n",
"could you tell me please if datasets also has __getitem__ any idea on how\nto integrate it with Seq2SeqDataset is appreciated thanks\n\nOn Mon, Nov 9, 2020 at 10:22 AM Rabeeh Karimi Mahabadi <[email protected]>\nwrote:\n\n> Hi Huggingface/Datasets team,\n> I want to use the datasets inside Seq2SeqDataset here\n> https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py\n> and there I need to return back each line from the datasets and I am not\n> sure how to access each line and implement this?\n> It seems it also has get_item attribute? so I was not sure if this is\n> iterative dataset? or if this is non-iterable datasets?\n> thanks.\n>\n>\n>\n> On Mon, Nov 9, 2020 at 10:18 AM Quentin Lhoest <[email protected]>\n> wrote:\n>\n>> Hello !\n>> Could you give more details ?\n>>\n>> If you mean iter through one dataset then yes, Dataset object does\n>> implement the __iter__ method so you can use\n>>\n>> for example in dataset:\n>> # do something\n>>\n>> If you want to iter through several datasets you can first concatenate\n>> them\n>>\n>> from datasets import concatenate_datasets\n>> new_dataset = concatenate_datasets([dataset1, dataset2])\n>>\n>> Let me know if this helps !\n>>\n>> โ\n>> You are receiving this because you authored the thread.\n>> Reply to this email directly, view it on GitHub\n>> <https://github.com/huggingface/datasets/issues/815#issuecomment-723881199>,\n>> or unsubscribe\n>> <https://github.com/notifications/unsubscribe-auth/ARPXHHYRLSSYW6NZN2HYDBTSO6XV5ANCNFSM4TPB7OWA>\n>> .\n>>\n>\n",
"`datasets.Dataset` objects implement indeed `__getitem__`. It returns a dictionary with one field per column.\r\n\r\nWe've not added the integration of the datasets library for the seq2seq utilities yet. The current seq2seq utilities are based on text files.\r\n\r\nHowever as soon as you have a `datasets.Dataset` with columns \"tgt_texts\" (str), \"src_texts\" (str), and \"id\" (int) you should be able to implement your own Seq2SeqDataset class that wraps your dataset object. Does that make sense to you ?",
"Hi\nI am sorry for asking it multiple times but I am not getting the dataloader\ntype, could you confirm if the dataset library returns back an iterable\ntype dataloader or a mapping type one where one has access to __getitem__,\nin the former case, one can iterate with __iter__, and how I can configure\nit to return the data back as the iterative type? I am dealing with\nlarge-scale datasets and I do not want to bring all in memory\nthanks for your help\nBest regards\nRabeeh\n\nOn Mon, Nov 9, 2020 at 11:17 AM Quentin Lhoest <[email protected]>\nwrote:\n\n> datasets.Dataset objects implement indeed __getitem__. It returns a\n> dictionary with one field per column.\n>\n> We've not added the integration of the datasets library for the seq2seq\n> utilities yet. The current seq2seq utilities are based on text files.\n>\n> However as soon as you have a datasets.Dataset with columns \"tgt_texts\"\n> (str), \"src_texts\" (str), and \"id\" (int) you should be able to implement\n> your own Seq2SeqDataset class that wraps your dataset object. Does that\n> make sense ?\n>\n> โ\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/815#issuecomment-723915556>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHHYOC22EM7F666BZSOTSO66R3ANCNFSM4TPB7OWA>\n> .\n>\n",
"`datasets.Dataset` objects are both iterative and mapping types: it has both `__iter__` and `__getitem__`\r\nFor example you can do\r\n```python\r\nfor example in dataset:\r\n # do something\r\n```\r\nor\r\n```python\r\nfor i in range(len(dataset)):\r\n example = dataset[i]\r\n # do something\r\n```\r\nWhen you do that, one and only one example is loaded into memory at a time.",
"Hi there, \r\nHere is what I am trying, this is not working for me in map-style datasets, could you please tell me how to use datasets with being able to access ___getitem__ ? could you assist me please correcting this example? I need map-style datasets which is formed from concatenation of two datasets from your library. thanks \r\n\r\n\r\n```\r\nimport datasets\r\ndataset1 = load_dataset(\"squad\", split=\"train[:10]\")\r\ndataset1 = dataset1.map(lambda example: {\"src_texts\": \"question: {0} context: {1} \".format(\r\n example[\"question\"], example[\"context\"]),\r\n \"tgt_texts\": example[\"answers\"][\"text\"][0]}, remove_columns=dataset1.column_names)\r\ndataset2 = load_dataset(\"imdb\", split=\"train[:10]\")\r\ndataset2 = dataset2.map(lambda example: {\"src_texts\": \"imdb: \" + example[\"text\"],\r\n \"tgt_texts\": str(example[\"label\"])}, remove_columns=dataset2.column_names)\r\ntrain_dataset = datasets.concatenate_datasets([dataset1, dataset2])\r\ntrain_dataset.set_format(type='torch', columns=['src_texts', 'tgt_texts'])\r\ndataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32)\r\nfor id, batch in enumerate(dataloader):\r\n print(batch)\r\n\r\n```",
"closed since I found this response on the issue https://github.com/huggingface/datasets/issues/469"
] | 1,604,913,108,000 | 1,605,005,403,000 | 1,605,005,403,000 | NONE | null | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/815/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/814/comments | https://api.github.com/repos/huggingface/datasets/issues/814/events | https://github.com/huggingface/datasets/issues/814 | 738,500,443 | MDU6SXNzdWU3Mzg1MDA0NDM= | 814 | Joining multiple datasets | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"found a solution here https://discuss.pytorch.org/t/train-simultaneously-on-two-datasets/649/35, closed for now, thanks "
] | 1,604,852,370,000 | 1,604,864,328,000 | 1,604,864,328,000 | NONE | null | Hi
I have multiple iterative datasets from your library with different size and I want to join them in a way that each datasets is sampled equally, so smaller datasets more, larger one less, could you tell me how to implement this in pytorch? thanks | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/814/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/813/comments | https://api.github.com/repos/huggingface/datasets/issues/813/events | https://github.com/huggingface/datasets/issues/813 | 738,489,852 | MDU6SXNzdWU3Mzg0ODk4NTI= | 813 | How to implement DistributedSampler with datasets | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi Apparently I need to shard the data and give one host a chunk, could you provide me please with examples on how to do it? I want to use it jointly with finetune_trainer.py in huggingface repo seq2seq examples. thanks. ",
"Hey @rabeehkarimimahabadi I'm actually looking for the same feature. Did you manage to get somewhere?",
"@rabeehkarimimahabadi need the same feature"
] | 1,604,849,231,000 | 1,635,158,199,000 | null | NONE | null | Hi,
I am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them.
I need a distributedSampler to be able to train the models on TPUs being able to distribute the load across the TPU cores. Could you tell me how I can implement the distribued sampler when using datasets in which datasets are iterative? To give you more context, I have multiple of datasets and I need to write sampler for this case. thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/813/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/812/comments | https://api.github.com/repos/huggingface/datasets/issues/812/events | https://github.com/huggingface/datasets/issues/812 | 738,340,217 | MDU6SXNzdWU3MzgzNDAyMTc= | 812 | Too much logging | {
"login": "dspoka",
"id": 6183050,
"node_id": "MDQ6VXNlcjYxODMwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6183050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dspoka",
"html_url": "https://github.com/dspoka",
"followers_url": "https://api.github.com/users/dspoka/followers",
"following_url": "https://api.github.com/users/dspoka/following{/other_user}",
"gists_url": "https://api.github.com/users/dspoka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dspoka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dspoka/subscriptions",
"organizations_url": "https://api.github.com/users/dspoka/orgs",
"repos_url": "https://api.github.com/users/dspoka/repos",
"events_url": "https://api.github.com/users/dspoka/events{/privacy}",
"received_events_url": "https://api.github.com/users/dspoka/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! Thanks for reporting :) \r\nI agree these one should be hidden when the logging level is warning, we'll fix that",
"+1, the amount of logging is excessive.\r\n\r\nMost of it indeed comes from `filelock.py`, though there are occasionally messages from other sources too. Below is an example (all of these messages were logged after I already called `datasets.logging.set_verbosity_error()`)\r\n\r\n```\r\nI1109 21:26:01.742688 139785006901056 filelock.py:318] Lock 139778216292192 released on /home/kitaev/.cache/huggingface/datasets/9ed4f2e133395826175a892c70611f68522c7bc61a35476e8b51a31afb76e4bf.e6f3e3f3e3875a07469d1cfd32e16e1d06b149616b11eef2d081c43d515b492d.py.lock\r\nI1109 21:26:01.747898 139785006901056 filelock.py:274] Lock 139778216290176 acquired on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748258 139785006901056 filelock.py:318] Lock 139778216290176 released on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748412 139785006901056 filelock.py:274] Lock 139778215853024 acquired on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748497 139785006901056 filelock.py:318] Lock 139778215853024 released on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:07:17.029001 140301730502464 filelock.py:274] Lock 140289479304360 acquired on /home/kitaev/.cache/huggingface/datasets/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock\r\nI1109 21:07:17.029341 140301730502464 filelock.py:318] Lock 140289479304360 released on /home/kitaev/.cache/huggingface/datasets/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock\r\nI1109 21:07:17.058964 140301730502464 filelock.py:274] Lock 140251889388120 acquired on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.060933 140301730502464 filelock.py:318] Lock 140251889388120 released on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.061067 140301730502464 filelock.py:274] Lock 140296072521488 acquired on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.069736 140301730502464 metric.py:400] Removing /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow\r\nI1109 21:07:17.069949 140301730502464 filelock.py:318] Lock 140296072521488 released on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\n```",
"So how to solve this problem?",
"In the latest version of the lib the logs about locks are at the DEBUG level so you won't see them by default.\r\nAlso `set_verbosity_warning` does take into account these logs now.\r\nCan you try to update the lib ?\r\n```\r\npip install --upgrade datasets\r\n```",
"Thanks. For some reason I have to use the older version. Is that possible I can fix this by some surface-level trick?\r\n\r\nI'm still using 1.13 version datasets.",
"On older versions you can use\r\n```python\r\nimport logging\r\n\r\nlogging.getLogger(\"filelock\").setLevel(logging.WARNING)\r\n```",
"Whoa Thank you! It works!"
] | 1,604,793,390,000 | 1,611,671,494,000 | 1,605,546,402,000 | NONE | null | I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock
[2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock
using datasets version = 1.1.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/812/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/811/comments | https://api.github.com/repos/huggingface/datasets/issues/811/events | https://github.com/huggingface/datasets/issues/811 | 738,280,132 | MDU6SXNzdWU3MzgyODAxMzI= | 811 | nlp viewer error | {
"login": "jc-hou",
"id": 30210529,
"node_id": "MDQ6VXNlcjMwMjEwNTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jc-hou",
"html_url": "https://github.com/jc-hou",
"followers_url": "https://api.github.com/users/jc-hou/followers",
"following_url": "https://api.github.com/users/jc-hou/following{/other_user}",
"gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions",
"organizations_url": "https://api.github.com/users/jc-hou/orgs",
"repos_url": "https://api.github.com/users/jc-hou/repos",
"events_url": "https://api.github.com/users/jc-hou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jc-hou/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"and also for 'blog_authorship_corpus'\r\nhttps://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus\r\n\r\n",
"Is this the problem of my local computer or ??"
] | 1,604,768,938,000 | 1,605,540,383,000 | null | NONE | null | Hello,
when I select amazon_us_reviews in nlp viewer, it shows error.
https://huggingface.co/nlp/viewer/?dataset=amazon_us_reviews

| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/811/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/810/comments | https://api.github.com/repos/huggingface/datasets/issues/810/events | https://github.com/huggingface/datasets/pull/810 | 737,878,370 | MDExOlB1bGxSZXF1ZXN0NTE2ODQzMzQ3 | 810 | Fix seqeval metric | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,679,103,000 | 1,604,930,669,000 | 1,604,930,668,000 | MEMBER | null | The current seqeval metric returns the following error when computed:
```
~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/78a944d83252b5a16c9a2e49f057f4c6e02f18cc03349257025a8c9aea6524d8/seqeval.py in _compute(self, predictions, references, suffix)
102 scores = {}
103 for type_name, score in report.items():
--> 104 scores[type_name]["precision"] = score["precision"]
105 scores[type_name]["recall"] = score["recall"]
106 scores[type_name]["f1"] = score["f1-score"]
KeyError: 'LOC'
```
This is because the current code basically tries to do:
```
scores = {}
scores["LOC"]["precision"] = some_value
```
which does not work in python. This PR fixes that while keeping the previous nested structure of results, with the same keys. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/810/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/810",
"html_url": "https://github.com/huggingface/datasets/pull/810",
"diff_url": "https://github.com/huggingface/datasets/pull/810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/810.patch",
"merged_at": 1604930667000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/809/comments | https://api.github.com/repos/huggingface/datasets/issues/809/events | https://github.com/huggingface/datasets/issues/809 | 737,832,701 | MDU6SXNzdWU3Mzc4MzI3MDE= | 809 | Add Google Taskmaster dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hey @yjernite. Was going to start working on this but found taskmaster 1,2 & 3 in the datasets library already so think this can be closed now?",
"You are absolutely right :) \r\n\r\nClosed by https://github.com/huggingface/datasets/pull/1193 https://github.com/huggingface/datasets/pull/1197 https://github.com/huggingface/datasets/pull/1213"
] | 1,604,675,441,000 | 1,618,924,166,000 | 1,618,924,166,000 | MEMBER | null | ## Adding a Dataset
- **Name:** Taskmaster
- **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations)
- **Paper:** https://arxiv.org/abs/1909.05358
- **Data:** https://github.com/google-research-datasets/Taskmaster
- **Motivation:** One of few annotated datasets of this size for goal-oriented dialogue
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/809/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/808 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/808/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/808/comments | https://api.github.com/repos/huggingface/datasets/issues/808/events | https://github.com/huggingface/datasets/pull/808 | 737,638,942 | MDExOlB1bGxSZXF1ZXN0NTE2NjQ0NDc0 | 808 | dataset(dgs): initial dataset loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi @AmitMY, \r\n\r\nWere you able to figure this out?",
"I did not.\r\nWith all the limitations this repo currently has, I had to create a repo of my own using tfds to mitigate them. \r\nhttps://github.com/sign-language-processing/datasets/tree/master/sign_language_datasets/datasets/dgs_corpus\r\n\r\nClosing as I don't know how to support this PR further"
] | 1,604,657,683,000 | 1,616,480,335,000 | 1,616,480,335,000 | CONTRIBUTOR | null | When trying to create dummy data I get:
> Dataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has t o be created with less guidance. Make sure you create the file dummy_data.
I am not sure how to manually create the dummy_data (what exactly it should contain)
Also note, this library says:
> ImportError: To be able to use this dataset, you need to install the following dependencies['pympi'] using 'pip install pympi' for instance'
When you actually need to `pip install pympi-ling`
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/808/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/808",
"html_url": "https://github.com/huggingface/datasets/pull/808",
"diff_url": "https://github.com/huggingface/datasets/pull/808.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/808.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/807 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/807/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/807/comments | https://api.github.com/repos/huggingface/datasets/issues/807/events | https://github.com/huggingface/datasets/issues/807 | 737,509,954 | MDU6SXNzdWU3Mzc1MDk5NTQ= | 807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | {
"login": "shexuan",
"id": 25664170,
"node_id": "MDQ6VXNlcjI1NjY0MTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shexuan",
"html_url": "https://github.com/shexuan",
"followers_url": "https://api.github.com/users/shexuan/followers",
"following_url": "https://api.github.com/users/shexuan/following{/other_user}",
"gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shexuan/subscriptions",
"organizations_url": "https://api.github.com/users/shexuan/orgs",
"repos_url": "https://api.github.com/users/shexuan/repos",
"events_url": "https://api.github.com/users/shexuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/shexuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi !\r\nThe url works on my side.\r\n\r\nIs the url working in your navigator ?\r\nAre you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?",
"> Hi !\r\n> The url works on my side.\r\n> \r\n> Is the url working in your navigator ?\r\n> Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n\r\nI tried another server, it's working now. Thanks a lot.\r\n\r\nAnd I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?",
"It seems my network frequently crashed so most time it cannot work.",
"\r\n\r\n\r\n> > Hi !\r\n> > The url works on my side.\r\n> > Is the url working in your navigator ?\r\n> > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> \r\n> I tried another server, it's working now. Thanks a lot.\r\n> \r\n> And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n\r\nI download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`๏ผ \r\n\r\nThanks :D",
"hello, how did you solve this problems?\r\n\r\n> > > Hi !\r\n> > > The url works on my side.\r\n> > > Is the url working in your navigator ?\r\n> > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > \r\n> > \r\n> > I tried another server, it's working now. Thanks a lot.\r\n> > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> \r\n> I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`๏ผ\r\n> \r\n> Thanks :D\r\n\r\nhello, I tried this. but it still failed. how do you fix this error?",
"> hello, how did you solve this problems?\r\n> \r\n> > > > Hi !\r\n> > > > The url works on my side.\r\n> > > > Is the url working in your navigator ?\r\n> > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > \r\n> > > \r\n> > > I tried another server, it's working now. Thanks a lot.\r\n> > > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> > \r\n> > \r\n> > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`๏ผ\r\n> > Thanks :D\r\n> \r\n> hello, I tried this. but it still failed. how do you fix this error?\r\n\r\nไฝ ๆ้ฃไธช่ๆฌไธ่ฝฝๅฐไฝ ๆฌๅฐๅฎ่ฃ
็ฎๅฝไธ๏ผ็ถๅ `load_dataset(csv_script_path, data_fiels)`\r\n\r\n",
"> > hello, how did you solve this problems?\r\n> > > > > Hi !\r\n> > > > > The url works on my side.\r\n> > > > > Is the url working in your navigator ?\r\n> > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > > \r\n> > > > \r\n> > > > I tried another server, it's working now. Thanks a lot.\r\n> > > > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> > > \r\n> > > \r\n> > > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`๏ผ\r\n> > > Thanks :D\r\n> > \r\n> > \r\n> > hello, I tried this. but it still failed. how do you fix this error?\r\n> \r\n> ไฝ ๆ้ฃไธช่ๆฌไธ่ฝฝๅฐไฝ ๆฌๅฐๅฎ่ฃ
็ฎๅฝไธ๏ผ็ถๅ `load_dataset(csv_script_path, data_fiels)`\r\n\r\nๅฅฝ็ๅฅฝ็๏ผ่งฃๅณไบ๏ผๆ่ฐขๆ่ฐข๏ผ๏ผ๏ผ",
"> \r\n> \r\n> > hello, how did you solve this problems?\r\n> > > > > Hi !\r\n> > > > > The url works on my side.\r\n> > > > > Is the url working in your navigator ?\r\n> > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > > \r\n> > > > \r\n> > > > I tried another server, it's working now. Thanks a lot.\r\n> > > > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> > > \r\n> > > \r\n> > > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`๏ผ\r\n> > > Thanks :D\r\n> > \r\n> > \r\n> > hello, I tried this. but it still failed. how do you fix this error?\r\n> \r\n> ไฝ ๆ้ฃไธช่ๆฌไธ่ฝฝๅฐไฝ ๆฌๅฐๅฎ่ฃ
็ฎๅฝไธ๏ผ็ถๅ `load_dataset(csv_script_path, data_fiels)`\r\n\r\nๆ็
ง็ๅไบ๏ผ็ถๅๆฅ้ใ\r\nValueError: unable to parse C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets\\dataset_infos.json as a URL or as a local path\r\n\r\n`---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-5-fd2106a3f053> in <module>\r\n----> 1 dataset = load_dataset('C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets/csv.py', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)\r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 588 # Download/copy dataset processing script\r\n 589 module_path, hash = prepare_module(\r\n--> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n 591 )\r\n 592 \r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)\r\n 296 local_dataset_infos_path = cached_path(\r\n 297 dataset_infos,\r\n--> 298 download_config=download_config,\r\n 299 )\r\n 300 except (FileNotFoundError, ConnectionError):\r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\utils\\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 316 else:\r\n 317 # Something unknown\r\n--> 318 raise ValueError(\"unable to parse {} as a URL or as a local path\".format(url_or_filename))\r\n 319 \r\n 320 if download_config.extract_compressed_file and output_path is not None:\r\n\r\nValueError: unable to parse C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets\\dataset_infos.json as a URL or as a local path\r\n\r\n`",
"I also experienced this issue this morning. Looks like something specific to windows.\r\nI'm working on a fix",
"I opened a PR @wn1652400018",
"> \r\n> \r\n> I opened a PR @wn1652400018\r\n\r\nThanks you!, It works very well."
] | 1,604,644,384,000 | 1,610,328,627,000 | 1,605,331,834,000 | NONE | null | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=False)
print('datasets version: ', datasets.__version__)
print('pytorch version: ', torch.__version__)
print('transformers version: ', transformers.__version__)
# output:
datasets version: 1.1.2
pytorch version: 1.5.0
transformers version: 3.2.0
```
when I load data through `dataset`:
```
dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)
```
Error infos:
```
ConnectionError Traceback (most recent call last)
<ipython-input-17-bbdadb9a0c78> in <module>
----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
588 # Download/copy dataset processing script
589 module_path, hash = prepare_module(
--> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
591 )
592
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)
266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version)
267 try:
--> 268 local_path = cached_path(file_path, download_config=download_config)
269 except FileNotFoundError:
270 if script_version is not None:
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
306 user_agent=download_config.user_agent,
307 local_files_only=download_config.local_files_only,
--> 308 use_etag=download_config.use_etag,
309 )
310 elif os.path.exists(url_or_filename):
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
473 elif response is not None and response.status_code == 404:
474 raise FileNotFoundError("Couldn't find file at {}".format(url))
--> 475 raise ConnectionError("Couldn't reach {}".format(url))
476
477 # Try a second time
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py
```
And I try to connect to the site with requests:
```
import requests
requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py")
```
Similarly Error occurs:
```
---------------------------------------------------------------------------
ConnectionRefusedError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self)
159 conn = connection.create_connection(
--> 160 (self._dns_host, self.port), self.timeout, **extra_kw
161 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
83 if err is not None:
---> 84 raise err
85
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
73 sock.bind(source_address)
---> 74 sock.connect(sa)
75 return sock
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
676 headers=headers,
--> 677 chunked=chunked,
678 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
380 try:
--> 381 self._validate_conn(conn)
382 except (SocketTimeout, BaseSSLError) as e:
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn)
975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
--> 976 conn.connect()
977
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self)
307 # Add certificate verification
--> 308 conn = self._new_conn()
309 hostname = self.host
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self)
171 raise NewConnectionError(
--> 172 self, "Failed to establish a new connection: %s" % e
173 )
NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
448 retries=self.max_retries,
--> 449 timeout=timeout
450 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
724 retries = retries.increment(
--> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
726 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
438 if new_retry.is_exhausted():
--> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause))
440
MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
<ipython-input-20-18cc3eb4a049> in <module>
1 import requests
2
----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py")
~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs)
102
103 kwargs.setdefault('allow_redirects', False)
--> 104 return request('head', url, **kwargs)
105
106
~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs)
59 # cases, and look like a memory leak in others.
60 with sessions.Session() as session:
---> 61 return session.request(method=method, url=url, **kwargs)
62
63
~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
528 }
529 send_kwargs.update(settings)
--> 530 resp = self.send(prep, **send_kwargs)
531
532 return resp
~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs)
641
642 # Send the request
--> 643 r = adapter.send(request, **kwargs)
644
645 # Total elapsed time of the request (approximately)
~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
514 raise SSLError(e, request=request)
515
--> 516 raise ConnectionError(e, request=request)
517
518 except ClosedPoolError as e:
ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',))
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/807/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/806/comments | https://api.github.com/repos/huggingface/datasets/issues/806/events | https://github.com/huggingface/datasets/issues/806 | 737,215,430 | MDU6SXNzdWU3MzcyMTU0MzA= | 806 | Quail dataset urls are out of date | {
"login": "ngdodd",
"id": 4889636,
"node_id": "MDQ6VXNlcjQ4ODk2MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4889636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ngdodd",
"html_url": "https://github.com/ngdodd",
"followers_url": "https://api.github.com/users/ngdodd/followers",
"following_url": "https://api.github.com/users/ngdodd/following{/other_user}",
"gists_url": "https://api.github.com/users/ngdodd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ngdodd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ngdodd/subscriptions",
"organizations_url": "https://api.github.com/users/ngdodd/orgs",
"repos_url": "https://api.github.com/users/ngdodd/repos",
"events_url": "https://api.github.com/users/ngdodd/events{/privacy}",
"received_events_url": "https://api.github.com/users/ngdodd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! Thanks for reporting.\r\nWe should fix the urls and use quail 1.3.\r\nIf you want to contribute feel free to fix the urls and open a PR :) ",
"Done! PR [https://github.com/huggingface/datasets/pull/820](https://github.com/huggingface/datasets/pull/820)\r\n\r\nUpdated links and also regenerated the metadata and dummy data for v1.3 in order to pass verifications as described here: [https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset](https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset). ",
"Closing since #820 is merged.\r\nThanks again for fixing the urls :)"
] | 1,604,605,219,000 | 1,605,016,971,000 | 1,605,016,971,000 | CONTRIBUTOR | null | <h3>Code</h3>
```
from datasets import load_dataset
quail = load_dataset('quail')
```
<h3>Error</h3>
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml
```
As per [quail v1.3 commit](https://github.com/text-machine-lab/quail/commit/506501cfa34d9ec6c042d31026ba6fea6bcec8ff) it looks like the location and suggested ordering has changed. In [https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58](https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58) the quail v1.2 datasets are being pointed to, which don't exist anymore. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/806/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/805/comments | https://api.github.com/repos/huggingface/datasets/issues/805/events | https://github.com/huggingface/datasets/issues/805 | 737,019,360 | MDU6SXNzdWU3MzcwMTkzNjA= | 805 | On loading a metric from datasets, I get the following error | {
"login": "laibamehnaz",
"id": 36405283,
"node_id": "MDQ6VXNlcjM2NDA1Mjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laibamehnaz",
"html_url": "https://github.com/laibamehnaz",
"followers_url": "https://api.github.com/users/laibamehnaz/followers",
"following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}",
"gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions",
"organizations_url": "https://api.github.com/users/laibamehnaz/orgs",
"repos_url": "https://api.github.com/users/laibamehnaz/repos",
"events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/laibamehnaz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! We support only pyarrow > 0.17.1 so that we have access to the `PyExtensionType` object.\r\nCould you update pyarrow and try again ?\r\n```\r\npip install --upgrade pyarrow\r\n```"
] | 1,604,589,278,000 | 1,604,913,155,000 | null | NONE | null | `from datasets import load_metric`
`metric = load_metric('bleurt')`
Traceback:
210 class _ArrayXDExtensionType(pa.PyExtensionType):
211
212 ndims: int = None
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'
Any help will be appreciated. Thank you. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/805/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/805/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/804/comments | https://api.github.com/repos/huggingface/datasets/issues/804/events | https://github.com/huggingface/datasets/issues/804 | 736,858,507 | MDU6SXNzdWU3MzY4NTg1MDc= | 804 | Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa') | {
"login": "PaulLerner",
"id": 25532159,
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulLerner",
"html_url": "https://github.com/PaulLerner",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"cc @yjernite is this expected ?",
"Yes: TriviaQA has a private test set for the leaderboard [here](https://competitions.codalab.org/competitions/17208)\r\n\r\nFor the KILT training and validation portions, you need to link the examples from the TriviaQA dataset as detailed here:\r\nhttps://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md",
"Oh ok, I guess I read the paper too fast ๐
, thank you for your answer!"
] | 1,604,576,281,000 | 1,604,931,299,000 | 1,604,931,298,000 | CONTRIBUTOR | null | # The issue
It's all in the title, it appears to be fine on the train and validation sets.
Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ?
# How to reproduce
```py
from datasets import load_dataset
kilt_tasks = load_dataset("kilt_tasks")
trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext')
# both in "kilt_tasks"
In [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']])
Out[18]: False
# and "trivia_qa"
In [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']])
Out[13]: True
# appears to be fine on the train and validation sets.
In [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']])
Out[14]: False
In [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']])
Out[15]: False
In [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']])
Out[16]: True
In [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']])
Out[17]: True
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/804/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/803/comments | https://api.github.com/repos/huggingface/datasets/issues/803/events | https://github.com/huggingface/datasets/pull/803 | 736,818,917 | MDExOlB1bGxSZXF1ZXN0NTE1OTY1ODE2 | 803 | fix: typos in tutorial to map KILT and TriviaQA | {
"login": "PaulLerner",
"id": 25532159,
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulLerner",
"html_url": "https://github.com/PaulLerner",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,572,920,000 | 1,604,999,287,000 | 1,604,999,287,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/803/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/803",
"html_url": "https://github.com/huggingface/datasets/pull/803",
"diff_url": "https://github.com/huggingface/datasets/pull/803.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/803.patch",
"merged_at": 1604999287000
} | true |