url (stringlengths 58–61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72–75) | comments_url (stringlengths 67–70) | events_url (stringlengths 65–68) | html_url (stringlengths 46–51) | id (int64 599M–1.12B) | node_id (stringlengths 18–32) | number (int64 1–3.64k) | title (stringlengths 1–276) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64 1,587B–1,643B) | updated_at (int64 1,587B–1,643B) | closed_at (int64 1,587B–1,643B, ⌀) | author_association (stringclasses 3 values) | active_lock_reason (null) | body (stringlengths 0–228k, ⌀) | reactions (dict) | timeline_url (stringlengths 67–70) | performed_via_github_app (null) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1004/comments | https://api.github.com/repos/huggingface/datasets/issues/1004/events | https://github.com/huggingface/datasets/issues/1004 | 755,325,368 | MDU6SXNzdWU3NTUzMjUzNjg= | 1,004 | how large datasets are handled under the hood | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"This library uses Apache Arrow under the hood to store datasets on disk.\r\nThe advantage of Apache Arrow is that it allows to memory map the dataset. This allows to load datasets bigger than memory and with almost no RAM usage. It also offers excellent I/O speed.\r\n\r\nFor example when you access one element or one batch\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nsquad = load_dataset(\"squad\", split=\"train\")\r\nfirst_element = squad[0]\r\none_batch = squad[:8]\r\n```\r\n\r\nthen only this element/batch is loaded in memory, while the rest of the dataset is memory mapped.",
"How can we change how much data is loaded to memory with Arrow? I think that I am having some performance issue with it. When Arrow loads the data from disk it does it in multiprocess? It's almost twice slower training with arrow than in memory.\r\n\r\nEDIT:\r\nMy fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks.",
"> How can we change how much data is loaded to memory with Arrow? I think that I am having some performance issue with it. When Arrow loads the data from disk it does it in multiprocess? It's almost twice slower training with arrow than in memory.\r\n\r\nLoading arrow data from disk is done with memory-mapping. This allows to load huge datasets without filling your RAM.\r\nMemory mapping is almost instantaneous and is done within one process.\r\n\r\nThen, the speed of querying examples from the dataset is I/O bounded depending on your disk. If it's an SSD then fetching examples from the dataset will be very fast.\r\nBut since the I/O speed of an SSD is lower than the one of RAM it's expected to be slower to fetch data from disk than from memory.\r\nStill, if you load the dataset in different processes then it can be faster but there will still be the I/O bottleneck of the disk.\r\n\r\n> EDIT:\r\n> My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks.\r\n\r\nOk let me know if that helps !\r\n"
] | 1,606,919,560,000 | 1,612,175,031,000 | null | NONE | null | Hi
I want to use multiple large datasets with a map-style dataloader, where they cannot fit into memory. Could you tell me how the datasets are handled under the hood? Do you bring everything into memory in the map-style case, or is there some sharding under the hood so that data is brought into memory only when necessary? Thanks | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1004/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
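The claim in the thread above, that accessing a few records keeps RAM usage flat because the Arrow table is memory-mapped, can be checked with a short script. This is a minimal sketch, not part of the original thread; it assumes the optional `psutil` package is installed to read the process RSS.

```python
# Minimal sketch (assumes `datasets` and `psutil` are installed): check that
# accessing a few records from a memory-mapped dataset barely grows RAM usage.
import os

import psutil
from datasets import load_dataset

process = psutil.Process(os.getpid())
rss_before = process.memory_info().rss

squad = load_dataset("squad", split="train")  # stored on disk as Arrow, memory-mapped
first_element = squad[0]                      # only this record is materialized in RAM
one_batch = squad[:8]                         # likewise for a small slice

rss_after = process.memory_info().rss
print(f"RSS grew by ~{(rss_after - rss_before) / 1e6:.0f} MB")  # stays small
```

For the `dataloader_num_workers` fix mentioned in the second comment, the knob is simply passed to `transformers.TrainingArguments` (e.g. `dataloader_num_workers=4`) so that the training dataloader reads from the memory-mapped dataset with several worker processes.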
https://api.github.com/repos/huggingface/datasets/issues/1003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1003/comments | https://api.github.com/repos/huggingface/datasets/issues/1003/events | https://github.com/huggingface/datasets/pull/1003 | 755,310,318 | MDExOlB1bGxSZXF1ZXN0NTMxMDQ1NDcy | 1,003 | Add multi_x_science_sum | {
"login": "moussaKam",
"id": 28675016,
"node_id": "MDQ6VXNlcjI4Njc1MDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moussaKam",
"html_url": "https://github.com/moussaKam",
"followers_url": "https://api.github.com/users/moussaKam/followers",
"following_url": "https://api.github.com/users/moussaKam/following{/other_user}",
"gists_url": "https://api.github.com/users/moussaKam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moussaKam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moussaKam/subscriptions",
"organizations_url": "https://api.github.com/users/moussaKam/orgs",
"repos_url": "https://api.github.com/users/moussaKam/repos",
"events_url": "https://api.github.com/users/moussaKam/events{/privacy}",
"received_events_url": "https://api.github.com/users/moussaKam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,918,441,000 | 1,606,930,745,000 | 1,606,930,745,000 | CONTRIBUTOR | null | Add Multi-XScience Dataset.
github repo: https://github.com/yaolu/Multi-XScience
paper: [Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https://arxiv.org/abs/2010.14235) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1003/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1003",
"html_url": "https://github.com/huggingface/datasets/pull/1003",
"diff_url": "https://github.com/huggingface/datasets/pull/1003.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1003.patch",
"merged_at": 1606930745000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1002/comments | https://api.github.com/repos/huggingface/datasets/issues/1002/events | https://github.com/huggingface/datasets/pull/1002 | 755,309,758 | MDExOlB1bGxSZXF1ZXN0NTMxMDQ1MDIx | 1,002 | Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Could you fix the dummy data before we merge ?\r\nLooks like the dummy `train.csv` is missing",
"Thanks @Narsil @lhoestq for adding MeDAL :)"
] | 1,606,918,397,000 | 1,607,360,283,000 | 1,607,001,273,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1002/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1002",
"html_url": "https://github.com/huggingface/datasets/pull/1002",
"diff_url": "https://github.com/huggingface/datasets/pull/1002.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1002.patch",
"merged_at": 1607001273000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1001/comments | https://api.github.com/repos/huggingface/datasets/issues/1001/events | https://github.com/huggingface/datasets/pull/1001 | 755,309,071 | MDExOlB1bGxSZXF1ZXN0NTMxMDQ0NDQ0 | 1,001 | Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Dupe"
] | 1,606,918,350,000 | 1,606,918,392,000 | 1,606,918,392,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1001/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1001",
"html_url": "https://github.com/huggingface/datasets/pull/1001",
"diff_url": "https://github.com/huggingface/datasets/pull/1001.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1001.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1000/comments | https://api.github.com/repos/huggingface/datasets/issues/1000/events | https://github.com/huggingface/datasets/pull/1000 | 755,292,066 | MDExOlB1bGxSZXF1ZXN0NTMxMDMxMTE1 | 1,000 | UM005: Urdu <> English Translation Dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,917,095,000 | 1,607,096,070,000 | 1,607,096,069,000 | MEMBER | null | Adds Urdu-English dataset for machine translation: http://ufal.ms.mff.cuni.cz/umc/005-en-ur/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1000/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1000",
"html_url": "https://github.com/huggingface/datasets/pull/1000",
"diff_url": "https://github.com/huggingface/datasets/pull/1000.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1000.patch",
"merged_at": 1607096069000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/999/comments | https://api.github.com/repos/huggingface/datasets/issues/999/events | https://github.com/huggingface/datasets/pull/999 | 755,246,786 | MDExOlB1bGxSZXF1ZXN0NTMwOTk1MTY3 | 999 | add generated_reviews_enth | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,913,443,000 | 1,606,994,248,000 | 1,606,994,248,000 | CONTRIBUTOR | null | `generated_reviews_enth` is created as part of [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf) for machine translation task. This dataset (referred to as `generated_reviews_yn` in [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf)) are English product reviews generated by [CTRL](https://arxiv.org/abs/1909.05858), translated by Google Translate API and annotated as accepted or rejected (`correct`) based on fluency and adequacy of the translation by human annotators. This allows it to be used for English-to-Thai translation quality esitmation (binary label), machine translation, and sentiment analysis. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/999/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/999",
"html_url": "https://github.com/huggingface/datasets/pull/999",
"diff_url": "https://github.com/huggingface/datasets/pull/999.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/999.patch",
"merged_at": 1606994248000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/998 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/998/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/998/comments | https://api.github.com/repos/huggingface/datasets/issues/998/events | https://github.com/huggingface/datasets/pull/998 | 755,235,356 | MDExOlB1bGxSZXF1ZXN0NTMwOTg2MTQ3 | 998 | adding yahoo_answers_qa | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,912,434,000 | 1,606,916,740,000 | 1,606,915,566,000 | MEMBER | null | Adding Yahoo Answers QA dataset.
More info:
https://ciir.cs.umass.edu/downloads/nfL6/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/998/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/998",
"html_url": "https://github.com/huggingface/datasets/pull/998",
"diff_url": "https://github.com/huggingface/datasets/pull/998.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/998.patch",
"merged_at": 1606915566000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/997/comments | https://api.github.com/repos/huggingface/datasets/issues/997/events | https://github.com/huggingface/datasets/pull/997 | 755,185,517 | MDExOlB1bGxSZXF1ZXN0NTMwOTQ2MTIy | 997 | Microsoft CodeXGlue | {
"login": "madlag",
"id": 272253,
"node_id": "MDQ6VXNlcjI3MjI1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madlag",
"html_url": "https://github.com/madlag",
"followers_url": "https://api.github.com/users/madlag/followers",
"following_url": "https://api.github.com/users/madlag/following{/other_user}",
"gists_url": "https://api.github.com/users/madlag/gists{/gist_id}",
"starred_url": "https://api.github.com/users/madlag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/madlag/subscriptions",
"organizations_url": "https://api.github.com/users/madlag/orgs",
"repos_url": "https://api.github.com/users/madlag/repos",
"events_url": "https://api.github.com/users/madlag/events{/privacy}",
"received_events_url": "https://api.github.com/users/madlag/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"#978 is working on adding code refinement\r\n\r\nmaybe we should keep the CodeXGlue benchmark (as glue) and don't merge the code_refinement dataset proposed in #978 ?\r\n\r\ncc @reshinthadithyan",
"Hi @madlag and @lhoestq , I am extremely interested in getting this dataset into HF's library as I research in this area a lot. I see that it hasn't been updated in a while, but it is very close to being finished. If no one is currently working on this, I'd be happy to do any final touches that might be needed to get this merged.",
"Hi @ncoop57 ! Thanks for your interest and sorry for the inactivity on this PR.\r\nSure feel free to create another PR to continue this one ! This one was really close to being merged so I think it won't require that much changes. In addition to my previous comments, there should also be a \"Contributions\" subsection (see the template of the README [here](https://github.com/huggingface/datasets/blob/master/templates/README.md))",
"Superseded by https://github.com/huggingface/datasets/pull/2357 ."
] | 1,606,908,078,000 | 1,623,159,745,000 | 1,623,159,744,000 | CONTRIBUTOR | null | Datasets from https://github.com/microsoft/CodeXGLUE
This contains 13 datasets:
code_x_glue_cc_clone_detection_big_clone_bench
code_x_glue_cc_clone_detection_poj_104
code_x_glue_cc_cloze_testing_all
code_x_glue_cc_cloze_testing_maxmin
code_x_glue_cc_code_completion_line
code_x_glue_cc_code_completion_token
code_x_glue_cc_code_refinement
code_x_glue_cc_code_to_code_trans
code_x_glue_cc_defect_detection
code_x_glue_ct_code_to_text
code_x_glue_tc_nl_code_search_adv
code_x_glue_tc_text_to_code
code_x_glue_tt_text_to_text
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/997/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/997",
"html_url": "https://github.com/huggingface/datasets/pull/997",
"diff_url": "https://github.com/huggingface/datasets/pull/997.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/997.patch",
"merged_at": null
} | true |
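Once the loaders listed in the PR description above are available under those names, loading any one of the thirteen configurations would look roughly like this. This is a hedged sketch: the choice of `code_x_glue_cc_defect_detection` and the assumption of a standard `train` split are for illustration only, not details taken from the PR.

```python
# Hedged sketch: load one of the CodeXGLUE datasets listed above, assuming it is
# published under that name with a standard "train" split.
from datasets import load_dataset

defect_detection = load_dataset("code_x_glue_cc_defect_detection", split="train")
print(defect_detection[0])
```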
https://api.github.com/repos/huggingface/datasets/issues/996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/996/comments | https://api.github.com/repos/huggingface/datasets/issues/996/events | https://github.com/huggingface/datasets/issues/996 | 755,176,084 | MDU6SXNzdWU3NTUxNzYwODQ= | 996 | NotADirectoryError while loading the CNN/Dailymail dataset | {
"login": "arc-bu",
"id": 75367920,
"node_id": "MDQ6VXNlcjc1MzY3OTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/75367920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arc-bu",
"html_url": "https://github.com/arc-bu",
"followers_url": "https://api.github.com/users/arc-bu/followers",
"following_url": "https://api.github.com/users/arc-bu/following{/other_user}",
"gists_url": "https://api.github.com/users/arc-bu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arc-bu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arc-bu/subscriptions",
"organizations_url": "https://api.github.com/users/arc-bu/orgs",
"repos_url": "https://api.github.com/users/arc-bu/repos",
"events_url": "https://api.github.com/users/arc-bu/events{/privacy}",
"received_events_url": "https://api.github.com/users/arc-bu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Looks like the google drive download failed.\r\nI'm getting a `Google Drive - Quota exceeded` error while looking at the downloaded file.\r\n\r\nWe should consider finding a better host than google drive for this dataset imo\r\nrelated : #873 #864 ",
"It is working now, thank you. \r\n\r\nShould I leave this issue open to address the Quota-exceeded error?",
"Yes please. It's been happening several times, we definitely need to address it",
"Any updates on this one? I'm facing a similar issue trying to add CelebA.",
"I've looked into it and couldn't find a solution. This looks like a Google Drive limitation..\r\nPlease try to use other hosts when possible",
"The original links are google drive links. Would it be feasible for HF to maintain their own servers for this? Also, I think the same issue must also exist with TFDS.",
"It's possible to host data on our side but we should ask the authors. TFDS has the same issue and doesn't have a solution either afaik.\r\nOtherwise you can use the google drive link, but it it's not that convenient because of this quota issue.",
"Okay. I imagine asking every author who shares their dataset on Google Drive will also be cumbersome.",
"I am getting this error as well. Is there a fix?",
"Not as long as the data is stored on GG drive unfortunately.\r\nMaybe we can ask if there's a mirror ?\r\n\r\nHi @JafferWilson is there a download link to get cnn dailymail from another host than GG drive ?\r\n\r\nTo give you some context, this library provides tools to download and process datasets. For CNN DailyMail the data are downloaded from the link you provide on your github repository. Unfortunately because of GG drive quotas, many users are not able to load this dataset.",
"The following copy of CNN/DM dataset, fixed the problem for me:\r\nhttps://huggingface.co/datasets/ccdv/cnn_dailymail",
"Thanks for the link @mrazizi !\r\n\r\nApparently the original authors don't host the dataset themselves (\"for legal reasons\", source [here](https://github.com/abisee/cnn-dailymail/issues/9))."
] | 1,606,907,276,000 | 1,640,082,003,000 | null | NONE | null |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/996/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
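The workaround mentioned at the end of the thread above, loading the community mirror instead of the Google Drive-hosted copy, would look roughly like the snippet below. It is a sketch based on that comment; the `"3.0.0"` configuration name is carried over from the original script and is an assumption about the mirror.

```python
# Hedged sketch of the mirror-based workaround from the thread: load CNN/DailyMail
# from the ccdv/cnn_dailymail copy instead of the Google Drive-hosted files.
from datasets import load_dataset

train = load_dataset("ccdv/cnn_dailymail", "3.0.0", split="train")
validation = load_dataset("ccdv/cnn_dailymail", "3.0.0", split="validation")
test = load_dataset("ccdv/cnn_dailymail", "3.0.0", split="test")
```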
https://api.github.com/repos/huggingface/datasets/issues/995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/995/comments | https://api.github.com/repos/huggingface/datasets/issues/995/events | https://github.com/huggingface/datasets/pull/995 | 755,175,199 | MDExOlB1bGxSZXF1ZXN0NTMwOTM3NjI3 | 995 | added dataset circa | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Blocked @k125-ak ;) Bye-bye"
] | 1,606,907,199,000 | 1,607,079,496,000 | 1,606,988,377,000 | CONTRIBUTOR | null | Dataset Circa added. Only README.md and dataset card left | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/995/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/995",
"html_url": "https://github.com/huggingface/datasets/pull/995",
"diff_url": "https://github.com/huggingface/datasets/pull/995.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/995.patch",
"merged_at": 1606988377000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/994/comments | https://api.github.com/repos/huggingface/datasets/issues/994/events | https://github.com/huggingface/datasets/pull/994 | 755,146,834 | MDExOlB1bGxSZXF1ZXN0NTMwOTE1MDc2 | 994 | Add Sepedi ner corpus | {
"login": "yvonnegitau",
"id": 7923902,
"node_id": "MDQ6VXNlcjc5MjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yvonnegitau",
"html_url": "https://github.com/yvonnegitau",
"followers_url": "https://api.github.com/users/yvonnegitau/followers",
"following_url": "https://api.github.com/users/yvonnegitau/following{/other_user}",
"gists_url": "https://api.github.com/users/yvonnegitau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yvonnegitau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yvonnegitau/subscriptions",
"organizations_url": "https://api.github.com/users/yvonnegitau/orgs",
"repos_url": "https://api.github.com/users/yvonnegitau/repos",
"events_url": "https://api.github.com/users/yvonnegitau/events{/privacy}",
"received_events_url": "https://api.github.com/users/yvonnegitau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Looks like the PR includes commits about many other files.\r\nCould you create a clean branch from master, and create another PR ?",
"Sorry, will do that. "
] | 1,606,905,007,000 | 1,606,990,754,000 | 1,606,933,208,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/994/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/994",
"html_url": "https://github.com/huggingface/datasets/pull/994",
"diff_url": "https://github.com/huggingface/datasets/pull/994.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/994.patch",
"merged_at": null
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/993/comments | https://api.github.com/repos/huggingface/datasets/issues/993/events | https://github.com/huggingface/datasets/issues/993 | 755,135,768 | MDU6SXNzdWU3NTUxMzU3Njg= | 993 | Problem downloading amazon_reviews_multi | {
"login": "hfawaz",
"id": 29229602,
"node_id": "MDQ6VXNlcjI5MjI5NjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/29229602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hfawaz",
"html_url": "https://github.com/hfawaz",
"followers_url": "https://api.github.com/users/hfawaz/followers",
"following_url": "https://api.github.com/users/hfawaz/following{/other_user}",
"gists_url": "https://api.github.com/users/hfawaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hfawaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hfawaz/subscriptions",
"organizations_url": "https://api.github.com/users/hfawaz/orgs",
"repos_url": "https://api.github.com/users/hfawaz/repos",
"events_url": "https://api.github.com/users/hfawaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/hfawaz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Hi @hfawaz ! This is working fine for me. Is it a repeated occurence? Have you tried from the latest verion?",
"Hi, it seems a connection problem. \r\nNow it says: \r\n`ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_ja_train.json`"
] | 1,606,904,157,000 | 1,607,074,693,000 | null | CONTRIBUTOR | null | Thanks for adding the dataset.
After trying to load the dataset, I am getting the following error:
`ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json
`
I used the following code to load the dataset:
`load_dataset(
dataset_name,
"all_languages",
cache_dir=".data"
)`
I am using version 1.1.3 of `datasets`
Note that I can perform a successful `wget https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/993/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/993/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
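For reference, here is the reproduction snippet from the issue body above made self-contained: `dataset_name` is filled in as `amazon_reviews_multi` (taken from the issue title), and the `cache_dir` value is kept from the report.

```python
# Self-contained version of the reproduction snippet from the issue body;
# "amazon_reviews_multi" is taken from the issue title.
from datasets import load_dataset

dataset = load_dataset(
    "amazon_reviews_multi",
    "all_languages",
    cache_dir=".data",
)
```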
https://api.github.com/repos/huggingface/datasets/issues/992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/992/comments | https://api.github.com/repos/huggingface/datasets/issues/992/events | https://github.com/huggingface/datasets/pull/992 | 755,124,963 | MDExOlB1bGxSZXF1ZXN0NTMwODk3Njkx | 992 | Add CAIL 2018 dataset | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,903,300,000 | 1,606,927,742,000 | 1,606,927,741,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/992/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/992",
"html_url": "https://github.com/huggingface/datasets/pull/992",
"diff_url": "https://github.com/huggingface/datasets/pull/992.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/992.patch",
"merged_at": 1606927741000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/991 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/991/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/991/comments | https://api.github.com/repos/huggingface/datasets/issues/991/events | https://github.com/huggingface/datasets/pull/991 | 755,117,902 | MDExOlB1bGxSZXF1ZXN0NTMwODkyMDk0 | 991 | Adding farsi_news dataset (https://github.com/sci2lab/Farsi-datasets) | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,902,739,000 | 1,606,993,286,000 | 1,606,993,286,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/991/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/991",
"html_url": "https://github.com/huggingface/datasets/pull/991",
"diff_url": "https://github.com/huggingface/datasets/pull/991.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/991.patch",
"merged_at": 1606993286000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/990 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/990/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/990/comments | https://api.github.com/repos/huggingface/datasets/issues/990/events | https://github.com/huggingface/datasets/pull/990 | 755,097,798 | MDExOlB1bGxSZXF1ZXN0NTMwODc1NDYx | 990 | Add E2E NLG | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,901,112,000 | 1,607,000,885,000 | 1,607,000,884,000 | MEMBER | null | Adding the E2E NLG dataset.
More info here : http://www.macs.hw.ac.uk/InteractionLab/E2E/
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template and at least fill the tags
- [x] Both tests for the real data and the dummy data pass.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/990/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/990",
"html_url": "https://github.com/huggingface/datasets/pull/990",
"diff_url": "https://github.com/huggingface/datasets/pull/990.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/990.patch",
"merged_at": 1607000884000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/989/comments | https://api.github.com/repos/huggingface/datasets/issues/989/events | https://github.com/huggingface/datasets/pull/989 | 755,079,394 | MDExOlB1bGxSZXF1ZXN0NTMwODYwNDMw | 989 | Fix SV -> NO | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,899,599,000 | 1,606,900,701,000 | 1,606,900,694,000 | CONTRIBUTOR | null | This PR fixes the small typo as seen in #956 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/989/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/989",
"html_url": "https://github.com/huggingface/datasets/pull/989",
"diff_url": "https://github.com/huggingface/datasets/pull/989.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/989.patch",
"merged_at": 1606900694000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/988/comments | https://api.github.com/repos/huggingface/datasets/issues/988/events | https://github.com/huggingface/datasets/issues/988 | 755,069,159 | MDU6SXNzdWU3NTUwNjkxNTk= | 988 | making sure datasets are not loaded in memory and distributed training of them | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"my implementation of sharding per TPU core: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/trainers/t5_trainer.py#L316 \r\nmy implementation of dataloader for this case https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/tasks/tasks.py#L496 "
] | 1,606,898,715,000 | 1,606,899,034,000 | null | CONTRIBUTOR | null | Hi
I am dealing with large-scale datasets that I need to train on in a distributed way. I used the shard function to divide the dataset across the cores, without any sampler, but this does not work for distributed training and is not any faster than a single TPU core. 1) How can I make sure the data is not loaded into memory? 2) In the case of distributed training with iterable datasets, which measures need to be taken? Is it only a matter of sharding the data? I was wondering whether it would be possible to discuss distributed training with iterable datasets from the datasets library with someone. Thanks | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/988/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
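The `shard` call the issue above refers to is `Dataset.shard`, which only selects a slice per process while the underlying Arrow data stays memory-mapped on disk. Below is a minimal sketch; the number of processes, the rank variable, and the use of `squad` as the example dataset are placeholders, not values from the issue.

```python
# Hedged sketch of per-process sharding with Dataset.shard; num_processes and
# process_rank are placeholders, and "squad" is only an example dataset.
from datasets import load_dataset

num_processes = 8   # e.g. number of TPU cores
process_rank = 0    # rank of the current process

dataset = load_dataset("squad", split="train")
# Each process keeps a contiguous slice; the data stays memory-mapped on disk,
# so sharding does not copy the dataset into RAM.
shard = dataset.shard(num_shards=num_processes, index=process_rank, contiguous=True)
print(len(dataset), len(shard))
```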
https://api.github.com/repos/huggingface/datasets/issues/987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/987/comments | https://api.github.com/repos/huggingface/datasets/issues/987/events | https://github.com/huggingface/datasets/pull/987 | 755,059,469 | MDExOlB1bGxSZXF1ZXN0NTMwODQ0MTQ4 | 987 | Add OPUS DOGC dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"merging since the CI is fixed on master"
] | 1,606,897,832,000 | 1,607,088,461,000 | 1,607,088,461,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/987/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/987",
"html_url": "https://github.com/huggingface/datasets/pull/987",
"diff_url": "https://github.com/huggingface/datasets/pull/987.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/987.patch",
"merged_at": 1607088461000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/986/comments | https://api.github.com/repos/huggingface/datasets/issues/986/events | https://github.com/huggingface/datasets/pull/986 | 755,047,470 | MDExOlB1bGxSZXF1ZXN0NTMwODM0MzYx | 986 | Add SciTLDR Dataset | {
"login": "Bharat123rox",
"id": 13381361,
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bharat123rox",
"html_url": "https://github.com/Bharat123rox",
"followers_url": "https://api.github.com/users/Bharat123rox/followers",
"following_url": "https://api.github.com/users/Bharat123rox/following{/other_user}",
"gists_url": "https://api.github.com/users/Bharat123rox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bharat123rox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bharat123rox/subscriptions",
"organizations_url": "https://api.github.com/users/Bharat123rox/orgs",
"repos_url": "https://api.github.com/users/Bharat123rox/repos",
"events_url": "https://api.github.com/users/Bharat123rox/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bharat123rox/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"CI failures seem to be unrelated (related to `norwegian_ner`)\r\n```\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\n```",
"you can just rebase from master to fix the CI :) ",
"can you just rebase from master before we merge ?",
"Sorry, the rebase from master went horribly wrong, I guess I'll just open another PR\r\n\r\nClosing this one due to a mistake in rebasing :(",
"Continued in #1014 "
] | 1,606,896,676,000 | 1,606,934,242,000 | 1,606,932,179,000 | CONTRIBUTOR | null | Adds the SciTLDR Dataset by AI2
Added README card with tags to the best of my knowledge
Multi-target summaries or TLDRs of Scientific Documents | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/986/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/986",
"html_url": "https://github.com/huggingface/datasets/pull/986",
"diff_url": "https://github.com/huggingface/datasets/pull/986.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/986.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/985/comments | https://api.github.com/repos/huggingface/datasets/issues/985/events | https://github.com/huggingface/datasets/pull/985 | 755,020,564 | MDExOlB1bGxSZXF1ZXN0NTMwODEyNTM1 | 985 | Add GAP dataset | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"This dataset already exists apparently, sorry :/ \r\nsee\r\nhttps://github.com/huggingface/datasets/blob/master/datasets/gap/gap.py\r\n\r\nFeel free to re-use the dataset card you did for `/datasets/gap`\r\n",
"oh heck, my bad 🤦♂️ sorry"
] | 1,606,893,911,000 | 1,606,925,792,000 | 1,606,925,792,000 | MEMBER | null | GAP dataset
Gender bias coreference resolution | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/985/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/985",
"html_url": "https://github.com/huggingface/datasets/pull/985",
"diff_url": "https://github.com/huggingface/datasets/pull/985.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/985.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/984/comments | https://api.github.com/repos/huggingface/datasets/issues/984/events | https://github.com/huggingface/datasets/pull/984 | 755,009,916 | MDExOlB1bGxSZXF1ZXN0NTMwODAzNzgw | 984 | committing Whoa file | {
"login": "StulosDunamos",
"id": 75356780,
"node_id": "MDQ6VXNlcjc1MzU2Nzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/75356780?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StulosDunamos",
"html_url": "https://github.com/StulosDunamos",
"followers_url": "https://api.github.com/users/StulosDunamos/followers",
"following_url": "https://api.github.com/users/StulosDunamos/following{/other_user}",
"gists_url": "https://api.github.com/users/StulosDunamos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StulosDunamos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StulosDunamos/subscriptions",
"organizations_url": "https://api.github.com/users/StulosDunamos/orgs",
"repos_url": "https://api.github.com/users/StulosDunamos/repos",
"events_url": "https://api.github.com/users/StulosDunamos/events{/privacy}",
"received_events_url": "https://api.github.com/users/StulosDunamos/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"can't find the Whoa file since there' nothing left",
"The classic `rm -rf` command - nice one"
] | 1,606,892,866,000 | 1,606,925,729,000 | 1,606,923,658,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/984/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/984",
"html_url": "https://github.com/huggingface/datasets/pull/984",
"diff_url": "https://github.com/huggingface/datasets/pull/984.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/984.patch",
"merged_at": null
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/983/comments | https://api.github.com/repos/huggingface/datasets/issues/983/events | https://github.com/huggingface/datasets/pull/983 | 754,966,620 | MDExOlB1bGxSZXF1ZXN0NTMwNzY4MTMw | 983 | add mc taco | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,888,495,000 | 1,606,923,467,000 | 1,606,923,466,000 | MEMBER | null | MC-TACO
Temporal commonsense knowledge | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/983/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/983",
"html_url": "https://github.com/huggingface/datasets/pull/983",
"diff_url": "https://github.com/huggingface/datasets/pull/983.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/983.patch",
"merged_at": 1606923466000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/982/comments | https://api.github.com/repos/huggingface/datasets/issues/982/events | https://github.com/huggingface/datasets/pull/982 | 754,946,337 | MDExOlB1bGxSZXF1ZXN0NTMwNzUxMzYx | 982 | add prachathai67k take2 | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,885,921,000 | 1,606,904,291,000 | 1,606,904,291,000 | CONTRIBUTOR | null | I decided it will be faster to create a new pull request instead of fixing the rebase issues.
continuing from https://github.com/huggingface/datasets/pull/954
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/982/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/982",
"html_url": "https://github.com/huggingface/datasets/pull/982",
"diff_url": "https://github.com/huggingface/datasets/pull/982.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/982.patch",
"merged_at": 1606904291000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/981/comments | https://api.github.com/repos/huggingface/datasets/issues/981/events | https://github.com/huggingface/datasets/pull/981 | 754,937,612 | MDExOlB1bGxSZXF1ZXN0NTMwNzQ0MTYx | 981 | add wisesight_sentiment take2 | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,884,659,000 | 1,606,905,433,000 | 1,606,905,433,000 | CONTRIBUTOR | null | Take 2 since last time the rebase issues were taking me too much time to fix as opposed to just open a new one. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/981/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/981",
"html_url": "https://github.com/huggingface/datasets/pull/981",
"diff_url": "https://github.com/huggingface/datasets/pull/981.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/981.patch",
"merged_at": 1606905433000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/980/comments | https://api.github.com/repos/huggingface/datasets/issues/980/events | https://github.com/huggingface/datasets/pull/980 | 754,899,301 | MDExOlB1bGxSZXF1ZXN0NTMwNzEzNjY3 | 980 | Wongnai - Thai reviews dataset | {
"login": "mapmeld",
"id": 643918,
"node_id": "MDQ6VXNlcjY0MzkxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mapmeld",
"html_url": "https://github.com/mapmeld",
"followers_url": "https://api.github.com/users/mapmeld/followers",
"following_url": "https://api.github.com/users/mapmeld/following{/other_user}",
"gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions",
"organizations_url": "https://api.github.com/users/mapmeld/orgs",
"repos_url": "https://api.github.com/users/mapmeld/repos",
"events_url": "https://api.github.com/users/mapmeld/events{/privacy}",
"received_events_url": "https://api.github.com/users/mapmeld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Thank you for contributing a Thai dataset, @mapmeld ! I'm super hyped. \r\nOne comment I may add is that wongnai-corpus has two datasets: review classification (this) and word tokenization (https://github.com/wongnai/wongnai-corpus/blob/master/search/labeled_queries_by_judges.txt).\r\nWould it be possible for you to rename this one something along the line of `wongnai-reviews` so that when/if we include the word tokenization dataset, we will know which is which.\r\n\r\nThis helps solve my check_code_quality issue.\r\n```\r\nmake style\r\nblack --line-length 119 --target-version py36 datasets/wongnai\r\nflake8 datasets/wongnai\r\nisort datasets/wongnai/wongnai.py\r\n```",
"@cstorm125 thanks! following your suggestions on formatting and on naming the dataset\r\n\r\nI am writing a blog post about Thai NLP and transformers (example: mBERT does 1-2 character tokens instead of doing word segmentation), started adding this dataset to use as an example, and then saw you were adding other datasets. Great work! And if you know any Thai BERT models beyond https://github.com/ThAIKeras/bert we should maybe talk over email!"
] | 1,606,879,208,000 | 1,606,923,281,000 | 1,606,923,005,000 | CONTRIBUTOR | null | 40,000 reviews, previously released on GitHub ( https://github.com/wongnai/wongnai-corpus ) with an LGPL license, and on a closed Kaggle competition ( https://www.kaggle.com/c/wongnai-challenge-review-rating-prediction/ ) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/980/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/980/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/980",
"html_url": "https://github.com/huggingface/datasets/pull/980",
"diff_url": "https://github.com/huggingface/datasets/pull/980.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/980.patch",
"merged_at": 1606923004000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/979/comments | https://api.github.com/repos/huggingface/datasets/issues/979/events | https://github.com/huggingface/datasets/pull/979 | 754,893,337 | MDExOlB1bGxSZXF1ZXN0NTMwNzA4OTA5 | 979 | [WIP] Add multi woz | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,878,342,000 | 1,606,925,236,000 | 1,606,925,236,000 | MEMBER | null | This PR adds version 2.2 of the Multi-domain Wizard of OZ dataset: https://github.com/budzianowski/multiwoz/tree/master/data/MultiWOZ_2.2
It was a pretty big chunk of work to figure out the structure, so I still have to add the description to the README.md.
On the plus side, the structure is broadly similar to that of the Google Schema Guided dialogue [dataset](https://github.com/google-research-datasets/dstc8-schema-guided-dialogue), so I will take care of that one next. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/979/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/979/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/979",
"html_url": "https://github.com/huggingface/datasets/pull/979",
"diff_url": "https://github.com/huggingface/datasets/pull/979.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/979.patch",
"merged_at": 1606925236000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/978/comments | https://api.github.com/repos/huggingface/datasets/issues/978/events | https://github.com/huggingface/datasets/pull/978 | 754,854,478 | MDExOlB1bGxSZXF1ZXN0NTMwNjc4NTUy | 978 | Add code refinement | {
"login": "reshinthadithyan",
"id": 36307201,
"node_id": "MDQ6VXNlcjM2MzA3MjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/36307201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reshinthadithyan",
"html_url": "https://github.com/reshinthadithyan",
"followers_url": "https://api.github.com/users/reshinthadithyan/followers",
"following_url": "https://api.github.com/users/reshinthadithyan/following{/other_user}",
"gists_url": "https://api.github.com/users/reshinthadithyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reshinthadithyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reshinthadithyan/subscriptions",
"organizations_url": "https://api.github.com/users/reshinthadithyan/orgs",
"repos_url": "https://api.github.com/users/reshinthadithyan/repos",
"events_url": "https://api.github.com/users/reshinthadithyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/reshinthadithyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Also cc @madlag since I recall you wanted to work on CodeXGlue as well ?",
"Yes, sorry I did not see earlier your message. I added 34 on the 35 datasets in CodeXGlue, tomorrow I will wrap it up, and so I will remove my version for code_refinement. Maybe we can just have a renaming after the merge, to have a consistent naming with all the other codexglue datasets ? What do you think @reshinthadithyan ?",
"> Yes, sorry I did not see earlier your message. I added 34 on the 35 datasets in CodeXGlue, tomorrow I will wrap it up, and so I will remove my version for code_refinement. Maybe we can just have a renaming after the merge, to have a consistent naming with all the other codexglue datasets ? What do you think @reshinthadithyan ?\r\n\r\nHello @madlag, I think you can retain that in your script. Let's stick onto the same file like how Glue is maintained.",
"Hi @reshinthadithyan ! Are you still working on this version of the dataset or are we going with @madlag 's only ?",
"> Hi @reshinthadithyan ! Are you still working on this version of the dataset or are we going with @madlag 's only ?\r\n\r\nHello, yes. We are going with Madlag's"
] | 1,606,872,598,000 | 1,607,305,978,000 | 1,607,305,978,000 | CONTRIBUTOR | null | ### OVERVIEW
Millions of open-source projects with numerous bug fixes are available in code repositories. This proliferation of software development histories can be leveraged to learn how to fix common programming bugs.
Code refinement aims to automatically fix bugs in the code, which can contribute to reducing the cost of bug fixes for developers. Given a piece of Java code with bugs, the task is to remove the bugs and output the refined code. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/978/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/978",
"html_url": "https://github.com/huggingface/datasets/pull/978",
"diff_url": "https://github.com/huggingface/datasets/pull/978.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/978.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/977 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/977/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/977/comments | https://api.github.com/repos/huggingface/datasets/issues/977/events | https://github.com/huggingface/datasets/pull/977 | 754,839,594 | MDExOlB1bGxSZXF1ZXN0NTMwNjY2ODg3 | 977 | Add ROPES dataset | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,870,330,000 | 1,606,906,716,000 | 1,606,906,715,000 | MEMBER | null | ROPES dataset
Reasoning over paragraph effects in situations - testing a system's ability to apply knowledge from a passage of text to a new situation. The task is framed as a reading comprehension task following SQuAD-style extractive QA.
One thing to note: the labels of the test set are hidden (leaderboard submission), so I encoded them as an empty list (ropes.py:L125). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/977/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/977",
"html_url": "https://github.com/huggingface/datasets/pull/977",
"diff_url": "https://github.com/huggingface/datasets/pull/977.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/977.patch",
"merged_at": 1606906715000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/976/comments | https://api.github.com/repos/huggingface/datasets/issues/976/events | https://github.com/huggingface/datasets/pull/976 | 754,826,146 | MDExOlB1bGxSZXF1ZXN0NTMwNjU1NzM5 | 976 | Arabic pos dialect | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"looks like this PR includes changes about many other files than the oens for Araboc POS Dialect\r\n\r\nCan you create a another branch and another PR please ?",
"Sorry! I'm not sure how I managed to do that. I'll make a new branch."
] | 1,606,868,473,000 | 1,607,535,032,000 | 1,607,535,032,000 | CONTRIBUTOR | null | A README.md and loading script for the Arabic POS Dialect dataset. The README is missing the sections on personal information, biases, and limitations, as it would probably be better for those to be filled by someone who can read the contents of the dataset and is familiar with Arabic NLP. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/976/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/976",
"html_url": "https://github.com/huggingface/datasets/pull/976",
"diff_url": "https://github.com/huggingface/datasets/pull/976.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/976.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/975/comments | https://api.github.com/repos/huggingface/datasets/issues/975/events | https://github.com/huggingface/datasets/pull/975 | 754,823,701 | MDExOlB1bGxSZXF1ZXN0NTMwNjUzNjg4 | 975 | add MeTooMA dataset | {
"login": "akash418",
"id": 23264033,
"node_id": "MDQ6VXNlcjIzMjY0MDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/23264033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akash418",
"html_url": "https://github.com/akash418",
"followers_url": "https://api.github.com/users/akash418/followers",
"following_url": "https://api.github.com/users/akash418/following{/other_user}",
"gists_url": "https://api.github.com/users/akash418/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akash418/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akash418/subscriptions",
"organizations_url": "https://api.github.com/users/akash418/orgs",
"repos_url": "https://api.github.com/users/akash418/repos",
"events_url": "https://api.github.com/users/akash418/events{/privacy}",
"received_events_url": "https://api.github.com/users/akash418/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,868,155,000 | 1,606,906,736,000 | 1,606,906,735,000 | CONTRIBUTOR | null | This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. Please refer to the accompanying paper for detailed information regarding annotation, collection, and guidelines.
Paper: https://ojs.aaai.org/index.php/ICWSM/article/view/7292
Dataset Link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU
---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- en
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- text-retrieval
task_ids:
- multi-class-classification
- multi-label-classification
---
# Dataset Card for #MeTooMA dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU
- **Paper:** https://ojs.aaai.org//index.php/ICWSM/article/view/7292
- **Point of Contact:** https://github.com/midas-research/MeTooMA
### Dataset Summary
- The dataset consists of tweets belonging to #MeToo movement on Twitter, labeled into different categories.
- This dataset includes more data points and has more labels than any of the previous datasets that contain social media
posts about sexual abuse disclosures. Please refer to the Related Datasets of the publication for detailed information about this.
- Due to Twitter's development policies, the authors provide only the tweet IDs and corresponding labels,
other data can be fetched via Twitter API.
- The data has been labeled by experts, with the majority taken into account for deciding the final label.
- The authors provide these labels for each of the tweets.
- Relevance
- Directed Hate
- Generalized Hate
- Sarcasm
- Allegation
- Justification
- Refutation
- Support
- Oppose
- The definitions for each task/label are in the main publication.
- Please refer to the accompanying paper https://aaai.org/ojs/index.php/ICWSM/article/view/7292 for statistical analysis on the textual data
extracted from this dataset.
- The language of all the tweets in this dataset is English
- Time period: October 2018 - December 2018
- Suggested Use Cases of this dataset:
- Evaluating usage of linguistic acts such as hate-speech and sarcasm in the context of public sexual abuse disclosures.
- Extracting actionable insights and virtual dynamics of gender roles in sexual abuse revelations.
- Identifying how influential people were portrayed on the public platform in the
events of mass social movements.
- Polarization analysis based on graph simulations of social nodes of users involved
in the #MeToo movement.
### Supported Tasks and Leaderboards
Multi-Label and Multi-Class Classification
### Languages
English
## Dataset Structure
- The dataset is structured in CSV format with a Tweet ID and the accompanying labels (a loading sketch is given after the split counts below).
- Train and Test sets are split into respective files.
### Data Instances
Tweet ID and the appropriate labels
### Data Fields
Tweet ID and appropriate labels (binary label applicable for a data point) and multiple labels for each Tweet ID
### Data Splits
- Train: 7979
- Test: 1996
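To make the structure above concrete, here is a minimal loading sketch; it assumes the split files have been downloaded from the Dataverse record linked above. The file names and label column names below are assumptions, since only Tweet IDs and labels are distributed and the tweet text has to be hydrated separately through the Twitter API.
```python
import pandas as pd

# Assumed file names -- adjust them to whatever the Dataverse record provides.
train = pd.read_csv("MeTooMA_train.csv")
test = pd.read_csv("MeTooMA_test.csv")

# Each row is expected to carry a Tweet ID plus one binary column per label
# (Relevance, Directed Hate, Generalized Hate, Sarcasm, Allegation,
# Justification, Refutation, Support, Oppose); the exact column names are assumed.
print(len(train), len(test))  # expected to print 7979 and 1996
```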
## Dataset Creation
### Curation Rationale
- Twitter was the major source of all the public disclosures of sexual abuse incidents during the #MeToo movement.
- People expressed their opinions over issues that were previously missing from the social media space.
- This provides an option to study the linguistic behaviors of social media users in an informal setting; therefore, the authors decided to curate this annotated dataset.
- The authors expect this dataset would be of great interest and use to both computational and socio-linguists.
- For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. For socio-linguists, it provides an opportunity to explore how a movement manifests in social media.
### Source Data
- Source of all the data points in this dataset is a Twitter social media platform.
#### Initial Data Collection and Normalization
- All the tweets are mined from Twitter with initial search parameters identified using keywords from the #MeToo movement.
- Redundant keywords were removed based on manual inspection.
- Public streaming APIs of Twitter was used for querying with the selected keywords.
- Based on text de-duplication and cosine similarity scores, the set of tweets was pruned (a sketch of this kind of filtering is given after this list).
- Non-English tweets were removed.
- The final set was labeled by experts, with the majority label taken into account for deciding the final label.
- Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292
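The de-duplication step mentioned above is not documented in detail here; the following is a generic sketch of near-duplicate pruning with TF-IDF cosine similarity, not the authors' exact pipeline, and the 0.9 threshold is chosen purely for illustration.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def prune_near_duplicates(texts, threshold=0.9):
    # Keep a tweet only if it is not too similar to any previously kept tweet.
    # O(n^2) comparisons -- acceptable for a corpus of roughly 10K tweets.
    vectors = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(vectors)
    kept = []
    for i in range(len(texts)):
        if all(sims[i, j] < threshold for j in kept):
            kept.append(i)
    return [texts[i] for i in kept]

print(prune_near_duplicates(["#MeToo is trending", "#MeToo is trending!", "a different tweet"]))
```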
#### Who are the source language producers?
Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292
### Annotations
#### Annotation process
- The authors chose against crowdsourcing for labeling this dataset due to its highly sensitive nature.
- The annotators are domain experts having degrees in advanced clinical psychology and gender studies.
- They were provided a guidelines document with instructions about each task and its definitions, labels, and examples.
- They studied the document, worked on a few examples to get used to this annotation task.
- They also provided feedback for improving the class definitions.
- The annotation process is not mutually exclusive, implying that the presence of one label does not mean the
absence of the other one.
#### Who are the annotators?
- The annotators are domain experts having a degree in clinical psychology and gender studies.
- Please refer to the accompanying paper for a detailed annotation process.
### Personal and Sensitive Information
- Considering Twitter's policy for distribution of data, only Tweet ID and applicable labels are shared for public use.
- It is highly encouraged to use this dataset for scientific purposes only.
- This dataset collection completely follows the Twitter mandated guidelines for distribution and usage.
## Considerations for Using the Data
### Social Impact of Dataset
- The authors of this dataset do not intend to conduct a population-centric analysis of the #MeToo movement on Twitter.
- The authors acknowledge that findings from this dataset cannot be used as-is for any direct social intervention, these
should be used to assist already existing human intervention tools and therapies.
- Enough care has been taken to ensure that this work does not come off as trying to target a specific person for their personal stance on issues pertaining to the #MeToo movement.
- The authors of this work do not aim to vilify anyone accused in the #MeToo movement in any manner.
- Please refer to the ethics and discussion section of the mentioned publication for appropriate sharing of this dataset
and the social impact of this work.
### Discussion of Biases
- The #MeToo movement acted as a catalyst for implementing social policy changes to benefit the members of
the community affected by sexual abuse.
- Any work undertaken on this dataset should aim to minimize the bias against minority groups which
might amplify in cases of a sudden outburst of public reactions over sensitive social media discussions.
### Other Known Limitations
- Considering privacy concerns, social media practitioners should be cautious about making automated interventions to aid the victims of sexual abuse, as some people might prefer not to disclose their notions.
- Concerned social media users might also withdraw their social information if they find out that their information is being used for computational purposes; hence it is important to seek subtle individual consent before trying to profile authors involved in online discussions, in order to uphold personal privacy.
## Additional Information
Please refer to this link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU
### Dataset Curators
- If you use the corpus in a product or application, then please credit the authors
and [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi](http://midas.iiitd.edu.in) appropriately.
Also, if you send us an email, we will be thrilled to know about how you have used the corpus.
- If interested in the commercial use of the corpus, send an email to [email protected].
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India
disclaims any responsibility for the use of the corpus and does not provide technical support.
However, the contact listed above will be happy to respond to queries and clarifications
- Please feel free to send us an email:
- with feedback regarding the corpus.
- with information on how you have used the corpus.
- if interested in having us analyze your social media data.
- if interested in a collaborative research project.
### Licensing Information
[More Information Needed]
### Citation Information
Please cite the following publication if you make use of the dataset: https://ojs.aaai.org/index.php/ICWSM/article/view/7292
```
@article{Gautam_Mathur_Gosangi_Mahata_Sawhney_Shah_2020, title={#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement}, volume={14}, url={https://aaai.org/ojs/index.php/ICWSM/article/view/7292}, abstractNote={<p>In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.</p&gt;}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn}, year={2020}, month={May}, pages={209-216} }
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/975/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/975",
"html_url": "https://github.com/huggingface/datasets/pull/975",
"diff_url": "https://github.com/huggingface/datasets/pull/975.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/975.patch",
"merged_at": 1606906735000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/974/comments | https://api.github.com/repos/huggingface/datasets/issues/974/events | https://github.com/huggingface/datasets/pull/974 | 754,811,185 | MDExOlB1bGxSZXF1ZXN0NTMwNjQzNzQ3 | 974 | Add MeTooMA Dataset | {
"login": "akash418",
"id": 23264033,
"node_id": "MDQ6VXNlcjIzMjY0MDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/23264033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akash418",
"html_url": "https://github.com/akash418",
"followers_url": "https://api.github.com/users/akash418/followers",
"following_url": "https://api.github.com/users/akash418/following{/other_user}",
"gists_url": "https://api.github.com/users/akash418/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akash418/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akash418/subscriptions",
"organizations_url": "https://api.github.com/users/akash418/orgs",
"repos_url": "https://api.github.com/users/akash418/repos",
"events_url": "https://api.github.com/users/akash418/events{/privacy}",
"received_events_url": "https://api.github.com/users/akash418/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,866,241,000 | 1,606,867,078,000 | 1,606,867,078,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/974/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/974",
"html_url": "https://github.com/huggingface/datasets/pull/974",
"diff_url": "https://github.com/huggingface/datasets/pull/974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/974.patch",
"merged_at": null
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/973/comments | https://api.github.com/repos/huggingface/datasets/issues/973/events | https://github.com/huggingface/datasets/pull/973 | 754,807,963 | MDExOlB1bGxSZXF1ZXN0NTMwNjQxMTky | 973 | Adding The Microsoft Terminology Collection dataset. | {
"login": "leoxzhao",
"id": 7915719,
"node_id": "MDQ6VXNlcjc5MTU3MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7915719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leoxzhao",
"html_url": "https://github.com/leoxzhao",
"followers_url": "https://api.github.com/users/leoxzhao/followers",
"following_url": "https://api.github.com/users/leoxzhao/following{/other_user}",
"gists_url": "https://api.github.com/users/leoxzhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leoxzhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leoxzhao/subscriptions",
"organizations_url": "https://api.github.com/users/leoxzhao/orgs",
"repos_url": "https://api.github.com/users/leoxzhao/repos",
"events_url": "https://api.github.com/users/leoxzhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/leoxzhao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"I have to manually copy a dataset_infos.json file from other dataset and modify it since the `datasets-cli` isn't able to handle manually downloaded datasets yet (as far as I know).",
"you can generate the dataset_infos.json file even for dataset with manual data\r\nTo do so just specify `--data_dir <path/to/the/folder/containing/the/manual/data>`",
"Also, dummy_data seems having difficulty to handle manually downloaded datasets. `python datasets-cli dummy_data datasets/ms_terms --data_dir ...` reported `error: unrecognized arguments: --data_dir` error. Without `--data_dir`, it reported this error:\r\n```\r\nDataset ms_terms with config BuilderConfig(name='ms_terms-full', version=1.0.0, data_dir=None, data_files=None, description='...\\n') seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file None.\r\nTraceback (most recent call last):\r\n File \"datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"/Users/lzhao/Downloads/huggingface/datasets/src/datasets/commands/dummy_data.py\", line 326, in run\r\n dataset_builder=dataset_builder, mock_dl_manager=mock_dl_manager\r\n File \"/Users/lzhao/Downloads/huggingface/datasets/src/datasets/commands/dummy_data.py\", line 406, in _print_dummy_data_instructions\r\n for split in generator_splits:\r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n```",
"Oh yes `--data_dir` seems to only be supported for the `datasets_cli test` command. Sorry about that.\r\n\r\nCan you try to build the dummy_data.zip file manually ?\r\n\r\nIt has to be inside `./datasets/ms_terms/dummy/ms_terms-full/1.0.0`.\r\nInside this folder, please create a folder `dummy_data` that contains a dummy file `MicrosoftTermCollection.tbx` (with just a few examples in it). Then you can zip the `dummy_data` folder to `dummy_data.zip`\r\n\r\nThen you can check if it worked using the command\r\n```\r\npytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ms_terms\r\n```\r\n\r\nFeel free to use some debugging print statements in your script if it doesn't work first try to see what `dl_manager.manual_dir` ends up being and also `path_to_manual_file`.\r\n\r\nFeel free to ping me if you have other questions",
"`pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ms_terms` gave `1 passed, 4 warnings in 8.13s`. Existing datasets, like `wikihow`, and `newsroom`, also report 4 warnings. So, I guess that is not related to this dataset.",
"Could you run `make style` before we merge @leoxzhao ?",
"the other errors are fixed on master so it's fine",
"> Could you run `make style` before we merge @leoxzhao ?\r\n\r\nSure thing. Done. Thanks Quentin. I have other datasets in mind. All of which requires manual download. This process is very helpful",
"Thank you :) "
] | 1,606,865,783,000 | 1,607,095,544,000 | 1,607,094,766,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/973/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/973",
"html_url": "https://github.com/huggingface/datasets/pull/973",
"diff_url": "https://github.com/huggingface/datasets/pull/973.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/973.patch",
"merged_at": 1607094766000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/972 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/972/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/972/comments | https://api.github.com/repos/huggingface/datasets/issues/972/events | https://github.com/huggingface/datasets/pull/972 | 754,787,314 | MDExOlB1bGxSZXF1ZXN0NTMwNjI0NTI3 | 972 | Add Children's Book Test (CBT) dataset | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Hi @lhoestq,\r\n\r\nI guess this PR can be closed since we merged #2044?\r\n\r\nI have used the same link for the homepage, as it is where the dataset is provided, hope that is okay?",
"Closing in favor of #2044, thanks again :)\r\n\r\n> I have used the same link for the homepage, as it is where the dataset is provided, hope that is okay?\r\n\r\nYea it's ok actually, at that time I thought there was another homepage for this dataset"
] | 1,606,863,206,000 | 1,616,153,403,000 | 1,616,153,403,000 | MEMBER | null | Add the Children's Book Test (CBT) from Facebook (Hill et al. 2016).
Sentence completion given a few sentences as context from a children's book. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/972/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/972",
"html_url": "https://github.com/huggingface/datasets/pull/972",
"diff_url": "https://github.com/huggingface/datasets/pull/972.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/972.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/971 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/971/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/971/comments | https://api.github.com/repos/huggingface/datasets/issues/971/events | https://github.com/huggingface/datasets/pull/971 | 754,784,041 | MDExOlB1bGxSZXF1ZXN0NTMwNjIxOTQz | 971 | add piqa | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,862,824,000 | 1,606,903,082,000 | 1,606,903,081,000 | MEMBER | null | Physical Interaction: Question Answering (commonsense)
https://yonatanbisk.com/piqa/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/971/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/971",
"html_url": "https://github.com/huggingface/datasets/pull/971",
"diff_url": "https://github.com/huggingface/datasets/pull/971.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/971.patch",
"merged_at": 1606903081000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/970/comments | https://api.github.com/repos/huggingface/datasets/issues/970/events | https://github.com/huggingface/datasets/pull/970 | 754,697,489 | MDExOlB1bGxSZXF1ZXN0NTMwNTUxNTkz | 970 | Add SWAG | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,854,065,000 | 1,606,902,916,000 | 1,606,902,915,000 | MEMBER | null | Commonsense NLI -> https://rowanzellers.com/swag/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/970/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/970",
"html_url": "https://github.com/huggingface/datasets/pull/970",
"diff_url": "https://github.com/huggingface/datasets/pull/970.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/970.patch",
"merged_at": 1606902915000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/969 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/969/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/969/comments | https://api.github.com/repos/huggingface/datasets/issues/969/events | https://github.com/huggingface/datasets/pull/969 | 754,681,940 | MDExOlB1bGxSZXF1ZXN0NTMwNTM4ODQz | 969 | Add wiki auto dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,852,691,000 | 1,606,925,954,000 | 1,606,925,954,000 | MEMBER | null | This PR adds the WikiAuto sentence simplification dataset
https://github.com/chaojiang06/wiki-auto
This is also a prospective GEM task, hence the README.md | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/969/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/969",
"html_url": "https://github.com/huggingface/datasets/pull/969",
"diff_url": "https://github.com/huggingface/datasets/pull/969.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/969.patch",
"merged_at": 1606925954000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/968 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/968/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/968/comments | https://api.github.com/repos/huggingface/datasets/issues/968/events | https://github.com/huggingface/datasets/pull/968 | 754,659,015 | MDExOlB1bGxSZXF1ZXN0NTMwNTIwMjEz | 968 | ADD Afrikaans NER | {
"login": "yvonnegitau",
"id": 7923902,
"node_id": "MDQ6VXNlcjc5MjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yvonnegitau",
"html_url": "https://github.com/yvonnegitau",
"followers_url": "https://api.github.com/users/yvonnegitau/followers",
"following_url": "https://api.github.com/users/yvonnegitau/following{/other_user}",
"gists_url": "https://api.github.com/users/yvonnegitau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yvonnegitau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yvonnegitau/subscriptions",
"organizations_url": "https://api.github.com/users/yvonnegitau/orgs",
"repos_url": "https://api.github.com/users/yvonnegitau/repos",
"events_url": "https://api.github.com/users/yvonnegitau/events{/privacy}",
"received_events_url": "https://api.github.com/users/yvonnegitau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"One trick if you want to add other datasets: consider running these commands each time you want to add a new dataset\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit checkout -b add-<my_dataset_name>\r\n```"
] | 1,606,850,583,000 | 1,606,902,088,000 | 1,606,902,088,000 | CONTRIBUTOR | null | Afrikaans NER corpus | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/968/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/968",
"html_url": "https://github.com/huggingface/datasets/pull/968",
"diff_url": "https://github.com/huggingface/datasets/pull/968.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/968.patch",
"merged_at": 1606902088000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/967/comments | https://api.github.com/repos/huggingface/datasets/issues/967/events | https://github.com/huggingface/datasets/pull/967 | 754,578,988 | MDExOlB1bGxSZXF1ZXN0NTMwNDU0OTI3 | 967 | Add CS Restaurants dataset | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Oh yeah, for some reason I thought you had to do it after the merge, I'll get on it",
"Weird, now the CI seems to fail because of other datasets (XGLUE, Norwegian_NER)",
"Yea you just need to rebase from master",
"Re-opening a PR without the messed-up rebase"
] | 1,606,843,057,000 | 1,606,931,864,000 | 1,606,931,845,000 | MEMBER | null | This PR adds the Czech restaurants dataset for Czech NLG. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/967/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/967",
"html_url": "https://github.com/huggingface/datasets/pull/967",
"diff_url": "https://github.com/huggingface/datasets/pull/967.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/967.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/966 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/966/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/966/comments | https://api.github.com/repos/huggingface/datasets/issues/966/events | https://github.com/huggingface/datasets/pull/966 | 754,558,686 | MDExOlB1bGxSZXF1ZXN0NTMwNDM4NDE4 | 966 | Add CLINC150 Dataset | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/users/sumanthd17/followers",
"following_url": "https://api.github.com/users/sumanthd17/following{/other_user}",
"gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions",
"organizations_url": "https://api.github.com/users/sumanthd17/orgs",
"repos_url": "https://api.github.com/users/sumanthd17/repos",
"events_url": "https://api.github.com/users/sumanthd17/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumanthd17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Looks like your PR now shows changes in many other files than the ones for CLINC150.\r\nFeel free to create another branch and another PR",
"created new [PR](https://github.com/huggingface/datasets/pull/1016)\r\n\r\nclosing this!"
] | 1,606,841,413,000 | 1,606,934,743,000 | 1,606,934,730,000 | CONTRIBUTOR | null | Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/966/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/966",
"html_url": "https://github.com/huggingface/datasets/pull/966",
"diff_url": "https://github.com/huggingface/datasets/pull/966.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/966.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/965/comments | https://api.github.com/repos/huggingface/datasets/issues/965/events | https://github.com/huggingface/datasets/pull/965 | 754,553,169 | MDExOlB1bGxSZXF1ZXN0NTMwNDMzODQ2 | 965 | Add CLINC150 Dataset | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/users/sumanthd17/followers",
"following_url": "https://api.github.com/users/sumanthd17/following{/other_user}",
"gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions",
"organizations_url": "https://api.github.com/users/sumanthd17/orgs",
"repos_url": "https://api.github.com/users/sumanthd17/repos",
"events_url": "https://api.github.com/users/sumanthd17/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumanthd17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,840,980,000 | 1,606,841,476,000 | 1,606,841,355,000 | CONTRIBUTOR | null | Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/965/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/965",
"html_url": "https://github.com/huggingface/datasets/pull/965",
"diff_url": "https://github.com/huggingface/datasets/pull/965.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/965.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/964/comments | https://api.github.com/repos/huggingface/datasets/issues/964/events | https://github.com/huggingface/datasets/pull/964 | 754,474,660 | MDExOlB1bGxSZXF1ZXN0NTMwMzY4OTAy | 964 | Adding the WebNLG dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
    "This task is part of the GEM suite so it will actually need a more complete dataset card. I'm taking a break for now though and will get back to it before merging :) "
] | 1,606,835,123,000 | 1,606,930,445,000 | 1,606,930,445,000 | MEMBER | null | This PR adds data from the WebNLG challenge, with one configuration per release and challenge iteration.
More information can be found [here](https://webnlg-challenge.loria.fr/)
Unfortunately, the data itself comes from a pretty large number of small XML files, so the dummy data ends up being quite large (8.4 MB even keeping only one example per file). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/964/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/964",
"html_url": "https://github.com/huggingface/datasets/pull/964",
"diff_url": "https://github.com/huggingface/datasets/pull/964.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/964.patch",
"merged_at": 1606930445000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/963/comments | https://api.github.com/repos/huggingface/datasets/issues/963/events | https://github.com/huggingface/datasets/pull/963 | 754,451,234 | MDExOlB1bGxSZXF1ZXN0NTMwMzQ5NjQ4 | 963 | add CODAH dataset | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,833,425,000 | 1,606,916,758,000 | 1,606,915,285,000 | MEMBER | null | Adding CODAH dataset.
More info:
https://github.com/Websail-NU/CODAH | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/963/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/963",
"html_url": "https://github.com/huggingface/datasets/pull/963",
"diff_url": "https://github.com/huggingface/datasets/pull/963.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/963.patch",
"merged_at": 1606915285000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/962/comments | https://api.github.com/repos/huggingface/datasets/issues/962/events | https://github.com/huggingface/datasets/pull/962 | 754,441,428 | MDExOlB1bGxSZXF1ZXN0NTMwMzQxMDA2 | 962 | Add Danish Political Comments Dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,832,912,000 | 1,606,991,515,000 | 1,606,991,514,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/962/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/962",
"html_url": "https://github.com/huggingface/datasets/pull/962",
"diff_url": "https://github.com/huggingface/datasets/pull/962.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/962.patch",
"merged_at": 1606991514000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/961/comments | https://api.github.com/repos/huggingface/datasets/issues/961/events | https://github.com/huggingface/datasets/issues/961 | 754,434,398 | MDU6SXNzdWU3NTQ0MzQzOTg= | 961 | sample multiple datasets | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
    "here I share my dataloader currently for multiple tasks: https://gist.github.com/rabeehkarimimahabadi/39f9444a4fb6f53dcc4fca5d73bf8195 \r\n\r\nI need to train my model in a distributed way with this dataloader, \"MultiTasksataloader\", but currently this does not work in distributed fashion.\r\nTo save on memory I tried to use iterative datasets; could you have a look at this dataloader and tell me if this is indeed the case? I am not sure how to make the datasets iterative so that they are not loaded in memory; then I remove the sampler for the dataloader and shard the data per core. Could you tell me please how I should implement this case in the datasets library? And how do you find my implementation in terms of correctness? thanks \r\n"
] | 1,606,832,402,000 | 1,606,872,764,000 | null | CONTRIBUTOR | null | Hi
I am dealing with multiple datasets and need a dataloader over them with the condition that in each batch the samples come from only one of the datasets. My main question is:
- I need a way to sample the datasets first with some weights, let's say 2x dataset1 and 1x dataset2; could you point me to how I can do this?
sub-questions:
- I want to concat the sampled datasets and define one dataloader on them, and I need a way to make sure the batches come from 1 dataset in each iteration; could you assist me with how to do this?
- I use iterative-type datasets, but I still need a method of shuffling, since not shuffling hurts accuracy. Thanks for the help. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/961/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
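For reference, a minimal PyTorch-only sketch of the weighted, one-dataset-per-batch sampling asked about in issue #961 above. Everything named below (the toy `dataset1`/`dataset2` lists, the `SingleSourceBatchSampler` class, the weights, batch size and batch count) is an illustrative assumption rather than part of the issue thread or of the `datasets` library API:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, Sampler

class SingleSourceBatchSampler(Sampler):
    """Yield batches of indices where every batch is drawn from a single
    source dataset, with the source picked according to `weights`."""

    def __init__(self, datasets, weights, batch_size, num_batches, seed=42):
        self.sizes = [len(d) for d in datasets]
        # starting offset of each source inside the ConcatDataset
        self.offsets = [sum(self.sizes[:i]) for i in range(len(self.sizes))]
        self.weights = torch.tensor(weights, dtype=torch.float)
        self.batch_size = batch_size
        self.num_batches = num_batches
        self.generator = torch.Generator().manual_seed(seed)

    def __iter__(self):
        for _ in range(self.num_batches):
            # pick the source dataset for this batch (e.g. 2x dataset1, 1x dataset2)
            src = torch.multinomial(self.weights, 1, generator=self.generator).item()
            # sample indices inside that source and shift them by its offset
            local = torch.randint(self.sizes[src], (self.batch_size,), generator=self.generator)
            yield (local + self.offsets[src]).tolist()

    def __len__(self):
        return self.num_batches

# toy map-style datasets standing in for the real ones
dataset1 = [{"text": f"d1-{i}"} for i in range(100)]
dataset2 = [{"text": f"d2-{i}"} for i in range(50)]

loader = DataLoader(
    ConcatDataset([dataset1, dataset2]),
    batch_sampler=SingleSourceBatchSampler(
        [dataset1, dataset2], weights=[2.0, 1.0], batch_size=8, num_batches=20
    ),
)
for batch in loader:
    pass  # each batch contains examples from exactly one of the two datasets
```

The same pattern extends to iterable datasets by drawing the per-batch source from the weights and sharding each source per process, at the cost of handling shuffling with a buffer.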
https://api.github.com/repos/huggingface/datasets/issues/960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/960/comments | https://api.github.com/repos/huggingface/datasets/issues/960/events | https://github.com/huggingface/datasets/pull/960 | 754,422,710 | MDExOlB1bGxSZXF1ZXN0NTMwMzI1MzUx | 960 | Add code to automate parts of the dataset card | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,831,491,000 | 1,619,423,761,000 | 1,619,423,761,000 | MEMBER | null | Most parts of the "Dataset Structure" section can be generated automatically. This PR adds some code to do so. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/960/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/960",
"html_url": "https://github.com/huggingface/datasets/pull/960",
"diff_url": "https://github.com/huggingface/datasets/pull/960.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/960.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/959/comments | https://api.github.com/repos/huggingface/datasets/issues/959/events | https://github.com/huggingface/datasets/pull/959 | 754,418,610 | MDExOlB1bGxSZXF1ZXN0NTMwMzIxOTM1 | 959 | Add Tunizi Dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,831,179,000 | 1,607,005,301,000 | 1,607,005,300,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/959/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/959",
"html_url": "https://github.com/huggingface/datasets/pull/959",
"diff_url": "https://github.com/huggingface/datasets/pull/959.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/959.patch",
"merged_at": 1607005300000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/958/comments | https://api.github.com/repos/huggingface/datasets/issues/958/events | https://github.com/huggingface/datasets/pull/958 | 754,404,095 | MDExOlB1bGxSZXF1ZXN0NTMwMzA5ODkz | 958 | dataset(ncslgr): add initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"@lhoestq I added the README files, and now the tests fail... (check commit history, only changed MD file)\r\nThe tests seem a bit unstable",
"the `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine",
"merging since the CI is fixed on master"
] | 1,606,830,077,000 | 1,607,358,939,000 | 1,607,358,939,000 | CONTRIBUTOR | null | clean #789 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/958/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/958",
"html_url": "https://github.com/huggingface/datasets/pull/958",
"diff_url": "https://github.com/huggingface/datasets/pull/958.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/958.patch",
"merged_at": 1607358939000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/957 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/957/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/957/comments | https://api.github.com/repos/huggingface/datasets/issues/957/events | https://github.com/huggingface/datasets/pull/957 | 754,380,073 | MDExOlB1bGxSZXF1ZXN0NTMwMjg5OTk4 | 957 | Isixhosa ner corpus | {
"login": "yvonnegitau",
"id": 7923902,
"node_id": "MDQ6VXNlcjc5MjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yvonnegitau",
"html_url": "https://github.com/yvonnegitau",
"followers_url": "https://api.github.com/users/yvonnegitau/followers",
"following_url": "https://api.github.com/users/yvonnegitau/following{/other_user}",
"gists_url": "https://api.github.com/users/yvonnegitau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yvonnegitau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yvonnegitau/subscriptions",
"organizations_url": "https://api.github.com/users/yvonnegitau/orgs",
"repos_url": "https://api.github.com/users/yvonnegitau/repos",
"events_url": "https://api.github.com/users/yvonnegitau/events{/privacy}",
"received_events_url": "https://api.github.com/users/yvonnegitau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,828,116,000 | 1,606,846,498,000 | 1,606,846,498,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/957/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/957",
"html_url": "https://github.com/huggingface/datasets/pull/957",
"diff_url": "https://github.com/huggingface/datasets/pull/957.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/957.patch",
"merged_at": 1606846498000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/956/comments | https://api.github.com/repos/huggingface/datasets/issues/956/events | https://github.com/huggingface/datasets/pull/956 | 754,368,378 | MDExOlB1bGxSZXF1ZXN0NTMwMjgwMzU1 | 956 | Add Norwegian NER | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Merging this one, good job and thank you @jplu :) "
] | 1,606,827,062,000 | 1,606,899,191,000 | 1,606,846,161,000 | CONTRIBUTOR | null | This PR adds the [Norwegian NER](https://github.com/ljos/navnkjenner) dataset.
I have added the `conllu` package as a test dependency. This is required to properly parse the `.conllu` files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/956/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/956",
"html_url": "https://github.com/huggingface/datasets/pull/956",
"diff_url": "https://github.com/huggingface/datasets/pull/956.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/956.patch",
"merged_at": 1606846161000
} | true |
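As a rough illustration of how the `conllu` package mentioned in this PR is typically used to read `.conllu` files (the file name below is a placeholder, and which column actually carries the NER tags depends on the corpus):

```python
# pip install conllu
from conllu import parse_incr

# placeholder path; in a loading script it would come from the download manager
with open("some_corpus.conllu", encoding="utf-8") as f:
    for sentence in parse_incr(f):                # one TokenList per sentence
        tokens = [tok["form"] for tok in sentence]
        misc = [tok["misc"] for tok in sentence]  # dict or None per token
        # a _generate_examples() would yield {"tokens": tokens, ...} here
```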
https://api.github.com/repos/huggingface/datasets/issues/955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/955/comments | https://api.github.com/repos/huggingface/datasets/issues/955/events | https://github.com/huggingface/datasets/pull/955 | 754,367,291 | MDExOlB1bGxSZXF1ZXN0NTMwMjc5NDQw | 955 | Added PragmEval benchmark | {
"login": "sileod",
"id": 9168444,
"node_id": "MDQ6VXNlcjkxNjg0NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9168444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sileod",
"html_url": "https://github.com/sileod",
"followers_url": "https://api.github.com/users/sileod/followers",
"following_url": "https://api.github.com/users/sileod/following{/other_user}",
"gists_url": "https://api.github.com/users/sileod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sileod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sileod/subscriptions",
"organizations_url": "https://api.github.com/users/sileod/orgs",
"repos_url": "https://api.github.com/users/sileod/repos",
"events_url": "https://api.github.com/users/sileod/events{/privacy}",
"received_events_url": "https://api.github.com/users/sileod/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"> Really cool ! Thanks for adding this one :)\r\n> Good job at adding all those citations for each task\r\n> \r\n> Looks like the dummy data test doesn't pass. Maybe some files are missing in the dummy_data.zip files ?\r\n> The error reports `pragmeval/verifiability/train.tsv` to be missing\r\n> \r\n> Also could you add the tags part of the dataset card (the rest is optional) ?\r\n> See more info here : https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nIn the prior commits I generated dataset_infos and the dummy files myself\r\nNow they are generated with the cli, and the tests now seem to be passing better\r\nI will look into the tag\r\n",
"Looks like you did a good job with dummy data in the first place !\r\nThe downside of automatically generated dummy data is that the files are heavier (here 40KB per file).\r\nIf you could replace the generated dummy files with the one you created yourself it would be awesome, since the one you did yourself are way lighter (around 1KB per file). Using small files make `git clone` run faster so we encourage to use small dummy_data files.",
"could you rebase from master ? it should fix the CI",
"> could you rebase from master ? it should fix the CI\r\n\r\nI think it is due to the file structure of the dummy data that causes test failure. The automatically generated dummy data pass the tests",
"Indeed the error reports that `pragmeval/verifiability/train.tsv` is missing for the verifiability dummy_data.zip file.\r\nTo fix that you should add the missing data files in each dummy_data.zip file.\r\nTo test that your dummy data work you can run\r\n```\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_\r\n```\r\nif some file is missing it should tell you which one",
"Also it looks like you haven't rebased from master yet, even though you did a `rebase` commit. \r\n\r\nrebasing should fix the other CI fails",
"It's ok if we have `RemoteDatasetTest ` errors, they're fixed on master",
"merging since the CI is fixed on master",
"Hey @sileod! Super nice to see you participating ;)\r\n\r\nDid you officially joined the sprint by posting on [the forum thread](https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176) and joining our slack?\r\n\r\nI can't seem to find you there! Should I add you directly with your gmail address?",
"Hi @sileod 👋 "
] | 1,606,826,955,000 | 1,607,078,612,000 | 1,606,988,207,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/955/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/955",
"html_url": "https://github.com/huggingface/datasets/pull/955",
"diff_url": "https://github.com/huggingface/datasets/pull/955.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/955.patch",
"merged_at": 1606988207000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/954/comments | https://api.github.com/repos/huggingface/datasets/issues/954/events | https://github.com/huggingface/datasets/pull/954 | 754,362,012 | MDExOlB1bGxSZXF1ZXN0NTMwMjc1MDY4 | 954 | add prachathai67k | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Test failing for same issues as https://github.com/huggingface/datasets/pull/939\r\nPlease advise.\r\n\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_xglue\r\n===== 7 failed, 1309 passed, 932 skipped, 11 warnings in 166.71s (0:02:46) =====\r\n```",
"Closing and opening a new pull request to solve rebase issues",
"To be continued on https://github.com/huggingface/datasets/pull/982"
] | 1,606,826,455,000 | 1,606,885,931,000 | 1,606,884,232,000 | CONTRIBUTOR | null | `prachathai-67k`: News Article Corpus and Multi-label Text Classificdation from Prachathai.com
The prachathai-67k dataset was scraped from the news site Prachathai.
We filtered out those articles with less than 500 characters of body text, mostly images and cartoons.
It contains 67,889 articles wtih 12 curated tags from August 24, 2004 to November 15, 2018.
The dataset was originally scraped by @lukkiddd and cleaned by @cstorm125.
You can also see preliminary exploration at https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/954/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/954",
"html_url": "https://github.com/huggingface/datasets/pull/954",
"diff_url": "https://github.com/huggingface/datasets/pull/954.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/954.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/953/comments | https://api.github.com/repos/huggingface/datasets/issues/953/events | https://github.com/huggingface/datasets/pull/953 | 754,359,942 | MDExOlB1bGxSZXF1ZXN0NTMwMjczMzg5 | 953 | added health_fact dataset | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Hi @lhoestq,\r\nInitially I tried int(-1) only in place of nan labels and missing values but I kept on getting this error ```pyarrow.lib.ArrowTypeError: Expected bytes, got a 'int' object``` maybe because I'm sending int values (-1) to objects which are string type"
] | 1,606,826,264,000 | 1,606,864,293,000 | 1,606,864,293,000 | CONTRIBUTOR | null | Added dataset Explainable Fact-Checking for Public Health Claims (dataset_id: health_fact) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/953/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/953",
"html_url": "https://github.com/huggingface/datasets/pull/953",
"diff_url": "https://github.com/huggingface/datasets/pull/953.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/953.patch",
"merged_at": 1606864293000
} | true |
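The `pyarrow.lib.ArrowTypeError` mentioned in the comment above is, as the comment itself suggests, the usual symptom of yielding a Python `int` for a column declared as `datasets.Value("string")`. A minimal sketch of the kind of normalization that avoids it; the field names are assumptions, not the real `health_fact` schema:

```python
import datasets

# illustrative schema, not the actual health_fact features
features = datasets.Features(
    {"main_text": datasets.Value("string"), "label": datasets.Value("string")}
)

def as_string(value):
    # a Value("string") column must receive str, so missing/NaN values are
    # written as the string "-1" rather than the int -1
    if value is None or value != value:  # `value != value` is True only for float NaN
        return "-1"
    return str(value)

print(as_string(float("nan")), as_string(None), as_string("true"))  # -1 -1 true
```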
https://api.github.com/repos/huggingface/datasets/issues/952 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/952/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/952/comments | https://api.github.com/repos/huggingface/datasets/issues/952/events | https://github.com/huggingface/datasets/pull/952 | 754,357,270 | MDExOlB1bGxSZXF1ZXN0NTMwMjcxMTQz | 952 | Add orange sum | {
"login": "moussaKam",
"id": 28675016,
"node_id": "MDQ6VXNlcjI4Njc1MDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moussaKam",
"html_url": "https://github.com/moussaKam",
"followers_url": "https://api.github.com/users/moussaKam/followers",
"following_url": "https://api.github.com/users/moussaKam/following{/other_user}",
"gists_url": "https://api.github.com/users/moussaKam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moussaKam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moussaKam/subscriptions",
"organizations_url": "https://api.github.com/users/moussaKam/orgs",
"repos_url": "https://api.github.com/users/moussaKam/repos",
"events_url": "https://api.github.com/users/moussaKam/events{/privacy}",
"received_events_url": "https://api.github.com/users/moussaKam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,826,014,000 | 1,606,837,440,000 | 1,606,837,440,000 | CONTRIBUTOR | null | Add OrangeSum, a French abstractive summarization dataset.
Paper: [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/952/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/952",
"html_url": "https://github.com/huggingface/datasets/pull/952",
"diff_url": "https://github.com/huggingface/datasets/pull/952.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/952.patch",
"merged_at": 1606837440000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/951/comments | https://api.github.com/repos/huggingface/datasets/issues/951/events | https://github.com/huggingface/datasets/pull/951 | 754,349,979 | MDExOlB1bGxSZXF1ZXN0NTMwMjY1MTY0 | 951 | Prachathai67k | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Wrongly branching from existing branch of wisesight_sentiment. Closing and opening another one specifically for prachathai67k"
] | 1,606,825,312,000 | 1,606,825,793,000 | 1,606,825,706,000 | CONTRIBUTOR | null | Add `prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com
The `prachathai-67k` dataset was scraped from the news site [Prachathai](prachathai.com). We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles with 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by [@lukkiddd](https://github.com/lukkiddd) and cleaned by [@cstorm125](https://github.com/cstorm125). Download the dataset [here](https://www.dropbox.com/s/fsxepdka4l2pr45/prachathai-67k.zip?dl=1). You can also see preliminary exploration in [exploration.ipynb](https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb).
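A rough usage sketch (the dataset id `prachathai67k` and the per-tag 0/1 fields below are assumptions based on this description, not the final schema):
```python
from datasets import load_dataset

# Sketch under assumptions: each of the 12 curated tags becomes a binary field
# alongside the article text, so the task is multi-label text classification.
prachathai = load_dataset("prachathai67k")  # assumed dataset id
example = prachathai["train"][0]
print(example["body_text"][:100], example["politics"], example["human_rights"])  # assumed field names
```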
This dataset is a part of [pyThaiNLP](https://github.com/PyThaiNLP/) Thai text [classification-benchmarks](https://github.com/PyThaiNLP/classification-benchmarks). For the benchmark, we selected the following tags with substantial volume that resemble **classifying types of articles**:
* `การเมือง` - politics
* `สิทธิมนุษยชน` - human_rights
* `คุณภาพชีวิต` - quality_of_life
* `ต่างประเทศ` - international
* `สังคม` - social
* `สิ่งแวดล้อม` - environment
* `เศรษฐกิจ` - economics
* `วัฒนธรรม` - culture
* `แรงงาน` - labor
* `ความมั่นคง` - national_security
* `ไอซีที` - ict
* `การศึกษา` - education | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/951/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/951",
"html_url": "https://github.com/huggingface/datasets/pull/951",
"diff_url": "https://github.com/huggingface/datasets/pull/951.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/951.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/950/comments | https://api.github.com/repos/huggingface/datasets/issues/950/events | https://github.com/huggingface/datasets/pull/950 | 754,318,686 | MDExOlB1bGxSZXF1ZXN0NTMwMjM4OTQx | 950 | Support .xz file format | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,822,488,000 | 1,606,829,958,000 | 1,606,829,958,000 | MEMBER | null | Add support to extract/uncompress files in .xz format. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/950/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/950",
"html_url": "https://github.com/huggingface/datasets/pull/950",
"diff_url": "https://github.com/huggingface/datasets/pull/950.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/950.patch",
"merged_at": 1606829958000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/949/comments | https://api.github.com/repos/huggingface/datasets/issues/949/events | https://github.com/huggingface/datasets/pull/949 | 754,317,777 | MDExOlB1bGxSZXF1ZXN0NTMwMjM4MTky | 949 | Add GermaNER Dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"@lhoestq added. "
] | 1,606,822,411,000 | 1,607,004,401,000 | 1,607,004,400,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/949/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/949",
"html_url": "https://github.com/huggingface/datasets/pull/949",
"diff_url": "https://github.com/huggingface/datasets/pull/949.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/949.patch",
"merged_at": 1607004400000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/948/comments | https://api.github.com/repos/huggingface/datasets/issues/948/events | https://github.com/huggingface/datasets/pull/948 | 754,306,260 | MDExOlB1bGxSZXF1ZXN0NTMwMjI4NjQz | 948 | docs(ADD_NEW_DATASET): correct indentation for script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,821,458,000 | 1,606,821,918,000 | 1,606,821,918,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/948/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/948",
"html_url": "https://github.com/huggingface/datasets/pull/948",
"diff_url": "https://github.com/huggingface/datasets/pull/948.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/948.patch",
"merged_at": 1606821918000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/947/comments | https://api.github.com/repos/huggingface/datasets/issues/947/events | https://github.com/huggingface/datasets/pull/947 | 754,286,658 | MDExOlB1bGxSZXF1ZXN0NTMwMjEyMjc3 | 947 | Add europeana newspapers | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,819,938,000 | 1,606,902,155,000 | 1,606,902,129,000 | CONTRIBUTOR | null | This PR adds the [Europeana newspapers](https://github.com/EuropeanaNewspapers/ner-corpora) dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/947/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/947",
"html_url": "https://github.com/huggingface/datasets/pull/947",
"diff_url": "https://github.com/huggingface/datasets/pull/947.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/947.patch",
"merged_at": 1606902129000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/946 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/946/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/946/comments | https://api.github.com/repos/huggingface/datasets/issues/946/events | https://github.com/huggingface/datasets/pull/946 | 754,278,632 | MDExOlB1bGxSZXF1ZXN0NTMwMjA1Nzgw | 946 | add PEC dataset | {
"login": "zhongpeixiang",
"id": 11826803,
"node_id": "MDQ6VXNlcjExODI2ODAz",
"avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhongpeixiang",
"html_url": "https://github.com/zhongpeixiang",
"followers_url": "https://api.github.com/users/zhongpeixiang/followers",
"following_url": "https://api.github.com/users/zhongpeixiang/following{/other_user}",
"gists_url": "https://api.github.com/users/zhongpeixiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhongpeixiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhongpeixiang/subscriptions",
"organizations_url": "https://api.github.com/users/zhongpeixiang/orgs",
"repos_url": "https://api.github.com/users/zhongpeixiang/repos",
"events_url": "https://api.github.com/users/zhongpeixiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhongpeixiang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"The checks failed again even if I didn't make any changes.",
"you just need to rebase from master to fix the CI :)",
"Sorry for the mess, I'm confused by the rebase and thus created a new branch."
] | 1,606,819,301,000 | 1,606,963,634,000 | 1,606,963,634,000 | CONTRIBUTOR | null | A persona-based empathetic conversation dataset published at EMNLP 2020. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/946/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/946",
"html_url": "https://github.com/huggingface/datasets/pull/946",
"diff_url": "https://github.com/huggingface/datasets/pull/946.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/946.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/945/comments | https://api.github.com/repos/huggingface/datasets/issues/945/events | https://github.com/huggingface/datasets/pull/945 | 754,273,920 | MDExOlB1bGxSZXF1ZXN0NTMwMjAyMDM1 | 945 | Adding Babi dataset - English version | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Replaced by #1126"
] | 1,606,818,936,000 | 1,607,096,585,000 | 1,607,096,574,000 | MEMBER | null | Adding the English version of bAbI.
Samples are taken from ParlAI for consistency with the main users at the moment. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/945/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/945",
"html_url": "https://github.com/huggingface/datasets/pull/945",
"diff_url": "https://github.com/huggingface/datasets/pull/945.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/945.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/944/comments | https://api.github.com/repos/huggingface/datasets/issues/944/events | https://github.com/huggingface/datasets/pull/944 | 754,228,947 | MDExOlB1bGxSZXF1ZXN0NTMwMTY0NTU5 | 944 | Add German Legal Entity Recognition Dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"thanks ! merging this one"
] | 1,606,815,502,000 | 1,607,000,816,000 | 1,607,000,815,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/944/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/944",
"html_url": "https://github.com/huggingface/datasets/pull/944",
"diff_url": "https://github.com/huggingface/datasets/pull/944.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/944.patch",
"merged_at": 1607000814000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/943/comments | https://api.github.com/repos/huggingface/datasets/issues/943/events | https://github.com/huggingface/datasets/pull/943 | 754,192,491 | MDExOlB1bGxSZXF1ZXN0NTMwMTM2ODM3 | 943 | The FLUE Benchmark | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,813,250,000 | 1,606,836,278,000 | 1,606,836,270,000 | CONTRIBUTOR | null | This PR adds the [FLUE](https://github.com/getalp/Flaubert/tree/master/flue) benchmark which is a set of different datasets to evaluate models for French content.
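A loading sketch (the configuration name below is only a placeholder; the actual config names are defined in this PR):
```python
from datasets import load_dataset

# Sketch only: FLUE bundles several French evaluation tasks, each exposed as its own config.
flue_cls = load_dataset("flue", "CLS")  # "CLS" is an assumed example config name
print(flue_cls)
```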
Two datasets are missing: the French Treebank, which we can use only for research purposes and are not allowed to distribute, and the Word Sense Disambiguation for Nouns, which will be added later. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/943/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/943",
"html_url": "https://github.com/huggingface/datasets/pull/943",
"diff_url": "https://github.com/huggingface/datasets/pull/943.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/943.patch",
"merged_at": 1606836270000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/942/comments | https://api.github.com/repos/huggingface/datasets/issues/942/events | https://github.com/huggingface/datasets/issues/942 | 754,162,318 | MDU6SXNzdWU3NTQxNjIzMTg= | 942 | D | {
"login": "CryptoMiKKi",
"id": 74238514,
"node_id": "MDQ6VXNlcjc0MjM4NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/74238514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CryptoMiKKi",
"html_url": "https://github.com/CryptoMiKKi",
"followers_url": "https://api.github.com/users/CryptoMiKKi/followers",
"following_url": "https://api.github.com/users/CryptoMiKKi/following{/other_user}",
"gists_url": "https://api.github.com/users/CryptoMiKKi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CryptoMiKKi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CryptoMiKKi/subscriptions",
"organizations_url": "https://api.github.com/users/CryptoMiKKi/orgs",
"repos_url": "https://api.github.com/users/CryptoMiKKi/repos",
"events_url": "https://api.github.com/users/CryptoMiKKi/events{/privacy}",
"received_events_url": "https://api.github.com/users/CryptoMiKKi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,810,630,000 | 1,607,013,773,000 | 1,607,013,773,000 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/942/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/941/comments | https://api.github.com/repos/huggingface/datasets/issues/941/events | https://github.com/huggingface/datasets/pull/941 | 754,141,321 | MDExOlB1bGxSZXF1ZXN0NTMwMDk0MTI2 | 941 | Add People's Daily NER dataset | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"> LGTM thanks :)\n> \n> \n> \n> Before we merge, could you add a dataset card ? see here for more info: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\n> \n> \n> \n> Note that only the tags at the top of the dataset card are mandatory, if you feel like it's going to take too much time writing the rest to fill it all you can just skip the paragraphs\n\nNope. I don't think there is a citation. Also, can I do the dataset card later (maybe in bulk)?",
"We're doing one PR = one dataset to keep track of things. Feel free to add the tags later in this PR if you want to.\r\nAlso only the tags are required now, because we don't want people spending too much time on the cards",
"added @lhoestq ",
"Merging since the CI is fixed on master"
] | 1,606,808,933,000 | 1,606,934,563,000 | 1,606,934,561,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/941/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/941",
"html_url": "https://github.com/huggingface/datasets/pull/941",
"diff_url": "https://github.com/huggingface/datasets/pull/941.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/941.patch",
"merged_at": 1606934561000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/940/comments | https://api.github.com/repos/huggingface/datasets/issues/940/events | https://github.com/huggingface/datasets/pull/940 | 754,010,753 | MDExOlB1bGxSZXF1ZXN0NTI5OTc3OTQ2 | 940 | Add MSRA NER dataset | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"LGTM, don't forget the tags ;)"
] | 1,606,798,931,000 | 1,607,074,180,000 | 1,606,807,553,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/940/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/940",
"html_url": "https://github.com/huggingface/datasets/pull/940",
"diff_url": "https://github.com/huggingface/datasets/pull/940.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/940.patch",
"merged_at": 1606807553000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/939/comments | https://api.github.com/repos/huggingface/datasets/issues/939/events | https://github.com/huggingface/datasets/pull/939 | 753,965,405 | MDExOlB1bGxSZXF1ZXN0NTI5OTQwOTYz | 939 | add wisesight_sentiment | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"@lhoestq Thanks, Quentin. Removed the .ipynb_checkpoints and edited the README.md. The tests are failing because of other dataets. I'm figuring out why since the commits only have changes on `wisesight_sentiment`\r\n\r\n```\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_xglue\r\n```",
"@cstorm125 I really like the dataset and dataset card but there seems to have been a rebase issue at some point since it's now changing 140 files :D \r\n\r\nCould you rebase from master?",
"I think it might be faster to close and reopen.",
"To be continued on: https://github.com/huggingface/datasets/pull/981"
] | 1,606,791,999,000 | 1,606,884,758,000 | 1,606,883,751,000 | CONTRIBUTOR | null | Add `wisesight_sentiment`: social media messages in Thai with sentiment labels (positive, neutral, negative, question)
Model Card:
---
YAML tags:
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- th
licenses:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for wisesight_sentiment
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/PyThaiNLP/wisesight-sentiment
- **Repository:** https://github.com/PyThaiNLP/wisesight-sentiment
- **Paper:**
- **Leaderboard:** https://www.kaggle.com/c/wisesight-sentiment/
- **Point of Contact:** https://github.com/PyThaiNLP/
### Dataset Summary
Wisesight Sentiment Corpus: Social media messages in Thai language with sentiment label (positive, neutral, negative, question)
- Released to public domain under Creative Commons Zero v1.0 Universal license.
- Labels: {"pos": 0, "neu": 1, "neg": 2, "q": 3}
- Size: 26,737 messages
- Language: Central Thai
- Style: Informal and conversational. With some news headlines and advertisement.
- Time period: Around 2016 to early 2019. With small amount from other period.
- Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, car, hotels), with some current affairs.
- Privacy:
- Only messages that were made available to the public on the internet (websites, blogs, social network sites).
- For Facebook, this means the public comments (that everyone can see) made on a public page.
- Private/protected messages and messages in groups, chat, and inbox are not included.
- Alternations and modifications:
- Keep in mind that this corpus does not statistically represent anything in the language register.
- Large amount of messages are not in their original form. Personal data are removed or masked.
- Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact.
- (Mis)spellings are kept intact.
- Messages longer than 2,000 characters are removed.
- Long non-Thai messages are removed. Duplicated message (exact match) are removed.
- More characteristics of the data can be explored in [this notebook](https://github.com/PyThaiNLP/wisesight-sentiment/blob/master/exploration.ipynb)
### Supported Tasks and Leaderboards
Sentiment analysis / [Kaggle Leaderboard](https://www.kaggle.com/c/wisesight-sentiment/)
### Languages
Thai
## Dataset Structure
### Data Instances
```
{'category': 'pos', 'texts': 'น่าสนนน'}
{'category': 'neu', 'texts': 'ครับ #phithanbkk'}
{'category': 'neg', 'texts': 'ซื้อแต่ผ้าอนามัยแบบเย็นมาค่ะ แบบว่าอีห่ากูนอนไม่ได้'}
{'category': 'q', 'texts': 'มีแอลกอฮอลมั้ยคะ'}
```
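A minimal loading sketch (assuming the dataset id `wisesight_sentiment` once this PR is merged, and that `category` is exposed as a class label):
```python
from datasets import load_dataset

# Sketch only: load the corpus and inspect one message plus the label names.
wisesight = load_dataset("wisesight_sentiment")
print(wisesight["train"][0])  # e.g. {'texts': '...', 'category': 0}
print(wisesight["train"].features["category"].names)  # assumed order: ['pos', 'neu', 'neg', 'q']
```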
### Data Fields
- `texts`: texts
- `category`: sentiment of texts ranging from `pos` (positive; 0), `neu` (neutral; 1), `neg` (negative; 2) and `q` (question; 3)
### Data Splits
| | train | valid | test |
|-----------|-------|-------|-------|
| # samples | 21628 | 2404 | 2671 |
| # neu | 11795 | 1291 | 1453 |
| # neg | 5491 | 637 | 683 |
| # pos | 3866 | 434 | 478 |
| # q | 476 | 42 | 57 |
| avg words | 27.21 | 27.18 | 27.12 |
| avg chars | 89.82 | 89.50 | 90.36 |
## Dataset Creation
### Curation Rationale
Originally, the dataset was conceived for the [In-class Kaggle Competition](https://www.kaggle.com/c/wisesight-sentiment/) at Chulalongkorn university by [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University). It has since become one of the benchmarks for sentiment analysis in Thai.
### Source Data
#### Initial Data Collection and Normalization
- Style: Informal and conversational. With some news headlines and advertisement.
- Time period: Around 2016 to early 2019. With small amount from other period.
- Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, car, hotels), with some current affairs.
- Privacy:
- Only messages that were made available to the public on the internet (websites, blogs, social network sites).
- For Facebook, this means the public comments (that everyone can see) made on a public page.
- Private/protected messages and messages in groups, chat, and inbox are not included.
- Usernames and non-public figure names are removed
- Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222)
- If you see any personal data still remain in the set, please tell us - so we can remove them.
- Alternations and modifications:
- Keep in mind that this corpus does not statistically represent anything in the language register.
- Large amount of messages are not in their original form. Personal data are removed or masked.
- Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact.
- (Mis)spellings are kept intact.
- Messages longer than 2,000 characters are removed.
- Long non-Thai messages are removed. Duplicated message (exact match) are removed.
#### Who are the source language producers?
Social media users in Thailand
### Annotations
#### Annotation process
- Sentiment values are assigned by human annotators.
- A human annotator put his/her best effort to assign just one label, out of four, to a message.
- Agreement, enjoyment, and satisfaction are positive. Disagreement, sadness, and disappointment are negative.
- Showing interest in a topic or in a product is counted as positive. In this sense, a question about a particular product could have a positive sentiment value, if it shows interest in the product.
- Saying that another product or service is better is counted as negative.
- General information or news titles tend to be counted as neutral.
#### Who are the annotators?
Outsourced annotators hired by [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/)
### Personal and Sensitive Information
- We try to exclude any known personally identifiable information from this data set.
- Usernames and non-public figure names are removed
- Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222)
- If you see any personal data still remain in the set, please tell us - so we can remove them.
## Considerations for Using the Data
### Social Impact of Dataset
- `wisesight_sentiment` is the first and one of the few open datasets for sentiment analysis of social media data in Thai
- There is a risk that some personal information escapes the anonymization process
### Discussion of Biases
- A message can be ambiguous. When possible, the judgement will be based solely on the text itself.
- In some situation, like when the context is missing, the annotator may have to rely on his/her own world knowledge and just guess.
- In some cases, the human annotator may have access to the message's context, like an image. This additional information is not included as part of this corpus.
### Other Known Limitations
- The labels are imbalanced; over half of the texts are `neu` (neutral) whereas there are very few `q` (question).
- Misspellings in social media texts make word tokenization process for Thai difficult, thus impacting the model performance
## Additional Information
### Dataset Curators
Thanks [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp) community, [Kitsuchart Pasupa](http://www.it.kmitl.ac.th/~kitsuchart/) (Faculty of Information Technology, King Mongkut's Institute of Technology Ladkrabang), and [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University) for advice. The original Kaggle competition, using the first version of this corpus, can be found at https://www.kaggle.com/c/wisesight-sentiment/
### Licensing Information
- If applicable, copyright of each message content belongs to the original poster.
- **Annotation data (labels) are released to public domain.**
- [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/) helps facilitate the annotation, but does not necessarily agree upon the labels made by the human annotators. This annotation is for research purpose and does not reflect the professional work that Wisesight has been done for its customers.
- The human annotator does not necessarily agree or disagree with the message. Likewise, the label he/she made to the message does not necessarily reflect his/her personal view towards the message.
### Citation Information
Please cite the following if you make use of the dataset:
Arthit Suriyawongkul, Ekapol Chuangsuwanich, Pattarawat Chormai, and Charin Polpanumas. 2019. **PyThaiNLP/wisesight-sentiment: First release.** September.
BibTeX:
```
@software{bact_2019_3457447,
author = {Suriyawongkul, Arthit and
Chuangsuwanich, Ekapol and
Chormai, Pattarawat and
Polpanumas, Charin},
title = {PyThaiNLP/wisesight-sentiment: First release},
month = sep,
year = 2019,
publisher = {Zenodo},
version = {v1.0},
doi = {10.5281/zenodo.3457447},
url = {https://doi.org/10.5281/zenodo.3457447}
}
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/939/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/939",
"html_url": "https://github.com/huggingface/datasets/pull/939",
"diff_url": "https://github.com/huggingface/datasets/pull/939.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/939.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/938/comments | https://api.github.com/repos/huggingface/datasets/issues/938/events | https://github.com/huggingface/datasets/pull/938 | 753,940,979 | MDExOlB1bGxSZXF1ZXN0NTI5OTIxNzU5 | 938 | V-1.0.0 of isizulu_ner_corpus | {
"login": "yvonnegitau",
"id": 7923902,
"node_id": "MDQ6VXNlcjc5MjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yvonnegitau",
"html_url": "https://github.com/yvonnegitau",
"followers_url": "https://api.github.com/users/yvonnegitau/followers",
"following_url": "https://api.github.com/users/yvonnegitau/following{/other_user}",
"gists_url": "https://api.github.com/users/yvonnegitau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yvonnegitau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yvonnegitau/subscriptions",
"organizations_url": "https://api.github.com/users/yvonnegitau/orgs",
"repos_url": "https://api.github.com/users/yvonnegitau/repos",
"events_url": "https://api.github.com/users/yvonnegitau/events{/privacy}",
"received_events_url": "https://api.github.com/users/yvonnegitau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"closing since it's been added in #957 "
] | 1,606,788,272,000 | 1,606,865,676,000 | 1,606,865,676,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/938/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/938/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/938",
"html_url": "https://github.com/huggingface/datasets/pull/938",
"diff_url": "https://github.com/huggingface/datasets/pull/938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/938.patch",
"merged_at": null
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/937/comments | https://api.github.com/repos/huggingface/datasets/issues/937/events | https://github.com/huggingface/datasets/issues/937 | 753,921,078 | MDU6SXNzdWU3NTM5MjEwNzg= | 937 | Local machine/cluster Beam Datasets example/tutorial | {
"login": "shangw-nvidia",
"id": 66387198,
"node_id": "MDQ6VXNlcjY2Mzg3MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shangw-nvidia",
"html_url": "https://github.com/shangw-nvidia",
"followers_url": "https://api.github.com/users/shangw-nvidia/followers",
"following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}",
"gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions",
"organizations_url": "https://api.github.com/users/shangw-nvidia/orgs",
"repos_url": "https://api.github.com/users/shangw-nvidia/repos",
"events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}",
"received_events_url": "https://api.github.com/users/shangw-nvidia/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"I tried to make it run once on the SparkRunner but it seems that this runner has some issues when it is run locally.\r\nFrom my experience the DirectRunner is fine though, even if it's clearly not memory efficient.\r\n\r\nIt would be awesome though to make it work locally on a SparkRunner !\r\nDid you manage to make your processing work ?"
] | 1,606,785,103,000 | 1,608,731,696,000 | null | NONE | null | Hi,
I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has a non-GCP or non-Dataflow version of the example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner; however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get either runner to correctly produce the desired output.
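For concreteness, this is roughly the kind of call I'm trying to get working locally (a minimal sketch; the dataset/config is just an illustration, and I'm assuming `load_dataset` accepts the `beam_runner` argument described in the docs):
```python
from datasets import load_dataset

# Minimal sketch: run the Beam preprocessing locally instead of on Dataflow.
# "wikipedia"/"20200501.en" is only an illustrative Beam-based dataset/config.
wiki = load_dataset(
    "wikipedia",
    "20200501.en",
    beam_runner="DirectRunner",  # or "SparkRunner" to target a local Spark setup
)
```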
Thanks!
Shang | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/937/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/936/comments | https://api.github.com/repos/huggingface/datasets/issues/936/events | https://github.com/huggingface/datasets/pull/936 | 753,915,603 | MDExOlB1bGxSZXF1ZXN0NTI5OTAxODMw | 936 | Added HANS parses and categories | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,784,296,000 | 1,606,828,781,000 | 1,606,828,780,000 | MEMBER | null | This pull request adds the missing HANS information: the sentence parses, as well as the heuristic category. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/936/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/936/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/936",
"html_url": "https://github.com/huggingface/datasets/pull/936",
"diff_url": "https://github.com/huggingface/datasets/pull/936.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/936.patch",
"merged_at": 1606828780000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/935/comments | https://api.github.com/repos/huggingface/datasets/issues/935/events | https://github.com/huggingface/datasets/pull/935 | 753,863,055 | MDExOlB1bGxSZXF1ZXN0NTI5ODU5MjM4 | 935 | add PIB dataset | {
"login": "vasudevgupta7",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasudevgupta7",
"html_url": "https://github.com/vasudevgupta7",
"followers_url": "https://api.github.com/users/vasudevgupta7/followers",
"following_url": "https://api.github.com/users/vasudevgupta7/following{/other_user}",
"gists_url": "https://api.github.com/users/vasudevgupta7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasudevgupta7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasudevgupta7/subscriptions",
"organizations_url": "https://api.github.com/users/vasudevgupta7/orgs",
"repos_url": "https://api.github.com/users/vasudevgupta7/repos",
"events_url": "https://api.github.com/users/vasudevgupta7/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasudevgupta7/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Hi, \r\n\r\nI am unable to get success in these tests. Can someone help me by pointing out possible errors?\r\n\r\nThanks",
"Hi ! you can read the tests by logging in to circleci.\r\n\r\nAnyway for information here are the errors : \r\n```\r\ndatasets/pib/pib.py:19:1: F401 'csv' imported but unused\r\ndatasets/pib/pib.py:20:1: F401 'json' imported but unused\r\ndatasets/pib/pib.py:36:84: W291 trailing whitespace\r\n```\r\nand \r\n```\r\nFAILED tests/test_file_encoding.py::TestFileEncoding::test_no_encoding_on_file_open\r\n```\r\n\r\nTo fix the `test_no_encoding_on_file_open` you just have to specify an encoding while opening a text file. For example `encoding=\"utf-8\"`\r\n",
"All suggested changes are done.",
"Nice ! can you re-generate the dataset_infos.json file to take into account the feature type change ?\r\n```\r\ndatasets-cli test ./datasets/pib --save_infos --all_configs --ignore_verifications\r\n```\r\nAnd also format your code ?\r\n```\r\nmake style\r\n```"
] | 1,606,776,943,000 | 1,606,864,631,000 | 1,606,864,631,000 | CONTRIBUTOR | null | This pull request will add PIB dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/935/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/935",
"html_url": "https://github.com/huggingface/datasets/pull/935",
"diff_url": "https://github.com/huggingface/datasets/pull/935.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/935.patch",
"merged_at": 1606864631000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/934/comments | https://api.github.com/repos/huggingface/datasets/issues/934/events | https://github.com/huggingface/datasets/pull/934 | 753,860,095 | MDExOlB1bGxSZXF1ZXN0NTI5ODU2ODY4 | 934 | small updates to the "add new dataset" guide | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"cc @yjernite @lhoestq @thomwolf "
] | 1,606,776,550,000 | 1,606,798,582,000 | 1,606,778,040,000 | MEMBER | null | small updates (corrections/typos) to the "add new dataset" guide | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/934/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/934",
"html_url": "https://github.com/huggingface/datasets/pull/934",
"diff_url": "https://github.com/huggingface/datasets/pull/934.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/934.patch",
"merged_at": 1606778040000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/933/comments | https://api.github.com/repos/huggingface/datasets/issues/933/events | https://github.com/huggingface/datasets/pull/933 | 753,854,272 | MDExOlB1bGxSZXF1ZXN0NTI5ODUyMTI1 | 933 | Add NumerSense | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,775,793,000 | 1,606,854,350,000 | 1,606,852,316,000 | CONTRIBUTOR | null | Adds the NumerSense dataset
- Webpage/leaderboard: https://inklab.usc.edu/NumerSense/
- Paper: https://arxiv.org/abs/2005.00683
- Description: NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145 masked-word-prediction probes. Basically, it's a benchmark to see whether your MLM can figure out the right number in a fill-in-the-blank task based on commonsense knowledge (a bird has **two** legs) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/933/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/933",
"html_url": "https://github.com/huggingface/datasets/pull/933",
"diff_url": "https://github.com/huggingface/datasets/pull/933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/933.patch",
"merged_at": 1606852316000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/932/comments | https://api.github.com/repos/huggingface/datasets/issues/932/events | https://github.com/huggingface/datasets/pull/932 | 753,840,300 | MDExOlB1bGxSZXF1ZXN0NTI5ODQwNjQ3 | 932 | adding metooma dataset | {
"login": "akash418",
"id": 23264033,
"node_id": "MDQ6VXNlcjIzMjY0MDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/23264033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akash418",
"html_url": "https://github.com/akash418",
"followers_url": "https://api.github.com/users/akash418/followers",
"following_url": "https://api.github.com/users/akash418/following{/other_user}",
"gists_url": "https://api.github.com/users/akash418/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akash418/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akash418/subscriptions",
"organizations_url": "https://api.github.com/users/akash418/orgs",
"repos_url": "https://api.github.com/users/akash418/repos",
"events_url": "https://api.github.com/users/akash418/events{/privacy}",
"received_events_url": "https://api.github.com/users/akash418/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. Please refer to the accompanying paper for detailed information regarding annotation, collection, and guidelines. \r\n\r\nPaper: https://ojs.aaai.org/index.php/ICWSM/article/view/7292\r\nDataset Link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU\r\n\r\nYAML tags:\r\nannotations_creators:\r\n- expert-generated\r\nlanguage_creators:\r\n- found\r\nlanguages:\r\n- en\r\nmultilinguality:\r\n- monolingual\r\nsize_categories:\r\n- 1K<n<10K\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- text-classification\r\n- text-retrieval\r\ntask_ids:\r\n- multi-class-classification\r\n- multi-label-classification\r\n\r\n# Dataset Card for #MeTooMA dataset\r\n\r\n## Table of Contents\r\n- [Dataset Description](#dataset-description)\r\n - [Dataset Summary](#dataset-summary)\r\n - [Supported Tasks](#supported-tasks-and-leaderboards)\r\n - [Languages](#languages)\r\n- [Dataset Structure](#dataset-structure)\r\n - [Data Instances](#data-instances)\r\n - [Data Fields](#data-instances)\r\n - [Data Splits](#data-instances)\r\n- [Dataset Creation](#dataset-creation)\r\n - [Curation Rationale](#curation-rationale)\r\n - [Source Data](#source-data)\r\n - [Annotations](#annotations)\r\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\r\n- [Considerations for Using the Data](#considerations-for-using-the-data)\r\n - [Social Impact of Dataset](#social-impact-of-dataset)\r\n - [Discussion of Biases](#discussion-of-biases)\r\n - [Other Known Limitations](#other-known-limitations)\r\n- [Additional Information](#additional-information)\r\n - [Dataset Curators](#dataset-curators)\r\n - [Licensing Information](#licensing-information)\r\n - [Citation Information](#citation-information)\r\n\r\n## Dataset Description\r\n\r\n- **Homepage:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU\r\n- **Paper:** https://ojs.aaai.org//index.php/ICWSM/article/view/7292\r\n- **Point of Contact:** https://github.com/midas-research/MeTooMA\r\n\r\n\r\n### Dataset Summary\r\n\r\n- The dataset consists of tweets belonging to #MeToo movement on Twitter, labelled into different categories.\r\n- This dataset includes more data points and has more labels than any of the previous datasets in that contain social media\r\nposts about sexual abuse discloures. 
Please refer to the Related Datasets of the publication for a detailed information about this.\r\n- Due to Twitters development policies, the authors provide only the tweet IDs and corresponding labels,\r\nother data can be fetched via Twitter API.\r\n- The data has been labelled by experts, with the majority taken into the account for deciding the final label.\r\n- The authors provide these labels for each of the tweets.\r\n - Relevance\r\n - Directed Hate\r\n - Generalized Hate\r\n - Sarcasm\r\n - Allegation\r\n - Justification\r\n - Refutation\r\n - Support\r\n - Oppose\r\n- The definitions for each task/label is in the main publication.\r\n- Please refer to the accompanying paper https://aaai.org/ojs/index.php/ICWSM/article/view/7292 for statistical analysis on the textual data\r\nextracted from this dataset.\r\n- The language of all the tweets in this dataset is English\r\n- Time period: October 2018 - December 2018\r\n- Suggested Use Cases of this dataset:\r\n - Evaluating usage of linguistic acts such as: hate-spech and sarcasm in the incontext of public sexual abuse discloures.\r\n - Extracting actionable insights and virtual dynamics of gender roles in sexual abuse revelations.\r\n - Identifying how influential people were potrayed on public platform in the\r\n events of mass social movements.\r\n - Polarization analysis based on graph simulations of social nodes of users involved\r\n in the #MeToo movement.\r\n\r\n\r\n### Supported Tasks and Leaderboards\r\n\r\nMulti Label and Multi-Class Classification\r\n\r\n### Languages\r\n\r\nEnglish\r\n\r\n## Dataset Structure\r\n- The dataset is structured into CSV format with TweetID and accompanying labels.\r\n- Train and Test sets are split into respective files.\r\n\r\n### Data Instances\r\n\r\nTweet ID and the appropriatelabels\r\n\r\n### Data Fields\r\n\r\nTweet ID and appropriate labels (binary label applicable for a data point) and multiple labels for each Tweet ID\r\n\r\n### Data Splits\r\n\r\n- Train: 7979\r\n- Test: 1996\r\n\r\n## Dataset Creation\r\n\r\n### Curation Rationale\r\n\r\n- Twitter was the major source of all the public discloures of sexual abuse incidents during the #MeToo movement.\r\n- People expressed their opinions over issues which were previously missing from the social media space.\r\n- This provides an option to study the linguistic behaviours of social media users in an informal setting,\r\ntherefore the authors decide to curate this annotated dataset.\r\n- The authors expect this dataset would be of great interest and use to both computational and socio-linguists.\r\n- For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. 
For socio-linguists, it provides an opportunity to explore how a movement manifests in social media.\r\n\r\n\r\n### Source Data\r\n- Source of all the data points in this dataset is Twitter.\r\n\r\n#### Initial Data Collection and Normalization\r\n\r\n- All the tweets are mined from Twitter with initial search paramters identified using keywords from the #MeToo movement.\r\n- Redundant keywords were removed based on manual inspection.\r\n- Public streaming APIs of Twitter were used for querying with the selected keywords.\r\n- Based on text de-duplication and cosine similarity score, the set of tweets were pruned.\r\n- Non english tweets were removed.\r\n- The final set was labelled by experts with the majority label taken into the account for deciding the final label.\r\n- Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292\r\n\r\n#### Who are the source language producers?\r\n\r\nPlease refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292\r\n\r\n### Annotations\r\n\r\n#### Annotation process\r\n\r\n- The authors chose against crowd sourcing for labeling this dataset due to its highly sensitive nature.\r\n- The annotators are domain experts having degress in advanced clinical psychology and gender studies.\r\n- They were provided a guidelines document with instructions about each task and its definitions, labels and examples.\r\n- They studied the document, worked a few examples to get used to this annotation task.\r\n- They also provided feedback for improving the class definitions.\r\n- The annotation process is not mutually exclusive, implying that presence of one label does not mean the\r\nabsence of the other one.\r\n\r\n\r\n#### Who are the annotators?\r\n\r\n- The annotators are domain experts having a degree in clinical psychology and gender studies.\r\n- Please refer to the accompnaying paper for a detailed annotation process.\r\n\r\n### Personal and Sensitive Information\r\n\r\n- Considering Twitters policy for distribution of data, only Tweet ID and applicable labels are shared for the public use.\r\n- It is highly encouraged to use this dataset for scientific purposes only.\r\n- This dataset collection completely follows the Twitter mandated guidelines for distribution and usage.\r\n\r\n## Considerations for Using the Data\r\n\r\n### Social Impact of Dataset\r\n\r\n- The authors of this dataset do not intend to conduct a population centric analysis of #MeToo movement on Twitter.\r\n- The authors acknowledge that findings from this dataset cannot be used as-is for any direct social intervention, these\r\nshould be used to assist already existing human intervention tools and therapies.\r\n- Enough care has been taken to ensure that this work comes of as trying to target a specific person for their\r\npersonal stance of issues pertaining to the #MeToo movement.\r\n- The authors of this work do not aim to vilify anyone accused in the #MeToo movement in any manner.\r\n- Please refer to the ethics and discussion section of the mentioned publication for appropriate sharing of this dataset\r\nand social impact of this work.\r\n\r\n\r\n### Discussion of Biases\r\n\r\n- The #MeToo movement acted as a catalyst for implementing social policy changes to benefit the members of\r\ncommunity affected by sexual abuse.\r\n- Any work undertaken on this dataset should aim to minimize the bias against minority groups which\r\nmight amplified in cases of sudden outburst of public reactions over 
sensitive social media discussions.\r\n\r\n### Other Known Limitations\r\n\r\n- Considering privacy concerns, social media practitioners should be aware of making automated interventions\r\nto aid the victims of sexual abuse as some people might not prefer to disclose their notions.\r\n- Concerned social media users might also repeal their social information, if they found out that their\r\ninformation is being used for computational purposes, hence it is important seek subtle individual consent\r\nbefore trying to profile authors involved in online discussions to uphold personal privacy.\r\n\r\n## Additional Information\r\n\r\nPlease refer to this link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU\r\n\r\n### Dataset Curators\r\n\r\n- If you use the corpus in a product or application, then please credit the authors\r\nand [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi]\r\n(http://midas.iiitd.edu.in) appropriately.\r\nAlso, if you send us an email, we will be thrilled to know about how you have used the corpus.\r\n- If interested in commercial use of the corpus, send email to [email protected].\r\n- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India\r\ndisclaims any responsibility for the use of the corpus and does not provide technical support.\r\nHowever, the contact listed above will be happy to respond to queries and clarifications\r\n- Please feel free to send us an email:\r\n - with feedback regarding the corpus.\r\n - with information on how you have used the corpus.\r\n - if interested in having us analyze your social media data.\r\n - if interested in a collaborative research project.\r\n\r\n### Licensing Information\r\n\r\n[More Information Needed]\r\n\r\n### Citation Information\r\n\r\nPlease cite the following publication if you make use of the dataset: https://ojs.aaai.org/index.php/ICWSM/article/view/7292\r\n\r\n```\r\n\r\n@article{Gautam_Mathur_Gosangi_Mahata_Sawhney_Shah_2020, title={#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement}, volume={14}, url={https://aaai.org/ojs/index.php/ICWSM/article/view/7292}, abstractNote={<p>In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.</p&gt;}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn}, year={2020}, month={May}, pages={209-216} }\r\n\r\n```\r\n\r\n\r\n\r\n",
"Hi, @lhoestq I have resolved all the comments you have raised. Can you review the PR again? However, I do need assistance on how to remove other files that came along in my PR. Should I manually delete unwanted files from the PR raised?",
"I am closing this PR, @lhoestq please review this PR instead https://github.com/huggingface/datasets/pull/975 where I have removed the unwanted files of other datasets and addressed each of your points. "
] | 1,606,774,189,000 | 1,606,869,474,000 | 1,606,869,474,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/932/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/932",
"html_url": "https://github.com/huggingface/datasets/pull/932",
"diff_url": "https://github.com/huggingface/datasets/pull/932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/932.patch",
"merged_at": null
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/931/comments | https://api.github.com/repos/huggingface/datasets/issues/931/events | https://github.com/huggingface/datasets/pull/931 | 753,818,193 | MDExOlB1bGxSZXF1ZXN0NTI5ODIzMDYz | 931 | [WIP] complex_webqa - Error zipfile.BadZipFile: Bad CRC-32 | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,771,821,000 | 1,606,771,821,000 | null | MEMBER | null | Getting a `zipfile.BadZipFile: Bad CRC-32 for file 'web_snippets_train.json'` error when downloading the largest file from Dropbox: `https://www.dropbox.com/sh/7pkwkrfnwqhsnpo/AABVENv_Q9rFtnM61liyzO0La/web_snippets_train.json.zip?dl=1`
Didn't manage to see how to solve that.
Putting aside for now.
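One way to confirm whether the downloaded archive itself is corrupted (a hedged suggestion, assuming the file has already been fetched from the Dropbox link above) is the standard-library CRC check:

```python
import zipfile

# testzip() reads every member and verifies its CRC; it returns the name of the
# first bad member, or None if the archive is intact.
with zipfile.ZipFile("web_snippets_train.json.zip") as zf:
    print(zf.testzip())
```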
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/931/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/931/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/931",
"html_url": "https://github.com/huggingface/datasets/pull/931",
"diff_url": "https://github.com/huggingface/datasets/pull/931.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/931.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/930/comments | https://api.github.com/repos/huggingface/datasets/issues/930/events | https://github.com/huggingface/datasets/pull/930 | 753,801,204 | MDExOlB1bGxSZXF1ZXN0NTI5ODA5MzM1 | 930 | Lambada | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,770,153,000 | 1,606,783,032,000 | 1,606,783,031,000 | MEMBER | null | Added LAMBADA dataset.
A couple of points of attention (mostly because I am not sure)
- The training data are compressed in a .tar file inside the main tar.gz file. I had to manually un-tar the training file to access the examples.
- The dev and test splits don't have the `category` field so I put `None` by default.
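As an illustration of the nested-archive point above, extracting the inner tar manually might look like the following sketch (the inner file name is an assumption for illustration, not taken from this PR):

```python
import os
import tarfile

# Hypothetical helper: the outer tar.gz is extracted by the download manager,
# but the training archive inside it still has to be un-tarred by hand.
def extract_inner_tar(extracted_dir, inner_name="train-novels.tar"):
    inner_path = os.path.join(extracted_dir, inner_name)
    with tarfile.open(inner_path) as tar:
        tar.extractall(path=extracted_dir)
    return extracted_dir
```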
Happy to make changes if it doesn't respect the guidelines!
Victor | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/930/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/930",
"html_url": "https://github.com/huggingface/datasets/pull/930",
"diff_url": "https://github.com/huggingface/datasets/pull/930.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/930.patch",
"merged_at": 1606783031000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/929/comments | https://api.github.com/repos/huggingface/datasets/issues/929/events | https://github.com/huggingface/datasets/pull/929 | 753,737,794 | MDExOlB1bGxSZXF1ZXN0NTI5NzU4NTU3 | 929 | Add weibo NER dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,764,167,000 | 1,607,002,615,000 | 1,607,002,614,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/929/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/929",
"html_url": "https://github.com/huggingface/datasets/pull/929",
"diff_url": "https://github.com/huggingface/datasets/pull/929.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/929.patch",
"merged_at": 1607002614000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/928/comments | https://api.github.com/repos/huggingface/datasets/issues/928/events | https://github.com/huggingface/datasets/pull/928 | 753,722,324 | MDExOlB1bGxSZXF1ZXN0NTI5NzQ1OTIx | 928 | Add the Multilingual Amazon Reviews Corpus | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,762,686,000 | 1,606,838,670,000 | 1,606,838,667,000 | CONTRIBUTOR | null | - **Name:** *Multilingual Amazon Reviews Corpus* (`amazon_reviews_multi`)
- **Description:** A collection of Amazon reviews in English, Japanese, German, French, Spanish and Chinese.
- **Paper:** https://arxiv.org/abs/2010.02573
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/928/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/928/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/928",
"html_url": "https://github.com/huggingface/datasets/pull/928",
"diff_url": "https://github.com/huggingface/datasets/pull/928.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/928.patch",
"merged_at": 1606838667000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/927/comments | https://api.github.com/repos/huggingface/datasets/issues/927/events | https://github.com/huggingface/datasets/issues/927 | 753,679,020 | MDU6SXNzdWU3NTM2NzkwMjA= | 927 | Hello | {
"login": "k125-ak",
"id": 75259546,
"node_id": "MDQ6VXNlcjc1MjU5NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/75259546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/k125-ak",
"html_url": "https://github.com/k125-ak",
"followers_url": "https://api.github.com/users/k125-ak/followers",
"following_url": "https://api.github.com/users/k125-ak/following{/other_user}",
"gists_url": "https://api.github.com/users/k125-ak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/k125-ak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/k125-ak/subscriptions",
"organizations_url": "https://api.github.com/users/k125-ak/orgs",
"repos_url": "https://api.github.com/users/k125-ak/repos",
"events_url": "https://api.github.com/users/k125-ak/events{/privacy}",
"received_events_url": "https://api.github.com/users/k125-ak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,758,605,000 | 1,606,758,630,000 | 1,606,758,630,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/927/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
|
https://api.github.com/repos/huggingface/datasets/issues/926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/926/comments | https://api.github.com/repos/huggingface/datasets/issues/926/events | https://github.com/huggingface/datasets/pull/926 | 753,676,069 | MDExOlB1bGxSZXF1ZXN0NTI5NzA4MTcy | 926 | add inquisitive | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"`dummy_data` right now contains all article files, keeping only the required articles for dummy data fails the dummy data test.\r\nAny idea ?",
"> `dummy_data` right now contains all article files, keeping only the required articles for dummy data fails the dummy data test.\r\n> Any idea ?\r\n\r\nWe should definitely find a way to make it work with only a few articles.\r\n\r\nIf it doesn't work right now for dummy data, I guess it's because it tries to load every single article file ?\r\n\r\nIf so, then maybe you can use `os.listdir` method to first check all the data files available in the path where the `articles.tgz` file is extracted. Then you can simply iter through the data files and depending on their ID, include them in the train or test set. With this method you should be able to have only a few articles files per split in the dummy data. Does that make sense ?",
"fixed! so the issue was, `articles_ids` were prepared based on the number of files in articles dir, so for dummy data questions it was not able to load some articles due to incorrect ids and the test was failing"
] | 1,606,758,322,000 | 1,606,916,722,000 | 1,606,916,413,000 | MEMBER | null | Adding inquisitive qg dataset
More info: https://github.com/wjko2/INQUISITIVE | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/926/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/926",
"html_url": "https://github.com/huggingface/datasets/pull/926",
"diff_url": "https://github.com/huggingface/datasets/pull/926.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/926.patch",
"merged_at": 1606916413000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/925/comments | https://api.github.com/repos/huggingface/datasets/issues/925/events | https://github.com/huggingface/datasets/pull/925 | 753,672,661 | MDExOlB1bGxSZXF1ZXN0NTI5NzA1MzM4 | 925 | Add Turku NLP Corpus for Finnish NER | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"> Did you generate the dummy data with the cli or manually ?\r\n\r\nIt was generated by the cli. Do you want me to make it smaller keep it like this?\r\n\r\n"
] | 1,606,758,019,000 | 1,607,004,431,000 | 1,607,004,430,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/925/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/925",
"html_url": "https://github.com/huggingface/datasets/pull/925",
"diff_url": "https://github.com/huggingface/datasets/pull/925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/925.patch",
"merged_at": 1607004430000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/924/comments | https://api.github.com/repos/huggingface/datasets/issues/924/events | https://github.com/huggingface/datasets/pull/924 | 753,631,951 | MDExOlB1bGxSZXF1ZXN0NTI5NjcyMzgw | 924 | Add DART | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"LGTM!"
] | 1,606,754,557,000 | 1,606,878,822,000 | 1,606,878,821,000 | MEMBER | null | - **Name:** *DART*
- **Description:** *DART is a large dataset for open-domain structured data record to text generation.*
- **Paper:** *https://arxiv.org/abs/2007.02871*
- **Data:** *https://github.com/Yale-LILY/dart#leaderboard*
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/924/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/924",
"html_url": "https://github.com/huggingface/datasets/pull/924",
"diff_url": "https://github.com/huggingface/datasets/pull/924.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/924.patch",
"merged_at": 1606878821000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/923/comments | https://api.github.com/repos/huggingface/datasets/issues/923/events | https://github.com/huggingface/datasets/pull/923 | 753,569,220 | MDExOlB1bGxSZXF1ZXN0NTI5NjIyMDQx | 923 | Add CC-100 dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Hello @lhoestq, I would like just to ask you if it is OK that I include this feature 9f32ba1 in this PR or you would prefer to have it in a separate one.\r\n\r\nI was wondering whether include also a test, but I did not find any test for the other file formats...",
"Hi ! Sure that would be valuable to support .xz files. Feel free to open a separate PR for this.\r\nAnd feel free to create the first test case for extracting compressed files if you have some inspiration (maybe create test_file_utils.py ?). We can still spend more time on tests next week when the sprint is over though so don't spend too much time on it.",
"@lhoestq, DONE! ;) See PR #950.",
"Thanks for adding support for `.xz` files :)\r\n\r\nFeel free to rebase from master to include it in your PR",
"@lhoestq DONE; I have merged instead, to avoid changing the history of my public PR ;)",
"Hi @lhoestq, I would need that you generate the dataset_infos.json and the dummy data for this dataset with a bigger computer. Sorry, but my laptop did not succeed...",
"Thanks for your work @albertvillanova \r\nWe'll definitely look into it after this sprint :)",
"Looks like #1456 added CC100 already.\r\nThe difference with your approach is that this implementation uses the `BuilderConfig` parameters to allow the creation of custom configs for all the languages, without having to specify them in the `BUILDER_CONFIGS` class attribute.\r\nFor example even if the dataset doesn't have a config for english already, you can still load the english CC100 with\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"cc100\", lang=\"en\")\r\n```",
"@lhoestq, oops!! I remember having assigned this dataset to me in the Google sheet, besides having mentioned the corresponding issue in the Pull Request... Nevermind! :)",
"Yes indeed I can see that...\r\nSorry for noticing that only now \r\n\r\nThe code of the other PR ended up being pretty close to yours though\r\nIf you want to add more details to the cc100 dataset card or in the script feel to do so, any addition is welcome"
] | 1,606,749,802,000 | 1,618,925,657,000 | 1,618,925,657,000 | MEMBER | null | Add CC-100.
Close #773 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/923/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/923",
"html_url": "https://github.com/huggingface/datasets/pull/923",
"diff_url": "https://github.com/huggingface/datasets/pull/923.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/923.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/922/comments | https://api.github.com/repos/huggingface/datasets/issues/922/events | https://github.com/huggingface/datasets/pull/922 | 753,559,130 | MDExOlB1bGxSZXF1ZXN0NTI5NjEzOTA4 | 922 | Add XOR QA Dataset | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/users/sumanthd17/followers",
"following_url": "https://api.github.com/users/sumanthd17/following{/other_user}",
"gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions",
"organizations_url": "https://api.github.com/users/sumanthd17/orgs",
"repos_url": "https://api.github.com/users/sumanthd17/repos",
"events_url": "https://api.github.com/users/sumanthd17/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumanthd17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Hi @sumanthd17 \r\n\r\nLooks like a good start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)",
"I followed the instructions mentioned there but my dataset isn't showing up in the dropdown list. Am I missing something here? @yjernite ",
"> I followed the instructions mentioned there but my dataset isn't showing up in the dropdown list. Am I missing something here? @yjernite\r\n\r\nThe best way is to run the tagging app locally and provide it the location to the `dataset_infos.json` after you've run the CLI:\r\nhttps://github.com/huggingface/datasets-tagging\r\n",
"This is a really good data card!!\r\n\r\nSmall changes to make it even better:\r\n- Tags: the dataset has both \"original\" data and data that is \"extended\" from a source dataset: TydiQA - you should choose both options in the tagging apps\r\n- The language and annotation creator tags are off: the language here is the questions: I understand it's a mix of crowd-sourced and expert-generated? Is there any machine translation involved? The annotations are the span selections: is that crowd-sourced?\r\n- Personal and sensitive information: there should be a statement there, even if only to say that none could be found or that it only mentions public figures"
] | 1,606,749,054,000 | 1,606,878,741,000 | 1,606,878,741,000 | CONTRIBUTOR | null | Added XOR Question Answering Dataset. The link to the dataset can be found [here](https://nlp.cs.washington.edu/xorqa/)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/922/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/922",
"html_url": "https://github.com/huggingface/datasets/pull/922",
"diff_url": "https://github.com/huggingface/datasets/pull/922.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/922.patch",
"merged_at": 1606878741000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/920/comments | https://api.github.com/repos/huggingface/datasets/issues/920/events | https://github.com/huggingface/datasets/pull/920 | 753,445,747 | MDExOlB1bGxSZXF1ZXN0NTI5NTIzMTgz | 920 | add dream dataset | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"> Awesome good job !\r\n> \r\n> Could you also add a dataset card using the template guide here : https://github.com/huggingface/datasets/blob/master/templates/README_guide.md\r\n> If you can't fill some fields then just leave `[N/A]`\r\n\r\nQuick amendment: `[N/A]` is for fields that are not relevant: if you can't find the information just leave `[More Information Needed]`",
"@lhoestq since datset cards are optional for this sprint I'll add those later. Good for merge.",
"Indeed we only require the tags to be added now (the yaml part at the top of the dataset card).\r\nCould you add them please ?\r\nYou can find more infos here : https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card",
"@lhoestq added tags, I'll fill rest of the info after current sprint :)",
"The tests are failing tests for other datasets, not this one.",
"@lhoestq could you tell me why these tests are failing, they don't seem related to this PR. "
] | 1,606,740,014,000 | 1,607,013,912,000 | 1,606,923,552,000 | MEMBER | null | Adding DREAM: a Dataset for Dialogue-Based Reading Comprehension
More details:
https://dataset.org/dream/
https://github.com/nlpdata/dream | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/920/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/920",
"html_url": "https://github.com/huggingface/datasets/pull/920",
"diff_url": "https://github.com/huggingface/datasets/pull/920.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/920.patch",
"merged_at": 1606923552000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/919/comments | https://api.github.com/repos/huggingface/datasets/issues/919/events | https://github.com/huggingface/datasets/issues/919 | 753,434,472 | MDU6SXNzdWU3NTM0MzQ0NzI= | 919 | wrong length with datasets | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Also, I cannot first convert it to torch format, since huggingface seq2seq_trainer codes process the datasets afterwards during datacollector function to make it optimize for TPUs. ",
"sorry I misunderstood length of dataset with dataloader, closed. thanks "
] | 1,606,739,019,000 | 1,606,739,847,000 | 1,606,739,846,000 | CONTRIBUTOR | null | Hi
I have an MRPC dataset which I convert to seq2seq format; it then looks like this:
`Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10)`
I feed it to a dataloader:
```
dataloader = DataLoader(
train_dataset,
batch_size=self.args.train_batch_size,
sampler=train_sampler,
collate_fn=self.data_collator,
drop_last=self.args.dataloader_drop_last,
num_workers=self.args.dataloader_num_workers,
)
```
Now if I call len(dataloader) it is 1, which is wrong; it needs to be 10. Could you assist me please? Thanks
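For reference, a minimal self-contained sketch (illustrative only, with a made-up 10-row dataset) of how `len(dataloader)` relates to the batch size: it counts batches, i.e. `ceil(len(dataset) / batch_size)`, not rows.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10))        # 10 rows, as in the example above
print(len(dataset))                              # 10
print(len(DataLoader(dataset, batch_size=16)))   # 1  -> a single batch holds all 10 rows
print(len(DataLoader(dataset, batch_size=1)))    # 10 -> one row per batch
```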
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/919/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/918/comments | https://api.github.com/repos/huggingface/datasets/issues/918/events | https://github.com/huggingface/datasets/pull/918 | 753,397,440 | MDExOlB1bGxSZXF1ZXN0NTI5NDgzOTk4 | 918 | Add conll2002 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,735,775,000 | 1,606,761,270,000 | 1,606,761,269,000 | MEMBER | null | Adding the Conll2002 dataset for NER.
More info here : https://www.clips.uantwerpen.be/conll2002/ner/
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/918/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/918",
"html_url": "https://github.com/huggingface/datasets/pull/918",
"diff_url": "https://github.com/huggingface/datasets/pull/918.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/918.patch",
"merged_at": 1606761269000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/917/comments | https://api.github.com/repos/huggingface/datasets/issues/917/events | https://github.com/huggingface/datasets/pull/917 | 753,391,591 | MDExOlB1bGxSZXF1ZXN0NTI5NDc5MTIy | 917 | Addition of Concode Dataset | {
"login": "reshinthadithyan",
"id": 36307201,
"node_id": "MDQ6VXNlcjM2MzA3MjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/36307201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reshinthadithyan",
"html_url": "https://github.com/reshinthadithyan",
"followers_url": "https://api.github.com/users/reshinthadithyan/followers",
"following_url": "https://api.github.com/users/reshinthadithyan/following{/other_user}",
"gists_url": "https://api.github.com/users/reshinthadithyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reshinthadithyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reshinthadithyan/subscriptions",
"organizations_url": "https://api.github.com/users/reshinthadithyan/orgs",
"repos_url": "https://api.github.com/users/reshinthadithyan/repos",
"events_url": "https://api.github.com/users/reshinthadithyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/reshinthadithyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Testing command doesn't work\r\n###trace\r\n-- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n========================================================= short test summary info ========================================================== \r\nERROR tests/test_dataset_common.py - absl.testing.parameterized.NoTestsError: parameterized test decorators did not generate any tests. Ma...\r\n====================================================== 2 warnings, 1 error in 54.23s ======================================================= \r\nERROR: not found: G:\\Work Related\\hf\\datasets\\tests\\test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_concode\r\n(no name 'G:\\\\Work Related\\\\hf\\\\datasets\\\\tests\\\\test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_concode' in any of [<Module test_dataset_common.py>])\r\n",
"Hello @lhoestq Test checks are passing in my local, but the commit fails in ci. Any idea onto why? \r\n#### Dummy Dataset Test \r\n====================================================== 1 passed, 6 warnings in 7.14s ======================================================= \r\n#### Real Dataset Test \r\n====================================================== 1 passed, 6 warnings in 25.54s ====================================================== ",
"Hello @lhoestq, Have a look, I've changed the file according to the reviews. Thanks!",
"@reshinthadithyan that's a great start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)",
"> @reshinthadithyan that's a great start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)\r\n\r\nHello @yjernite I'm facing issues in using the datasets-tagger Refer #1 in datasets-tagger. Thanks",
"> > @reshinthadithyan that's a great start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)\r\n> \r\n> Hello @yjernite I'm facing issues in using the datasets-tagger Refer #1 in datasets-tagger. Thanks\r\n\r\nHi @reshinthadithyan ! Did you try with the latest version of the tagger? What issues are you facing?\r\n\r\nWe're also relaxed the dataset requirement for now, you'll only add to add the tags :) ",
"Could you work on another branch when adding different datasets ?\r\nThe idea is to have one PR per dataset",
"Thanks ! The github diff looks all clean now :) \r\nTo fix the CI you just need to rebase from master\r\n\r\nDon't forget to add the tags of the dataset card. It's the yaml part at the top of the dataset card\r\nMore infor here : https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nThe issue you had with the tagger should be fixed now by https://github.com/huggingface/datasets-tagging/pull/5\r\n"
] | 1,606,735,259,000 | 1,609,210,536,000 | 1,609,210,536,000 | CONTRIBUTOR | null | ## Overview
The Concode dataset contains pairs of NL queries and the corresponding code (contextual code generation).
## Reference Links
Paper: https://arxiv.org/pdf/1904.09086.pdf
GitHub: https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/text-to-code | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/917/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/917",
"html_url": "https://github.com/huggingface/datasets/pull/917",
"diff_url": "https://github.com/huggingface/datasets/pull/917.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/917.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/916/comments | https://api.github.com/repos/huggingface/datasets/issues/916/events | https://github.com/huggingface/datasets/pull/916 | 753,376,643 | MDExOlB1bGxSZXF1ZXN0NTI5NDY3MTkx | 916 | Add Swedish NER Corpus | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Yes the use of configs is optional",
"@abhishekkrthakur we want to keep track of the information that is and isn't in the dataset cards so we're asking everyone to use the full template :) If there is some information in there that you really can't find or don't feel qualified to add, you can just leave the `[More Information Needed]` text"
] | 1,606,733,991,000 | 1,606,878,650,000 | 1,606,878,649,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/916/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/916",
"html_url": "https://github.com/huggingface/datasets/pull/916",
"diff_url": "https://github.com/huggingface/datasets/pull/916.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/916.patch",
"merged_at": 1606878649000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/915/comments | https://api.github.com/repos/huggingface/datasets/issues/915/events | https://github.com/huggingface/datasets/issues/915 | 753,118,481 | MDU6SXNzdWU3NTMxMTg0ODE= | 915 | Shall we change the hashing to encoding to reduce potential replicated cache files? | {
"login": "zhuzilin",
"id": 10428324,
"node_id": "MDQ6VXNlcjEwNDI4MzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/10428324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhuzilin",
"html_url": "https://github.com/zhuzilin",
"followers_url": "https://api.github.com/users/zhuzilin/followers",
"following_url": "https://api.github.com/users/zhuzilin/following{/other_user}",
"gists_url": "https://api.github.com/users/zhuzilin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhuzilin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhuzilin/subscriptions",
"organizations_url": "https://api.github.com/users/zhuzilin/orgs",
"repos_url": "https://api.github.com/users/zhuzilin/repos",
"events_url": "https://api.github.com/users/zhuzilin/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhuzilin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"This is an interesting idea !\r\nDo you have ideas about how to approach the decoding and the normalization ?",
"@lhoestq\r\nI think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can\r\n- decode all the current saved datasets to see if there is already one that is equivalent to the transformation we need now.\r\n- or, calculate all the possible hash value of the current chain for comparison so that we could continue to use hashing.\r\nIf we find one, we can adjust the list in `self._fingerprint` to it.\r\n\r\nAs for the transformation reordering rules, we can just start with some manual rules, like two sort on the same column should merge to one, filter and select can change orders.\r\n\r\nAnd for encoding and decoding, we can just manually specify `sort` is 0, `shuffling` is 2 and create a base-n number or use some general algorithm like `base64.urlsafe_b64encode`.\r\n\r\nBecause we are not doing lazy evaluation now, we may not be able to normalize the transformation to its minimal form. If we want to support that, we can provde a `Sequential` api and let user input a list or transformation, so that user would not use the intermediate datasets. This would look like tf.data.Dataset."
] | 1,606,708,246,000 | 1,608,786,709,000 | null | NONE | null | Hi there. For now, we are using `xxhash` to hash the transformations into a fingerprint, and we save a copy of the processed dataset to disk whenever there is a new hash value. However, some transformations are idempotent or commute with each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example using `base64.urlsafe_b64encode`. In this way, before saving a new copy, we can decode the transformation chain and normalize it, so as not to miss potential reuse. As the main targets of this project are really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some writes.
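A rough sketch of the idea (purely illustrative, with made-up transformation names; not an actual implementation):

```python
import base64
import json

def normalize(chain):
    # Drop consecutive duplicates of idempotent steps, e.g. two identical sorts in a row.
    normalized = []
    for step in chain:
        if not (normalized and normalized[-1] == step and step[0] == "sort"):
            normalized.append(step)
    return normalized

def fingerprint(chain):
    # Encode the normalized transformation chain instead of hashing it, so that it can
    # later be decoded and compared against the chains of already cached datasets.
    payload = json.dumps(normalize(chain)).encode("utf-8")
    return base64.urlsafe_b64encode(payload).decode("ascii")

chain = [("sort", "label"), ("sort", "label"), ("filter", "len>10")]
print(fingerprint(chain))  # same fingerprint as the chain with a single sort step
```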
If you have interest in this, I'd love to help :). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/915/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/914/comments | https://api.github.com/repos/huggingface/datasets/issues/914/events | https://github.com/huggingface/datasets/pull/914 | 752,956,106 | MDExOlB1bGxSZXF1ZXN0NTI5MTM2Njk3 | 914 | Add list_github_datasets api for retrieving dataset name list in github repo | {
"login": "zhuzilin",
"id": 10428324,
"node_id": "MDQ6VXNlcjEwNDI4MzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/10428324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhuzilin",
"html_url": "https://github.com/zhuzilin",
"followers_url": "https://api.github.com/users/zhuzilin/followers",
"following_url": "https://api.github.com/users/zhuzilin/following{/other_user}",
"gists_url": "https://api.github.com/users/zhuzilin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhuzilin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhuzilin/subscriptions",
"organizations_url": "https://api.github.com/users/zhuzilin/orgs",
"repos_url": "https://api.github.com/users/zhuzilin/repos",
"events_url": "https://api.github.com/users/zhuzilin/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhuzilin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"We can look into removing some of the attributes from `GET /api/datasets` to make it smaller/faster, what do you think @lhoestq?",
"> We can look into removing some of the attributes from `GET /api/datasets` to make it smaller/faster, what do you think @lhoestq?\r\n\r\nyes at least remove all the `dummy_data.zip`",
"`GET /api/datasets` should now be much faster. @zhuzilin can you check if `list_datasets` is now faster for you?",
"> `GET /api/datasets` should now be much faster. @zhuzilin can you check if `list_datasets` is now faster for you?\r\n\r\nYes, much faster! Thank you!"
] | 1,606,668,135,000 | 1,606,893,676,000 | 1,606,893,676,000 | NONE | null | Thank you for your great effort on unifying data processing for NLP!
This PR tries to add a new API, `list_github_datasets`, to the `inspect` module. The reason is that the current `list_datasets` API needs to access https://huggingface.co/api/datasets to get a large JSON. However, this connection can be really slow... (I was visiting from China) and in my own experience, most of the time `requests.get` fails to download the whole JSON after a long wait, which then makes `r.json()` fail.
I also noticed that the current implementation will first try to download from GitHub, which is what allows me to smoothly run `load_dataset('squad')` in the example.
Therefore, I think it would be better if we had an API to get the list of datasets that are available on GitHub; it would also improve newcomers' experience (it is a little frustrating if one cannot successfully run the first function in the README example) until we have a faster source for huggingface.co.
As for the implementation, I've added a `dataset_infos.json` file under the `datasets` folder, and it has the following structure:
```json
{
"id": "aeslc",
"folder": "datasets/aeslc",
"dataset_infos": "datasets/aeslc/dataset_infos.json"
},
...
{
"id": "json",
"folder": "datasets/json"
},
...
```
The script I used to get this file is:
```python
import json
import os
DATASETS_BASE_DIR = "/root/datasets"
DATASET_INFOS_JSON = "dataset_infos.json"
datasets = []
for item in os.listdir(os.path.join(DATASETS_BASE_DIR, "datasets")):
    if os.path.isdir(os.path.join(DATASETS_BASE_DIR, "datasets", item)):
        datasets.append(item)
datasets.sort()
total_ds_info = []
for ds in datasets:
    ds_dir = os.path.join("datasets", ds)
    ds_info_dir = os.path.join(ds_dir, DATASET_INFOS_JSON)
    if os.path.isfile(os.path.join(DATASETS_BASE_DIR, ds_info_dir)):
        total_ds_info.append({"id": ds, "folder": ds_dir, "dataset_infos": ds_info_dir})
    else:
        total_ds_info.append({"id": ds, "folder": ds_dir})
with open(DATASET_INFOS_JSON, "w") as f:
    json.dump(total_ds_info, f)
```
The new `dataset_infos.json` was saved as formatted JSON so that it is easy to add new datasets.
When calling `list_github_datasets`, the user gets the list of dataset names in this GitHub repo, and if `with_details` is set to `True`, they also get the URL of the corresponding dataset info.
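As a purely hypothetical sketch (names and behaviour are assumptions for illustration; the actual implementation in this PR may differ), such an API could simply read the index file generated by the script above:

```python
import json

def list_github_datasets(path="dataset_infos.json", with_details=False):
    # Read the index file described above and return dataset names,
    # or the full records (id, folder, dataset_infos path) when with_details=True.
    with open(path, "r", encoding="utf-8") as f:
        index = json.load(f)
    if with_details:
        return index
    return [record["id"] for record in index]

print(list_github_datasets()[:5])                 # e.g. ['aeslc', ...]
print(list_github_datasets(with_details=True)[0])
```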
Thank you for taking the time to review this PR :). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/914/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/914",
"html_url": "https://github.com/huggingface/datasets/pull/914",
"diff_url": "https://github.com/huggingface/datasets/pull/914.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/914.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/913/comments | https://api.github.com/repos/huggingface/datasets/issues/913/events | https://github.com/huggingface/datasets/pull/913 | 752,892,020 | MDExOlB1bGxSZXF1ZXN0NTI5MDkyOTc3 | 913 | My new dataset PEC | {
"login": "zhongpeixiang",
"id": 11826803,
"node_id": "MDQ6VXNlcjExODI2ODAz",
"avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhongpeixiang",
"html_url": "https://github.com/zhongpeixiang",
"followers_url": "https://api.github.com/users/zhongpeixiang/followers",
"following_url": "https://api.github.com/users/zhongpeixiang/following{/other_user}",
"gists_url": "https://api.github.com/users/zhongpeixiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhongpeixiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhongpeixiang/subscriptions",
"organizations_url": "https://api.github.com/users/zhongpeixiang/orgs",
"repos_url": "https://api.github.com/users/zhongpeixiang/repos",
"events_url": "https://api.github.com/users/zhongpeixiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhongpeixiang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"How to resolve these failed checks?",
"Thanks for adding this one :) \r\n\r\nTo fix the check_code_quality, please run `make style` with the latest version of black, isort, flake8\r\nTo fix the test_no_encoding_on_file_open, make sure to specify the encoding each time you call `open()` on a text file.\r\nFor example : `encoding=\"utf-8\"`\r\nTo fix the test_load_dataset_pec , you must add the dummy_data.zip file. It is used to test the dataset script and make sure it runs fine. To add it, please refer to the steps in https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-add-a-dataset\r\n\r\n",
"Could you also add a dataset card ? you can find a template here : https://github.com/huggingface/datasets/blob/master/templates/README.md\r\n\r\nThat would be awesome",
"> Thanks for adding this one :)\r\n> \r\n> To fix the check_code_quality, please run `make style` with the latest version of black, isort, flake8\r\n> To fix the test_no_encoding_on_file_open, make sure to specify the encoding each time you call `open()` on a text file.\r\n> For example : `encoding=\"utf-8\"`\r\n> To fix the test_load_dataset_pec , you must add the dummy_data.zip file. It is used to test the dataset script and make sure it runs fine. To add it, please refer to the steps in https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-add-a-dataset\r\n\r\nThank you for the detailed suggestion.\r\n\r\nI have added dummy_data but it still failed the DistributedDatasetTest check. My dataset has a central file (containing a python dict) that needs to be accessed by each data example. Is it because the central file cannot be distributed (which would lead to a partial dictionary)?\r\n\r\nSpecifically, the central file contains a dictionary of speakers with their attributes. Each data example is also associated with a speaker. As of now, I keep the central file and data files separately. If I remove the central file by appending the speaker attributes to each data example, then there would be lots of redundancy because there are lots of duplicate speakers in the data files.",
"The `DistributedDatasetTest` fail and the changes of this PR are not related, there was just a bug in the CI. You can ignore it",
"> Really cool thanks !\r\n> \r\n> Could you make the dummy files smaller ? For example by reducing the size of persona.txt ?\r\n> I also left a comment about the files concatenation. It would be cool to replace that with simple iterations through the different files.\r\n> \r\n> Then once this is done, you can add a dataset card using the template guide here : https://github.com/huggingface/datasets/blob/master/templates/README_guide.md\r\n> If some fields can't be filled, just leave `[N/A]`\r\n\r\nSmall change: if you don't have the information for a field, please leave `[More Information Needed]` rather than `[N/A]`\r\n\r\nThe full information can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)"
] | 1,606,648,237,000 | 1,606,819,313,000 | 1,606,819,313,000 | CONTRIBUTOR | null | A new dataset, PEC, published at EMNLP 2020. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/913/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/913",
"html_url": "https://github.com/huggingface/datasets/pull/913",
"diff_url": "https://github.com/huggingface/datasets/pull/913.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/913.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/911 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/911/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/911/comments | https://api.github.com/repos/huggingface/datasets/issues/911/events | https://github.com/huggingface/datasets/issues/911 | 752,806,215 | MDU6SXNzdWU3NTI4MDYyMTU= | 911 | datasets module not found | {
"login": "sbassam",
"id": 15836274,
"node_id": "MDQ6VXNlcjE1ODM2Mjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/15836274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sbassam",
"html_url": "https://github.com/sbassam",
"followers_url": "https://api.github.com/users/sbassam/followers",
"following_url": "https://api.github.com/users/sbassam/following{/other_user}",
"gists_url": "https://api.github.com/users/sbassam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sbassam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sbassam/subscriptions",
"organizations_url": "https://api.github.com/users/sbassam/orgs",
"repos_url": "https://api.github.com/users/sbassam/repos",
"events_url": "https://api.github.com/users/sbassam/events{/privacy}",
"received_events_url": "https://api.github.com/users/sbassam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"nvm, I'd made an assumption that the library gets installed with transformers. "
] | 1,606,613,055,000 | 1,606,660,389,000 | 1,606,660,389,000 | NONE | null | Currently, running `from datasets import load_dataset` will throw a `ModuleNotFoundError: No module named 'datasets'` error.
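For what it's worth, a quick way to check whether the package is installed in the active environment (a minimal sketch; `datasets` ships as its own pip package and is not installed together with `transformers`):

```python
# If the check below fails, installing the package should fix the import error:
#     pip install datasets
import importlib.util

if importlib.util.find_spec("datasets") is None:
    print("`datasets` is not installed in this environment")
else:
    import datasets
    print(datasets.__version__)
```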
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/911/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/911/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/910/comments | https://api.github.com/repos/huggingface/datasets/issues/910/events | https://github.com/huggingface/datasets/issues/910 | 752,772,723 | MDU6SXNzdWU3NTI3NzI3MjM= | 910 | Grindr meeting app web.Grindr | {
"login": "jackin34",
"id": 75184749,
"node_id": "MDQ6VXNlcjc1MTg0NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/75184749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackin34",
"html_url": "https://github.com/jackin34",
"followers_url": "https://api.github.com/users/jackin34/followers",
"following_url": "https://api.github.com/users/jackin34/following{/other_user}",
"gists_url": "https://api.github.com/users/jackin34/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackin34/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackin34/subscriptions",
"organizations_url": "https://api.github.com/users/jackin34/orgs",
"repos_url": "https://api.github.com/users/jackin34/repos",
"events_url": "https://api.github.com/users/jackin34/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackin34/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,599,383,000 | 1,606,644,711,000 | 1,606,644,711,000 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/910/timeline | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/909 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/909/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/909/comments | https://api.github.com/repos/huggingface/datasets/issues/909/events | https://github.com/huggingface/datasets/pull/909 | 752,508,299 | MDExOlB1bGxSZXF1ZXN0NTI4ODE1NDYz | 909 | Add FiNER dataset | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"> That's really cool thank you !\r\n> \r\n> Could you also add a dataset card ?\r\n> You can find a template here : https://github.com/huggingface/datasets/blob/master/templates/README.md\r\n\r\nThe full information for adding a dataset card can be found here :) \r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card\r\n",
"Thanks your suggestions! I've fixed them, and currently working on the dataset card!",
"@yjernite and @lhoestq I will add the dataset card a bit later in a separate PR if that's ok for you!",
"Yes I want to re-emphasize if it was not clear that dataset cards are optional for the sprint. \r\n\r\nOnly the tags are required for merging a datasets.\r\n\r\nPlease try to enforce this rule as well @lhoestq and @yjernite ",
"Yes @stefan-it if you could just add the tags (the yaml part at the top of the dataset card) that'd be perfect :) ",
"Oh, sorry, will add them now!\r\n",
"Initial README file is now added :) ",
"the `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine",
"merging since the CI is fixed on master"
] | 1,606,521,260,000 | 1,607,360,183,000 | 1,607,360,183,000 | CONTRIBUTOR | null | Hi,
this PR adds "A Finnish News Corpus for Named Entity Recognition" as a new `finer` dataset.
The dataset is described in [this paper](https://arxiv.org/abs/1908.04212). The data is publicly available in [this GitHub repository](https://github.com/mpsilfve/finer-data).
Note: they provide two test sets. The additional test set, taken from Wikipedia, is exposed as the "test_wikipedia" split. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/909/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/909",
"html_url": "https://github.com/huggingface/datasets/pull/909",
"diff_url": "https://github.com/huggingface/datasets/pull/909.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/909.patch",
"merged_at": 1607360183000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/908 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/908/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/908/comments | https://api.github.com/repos/huggingface/datasets/issues/908/events | https://github.com/huggingface/datasets/pull/908 | 752,428,652 | MDExOlB1bGxSZXF1ZXN0NTI4NzUzMjcz | 908 | Add dependency on black for tests | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Sorry, I have just seen that it was already in `QUALITY_REQUIRE`.\r\n\r\nFor some reason it did not get installed on my virtual environment..."
] | 1,606,504,368,000 | 1,606,513,613,000 | 1,606,513,612,000 | MEMBER | null | Add package 'black' as an installation requirement for tests. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/908/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/908",
"html_url": "https://github.com/huggingface/datasets/pull/908",
"diff_url": "https://github.com/huggingface/datasets/pull/908.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/908.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/907/comments | https://api.github.com/repos/huggingface/datasets/issues/907/events | https://github.com/huggingface/datasets/pull/907 | 752,422,351 | MDExOlB1bGxSZXF1ZXN0NTI4NzQ4ODMx | 907 | Remove os.path.join from all URLs | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,503,330,000 | 1,606,690,100,000 | 1,606,690,099,000 | MEMBER | null | Remove `os.path.join` from all URLs in dataset scripts. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/907/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/907/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/907",
"html_url": "https://github.com/huggingface/datasets/pull/907",
"diff_url": "https://github.com/huggingface/datasets/pull/907.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/907.patch",
"merged_at": 1606690099000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/906/comments | https://api.github.com/repos/huggingface/datasets/issues/906/events | https://github.com/huggingface/datasets/pull/906 | 752,403,395 | MDExOlB1bGxSZXF1ZXN0NTI4NzM0MDY0 | 906 | Fix url with backslash in windows for blimp and pg19 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [] | 1,606,499,951,000 | 1,606,501,196,000 | 1,606,501,196,000 | MEMBER | null | Following #903, I also fixed blimp and pg19, which were using `os.path.join` to create URLs.
cc @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/906/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/906",
"html_url": "https://github.com/huggingface/datasets/pull/906",
"diff_url": "https://github.com/huggingface/datasets/pull/906.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/906.patch",
"merged_at": 1606501195000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/905 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/905/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/905/comments | https://api.github.com/repos/huggingface/datasets/issues/905/events | https://github.com/huggingface/datasets/pull/905 | 752,395,456 | MDExOlB1bGxSZXF1ZXN0NTI4NzI3OTEy | 905 | Disallow backslash in urls | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Looks like the test doesn't detect all the problems fixed by #907 , I'll fix that",
"Ok found why it doesn't detect the problems fixed by #907 . That's because for all those datasets the urls are actually fine (no backslash) on windows, even if it uses `os.path.join`.\r\n\r\nThis is because of the behavior of `os.path.join` on windows when the first path ends with a slash : \r\n\r\n```python\r\nimport os\r\nos.path.join(\"https://test.com/foo\", \"bar.txt\")\r\n# 'https://test.com/foo\\\\bar.txt'\r\nos.path.join(\"https://test.com/foo/\", \"bar.txt\")\r\n# 'https://test.com/foo/bar.txt'\r\n```\r\n\r\nHowever even though the urls are correct, this is definitely bad practice and we should never use `os.path.join` for urls"
] | 1,606,498,708,000 | 1,606,690,117,000 | 1,606,690,116,000 | MEMBER | null | Following #903, @albertvillanova noticed that there is sometimes bad usage of `os.path.join` in dataset scripts to create URLs. However, this should be avoided since it doesn't work on Windows.
I'm suggesting a test to make sure that none of the URLs in the dataset scripts contain backslashes.
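Illustratively, the per-URL check can be as simple as the following (a sketch under an assumed callback signature, not the PR's actual code):

```python
def check_no_backslash(url: str) -> None:
    # A URL built with os.path.join on Windows would contain a backslash,
    # which is exactly what this check is meant to catch.
    assert "\\" not in url, f"Invalid URL (contains a backslash): {url}"

check_no_backslash("https://test.com/foo/bar.txt")     # passes
# check_no_backslash("https://test.com/foo\\bar.txt")  # would raise AssertionError
```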
The test works by adding a callback feature to the MockDownloadManager used to test the dataset scripts. In a download callback I just make sure that the URL is valid. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/905/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/905",
"html_url": "https://github.com/huggingface/datasets/pull/905",
"diff_url": "https://github.com/huggingface/datasets/pull/905.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/905.patch",
"merged_at": 1606690116000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/904/comments | https://api.github.com/repos/huggingface/datasets/issues/904/events | https://github.com/huggingface/datasets/pull/904 | 752,372,743 | MDExOlB1bGxSZXF1ZXN0NTI4NzA5NTUx | 904 | Very detailed step-by-step on how to add a dataset | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"Awesome! Thanks @lhoestq "
] | 1,606,495,521,000 | 1,606,730,187,000 | 1,606,730,186,000 | MEMBER | null | Add very detailed step-by-step instructions to add a new dataset to the library. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/904/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/904",
"html_url": "https://github.com/huggingface/datasets/pull/904",
"diff_url": "https://github.com/huggingface/datasets/pull/904.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/904.patch",
"merged_at": 1606730186000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/903/comments | https://api.github.com/repos/huggingface/datasets/issues/903/events | https://github.com/huggingface/datasets/pull/903 | 752,360,614 | MDExOlB1bGxSZXF1ZXN0NTI4Njk5NDQ3 | 903 | Fix URL with backslash in Windows | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | null | [
"@lhoestq I was indeed working on that... to make another commit on this feature branch...",
"But as you prefer... nevermind! :)",
"Ah what do you have in mind for the tests ? I was thinking of adding a check in the MockDownloadManager used for tests based on dummy data. I'm creating a PR right now, I'd be happy to have your opinion",
"Indeed I was thinking of something similar: monckeypatching the HTTP request...",
"Therefore, if you agree, I am removing all the rest of `os.path.join`, both from the code and the docs...",
"If you spot other `os.path.join` for urls in dataset scripts or metrics scripts feel free to fix them.\r\nIn the library itself (/src/datasets) it should be fine since there are tests and a windows CI, but if you have doubts of some usage of `os.path.join` somewhere, let me know.",
"Alright create the test in #905 .\r\nThe windows CI is failing for all the datasets that have bad usage of `os.path.join` for urls.\r\nThere are of course the ones you fixed in this PR (thanks again !) but I found others as well such as pg19 and blimp.\r\nYou can check the full list by looking at the CI failures of the commit 1ce3354",
"I am merging this one as well as #906 that should fix all of the datasets.\r\nThen I'll rebase #905 which adds the test that checks for bad urls and make sure it' all green now"
] | 1,606,494,384,000 | 1,606,500,286,000 | 1,606,500,286,000 | MEMBER | null | On Windows, `os.path.join` generates URLs containing backslashes when the first "path" does not end with a slash.
In general, `os.path.join` should be avoided when generating URLs (see the sketch after this record for safe alternatives). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/903/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/903",
"html_url": "https://github.com/huggingface/datasets/pull/903",
"diff_url": "https://github.com/huggingface/datasets/pull/903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/903.patch",
"merged_at": 1606500286000
} | true |
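As a rough illustration of the kind of fix applied in #903 (not the exact patch from that PR), URLs can be built with `posixpath.join`, plain string formatting, or `urllib.parse.urljoin` instead of `os.path.join`, which switches to the platform-specific separator on Windows:

```python
import posixpath
from urllib.parse import urljoin

base = "https://test.com/foo"

# On Windows, os.path.join inserts "\\" when the first part does not
# already end with "/", producing an invalid URL:
#   os.path.join(base, "bar.txt")  ->  'https://test.com/foo\\bar.txt'

# Safe alternatives that always use forward slashes:
url_posix = posixpath.join(base, "bar.txt")   # 'https://test.com/foo/bar.txt'
url_fstring = f"{base}/bar.txt"               # 'https://test.com/foo/bar.txt'
url_urljoin = urljoin(base + "/", "bar.txt")  # 'https://test.com/foo/bar.txt'
```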