url (string, 58-61 chars) | repository_url (string, 1 class) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 46-51 chars) | id (int64, 599M-1.5B) | node_id (string, 18-32 chars) | number (int64, 1-5.38k) | title (string, 1-276 chars) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (string, 20 chars) | updated_at (string, 20 chars) | closed_at (string, 20 chars, nullable) | author_association (string, 3 classes) | active_lock_reason (null) | draft (bool, 2 classes) | pull_request (dict) | body (string, 0-228k chars, nullable) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (null) | state_reason (string, 3 classes) | is_pull_request (bool, 1 class)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/497/comments | https://api.github.com/repos/huggingface/datasets/issues/497/events | https://github.com/huggingface/datasets/pull/497 | 677,057,116 | MDExOlB1bGxSZXF1ZXN0NDY2MjQ2NDQ3 | 497 | skip header in PAWS-X | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-08-11T17:26:25Z | 2020-08-19T09:50:02Z | 2020-08-19T09:50:01Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/497.diff",
"html_url": "https://github.com/huggingface/datasets/pull/497",
"merged_at": "2020-08-19T09:50:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/497.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/497"
} | This should fix #485
I also updated the `dataset_infos.json` file that is used to verify the integrity of the generated splits (the number of examples was reduced by one).
Note that there are new fields in `dataset_infos.json`, introduced in the latest release 0.4.0, corresponding to post-processing info. I removed them in this case when I ran `nlp-cli ./datasets/xtreme --save_infos` to keep backward compatibility (version 0.3.0 can't load these fields).
I think I'll change the logic so that `nlp-cli test` doesn't create these fields for datasets with no post-processing | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/497/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/497/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/496/comments | https://api.github.com/repos/huggingface/datasets/issues/496/events | https://github.com/huggingface/datasets/pull/496 | 677,016,998 | MDExOlB1bGxSZXF1ZXN0NDY2MjE1Mjg1 | 496 | fix bad type in overflow check | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-08-11T16:24:58Z | 2020-08-14T13:29:35Z | 2020-08-14T13:29:34Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/496.diff",
"html_url": "https://github.com/huggingface/datasets/pull/496",
"merged_at": "2020-08-14T13:29:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/496.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/496"
} | When writing an arrow file and inferring the features, the overflow check could fail if the first example had a `null` field.
This is because we were not using the inferred features to do this check, and we could end up with arrays that don't match because of a type mismatch (`null` vs `string` for example).
This should fix #482 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/496/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/496/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/495/comments | https://api.github.com/repos/huggingface/datasets/issues/495/events | https://github.com/huggingface/datasets/pull/495 | 676,959,289 | MDExOlB1bGxSZXF1ZXN0NDY2MTY5MTA3 | 495 | stack vectors in pytorch and tensorflow | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-08-11T15:12:53Z | 2020-08-12T09:30:49Z | 2020-08-12T09:30:48Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/495.diff",
"html_url": "https://github.com/huggingface/datasets/pull/495",
"merged_at": "2020-08-12T09:30:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/495.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/495"
} | When the format of a dataset is set to pytorch or tensorflow, and if the dataset has vectors in it, they were not stacked together as tensors when calling `dataset[i:i + batch_size][column]` or `dataset[column]`.
I added support for stacked tensors for both pytorch and tensorflow.
For ragged tensors, they are stacked only for tensorflow as pytorch doesn't support ragged tensors.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/495/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/495/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/494/comments | https://api.github.com/repos/huggingface/datasets/issues/494/events | https://github.com/huggingface/datasets/pull/494 | 676,886,955 | MDExOlB1bGxSZXF1ZXN0NDY2MTExOTQz | 494 | Fix numpy stacking | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-08-11T13:40:30Z | 2020-08-11T14:56:50Z | 2020-08-11T13:49:52Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/494.diff",
"html_url": "https://github.com/huggingface/datasets/pull/494",
"merged_at": "2020-08-11T13:49:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/494.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/494"
} | When getting items using a column name as a key, numpy arrays were not stacked.
I fixed that and added some tests.
There is another issue that still needs to be fixed though: when getting items using a column name as a key, pytorch tensors are not stacked (it outputs a list of tensors). This PR should help to fix this issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/494/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/494/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/493/comments | https://api.github.com/repos/huggingface/datasets/issues/493/events | https://github.com/huggingface/datasets/pull/493 | 676,527,351 | MDExOlB1bGxSZXF1ZXN0NDY1ODIxOTA0 | 493 | Fix wmt zh-en url | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [] | closed | false | null | [] | null | [] | 2020-08-11T02:14:52Z | 2020-08-11T02:22:28Z | 2020-08-11T02:22:12Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/493.diff",
"html_url": "https://github.com/huggingface/datasets/pull/493",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/493.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/493"
} | I verified that
```
wget https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00
```
runs in 2 minutes. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/493/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/493/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/492/comments | https://api.github.com/repos/huggingface/datasets/issues/492/events | https://github.com/huggingface/datasets/issues/492 | 676,495,064 | MDU6SXNzdWU2NzY0OTUwNjQ= | 492 | nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | [] | 2020-08-11T00:27:46Z | 2020-08-26T16:17:19Z | 2020-08-26T16:17:19Z | CONTRIBUTOR | null | null | null | Here's the code I'm trying to run:
```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("title")
dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir)
dset = nlp.concatenate_datasets([dset_wikipedia, dset_books])
```
This fails because they have different schemas, despite having identical features.
```python
assert dset_wikipedia.features == dset_books.features # True
assert dset_wikipedia._data.schema == dset_books._data.schema # False
```
The Wikipedia dataset has 'text: string', while the BookCorpus dataset has 'text: string not null'. Currently I hack together a working schema match with the following line, but it would be better if this was handled in Features themselves.
```python
dset_wikipedia._data = dset_wikipedia.data.cast(dset_books._data.schema)
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/492/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/492/timeline | null | completed | true |
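The workaround in the record above can be wrapped as a small helper before concatenation. This is a minimal sketch built only from the snippet in that issue body: the helper name `align_and_concatenate` is hypothetical, and it relies on the private `_data` attribute exactly as the issue does.
```python
import nlp

def align_and_concatenate(dset_a, dset_b):
    # Hypothetical helper: force the first dataset's arrow table onto the
    # second dataset's schema (nullable vs. non-nullable fields), then
    # concatenate, as suggested in the issue above.
    dset_a._data = dset_a._data.cast(dset_b._data.schema)
    return nlp.concatenate_datasets([dset_a, dset_b])
```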
https://api.github.com/repos/huggingface/datasets/issues/491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/491/comments | https://api.github.com/repos/huggingface/datasets/issues/491/events | https://github.com/huggingface/datasets/issues/491 | 676,486,275 | MDU6SXNzdWU2NzY0ODYyNzU= | 491 | No 0.4.0 release on GitHub | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | [] | 2020-08-10T23:59:57Z | 2020-08-11T16:50:07Z | 2020-08-11T16:50:07Z | CONTRIBUTOR | null | null | null | 0.4.0 was released on PyPi, but not on GitHub. This means [the documentation](https://huggingface.co/nlp/) is still displaying from 0.3.0, and that there's no tag to easily clone the 0.4.0 version of the repo. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/491/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/491/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/490/comments | https://api.github.com/repos/huggingface/datasets/issues/490/events | https://github.com/huggingface/datasets/issues/490 | 676,482,242 | MDU6SXNzdWU2NzY0ODIyNDI= | 490 | Loading preprocessed Wikipedia dataset requires apache_beam | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | [] | 2020-08-10T23:46:50Z | 2020-08-14T13:17:20Z | 2020-08-14T13:17:20Z | CONTRIBUTOR | null | null | null | Running
`nlp.load_dataset("wikipedia", "20200501.en", split="train", dir="/tmp/wikipedia")`
gives an error if apache_beam is not installed, stemming from
https://github.com/huggingface/nlp/blob/38eb2413de54ee804b0be81781bd65ac4a748ced/src/nlp/builder.py#L981-L988
This succeeded without the dependency in version 0.3.0. This seems like an unnecessary dependency to process some dataset info if you're using the already-preprocessed version. Could it be removed? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/490/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/490/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/489/comments | https://api.github.com/repos/huggingface/datasets/issues/489/events | https://github.com/huggingface/datasets/issues/489 | 676,456,257 | MDU6SXNzdWU2NzY0NTYyNTc= | 489 | ug | {
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timothyjlaurent",
"id": 2000204,
"login": "timothyjlaurent",
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timothyjlaurent"
} | [] | closed | false | null | [] | null | [] | 2020-08-10T22:33:03Z | 2020-08-10T22:55:14Z | 2020-08-10T22:33:40Z | NONE | null | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/489/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/489/timeline | null | completed | true |
|
https://api.github.com/repos/huggingface/datasets/issues/488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/488/comments | https://api.github.com/repos/huggingface/datasets/issues/488/events | https://github.com/huggingface/datasets/issues/488 | 676,299,993 | MDU6SXNzdWU2NzYyOTk5OTM= | 488 | issues with downloading datasets for wmt16 and wmt19 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | [] | 2020-08-10T17:32:51Z | 2022-10-04T17:46:59Z | 2022-10-04T17:46:58Z | MEMBER | null | null | null | I have encountered multiple issues while trying to:
```
import nlp
dataset = nlp.load_dataset('wmt16', 'ru-en')
metric = nlp.load_metric('wmt16')
```
1. I had to do `pip install -e ".[dev]" ` on master, currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and now it worked. So it must have been some outdated dependencies that `pip install -e ".[dev]" ` fixed.
2. it was downloading at 60kbs - almost 5 hours to get the dataset. It was downloading all pairs and not just the one I asked for.
I tried the same code with `wmt19` in parallel and it took a few secs to download and it only fetched data for the requested pair. (but it failed too, see below)
3. my machine crashed, and when I retried I got:
```
Traceback (most recent call last):
File "./download.py", line 9, in <module>
dataset = nlp.load_dataset('wmt16', 'ru-en')
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 549, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 449, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/stas/anaconda3/envs/main/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/home/stas/anaconda3/envs/main/lib/python3.7/os.py", line 221, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/stas/.cache/huggingface/datasets/wmt16/ru-en/1.0.0/4d8269cdd971ed26984a9c0e4a158e0c7afc8135fac8fb8ee43ceecf38fd422d.incomplete'
```
it can't handle resumes, but it doesn't allow a fresh start either. I had to delete it manually.
4. and finally when it downloaded the dataset, it then failed to fetch the metrics:
```
Traceback (most recent call last):
File "./download.py", line 15, in <module>
metric = nlp.load_metric('wmt16')
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 442, in load_metric
module_path, hash = prepare_module(path, download_config=download_config, dataset=False)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 258, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 198, in cached_path
local_files_only=download_config.local_files_only,
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 356, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/metrics/wmt16/wmt16.py
```
5. If I run the same code with `wmt19`, it fails too:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/488/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/488/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/487 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/487/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/487/comments | https://api.github.com/repos/huggingface/datasets/issues/487/events | https://github.com/huggingface/datasets/pull/487 | 676,143,029 | MDExOlB1bGxSZXF1ZXN0NDY1NTA1NjQy | 487 | Fix elasticsearch result ids returning as strings | {
"avatar_url": "https://avatars.githubusercontent.com/u/3595526?v=4",
"events_url": "https://api.github.com/users/sai-prasanna/events{/privacy}",
"followers_url": "https://api.github.com/users/sai-prasanna/followers",
"following_url": "https://api.github.com/users/sai-prasanna/following{/other_user}",
"gists_url": "https://api.github.com/users/sai-prasanna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sai-prasanna",
"id": 3595526,
"login": "sai-prasanna",
"node_id": "MDQ6VXNlcjM1OTU1MjY=",
"organizations_url": "https://api.github.com/users/sai-prasanna/orgs",
"received_events_url": "https://api.github.com/users/sai-prasanna/received_events",
"repos_url": "https://api.github.com/users/sai-prasanna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sai-prasanna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sai-prasanna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sai-prasanna"
} | [] | closed | false | null | [] | null | [] | 2020-08-10T13:37:11Z | 2020-08-31T10:42:46Z | 2020-08-31T10:42:46Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/487.diff",
"html_url": "https://github.com/huggingface/datasets/pull/487",
"merged_at": "2020-08-31T10:42:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/487.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/487"
} | I am using the latest elasticsearch binary and master of nlp. For me elasticsearch searches failed because the resultant "id_" returned for searches are strings, but our library assumes them to be integers. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/487/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/487/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/486/comments | https://api.github.com/repos/huggingface/datasets/issues/486/events | https://github.com/huggingface/datasets/issues/486 | 675,649,034 | MDU6SXNzdWU2NzU2NDkwMzQ= | 486 | Bookcorpus data contains pretokenized text | {
"avatar_url": "https://avatars.githubusercontent.com/u/99543?v=4",
"events_url": "https://api.github.com/users/orsharir/events{/privacy}",
"followers_url": "https://api.github.com/users/orsharir/followers",
"following_url": "https://api.github.com/users/orsharir/following{/other_user}",
"gists_url": "https://api.github.com/users/orsharir/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/orsharir",
"id": 99543,
"login": "orsharir",
"node_id": "MDQ6VXNlcjk5NTQz",
"organizations_url": "https://api.github.com/users/orsharir/orgs",
"received_events_url": "https://api.github.com/users/orsharir/received_events",
"repos_url": "https://api.github.com/users/orsharir/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/orsharir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orsharir/subscriptions",
"type": "User",
"url": "https://api.github.com/users/orsharir"
} | [] | closed | false | null | [] | null | [] | 2020-08-09T06:53:24Z | 2022-10-04T17:44:33Z | 2022-10-04T17:44:33Z | CONTRIBUTOR | null | null | null | It seem that the bookcoprus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end quotes, respectively.
On my own projects, I just run the data through NLTK's TreebankWordDetokenizer to reverse the tokenization (as best as possible). I think it would be beneficial to apply this transformation directly on your remote cached copy of the dataset. If you choose to do so, I would also suggest to use my fork of NLTK that fixes several bugs in their detokenizer (I've opened a pull-request, but they've yet to respond): https://github.com/nltk/nltk/pull/2575 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/486/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/486/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/485/comments | https://api.github.com/repos/huggingface/datasets/issues/485/events | https://github.com/huggingface/datasets/issues/485 | 675,595,393 | MDU6SXNzdWU2NzU1OTUzOTM= | 485 | PAWS dataset first item is header | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [] | closed | false | null | [] | null | [] | 2020-08-08T22:05:25Z | 2020-08-19T09:50:01Z | 2020-08-19T09:50:01Z | CONTRIBUTOR | null | null | null | ```
import nlp
dataset = nlp.load_dataset('xtreme', 'PAWS-X.en')
dataset['test'][0]
```
prints the following
```
{'label': 'label', 'sentence1': 'sentence1', 'sentence2': 'sentence2'}
```
dataset['test'][0] should probably be the first item in the dataset, not just a dictionary mapping the column names to themselves. Probably just need to ignore the first row in the dataset by default or something like that. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/485/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/485/timeline | null | completed | true |
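A hedged sketch of the kind of fix later applied for this problem (skipping the header line of the PAWS-X TSV files before yielding examples). The function name, column order, and field names below are illustrative assumptions, not the actual `xtreme` dataset script.
```python
import csv

def _generate_paws_x_examples(filepath):
    # Illustrative only: drop the header row so the first yielded example is
    # real data rather than {'sentence1': 'sentence1', 'sentence2': 'sentence2', ...}.
    with open(filepath, encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        next(reader)  # skip the header line (assumed: id, sentence1, sentence2, label)
        for idx, row in enumerate(reader):
            _, sentence1, sentence2, label = row
            yield idx, {"sentence1": sentence1, "sentence2": sentence2, "label": label}
```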
https://api.github.com/repos/huggingface/datasets/issues/484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/484/comments | https://api.github.com/repos/huggingface/datasets/issues/484/events | https://github.com/huggingface/datasets/pull/484 | 675,088,983 | MDExOlB1bGxSZXF1ZXN0NDY0NjY1NTU4 | 484 | update mirror for RT dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [] | closed | false | null | [] | null | [] | 2020-08-07T15:25:45Z | 2020-08-24T13:33:37Z | 2020-08-24T13:33:37Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/484.diff",
"html_url": "https://github.com/huggingface/datasets/pull/484",
"merged_at": "2020-08-24T13:33:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/484.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/484"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/484/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/484/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/483/comments | https://api.github.com/repos/huggingface/datasets/issues/483/events | https://github.com/huggingface/datasets/issues/483 | 675,080,694 | MDU6SXNzdWU2NzUwODA2OTQ= | 483 | rotten tomatoes movie review dataset taken down | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [] | closed | false | null | [] | null | [] | 2020-08-07T15:12:01Z | 2020-09-08T09:36:34Z | 2020-09-08T09:36:33Z | CONTRIBUTOR | null | null | null | In an interesting twist of events, the individual who created the movie review dataset seems to have left Cornell, and their webpage has been removed, along with the dataset itself (http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz). It's not downloadable anymore. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/483/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/483/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/482/comments | https://api.github.com/repos/huggingface/datasets/issues/482/events | https://github.com/huggingface/datasets/issues/482 | 674,851,147 | MDU6SXNzdWU2NzQ4NTExNDc= | 482 | Bugs : dataset.map() is frozen on ELI5 | {
"avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4",
"events_url": "https://api.github.com/users/ratthachat/events{/privacy}",
"followers_url": "https://api.github.com/users/ratthachat/followers",
"following_url": "https://api.github.com/users/ratthachat/following{/other_user}",
"gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ratthachat",
"id": 56621342,
"login": "ratthachat",
"node_id": "MDQ6VXNlcjU2NjIxMzQy",
"organizations_url": "https://api.github.com/users/ratthachat/orgs",
"received_events_url": "https://api.github.com/users/ratthachat/received_events",
"repos_url": "https://api.github.com/users/ratthachat/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ratthachat"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | 2020-08-07T08:23:35Z | 2020-08-12T14:13:46Z | 2020-08-11T23:55:15Z | NONE | null | null | null | Hi Huggingface Team!
Thank you guys once again for this amazing repo.
I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)
However, when I run `dataset.map()` on ELI5 to prepare `input_text, target_text`, `dataset.map` **freezes** within the first few hundred examples. On the contrary, this works totally fine on SQUAD (80,000 examples). Both `nlp` versions 0.3.0 and 0.4.0 cause the frozen process, and various `pyarrow` versions (0.16.0 / 0.17.0 / 1.0.0) show the same frozen behaviour.
Reproducible code can be found on [this colab notebook ](https://colab.research.google.com/drive/14wttOTv3ky74B_c0kv5WrbgQjCF2fYQk?usp=sharing), where I also show that the same mapping function works fine on SQUAD, so the problem is likely due to ELI5 somehow.
----------------------------------------
**More info:** instead of `map`, if I run a `for` loop and apply the function myself, there's no error and it finishes within 10 seconds. However, an `nlp` dataset is immutable (I couldn't manually assign a new key-value pair to the `dataset` object).
I also notice that SQUAD texts are quite clean while ELI5 texts contain many special characters; not sure if this is the cause? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/482/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/482/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/481 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/481/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/481/comments | https://api.github.com/repos/huggingface/datasets/issues/481/events | https://github.com/huggingface/datasets/pull/481 | 674,567,389 | MDExOlB1bGxSZXF1ZXN0NDY0MjM2MTA1 | 481 | Apply utf-8 encoding to all datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | [] | 2020-08-06T20:02:09Z | 2020-08-20T08:16:08Z | 2020-08-20T08:16:08Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/481.diff",
"html_url": "https://github.com/huggingface/datasets/pull/481",
"merged_at": "2020-08-20T08:16:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/481.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/481"
} | ## Description
This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function
```python
def apply_encoding_on_file_open(filepath: str):
"""Apply UTF-8 encoding for all instances where a non-binary file is opened."""
with open(filepath, 'r', encoding='utf-8') as input_file:
regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)")
input_text = input_file.read()
match = regexp.search(input_text)
if match:
output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text)
with open(filepath, 'w', encoding='utf-8') as output_file:
output_file.write(output)
```
to perform the replacement.
Note:
1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly
2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time.
3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/`
4. I have implemented a unit test that should catch missing encodings in future CI runs
Closes #468 and possibly #347 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/481/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/481/timeline | null | null | true |
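The `apply_encoding_on_file_open` helper shown in the record above can then be run over every dataset script. A minimal usage sketch, assuming the helper is in scope and that only files under `datasets/` are targeted:
```python
import pathlib

# Apply the regex-based rewrite to every Python file under datasets/.
for path in pathlib.Path("datasets").rglob("*.py"):
    apply_encoding_on_file_open(str(path))
```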
https://api.github.com/repos/huggingface/datasets/issues/480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/480/comments | https://api.github.com/repos/huggingface/datasets/issues/480/events | https://github.com/huggingface/datasets/pull/480 | 674,245,959 | MDExOlB1bGxSZXF1ZXN0NDYzOTcwNjQ2 | 480 | Column indexing hotfix | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao"
} | [] | closed | false | null | [] | null | [] | 2020-08-06T11:37:05Z | 2020-08-12T08:36:10Z | 2020-08-12T08:36:10Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/480.diff",
"html_url": "https://github.com/huggingface/datasets/pull/480",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/480.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/480"
} | As observed for example in #469 , currently `__getitem__` does not convert the data to the dataset format when indexing by column. This is a hotfix that imitates functional 0.3.0. code. In the future it'd probably be nice to have a test there. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/480/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/480/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/479 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/479/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/479/comments | https://api.github.com/repos/huggingface/datasets/issues/479/events | https://github.com/huggingface/datasets/pull/479 | 673,905,407 | MDExOlB1bGxSZXF1ZXN0NDYzNjkxMjA0 | 479 | add METEOR metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vegarab",
"id": 24683907,
"login": "vegarab",
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"repos_url": "https://api.github.com/users/vegarab/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vegarab"
} | [] | closed | false | null | [] | null | [] | 2020-08-05T23:13:00Z | 2020-08-19T13:39:09Z | 2020-08-19T13:39:09Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/479.diff",
"html_url": "https://github.com/huggingface/datasets/pull/479",
"merged_at": "2020-08-19T13:39:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/479.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/479"
} | Added the METEOR metric. Can be used like this:
```python
import nlp
meteor = nlp.load_metric('metrics/meteor')
meteor.compute(["some string", "some string"], ["some string", "some similar string"])
# {'meteor': 0.6411637931034483}
meteor.add("some string", "some string")
meteor.add("some string", "some similar string")
meteor.compute()
# {'meteor': 0.6411637931034483}
```
Uses [NLTK's implementation](https://www.nltk.org/api/nltk.translate.html#module-nltk.translate.meteor_score), [(source)](https://github.com/nltk/nltk/blob/develop/nltk/translate/meteor_score.py) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/479/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/479/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/478/comments | https://api.github.com/repos/huggingface/datasets/issues/478/events | https://github.com/huggingface/datasets/issues/478 | 673,178,317 | MDU6SXNzdWU2NzMxNzgzMTc= | 478 | Export TFRecord to GCP bucket | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [] | closed | false | null | [] | null | [] | 2020-08-05T01:08:32Z | 2020-08-05T01:21:37Z | 2020-08-05T01:21:36Z | NONE | null | null | null | Previously, I was writing TFRecords manually to GCP bucket with : `with tf.io.TFRecordWriter('gs://my_bucket/x.tfrecord')`
Since `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be directly written to GCP bucket.
`dataset.export('local.tfrecord')` works fine,
but `dataset.export('gs://my_bucket/x.tfrecord')` does not work.
There is no error message, I just can't find the file on my bucket...
---
Looking at the code, `nlp` is using `tf.data.experimental.TFRecordWriter`, while I was using `tf.io.TFRecordWriter`.
**What's the difference between the two? How can I write TFRecord files directly to a GCP bucket?**
@jarednielsen @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/478/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/478/timeline | null | completed | true |
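One possible answer to the question in the record above, as a hedged sketch: `tf.io.TFRecordWriter` goes through TensorFlow's filesystem layer, so it accepts `gs://` paths directly when GCS credentials are configured, and records are written one at a time. The bucket name and feature layout below are made up for illustration.
```python
import tensorflow as tf

# Hypothetical example: serialize one record and write it straight to a bucket.
example = tf.train.Example(
    features=tf.train.Features(
        feature={"value": tf.train.Feature(int64_list=tf.train.Int64List(value=[42]))}
    )
)
with tf.io.TFRecordWriter("gs://my_bucket/x.tfrecord") as writer:
    writer.write(example.SerializeToString())
```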
https://api.github.com/repos/huggingface/datasets/issues/477 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/477/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/477/comments | https://api.github.com/repos/huggingface/datasets/issues/477/events | https://github.com/huggingface/datasets/issues/477 | 673,142,143 | MDU6SXNzdWU2NzMxNDIxNDM= | 477 | Overview.ipynb throws exceptions with nlp 0.4.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23109219?v=4",
"events_url": "https://api.github.com/users/mandy-li/events{/privacy}",
"followers_url": "https://api.github.com/users/mandy-li/followers",
"following_url": "https://api.github.com/users/mandy-li/following{/other_user}",
"gists_url": "https://api.github.com/users/mandy-li/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mandy-li",
"id": 23109219,
"login": "mandy-li",
"node_id": "MDQ6VXNlcjIzMTA5MjE5",
"organizations_url": "https://api.github.com/users/mandy-li/orgs",
"received_events_url": "https://api.github.com/users/mandy-li/received_events",
"repos_url": "https://api.github.com/users/mandy-li/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mandy-li/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mandy-li/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mandy-li"
} | [] | closed | false | null | [] | null | [] | 2020-08-04T23:18:15Z | 2021-08-03T06:02:15Z | 2021-08-03T06:02:15Z | NONE | null | null | null | With nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exception:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-48907f2ad433> in <module>
----> 1 features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]}
2 labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])}
3 labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1])
4 tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
<ipython-input-5-48907f2ad433> in <dictcomp>(.0)
----> 1 features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]}
2 labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])}
3 labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1])
4 tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
AttributeError: 'numpy.ndarray' object has no attribute 'to_tensor' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/477/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/477/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/476 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/476/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/476/comments | https://api.github.com/repos/huggingface/datasets/issues/476/events | https://github.com/huggingface/datasets/pull/476 | 672,991,854 | MDExOlB1bGxSZXF1ZXN0NDYyOTMyMTgx | 476 | CheckList | {
"avatar_url": "https://avatars.githubusercontent.com/u/698010?v=4",
"events_url": "https://api.github.com/users/marcotcr/events{/privacy}",
"followers_url": "https://api.github.com/users/marcotcr/followers",
"following_url": "https://api.github.com/users/marcotcr/following{/other_user}",
"gists_url": "https://api.github.com/users/marcotcr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marcotcr",
"id": 698010,
"login": "marcotcr",
"node_id": "MDQ6VXNlcjY5ODAxMA==",
"organizations_url": "https://api.github.com/users/marcotcr/orgs",
"received_events_url": "https://api.github.com/users/marcotcr/received_events",
"repos_url": "https://api.github.com/users/marcotcr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marcotcr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcotcr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marcotcr"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [] | 2020-08-04T18:32:05Z | 2022-10-03T09:43:37Z | 2022-10-03T09:43:37Z | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/476.diff",
"html_url": "https://github.com/huggingface/datasets/pull/476",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/476.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/476"
} | Sorry for the large pull request.
- Added checklists as datasets. I can't run `test_load_real_dataset` (see #474), but I can load the datasets successfully as shown in the example notebook
- Added a checklist wrapper | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/476/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/476/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/475/comments | https://api.github.com/repos/huggingface/datasets/issues/475/events | https://github.com/huggingface/datasets/pull/475 | 672,884,595 | MDExOlB1bGxSZXF1ZXN0NDYyODQzMzQz | 475 | misc. bugs and quality of life | {
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joeddav",
"id": 9353833,
"login": "joeddav",
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"repos_url": "https://api.github.com/users/joeddav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joeddav"
} | [] | closed | false | null | [] | null | [] | 2020-08-04T15:32:29Z | 2020-08-17T21:14:08Z | 2020-08-17T21:14:07Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/475.diff",
"html_url": "https://github.com/huggingface/datasets/pull/475",
"merged_at": "2020-08-17T21:14:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/475.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/475"
} | A few misc. bugs and QOL improvements that I've come across in using the library. Let me know if you don't like any of them and I can adjust/remove them.
1. Printing datasets without a description field throws an error when formatting the `single_line_description`. This fixes that, and also adds some formatting to the repr to make it slightly more readable.
```
>>> print(list_datasets()[0])
nlp.ObjectInfo(
id='aeslc',
description='A collection of email messages of employees in the Enron Corporation.There are two features: - email_body: email body text. - subject_line: email subject text.',
files=[nlp.S3Object('aeslc.py'), nlp.S3Object('dataset_infos.json'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/dev/allen-p_inbox_29.subject'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/test/allen-p_inbox_24.subject'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/train/allen-p_inbox_20.subject'), nlp.S3Object('dummy/1.0.0/dummy_data.zip'), nlp.S3Object('urls_checksums/checksums.txt')]
)
```
2. Add an id-only option to `list_datasets` and `list_metrics` to allow the user to easily print out just the names of the datasets & metrics. I often found myself annoyed that this took so many keystrokes to do.
```python
[dataset.id for dataset in list_datasets()] # before
list_datasets(id_only=True) # after
```
3. Fix null-seed randomization caching. When using `train_test_split` and `shuffle`, the computation was being cached even without a seed or generator being passed. The result was that calling `.shuffle` more than once on the same dataset didn't do anything without passing a distinct seed or generator. Likewise with `train_test_split`.
4. Indexing by iterables of bool. I added support for passing an iterable of type bool to `_getitem` as a numpy/pandas-like indexing method. Let me know if you think it's redundant with `filter` (I know it's not optimal memory-wise), but I think it's nice to have as a lightweight alternative to do simple things without having to create a copy of the entire dataset, e.g.
```python
dataset[dataset['label'] == 0] # numpy-like bool indexing to look at instances with labels of 0
```
5. Add an `input_column` argument to `map` and `filter`, which allows you to filter/map on a particular column rather than passing the whole dict to the function. Also adds `fn_kwargs` to be passed to the function. I think these together make mapping much cleaner in many cases such as mono-column tokenization:
```python
# before
dataset = dataset.map(lambda batch: tokenizer(batch["text"]))
# after
dataset = dataset.map(tokenizer, input_column="text")
dataset = dataset.map(tokenizer, input_column="text", fn_kwargs={"truncation": True, "padding": True})
``` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/475/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/475/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/474 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/474/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/474/comments | https://api.github.com/repos/huggingface/datasets/issues/474/events | https://github.com/huggingface/datasets/issues/474 | 672,407,330 | MDU6SXNzdWU2NzI0MDczMzA= | 474 | test_load_real_dataset when config has BUILDER_CONFIGS that matter | {
"avatar_url": "https://avatars.githubusercontent.com/u/698010?v=4",
"events_url": "https://api.github.com/users/marcotcr/events{/privacy}",
"followers_url": "https://api.github.com/users/marcotcr/followers",
"following_url": "https://api.github.com/users/marcotcr/following{/other_user}",
"gists_url": "https://api.github.com/users/marcotcr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marcotcr",
"id": 698010,
"login": "marcotcr",
"node_id": "MDQ6VXNlcjY5ODAxMA==",
"organizations_url": "https://api.github.com/users/marcotcr/orgs",
"received_events_url": "https://api.github.com/users/marcotcr/received_events",
"repos_url": "https://api.github.com/users/marcotcr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marcotcr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcotcr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marcotcr"
} | [] | closed | false | null | [] | null | [] | 2020-08-03T23:46:36Z | 2020-09-07T14:53:13Z | 2020-09-07T14:53:13Z | NONE | null | null | null | If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non-default values), the config is not loaded during the test, which causes an error.
I think the problem is that `test_load_real_dataset` calls `load_dataset` with `data_dir=temp_data_dir` ([here](https://github.com/huggingface/nlp/blob/master/tests/test_dataset_common.py#L200)). This causes [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/builder.py#L201) to always be false because `config_kwargs` is not `None`. [This line](https://github.com/huggingface/nlp/blob/master/src/nlp/builder.py#L222) will be run instead, which doesn't use `BUILDER_CONFIGS`.
For an example, you can try running the test for lince:
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lince`
which yields
> E TypeError: __init__() missing 3 required positional arguments: 'colnames', 'classes', and 'label_column' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/474/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/474/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/473/comments | https://api.github.com/repos/huggingface/datasets/issues/473/events | https://github.com/huggingface/datasets/pull/473 | 672,007,247 | MDExOlB1bGxSZXF1ZXN0NDYyMTIwNzU4 | 473 | add DoQA dataset (ACL 2020) | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-08-03T11:26:52Z | 2020-09-10T17:19:11Z | 2020-09-03T11:44:15Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/473.diff",
"html_url": "https://github.com/huggingface/datasets/pull/473",
"merged_at": "2020-09-03T11:44:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/473.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/473"
} | add DoQA dataset (ACL 2020) http://ixa.eus/node/12931 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/473/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/473/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/472/comments | https://api.github.com/repos/huggingface/datasets/issues/472/events | https://github.com/huggingface/datasets/pull/472 | 672,000,745 | MDExOlB1bGxSZXF1ZXN0NDYyMTE1MjA4 | 472 | add crd3 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-08-03T11:15:02Z | 2020-08-03T11:22:10Z | 2020-08-03T11:22:09Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/472.diff",
"html_url": "https://github.com/huggingface/datasets/pull/472",
"merged_at": "2020-08-03T11:22:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/472.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/472"
} | opening new PR for CRD3 dataset (ACL2020) to fix the circle CI problems | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/472/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/472/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/471/comments | https://api.github.com/repos/huggingface/datasets/issues/471/events | https://github.com/huggingface/datasets/pull/471 | 671,996,423 | MDExOlB1bGxSZXF1ZXN0NDYyMTExNTU1 | 471 | add reuters21578 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-08-03T11:07:14Z | 2022-08-04T08:39:11Z | 2020-09-03T09:58:50Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/471.diff",
"html_url": "https://github.com/huggingface/datasets/pull/471",
"merged_at": "2020-09-03T09:58:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/471.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/471"
} | new PR to add the reuters21578 dataset and fix the circle CI problems.
Fix partially:
- #353
Subsequent PR after:
- #449 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/471/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/471/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/470/comments | https://api.github.com/repos/huggingface/datasets/issues/470/events | https://github.com/huggingface/datasets/pull/470 | 671,952,276 | MDExOlB1bGxSZXF1ZXN0NDYyMDc0MzQ0 | 470 | Adding IWSLT 2017 dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Narsil",
"id": 204321,
"login": "Narsil",
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"repos_url": "https://api.github.com/users/Narsil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Narsil"
} | [] | closed | false | null | [] | null | [] | 2020-08-03T09:52:39Z | 2020-09-07T12:33:30Z | 2020-09-07T12:33:30Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/470.diff",
"html_url": "https://github.com/huggingface/datasets/pull/470",
"merged_at": "2020-09-07T12:33:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/470.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/470"
} | Created an [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*.
```
Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English
Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair)
```
I'm unsure how to handle bilingual vs. multilingual. Given the `nlp` architecture, a Config option seems to be the way to go; however, it might be a bit confusing to have different language pairs with different options. Using just language pairs is not viable, as English-German exists in both.
Any opinion on how that should be done?
EDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist.
EDIT : Could be interesting for #438 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/470/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/470/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/469 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/469/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/469/comments | https://api.github.com/repos/huggingface/datasets/issues/469/events | https://github.com/huggingface/datasets/issues/469 | 671,876,963 | MDU6SXNzdWU2NzE4NzY5NjM= | 469 | invalid data type 'str' at _convert_outputs in arrow_dataset.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/30617486?v=4",
"events_url": "https://api.github.com/users/Murgates/events{/privacy}",
"followers_url": "https://api.github.com/users/Murgates/followers",
"following_url": "https://api.github.com/users/Murgates/following{/other_user}",
"gists_url": "https://api.github.com/users/Murgates/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Murgates",
"id": 30617486,
"login": "Murgates",
"node_id": "MDQ6VXNlcjMwNjE3NDg2",
"organizations_url": "https://api.github.com/users/Murgates/orgs",
"received_events_url": "https://api.github.com/users/Murgates/received_events",
"repos_url": "https://api.github.com/users/Murgates/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Murgates/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Murgates/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Murgates"
} | [] | open | false | null | [] | null | [] | 2020-08-03T07:48:29Z | 2020-10-22T09:04:26Z | null | NONE | null | null | null | I'm trying to build a multi-label text classifier model using the Transformers library.
I'm using Transformers with the `nlp` library to load the dataset, and while calling the trainer.train() method it throws the following error:
File "C:\***\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type 'str'
I'm using pyarrow 1.0.0, and I have a simple custom dataset with Text and integer Label columns.
Ex: Data
Text , Label #Column Header
I'm facing an Network issue, 1
I forgot my password, 2
Error StackTrace:
File "C:\**\transformers\trainer.py", line 492, in train
for step, inputs in enumerate(epoch_iterator):
File "C:\**\tqdm\std.py", line 1104, in __iter__
for obj in iterable:
File "C:\**\torch\utils\data\dataloader.py", line 345, in __next__
data = self._next_data()
File "C:\**\torch\utils\data\dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "C:\**\torch\utils\data\_utils\fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\**\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\**\nlp\arrow_dataset.py", line 414, in __getitem__
output_all_columns=self._output_all_columns,
File "C:\**\nlp\arrow_dataset.py", line 403, in _getitem
outputs, format_type=format_type, format_columns=format_columns, output_all_columns=output_all_columns
File "C:\**\nlp\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type 'str'
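A guess at the cause, for reference: the torch format seems to be applied to the raw text column too, and a torch tensor cannot be built from Python strings. An untested sketch of a possible fix — the column names follow the Text/Label example above, and `tokenizer` is assumed to be a Transformers tokenizer:
```python
# Untested sketch: tokenize first, then restrict the torch format to numeric columns,
# so the raw "Text" strings are never handed to torch.tensor().
encoded = dataset.map(lambda ex: tokenizer(ex["Text"], truncation=True, padding="max_length"), batched=True)
encoded.set_format(type="torch", columns=["input_ids", "attention_mask", "Label"])
```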
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/469/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/469/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/468/comments | https://api.github.com/repos/huggingface/datasets/issues/468/events | https://github.com/huggingface/datasets/issues/468 | 671,622,441 | MDU6SXNzdWU2NzE2MjI0NDE= | 468 | UnicodeDecodeError while loading PAN-X task of XTREME dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | [] | 2020-08-02T14:05:10Z | 2020-08-20T08:16:08Z | 2020-08-20T08:16:08Z | MEMBER | null | null | null | Hi 🤗 team!
## Description of the problem
I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset:
```
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-5-1d61f439b843> in <module>
----> 1 dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
528 ignore_verifications = ignore_verifications or save_infos
529 # Download/copy dataset processing script
--> 530 module_path, hash = prepare_module(path, download_config=download_config, dataset=True)
531
532 # Get dataset builder class from the processing script
/usr/local/lib/python3.6/dist-packages/nlp/load.py in prepare_module(path, download_config, dataset, force_local_path, **download_kwargs)
265
266 # Download external imports if needed
--> 267 imports = get_imports(local_path)
268 local_imports = []
269 library_imports = []
/usr/local/lib/python3.6/dist-packages/nlp/load.py in get_imports(file_path)
156 lines = []
157 with open(file_path, mode="r") as f:
--> 158 lines.extend(f.readlines())
159
160 logger.info("Checking %s for additional imports.", file_path)
/usr/lib/python3.6/encodings/ascii.py in decode(self, input, final)
24 class IncrementalDecoder(codecs.IncrementalDecoder):
25 def decode(self, input, final=False):
---> 26 return codecs.ascii_decode(input, self.errors)[0]
27
28 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 111: ordinal not in range(128)
```
## Steps to reproduce
Install from nlp's master branch
```python
pip install git+https://github.com/huggingface/nlp.git
```
then run
```python
from nlp import load_dataset
# AmazonPhotos.zip is located in data/
dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
```
## OS / platform details
- `nlp` version: latest from master
- Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
## Proposed solution
Either change [line 762](https://github.com/huggingface/nlp/blob/7ada00b1d62f94eee22a7df38c6b01e3f27194b7/datasets/xtreme/xtreme.py#L762) in `xtreme.py` to include UTF-8 encoding:
```
# old
with open(filepath) as f
# new
with open(filepath, encoding='utf-8') as f
```
or raise a warning that suggests setting the locale explicitly, e.g.
```python
import locale
locale.setlocale(locale.LC_ALL, 'C.UTF-8')
```
I have a preference for the first solution. Let me know if you agree and I'll be happy to implement the simple fix! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/468/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/468/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/467/comments | https://api.github.com/repos/huggingface/datasets/issues/467/events | https://github.com/huggingface/datasets/pull/467 | 671,580,010 | MDExOlB1bGxSZXF1ZXN0NDYxNzgwMzUy | 467 | DOCS: Fix typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"events_url": "https://api.github.com/users/Bharat123rox/events{/privacy}",
"followers_url": "https://api.github.com/users/Bharat123rox/followers",
"following_url": "https://api.github.com/users/Bharat123rox/following{/other_user}",
"gists_url": "https://api.github.com/users/Bharat123rox/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Bharat123rox",
"id": 13381361,
"login": "Bharat123rox",
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"organizations_url": "https://api.github.com/users/Bharat123rox/orgs",
"received_events_url": "https://api.github.com/users/Bharat123rox/received_events",
"repos_url": "https://api.github.com/users/Bharat123rox/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Bharat123rox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bharat123rox/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Bharat123rox"
} | [] | closed | false | null | [] | null | [] | 2020-08-02T08:59:37Z | 2020-08-02T13:52:27Z | 2020-08-02T09:18:54Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/467.diff",
"html_url": "https://github.com/huggingface/datasets/pull/467",
"merged_at": "2020-08-02T09:18:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/467.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/467"
} | Fix typo from dictionnary -> dictionary | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/467/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/467/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/466 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/466/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/466/comments | https://api.github.com/repos/huggingface/datasets/issues/466/events | https://github.com/huggingface/datasets/pull/466 | 670,766,891 | MDExOlB1bGxSZXF1ZXN0NDYxMDEzOTM0 | 466 | [METRICS] Various improvements on metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | [] | 2020-08-01T11:03:45Z | 2020-08-17T15:15:00Z | 2020-08-17T15:14:59Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/466.diff",
"html_url": "https://github.com/huggingface/datasets/pull/466",
"merged_at": "2020-08-17T15:14:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/466.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/466"
} | - Disallow the use of positional arguments to avoid `predictions` vs `references` mistakes
- Allow directly feeding numpy/pytorch/tensorflow/pandas objects to metrics | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/466/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/466/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/465/comments | https://api.github.com/repos/huggingface/datasets/issues/465/events | https://github.com/huggingface/datasets/pull/465 | 669,889,779 | MDExOlB1bGxSZXF1ZXN0NDYwMjEwODYw | 465 | Keep features after transform | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-31T14:43:21Z | 2020-07-31T18:27:33Z | 2020-07-31T18:27:32Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/465.diff",
"html_url": "https://github.com/huggingface/datasets/pull/465",
"merged_at": "2020-07-31T18:27:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/465.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/465"
} | When applying a transform like `map`, some features were lost (and inferred features were used).
This was the case for ClassLabel, Translation, etc.
To fix that, I made some modifications to the `ArrowWriter`:
- added the `update_features` parameter. When it's `True`, the features specified by the user (if any) can be updated with inferred features if their types don't match. The `map` transform sets `update_features=True` when writing to a cache file or buffer. Features won't change by default in `map`.
- added the `with_metadata` parameter. If `True`, the `features` (after update) will be written inside the metadata of the schema in this format:
```
{
"huggingface": {"features" : <serialized Features exactly like dataset_info.json>}
}
```
Then, once a dataset is instantiated without info/features, these metadata are used to set the features of the dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/465/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/465/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/464/comments | https://api.github.com/repos/huggingface/datasets/issues/464/events | https://github.com/huggingface/datasets/pull/464 | 669,767,381 | MDExOlB1bGxSZXF1ZXN0NDYwMTAxNDYz | 464 | Add rename, remove and cast in-place operations | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | [] | 2020-07-31T12:30:21Z | 2020-07-31T15:50:02Z | 2020-07-31T15:50:00Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/464.diff",
"html_url": "https://github.com/huggingface/datasets/pull/464",
"merged_at": "2020-07-31T15:50:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/464.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/464"
} | Add a bunch of in-place operations leveraging the Arrow back-end to rename and remove columns and cast to new features without using the more expensive `map` method.
These methods are added to `Dataset` as well as `DatasetDict`.
Added tests for these new methods and added the methods to the docs.
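For illustration, usage would look roughly like this (a sketch only — the exact method names and signatures are assumptions based on this description, following the trailing-underscore naming mentioned below):
```python
from nlp import load_dataset, Features, Value, ClassLabel

dataset = load_dataset("glue", "sst2", split="train")

# In-place column operations on the underlying Arrow table, without a map() pass:
dataset.rename_column_("sentence", "text")   # rename a column in place
dataset.remove_columns_(["idx"])             # drop columns in place
dataset.cast_(Features({"text": Value("string"),
                        "label": ClassLabel(names=["negative", "positive"])}))
```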
Naming follows the new pattern with a trailing underscore indicating in-place methods. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/464/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/464/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/463 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/463/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/463/comments | https://api.github.com/repos/huggingface/datasets/issues/463/events | https://github.com/huggingface/datasets/pull/463 | 669,735,455 | MDExOlB1bGxSZXF1ZXN0NDYwMDcyNjQ1 | 463 | Add dataset/mlsum | {
"avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4",
"events_url": "https://api.github.com/users/RachelKer/events{/privacy}",
"followers_url": "https://api.github.com/users/RachelKer/followers",
"following_url": "https://api.github.com/users/RachelKer/following{/other_user}",
"gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RachelKer",
"id": 36986299,
"login": "RachelKer",
"node_id": "MDQ6VXNlcjM2OTg2Mjk5",
"organizations_url": "https://api.github.com/users/RachelKer/orgs",
"received_events_url": "https://api.github.com/users/RachelKer/received_events",
"repos_url": "https://api.github.com/users/RachelKer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RachelKer"
} | [] | closed | false | null | [] | null | [] | 2020-07-31T11:50:52Z | 2020-08-24T14:54:42Z | 2020-08-24T14:54:42Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/463.diff",
"html_url": "https://github.com/huggingface/datasets/pull/463",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/463.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/463"
} | New pull request that should correct the previous errors.
The load_real_data test still fails because it is looking for a default dataset URL that does not exist; this does not happen when loading the dataset with load_dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/463/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/463/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/462/comments | https://api.github.com/repos/huggingface/datasets/issues/462/events | https://github.com/huggingface/datasets/pull/462 | 669,715,547 | MDExOlB1bGxSZXF1ZXN0NDYwMDU0NDgz | 462 | add DoQA (ACL 2020) dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-07-31T11:25:56Z | 2020-08-03T11:28:27Z | 2020-08-03T11:28:27Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/462.diff",
"html_url": "https://github.com/huggingface/datasets/pull/462",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/462.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/462"
} | adds DoQA (ACL 2020) dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/462/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/462/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/461 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/461/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/461/comments | https://api.github.com/repos/huggingface/datasets/issues/461/events | https://github.com/huggingface/datasets/pull/461 | 669,703,508 | MDExOlB1bGxSZXF1ZXN0NDYwMDQzNDY5 | 461 | Doqa | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-07-31T11:11:12Z | 2020-07-31T11:13:15Z | 2020-07-31T11:13:15Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/461.diff",
"html_url": "https://github.com/huggingface/datasets/pull/461",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/461.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/461"
} | add DoQA (ACL 2020) dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/461/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/461/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/460/comments | https://api.github.com/repos/huggingface/datasets/issues/460/events | https://github.com/huggingface/datasets/pull/460 | 669,585,256 | MDExOlB1bGxSZXF1ZXN0NDU5OTM2OTU2 | 460 | Fix KeyboardInterrupt in map and bad indices in select | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-31T08:57:15Z | 2020-07-31T11:32:19Z | 2020-07-31T11:32:18Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/460.diff",
"html_url": "https://github.com/huggingface/datasets/pull/460",
"merged_at": "2020-07-31T11:32:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/460.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/460"
} | If you interrupted a map function while it was writing, the cached file was not discarded.
Therefore the next time you called map, it was loading an incomplete arrow file.
We had the same issue with select if there was a bad index at one point.
To fix that I used temporary files that are renamed once everything is finished. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/460/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/460/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/459/comments | https://api.github.com/repos/huggingface/datasets/issues/459/events | https://github.com/huggingface/datasets/pull/459 | 669,545,437 | MDExOlB1bGxSZXF1ZXN0NDU5OTAxMjEy | 459 | [Breaking] Update Dataset and DatasetDict API | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | [] | 2020-07-31T08:11:33Z | 2020-08-26T08:28:36Z | 2020-08-26T08:28:35Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/459.diff",
"html_url": "https://github.com/huggingface/datasets/pull/459",
"merged_at": "2020-08-26T08:28:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/459.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/459"
} | This PR contains a few breaking changes so it's probably good to keep it for the next (major) release:
- rename the `flatten`, `drop` and `dictionary_encode_column` methods to `flatten_`, `drop_` and `dictionary_encode_column_` to indicate that these methods have in-place effects, as discussed in #166. From now on we should keep the convention of having a trailing underscore for methods which have an in-place effect. I also adopt the convention of not returning the (self) dataset for these methods. This is different from what PyTorch does, for instance (`model.to()` is in-place but returns the self model), but I feel like it's a safer approach in terms of UX.
- remove the `dataset.columns` property, which returns a low-level Apache Arrow object and should not be used by users. Similarly, remove `dataset.nbytes`, which we don't really want to expose in this bare-bones format.
- add a few more properties and methods to `DatasetDict` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/459/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/459/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/458 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/458/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/458/comments | https://api.github.com/repos/huggingface/datasets/issues/458/events | https://github.com/huggingface/datasets/pull/458 | 668,972,666 | MDExOlB1bGxSZXF1ZXN0NDU5Mzk5ODg2 | 458 | Install CoVal metric from github | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [] | 2020-07-30T16:59:25Z | 2020-07-31T13:56:33Z | 2020-07-31T13:56:33Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/458.diff",
"html_url": "https://github.com/huggingface/datasets/pull/458",
"merged_at": "2020-07-31T13:56:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/458.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/458"
} | Changed the import statements in `coval.py` to direct the user to install the original package from github if it's not already installed (the warning will only display properly after merging [PR455](https://github.com/huggingface/nlp/pull/455))
Also changed the function call to use named rather than positional arguments. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/458/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/458/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/457 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/457/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/457/comments | https://api.github.com/repos/huggingface/datasets/issues/457/events | https://github.com/huggingface/datasets/pull/457 | 668,898,386 | MDExOlB1bGxSZXF1ZXN0NDU5MzMyOTM1 | 457 | add set_format to DatasetDict + tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | [] | 2020-07-30T15:53:20Z | 2020-07-30T17:34:36Z | 2020-07-30T17:34:34Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/457.diff",
"html_url": "https://github.com/huggingface/datasets/pull/457",
"merged_at": "2020-07-30T17:34:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/457.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/457"
} | Add the `set_format` and `formated_as` and `reset_format` to `DatasetDict`.
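A hedged usage sketch of what this adds (not the PR's test code; it assumes the PR is merged and that the listed columns exist in the splits):
```python
import nlp

dsets = nlp.load_dataset("glue", "mrpc")  # a DatasetDict with train/validation/test splits
# apply one output format to every split at once
dsets.set_format(type="pandas", columns=["sentence1", "sentence2", "label"])
print(type(dsets["train"][:3]))  # a pandas DataFrame
dsets.reset_format()  # back to plain python objects for all splits
```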
Add tests to these for `Dataset` and `DatasetDict`.
Fix some bugs uncovered by the tests for `pandas` formatting. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/457/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/457/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/456 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/456/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/456/comments | https://api.github.com/repos/huggingface/datasets/issues/456/events | https://github.com/huggingface/datasets/pull/456 | 668,723,785 | MDExOlB1bGxSZXF1ZXN0NDU5MTc1MTY0 | 456 | add crd3(ACL 2020) dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-07-30T13:28:35Z | 2020-08-03T11:28:52Z | 2020-08-03T11:28:52Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/456.diff",
"html_url": "https://github.com/huggingface/datasets/pull/456",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/456.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/456"
} | This PR adds the **Critical Role Dungeons and Dragons Dataset** published at ACL 2020 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/456/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/456/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/455 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/455/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/455/comments | https://api.github.com/repos/huggingface/datasets/issues/455/events | https://github.com/huggingface/datasets/pull/455 | 668,037,965 | MDExOlB1bGxSZXF1ZXN0NDU4NTk4NTUw | 455 | Add bleurt | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [] | 2020-07-29T18:08:32Z | 2020-07-31T13:56:14Z | 2020-07-31T13:56:14Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/455.diff",
"html_url": "https://github.com/huggingface/datasets/pull/455",
"merged_at": "2020-07-31T13:56:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/455.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/455"
} | This PR adds the BLEURT metric to the library.
The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`.
Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. I think it's safer to have our users get a functioning metric when they call the default behavior; we'll address discrepancies in the issues/discussions if they come up.
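A hedged usage sketch (it assumes this PR is merged and the upstream `bleurt` package is installed; exact return format may differ):
```python
import nlp

bleurt = nlp.load_metric("bleurt")  # defaults to the bleurt-base-128 checkpoint
# bleurt = nlp.load_metric("bleurt", "bleurt-tiny-128")  # or pick another checkpoint via config_name
score = bleurt.compute(predictions=["hello there"], references=["hello there"])
print(score)
```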
In addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPI.
cc @ankparikh @tsellam | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/455/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/455/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/454 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/454/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/454/comments | https://api.github.com/repos/huggingface/datasets/issues/454/events | https://github.com/huggingface/datasets/pull/454 | 668,011,577 | MDExOlB1bGxSZXF1ZXN0NDU4NTc3MzA3 | 454 | Create SECURITY.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/56394989?v=4",
"events_url": "https://api.github.com/users/ChenZehong13/events{/privacy}",
"followers_url": "https://api.github.com/users/ChenZehong13/followers",
"following_url": "https://api.github.com/users/ChenZehong13/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenZehong13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ChenZehong13",
"id": 56394989,
"login": "ChenZehong13",
"node_id": "MDQ6VXNlcjU2Mzk0OTg5",
"organizations_url": "https://api.github.com/users/ChenZehong13/orgs",
"received_events_url": "https://api.github.com/users/ChenZehong13/received_events",
"repos_url": "https://api.github.com/users/ChenZehong13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ChenZehong13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenZehong13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ChenZehong13"
} | [] | closed | false | null | [] | null | [] | 2020-07-29T17:23:34Z | 2020-07-29T21:45:52Z | 2020-07-29T21:45:52Z | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/454.diff",
"html_url": "https://github.com/huggingface/datasets/pull/454",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/454.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/454"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/454/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/454/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/453 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/453/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/453/comments | https://api.github.com/repos/huggingface/datasets/issues/453/events | https://github.com/huggingface/datasets/pull/453 | 667,728,247 | MDExOlB1bGxSZXF1ZXN0NDU4MzQwNzky | 453 | add builder tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-29T10:22:07Z | 2020-07-29T11:14:06Z | 2020-07-29T11:14:05Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/453.diff",
"html_url": "https://github.com/huggingface/datasets/pull/453",
"merged_at": "2020-07-29T11:14:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/453.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/453"
} | I added `as_dataset` and `download_and_prepare` to the tests | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/453/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/453/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/452 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/452/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/452/comments | https://api.github.com/repos/huggingface/datasets/issues/452/events | https://github.com/huggingface/datasets/pull/452 | 667,498,295 | MDExOlB1bGxSZXF1ZXN0NDU4MTUzNjQy | 452 | Guardian authorship dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/25109412?v=4",
"events_url": "https://api.github.com/users/malikaltakrori/events{/privacy}",
"followers_url": "https://api.github.com/users/malikaltakrori/followers",
"following_url": "https://api.github.com/users/malikaltakrori/following{/other_user}",
"gists_url": "https://api.github.com/users/malikaltakrori/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/malikaltakrori",
"id": 25109412,
"login": "malikaltakrori",
"node_id": "MDQ6VXNlcjI1MTA5NDEy",
"organizations_url": "https://api.github.com/users/malikaltakrori/orgs",
"received_events_url": "https://api.github.com/users/malikaltakrori/received_events",
"repos_url": "https://api.github.com/users/malikaltakrori/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/malikaltakrori/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/malikaltakrori/subscriptions",
"type": "User",
"url": "https://api.github.com/users/malikaltakrori"
} | [] | closed | false | null | [] | null | [] | 2020-07-29T02:23:57Z | 2020-08-20T15:09:57Z | 2020-08-20T15:07:56Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/452.diff",
"html_url": "https://github.com/huggingface/datasets/pull/452",
"merged_at": "2020-08-20T15:07:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/452.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/452"
} | A new dataset: Guardian news articles for authorship attribution
**tests passed:**
python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship
**Tests failed:**
Real data: RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_guardian_authorship
output: __init__() missing 3 required positional arguments: 'train_folder', 'valid_folder', and 'tes...'
Remarks: This is the init function of my class. I am not sure why it passes in both my tests and with nlp-cli, but fails here. By the way, I ran this command with another 2 datasets and they failed:
* glue - OSError: Cannot find data file.
* newsgroup - FileNotFoundError: Local file datasets/newsgroup/dummy/18828_comp.graphics/3.0.0/dummy_data.zip doesn't exist
Thank you for letting us contribute to such a huge and important library!
EDIT:
I was able to fix the dummy_data issue. This dataset has around 14 configurations. I was testing with only 2, but their versions were not in a sequence; they were V1.0.0 and V.12.0.0. It seems that the testing code generates tests for all the versions from 0 to MAX, and was testing for versions (and dummy_data.zip files) that do not exist. I fixed that by changing the versions to 1 and 2.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/452/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/452/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/451 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/451/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/451/comments | https://api.github.com/repos/huggingface/datasets/issues/451/events | https://github.com/huggingface/datasets/pull/451 | 667,210,468 | MDExOlB1bGxSZXF1ZXN0NDU3OTIxNDMx | 451 | Fix csv/json/txt cache dir | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-28T16:30:51Z | 2020-07-29T13:57:23Z | 2020-07-29T13:57:22Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/451.diff",
"html_url": "https://github.com/huggingface/datasets/pull/451",
"merged_at": "2020-07-29T13:57:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/451.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/451"
} | The cache dir for csv/json/txt datasets was always the same. This is an issue because it should be different depending on the data files provided by the user.
To fix that, I added a line that uses the hash of the data files provided by the user to define the cache dir.
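A minimal sketch of the idea (not the exact code in this PR): derive part of the cache directory name from a hash of the user-provided data files, so different files no longer share a cache.
```python
import hashlib
import json

def data_files_cache_suffix(data_files):
    # stable hash of whatever the user passed as data_files
    payload = json.dumps(data_files, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]

print(data_files_cache_suffix({"train": "./a.csv"}))  # different suffix ...
print(data_files_cache_suffix({"train": "./b.csv"}))  # ... for different files
```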
This should fix #444 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/451/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/451/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/450 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/450/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/450/comments | https://api.github.com/repos/huggingface/datasets/issues/450/events | https://github.com/huggingface/datasets/pull/450 | 667,074,120 | MDExOlB1bGxSZXF1ZXN0NDU3ODA5ODA2 | 450 | add sogou_news | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-07-28T13:29:10Z | 2020-07-29T13:30:18Z | 2020-07-29T13:30:17Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/450.diff",
"html_url": "https://github.com/huggingface/datasets/pull/450",
"merged_at": "2020-07-29T13:30:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/450.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/450"
} | This PR adds the sogou news dataset
#353 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/450/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/450/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/449 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/449/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/449/comments | https://api.github.com/repos/huggingface/datasets/issues/449/events | https://github.com/huggingface/datasets/pull/449 | 666,898,923 | MDExOlB1bGxSZXF1ZXN0NDU3NjY0NjYx | 449 | add reuters21578 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-07-28T08:58:12Z | 2020-08-03T11:10:31Z | 2020-08-03T11:10:31Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/449.diff",
"html_url": "https://github.com/huggingface/datasets/pull/449",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/449.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/449"
} | This PR adds the `Reuters_21578` dataset https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html
#353
The dataset is a list of `.sgm` files, which are a bit different from XML files; indeed, `xml.etree` couldn't be used to read them. I treat them as text files (to avoid using an external library) and read them line by line (maybe there is a better way to do this, happy to get your opinion on it).
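A rough sketch of that line-by-line approach (not the PR's actual parsing code; the tag names follow the Reuters-21578 file format):
```python
def iter_raw_documents(sgm_path):
    """Yield each raw <REUTERS>...</REUTERS> block from one .sgm file."""
    current = []
    with open(sgm_path, encoding="latin-1") as f:  # the original files are not UTF-8
        for line in f:
            line = line.rstrip("\n")
            if line.startswith("<REUTERS"):
                current = [line]
            elif line.startswith("</REUTERS>"):
                current.append(line)
                yield "\n".join(current)
                current = []
            elif current:
                current.append(line)
```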
In the Readme file, 3 ways to split the dataset are given:
- The Modified Lewis ("ModLewis") Split: train, test and unused-set
- The Modified Apte ("ModApte") Split: train, test and unused-set
- The Modified Hayes ("ModHayes") Split: train and test
Here I consider the last one, as the readme file highlights that this split provides the ability to compare results with those of the first 2 splits.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/449/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/449/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/448 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/448/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/448/comments | https://api.github.com/repos/huggingface/datasets/issues/448/events | https://github.com/huggingface/datasets/pull/448 | 666,893,443 | MDExOlB1bGxSZXF1ZXN0NDU3NjYwMDU2 | 448 | add aws load metric test | {
"avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4",
"events_url": "https://api.github.com/users/idoh/events{/privacy}",
"followers_url": "https://api.github.com/users/idoh/followers",
"following_url": "https://api.github.com/users/idoh/following{/other_user}",
"gists_url": "https://api.github.com/users/idoh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/idoh",
"id": 5303103,
"login": "idoh",
"node_id": "MDQ6VXNlcjUzMDMxMDM=",
"organizations_url": "https://api.github.com/users/idoh/orgs",
"received_events_url": "https://api.github.com/users/idoh/received_events",
"repos_url": "https://api.github.com/users/idoh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idoh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/idoh"
} | [] | closed | false | null | [] | null | [] | 2020-07-28T08:50:22Z | 2020-07-28T15:02:27Z | 2020-07-28T15:02:27Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/448.diff",
"html_url": "https://github.com/huggingface/datasets/pull/448",
"merged_at": "2020-07-28T15:02:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/448.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/448"
} | Following issue #445
Added a test to recognize import errors of all metrics | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/448/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/448/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/447/comments | https://api.github.com/repos/huggingface/datasets/issues/447/events | https://github.com/huggingface/datasets/pull/447 | 666,842,115 | MDExOlB1bGxSZXF1ZXN0NDU3NjE2NDA0 | 447 | [BugFix] fix wrong import of DEFAULT_TOKENIZER | {
"avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4",
"events_url": "https://api.github.com/users/idoh/events{/privacy}",
"followers_url": "https://api.github.com/users/idoh/followers",
"following_url": "https://api.github.com/users/idoh/following{/other_user}",
"gists_url": "https://api.github.com/users/idoh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/idoh",
"id": 5303103,
"login": "idoh",
"node_id": "MDQ6VXNlcjUzMDMxMDM=",
"organizations_url": "https://api.github.com/users/idoh/orgs",
"received_events_url": "https://api.github.com/users/idoh/received_events",
"repos_url": "https://api.github.com/users/idoh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idoh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/idoh"
} | [] | closed | false | null | [] | null | [] | 2020-07-28T07:41:10Z | 2020-07-28T12:58:01Z | 2020-07-28T12:52:05Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/447.diff",
"html_url": "https://github.com/huggingface/datasets/pull/447",
"merged_at": "2020-07-28T12:52:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/447.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/447"
} | Fixed the path to `DEFAULT_TOKENIZER`
#445 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/447/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/447/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/446 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/446/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/446/comments | https://api.github.com/repos/huggingface/datasets/issues/446/events | https://github.com/huggingface/datasets/pull/446 | 666,837,351 | MDExOlB1bGxSZXF1ZXN0NDU3NjEyNTg5 | 446 | [BugFix] fix wrong import of DEFAULT_TOKENIZER | {
"avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4",
"events_url": "https://api.github.com/users/idoh/events{/privacy}",
"followers_url": "https://api.github.com/users/idoh/followers",
"following_url": "https://api.github.com/users/idoh/following{/other_user}",
"gists_url": "https://api.github.com/users/idoh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/idoh",
"id": 5303103,
"login": "idoh",
"node_id": "MDQ6VXNlcjUzMDMxMDM=",
"organizations_url": "https://api.github.com/users/idoh/orgs",
"received_events_url": "https://api.github.com/users/idoh/received_events",
"repos_url": "https://api.github.com/users/idoh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idoh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/idoh"
} | [] | closed | false | null | [] | null | [] | 2020-07-28T07:32:47Z | 2020-07-28T07:34:46Z | 2020-07-28T07:33:59Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/446.diff",
"html_url": "https://github.com/huggingface/datasets/pull/446",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/446.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/446"
} | Fixed the path to `DEFAULT_TOKENIZER`
#445 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/446/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/446/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/445 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/445/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/445/comments | https://api.github.com/repos/huggingface/datasets/issues/445/events | https://github.com/huggingface/datasets/issues/445 | 666,836,658 | MDU6SXNzdWU2NjY4MzY2NTg= | 445 | DEFAULT_TOKENIZER import error in sacrebleu | {
"avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4",
"events_url": "https://api.github.com/users/idoh/events{/privacy}",
"followers_url": "https://api.github.com/users/idoh/followers",
"following_url": "https://api.github.com/users/idoh/following{/other_user}",
"gists_url": "https://api.github.com/users/idoh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/idoh",
"id": 5303103,
"login": "idoh",
"node_id": "MDQ6VXNlcjUzMDMxMDM=",
"organizations_url": "https://api.github.com/users/idoh/orgs",
"received_events_url": "https://api.github.com/users/idoh/received_events",
"repos_url": "https://api.github.com/users/idoh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idoh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/idoh"
} | [] | closed | false | null | [] | null | [] | 2020-07-28T07:31:30Z | 2020-07-28T12:58:56Z | 2020-07-28T12:58:56Z | CONTRIBUTOR | null | null | null | Latest Version 0.3.0
When loading the metric "sacrebleu" there is an import error due to the wrong path

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/445/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/445/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/444 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/444/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/444/comments | https://api.github.com/repos/huggingface/datasets/issues/444/events | https://github.com/huggingface/datasets/issues/444 | 666,280,842 | MDU6SXNzdWU2NjYyODA4NDI= | 444 | Keep loading old file even I specify a new file in load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/10594453?v=4",
"events_url": "https://api.github.com/users/joshhu/events{/privacy}",
"followers_url": "https://api.github.com/users/joshhu/followers",
"following_url": "https://api.github.com/users/joshhu/following{/other_user}",
"gists_url": "https://api.github.com/users/joshhu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joshhu",
"id": 10594453,
"login": "joshhu",
"node_id": "MDQ6VXNlcjEwNTk0NDUz",
"organizations_url": "https://api.github.com/users/joshhu/orgs",
"received_events_url": "https://api.github.com/users/joshhu/received_events",
"repos_url": "https://api.github.com/users/joshhu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joshhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshhu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joshhu"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | 2020-07-27T13:08:06Z | 2020-07-29T13:57:22Z | 2020-07-29T13:57:22Z | NONE | null | null | null | I used load a file called 'a.csv' by
```
dataset = load_dataset('csv', data_file='./a.csv')
```
And after a while, I tried to load another csv called 'b.csv'
```
dataset = load_dataset('csv', data_file='./b.csv')
```
However, the new dataset still seems to contain the old 'a.csv' data instead of loading the new csv file.
Even worse, after I load a.csv once, the load_dataset function keeps loading 'a.csv' afterward.
Is this a cache problem?
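For anyone hitting this before a fix lands, a hedged workaround sketch (the cache path below is the library's default location and an assumption for this setup):
```python
import os
import shutil

# remove the cached arrow files for the csv builder so './b.csv' is processed from scratch
shutil.rmtree(os.path.expanduser("~/.cache/huggingface/datasets/csv"), ignore_errors=True)
```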
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/444/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/444/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/443/comments | https://api.github.com/repos/huggingface/datasets/issues/443/events | https://github.com/huggingface/datasets/issues/443 | 666,246,716 | MDU6SXNzdWU2NjYyNDY3MTY= | 443 | Cannot unpickle saved .pt dataset with torch.save()/load() | {
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vegarab",
"id": 24683907,
"login": "vegarab",
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"repos_url": "https://api.github.com/users/vegarab/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vegarab"
} | [] | closed | false | null | [] | null | [] | 2020-07-27T12:13:37Z | 2020-07-27T13:05:11Z | 2020-07-27T13:05:11Z | CONTRIBUTOR | null | null | null | Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling:
```python
>>> import torch
>>> import nlp
>>> squad = nlp.load_dataset("squad.py", split="train")
>>> squad
Dataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype='string', id=None)}, num_rows: 87599)
>>> squad = squad.map(create_features, batched=True)
>>> squad.set_format(type="torch", columns=["source_ids", "target_ids", "attention_mask"])
>>> torch.save(squad, "squad.pt")
>>> squad_pt = torch.load("squad.pt")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 773, in _legacy_load
result = unpickler.load()
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/splits.py", line 493, in __setitem__
raise ValueError("Cannot add elem. Use .add() instead.")
ValueError: Cannot add elem. Use .add() instead.
```
where `create_features` is a function that tokenizes the data using `batch_encode_plus` and returns a Dict with `input_ids`, `target_ids` and `attention_mask`.
```python
def create_features(batch):
source_text_encoding = tokenizer.batch_encode_plus(
batch["source_text"],
max_length=max_source_length,
pad_to_max_length=True,
truncation=True)
target_text_encoding = tokenizer.batch_encode_plus(
batch["target_text"],
max_length=max_target_length,
pad_to_max_length=True,
truncation=True)
features = {
"source_ids": source_text_encoding["input_ids"],
"target_ids": target_text_encoding["input_ids"],
"attention_mask": source_text_encoding["attention_mask"]
}
return features
```
I found a similar issue in [issue 5267 in the huggingface/transformers repo](https://github.com/huggingface/transformers/issues/5267) which was solved by downgrading to `nlp==0.2.0`. That did not solve this problem, however. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/443/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/443/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/442/comments | https://api.github.com/repos/huggingface/datasets/issues/442/events | https://github.com/huggingface/datasets/issues/442 | 666,201,810 | MDU6SXNzdWU2NjYyMDE4MTA= | 442 | [Suggestion] Glue Diagnostic Data with Labels | {
"avatar_url": "https://avatars.githubusercontent.com/u/3662782?v=4",
"events_url": "https://api.github.com/users/ggbetz/events{/privacy}",
"followers_url": "https://api.github.com/users/ggbetz/followers",
"following_url": "https://api.github.com/users/ggbetz/following{/other_user}",
"gists_url": "https://api.github.com/users/ggbetz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ggbetz",
"id": 3662782,
"login": "ggbetz",
"node_id": "MDQ6VXNlcjM2NjI3ODI=",
"organizations_url": "https://api.github.com/users/ggbetz/orgs",
"received_events_url": "https://api.github.com/users/ggbetz/received_events",
"repos_url": "https://api.github.com/users/ggbetz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ggbetz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggbetz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ggbetz"
} | [
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] | open | false | null | [] | null | [] | 2020-07-27T10:59:58Z | 2020-08-24T15:13:20Z | null | NONE | null | null | null | Hello! First of all, thanks for setting up this useful project!
I've just realised you provide the [Glue Diagnostics Data](https://huggingface.co/nlp/viewer/?dataset=glue&config=ax) without labels, indicating in the `GlueConfig` that you have only a test set.
Yet, the data with labels is available, too (see also [here](https://gluebenchmark.com/diagnostics#introduction)):
https://www.dropbox.com/s/ju7d95ifb072q9f/diagnostic-full.tsv?dl=1
Have you considered incorporating it? | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/442/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/442/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/441/comments | https://api.github.com/repos/huggingface/datasets/issues/441/events | https://github.com/huggingface/datasets/pull/441 | 666,148,413 | MDExOlB1bGxSZXF1ZXN0NDU3MDQyMjY3 | 441 | Add features parameter in load dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-27T09:50:01Z | 2020-07-30T12:51:17Z | 2020-07-30T12:51:16Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/441.diff",
"html_url": "https://github.com/huggingface/datasets/pull/441",
"merged_at": "2020-07-30T12:51:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/441.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/441"
} | Added `features` argument in `nlp.load_dataset`.
If the specified features don't match the type of the data, it raises a `ValueError`.
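A hedged usage sketch of the new argument (file name and feature types here are made up for illustration):
```python
import nlp

features = nlp.Features({
    "text": nlp.Value("string"),
    "label": nlp.ClassLabel(names=["negative", "positive"]),
})
dataset = nlp.load_dataset("csv", data_files="./reviews.csv", features=features)
# raises a ValueError if the csv columns cannot be read with these types
```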
It's a draft PR because #440 needs to be merged first. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/441/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/441/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/440/comments | https://api.github.com/repos/huggingface/datasets/issues/440/events | https://github.com/huggingface/datasets/pull/440 | 666,116,823 | MDExOlB1bGxSZXF1ZXN0NDU3MDE2MjQy | 440 | Fix user specified features in map | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-27T09:04:26Z | 2020-07-28T09:25:23Z | 2020-07-28T09:25:22Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/440.diff",
"html_url": "https://github.com/huggingface/datasets/pull/440",
"merged_at": "2020-07-28T09:25:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/440.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/440"
} | `.map` didn't keep the user specified features because of an issue in the writer.
The writer used to overwrite the user specified features with inferred features.
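A hedged illustration of the behavior being fixed (dataset and feature names are placeholders):
```python
import nlp

ds = nlp.load_dataset("csv", data_files="./reviews.csv", split="train")
features = nlp.Features({
    "text": nlp.Value("string"),
    "label": nlp.ClassLabel(names=["negative", "positive"]),
})
ds = ds.map(lambda example: example, features=features)
print(ds.features)  # before this fix, the writer replaced these with freshly inferred features
```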
I also added tests to make sure it doesn't happen again. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/440/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/440/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/439/comments | https://api.github.com/repos/huggingface/datasets/issues/439/events | https://github.com/huggingface/datasets/issues/439 | 665,964,673 | MDU6SXNzdWU2NjU5NjQ2NzM= | 439 | Issues: Adding a FAISS or Elastic Search index to a Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4",
"events_url": "https://api.github.com/users/nsankar/events{/privacy}",
"followers_url": "https://api.github.com/users/nsankar/followers",
"following_url": "https://api.github.com/users/nsankar/following{/other_user}",
"gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nsankar",
"id": 431890,
"login": "nsankar",
"node_id": "MDQ6VXNlcjQzMTg5MA==",
"organizations_url": "https://api.github.com/users/nsankar/orgs",
"received_events_url": "https://api.github.com/users/nsankar/received_events",
"repos_url": "https://api.github.com/users/nsankar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsankar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nsankar"
} | [] | closed | false | null | [] | null | [] | 2020-07-27T04:25:17Z | 2020-10-28T01:46:24Z | 2020-10-28T01:46:24Z | NONE | null | null | null | It seems the DPRContextEncoder, DPRContextEncoderTokenizer cited [in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) are not implemented? It did not work with the standard nlp installation. Also, I couldn't find or use it with the latest nlp install from github in Colab. Is there any dependency on the latest PyArrow 1.0.0? Is it yet to be made generally available? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/439/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/439/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/438/comments | https://api.github.com/repos/huggingface/datasets/issues/438/events | https://github.com/huggingface/datasets/issues/438 | 665,865,490 | MDU6SXNzdWU2NjU4NjU0OTA= | 438 | New Datasets: IWSLT15+, ITTB | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | [] | null | [] | 2020-07-26T21:43:04Z | 2020-08-24T15:12:15Z | null | CONTRIBUTOR | null | null | null | **Links:**
[iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html)
Don't know if that link is up to date.
[ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/)
**Motivation**: replicate mbart finetuning results (table below)

For future readers, we already have the following language pairs in the wmt namespaces:
```
wmt14: ['cs-en', 'de-en', 'fr-en', 'hi-en', 'ru-en']
wmt15: ['cs-en', 'de-en', 'fi-en', 'fr-en', 'ru-en']
wmt16: ['cs-en', 'de-en', 'fi-en', 'ro-en', 'ru-en', 'tr-en']
wmt17: ['cs-en', 'de-en', 'fi-en', 'lv-en', 'ru-en', 'tr-en', 'zh-en']
wmt18: ['cs-en', 'de-en', 'et-en', 'fi-en', 'kk-en', 'ru-en', 'tr-en', 'zh-en']
wmt19: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/438/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/438/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/437/comments | https://api.github.com/repos/huggingface/datasets/issues/437/events | https://github.com/huggingface/datasets/pull/437 | 665,597,176 | MDExOlB1bGxSZXF1ZXN0NDU2NjIzNjc3 | 437 | Fix XTREME PAN-X loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lvwerra",
"id": 8264887,
"login": "lvwerra",
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lvwerra"
} | [] | closed | false | null | [] | null | [] | 2020-07-25T14:44:57Z | 2020-07-30T08:28:15Z | 2020-07-30T08:28:15Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/437.diff",
"html_url": "https://github.com/huggingface/datasets/pull/437",
"merged_at": "2020-07-30T08:28:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/437.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/437"
} | Hi 🤗
In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sentence and their NER tags. This is also in agreement with the [NER example](https://github.com/huggingface/transformers/tree/master/examples/token-classification) in the transformers repo.
With the fix the output of the dataset should look as follows:
```python
>>> dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
>>> dataset['train'][0]
{'words': ['R.H.', 'Saunders', '(', 'St.', 'Lawrence', 'River', ')', '(', '968', 'MW', ')'],
'ner_tags': ['B-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O'],
'langs': ['en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en']}
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/437/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/437/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/436/comments | https://api.github.com/repos/huggingface/datasets/issues/436/events | https://github.com/huggingface/datasets/issues/436 | 665,582,167 | MDU6SXNzdWU2NjU1ODIxNjc= | 436 | Google Colab - load_dataset - PyArrow exception | {
"avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4",
"events_url": "https://api.github.com/users/nsankar/events{/privacy}",
"followers_url": "https://api.github.com/users/nsankar/followers",
"following_url": "https://api.github.com/users/nsankar/following{/other_user}",
"gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nsankar",
"id": 431890,
"login": "nsankar",
"node_id": "MDQ6VXNlcjQzMTg5MA==",
"organizations_url": "https://api.github.com/users/nsankar/orgs",
"received_events_url": "https://api.github.com/users/nsankar/received_events",
"repos_url": "https://api.github.com/users/nsankar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsankar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nsankar"
} | [] | closed | false | null | [] | null | [] | 2020-07-25T13:05:20Z | 2020-08-20T08:08:18Z | 2020-08-20T08:08:18Z | NONE | null | null | null | With the latest PyArrow 1.0.0 installed, I get the following exception. Restarting Colab has the same issue:
ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`.
The error goes away only when I install version 0.16.0,
i.e. `!pip install pyarrow==0.16.0` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/436/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/436/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/435/comments | https://api.github.com/repos/huggingface/datasets/issues/435/events | https://github.com/huggingface/datasets/issues/435 | 665,507,141 | MDU6SXNzdWU2NjU1MDcxNDE= | 435 | ImportWarning for pyarrow 1.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/18187806?v=4",
"events_url": "https://api.github.com/users/HanGuo97/events{/privacy}",
"followers_url": "https://api.github.com/users/HanGuo97/followers",
"following_url": "https://api.github.com/users/HanGuo97/following{/other_user}",
"gists_url": "https://api.github.com/users/HanGuo97/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HanGuo97",
"id": 18187806,
"login": "HanGuo97",
"node_id": "MDQ6VXNlcjE4MTg3ODA2",
"organizations_url": "https://api.github.com/users/HanGuo97/orgs",
"received_events_url": "https://api.github.com/users/HanGuo97/received_events",
"repos_url": "https://api.github.com/users/HanGuo97/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HanGuo97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HanGuo97/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HanGuo97"
} | [] | closed | false | null | [] | null | [] | 2020-07-25T03:44:39Z | 2020-09-08T17:57:15Z | 2020-08-03T16:37:32Z | NONE | null | null | null | The check introduced in the following PR raises an ImportWarning with `pyarrow==1.0.0`: https://github.com/huggingface/nlp/pull/265/files | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/435/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/435/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/434/comments | https://api.github.com/repos/huggingface/datasets/issues/434/events | https://github.com/huggingface/datasets/pull/434 | 665,477,638 | MDExOlB1bGxSZXF1ZXN0NDU2NTM3Njgz | 434 | Fixed check for pyarrow | {
"avatar_url": "https://avatars.githubusercontent.com/u/58701810?v=4",
"events_url": "https://api.github.com/users/nadahlberg/events{/privacy}",
"followers_url": "https://api.github.com/users/nadahlberg/followers",
"following_url": "https://api.github.com/users/nadahlberg/following{/other_user}",
"gists_url": "https://api.github.com/users/nadahlberg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nadahlberg",
"id": 58701810,
"login": "nadahlberg",
"node_id": "MDQ6VXNlcjU4NzAxODEw",
"organizations_url": "https://api.github.com/users/nadahlberg/orgs",
"received_events_url": "https://api.github.com/users/nadahlberg/received_events",
"repos_url": "https://api.github.com/users/nadahlberg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nadahlberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nadahlberg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nadahlberg"
} | [] | closed | false | null | [] | null | [] | 2020-07-25T00:16:53Z | 2020-07-25T06:36:34Z | 2020-07-25T06:36:34Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/434.diff",
"html_url": "https://github.com/huggingface/datasets/pull/434",
"merged_at": "2020-07-25T06:36:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/434.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/434"
} | Fix check for pyarrow in __init__.py. Previously would raise an error for pyarrow >= 1.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/434/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/434/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/433/comments | https://api.github.com/repos/huggingface/datasets/issues/433/events | https://github.com/huggingface/datasets/issues/433 | 665,311,025 | MDU6SXNzdWU2NjUzMTEwMjU= | 433 | How to reuse functionality of a (generic) dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4",
"events_url": "https://api.github.com/users/ArneBinder/events{/privacy}",
"followers_url": "https://api.github.com/users/ArneBinder/followers",
"following_url": "https://api.github.com/users/ArneBinder/following{/other_user}",
"gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArneBinder",
"id": 3375489,
"login": "ArneBinder",
"node_id": "MDQ6VXNlcjMzNzU0ODk=",
"organizations_url": "https://api.github.com/users/ArneBinder/orgs",
"received_events_url": "https://api.github.com/users/ArneBinder/received_events",
"repos_url": "https://api.github.com/users/ArneBinder/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArneBinder"
} | [] | closed | false | null | [] | null | [] | 2020-07-24T17:27:37Z | 2022-10-04T17:59:34Z | 2022-10-04T17:59:33Z | NONE | null | null | null | I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to reuse formats and loading functionality for datasets with a common format?
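For concreteness, this is the kind of reuse I have in mind (all names here are hypothetical, just to illustrate the question):
```python
import nlp
# hypothetical: pin the generic Brat builder (linked above) to one concrete corpus
from brat import Brat, BratConfig

class MyBratCorpus(Brat):
    BUILDER_CONFIGS = [
        BratConfig(
            name="my_corpus",
            description="A specific corpus annotated with the Brat tool",
            data_url="https://example.com/my_corpus.zip",  # placeholder
        )
    ]
```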
In my case, it took a bit of time to create the Brat dataset, and I think others would appreciate not having to think about that again. Also, I assume there are other widely used formats (e.g. CoNLL), so having this would really ease dataset onboarding and adoption of the library. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/433/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/433/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/432/comments | https://api.github.com/repos/huggingface/datasets/issues/432/events | https://github.com/huggingface/datasets/pull/432 | 665,234,340 | MDExOlB1bGxSZXF1ZXN0NDU2MzQxNDk3 | 432 | Fix handling of config files while loading datasets from multiple processes | {
"avatar_url": "https://avatars.githubusercontent.com/u/99543?v=4",
"events_url": "https://api.github.com/users/orsharir/events{/privacy}",
"followers_url": "https://api.github.com/users/orsharir/followers",
"following_url": "https://api.github.com/users/orsharir/following{/other_user}",
"gists_url": "https://api.github.com/users/orsharir/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/orsharir",
"id": 99543,
"login": "orsharir",
"node_id": "MDQ6VXNlcjk5NTQz",
"organizations_url": "https://api.github.com/users/orsharir/orgs",
"received_events_url": "https://api.github.com/users/orsharir/received_events",
"repos_url": "https://api.github.com/users/orsharir/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/orsharir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orsharir/subscriptions",
"type": "User",
"url": "https://api.github.com/users/orsharir"
} | [] | closed | false | null | [] | null | [] | 2020-07-24T15:10:57Z | 2020-08-01T17:11:42Z | 2020-07-30T08:25:28Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/432.diff",
"html_url": "https://github.com/huggingface/datasets/pull/432",
"merged_at": "2020-07-30T08:25:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/432.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/432"
} | When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in parallel, it creates a race condition when a process tries to load the file, often resulting in a JSON decoding exception because the file is only partially written.
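In sketch form (illustrative only, not the exact patch), the guarded copy that avoids rewriting identical files looks like this:
```python
import filecmp
import os
import shutil

def copy_if_changed(src, dst):
    # Skip the write when an identical file is already cached, so concurrent
    # readers never observe a half-written dataset_infos.json.
    if os.path.exists(dst) and filecmp.cmp(src, dst, shallow=False):
        return
    shutil.copy(src, dst)
```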
This pull request partially addresses this by checking whether the files are already identical before copying the downloaded copy over to the cached destination. There is still a race condition, but it is now less likely to occur if the library user takes some basic precautions, e.g. downloading all datasets to the cache before spawning multiple processes. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/432/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/432/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/431/comments | https://api.github.com/repos/huggingface/datasets/issues/431/events | https://github.com/huggingface/datasets/pull/431 | 665,044,416 | MDExOlB1bGxSZXF1ZXN0NDU2MTgyNDE2 | 431 | Specify split post processing + Add post processing resources downloading | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-24T09:29:19Z | 2020-07-31T09:05:04Z | 2020-07-31T09:05:03Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/431.diff",
"html_url": "https://github.com/huggingface/datasets/pull/431",
"merged_at": "2020-07-31T09:05:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/431.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/431"
} | Previously, if you tried to do
```python
from nlp import load_dataset
wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True)
```
Then you'd get an error `Index size should match Dataset size...`
This was because it was trying to use the full index (21M elements).
To fix that I made it so post processing resources can be named according to the split.
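For illustration (a sketch: the resource names are made up, and I'm assuming the new argument is the split name), a dataset script can then key its post processing resources on the split it is building:
```python
def _post_processing_resources(self, split):
    # hypothetical sketch: one index file per split instead of a single global one
    return {"embeddings_index": f"psgs_w100_with_nq_embeddings_{split}_index.faiss"}
```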
I'm going to add tests on post processing too.
Note that the CI will fail because I added a new argument to `_post_processing_resources`: the AWS version of wiki_dpr fails, and there's also an error saying that it is not synced (it'll be synced once this is merged):
```
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
FAILED tests/test_hf_gcp.py::TestDatasetSynced::test_script_synced_with_s3_wiki_dpr
```
EDIT: I made a change to ignore the script hash when locating the arrow files on GCS, so I removed the sync test. It only existed because of the hash logic for files on GCS. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/431/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/431/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/430 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/430/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/430/comments | https://api.github.com/repos/huggingface/datasets/issues/430/events | https://github.com/huggingface/datasets/pull/430 | 664,583,837 | MDExOlB1bGxSZXF1ZXN0NDU1ODAxOTI2 | 430 | add DatasetDict | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-23T15:43:49Z | 2020-08-04T01:01:53Z | 2020-07-29T09:06:22Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/430.diff",
"html_url": "https://github.com/huggingface/datasets/pull/430",
"merged_at": "2020-07-29T09:06:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/430.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/430"
} | ## Add DatasetDict
### Overview
When you call `load_dataset` it can return a dictionary of datasets if there are several splits (train/test for example).
If you wanted to apply dataset transforms you had to iterate over each split and apply the transform.
Instead of returning a dict, it now returns a `nlp.DatasetDict` object which inherits from dict and contains the same data as before, except that now users can call dataset transforms directly from the output, and they'll be applied on each split.
Before:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = {
    split_name: dataset.map(my_func) for split_name, dataset in squad.items()
}
print(squad.keys())
# dict_keys(['train', 'validation'])
```
Now:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = squad.map(my_func)
print(squad.keys())
# dict_keys(['train', 'validation'])
```
### Dataset transforms
`nlp.DatasetDict` implements the following dataset transforms:
- map
- filter
- sort
- shuffle
### Arguments
The arguments of the methods are the same except for split-specific arguments like `cache_file_name`.
For such arguments, the expected input is a dictionary `{split_name: argument_value}`
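For example (a minimal sketch of the call):
```python
squad = squad.map(
    my_func,
    cache_file_name={
        "train": "cache/train_processed.arrow",
        "validation": "cache/validation_processed.arrow",
    },
)
```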
It concerns:
- `cache_file_name` in map, filter, sort, shuffle
- `seed` and `generator` in shuffle | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/430/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/430/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/429/comments | https://api.github.com/repos/huggingface/datasets/issues/429/events | https://github.com/huggingface/datasets/pull/429 | 664,412,137 | MDExOlB1bGxSZXF1ZXN0NDU1NjU2MDk5 | 429 | mlsum | {
"avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4",
"events_url": "https://api.github.com/users/RachelKer/events{/privacy}",
"followers_url": "https://api.github.com/users/RachelKer/followers",
"following_url": "https://api.github.com/users/RachelKer/following{/other_user}",
"gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RachelKer",
"id": 36986299,
"login": "RachelKer",
"node_id": "MDQ6VXNlcjM2OTg2Mjk5",
"organizations_url": "https://api.github.com/users/RachelKer/orgs",
"received_events_url": "https://api.github.com/users/RachelKer/received_events",
"repos_url": "https://api.github.com/users/RachelKer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RachelKer"
} | [] | closed | false | null | [] | null | [] | 2020-07-23T11:52:39Z | 2020-07-31T11:46:20Z | 2020-07-31T11:46:20Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/429.diff",
"html_url": "https://github.com/huggingface/datasets/pull/429",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/429.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/429"
} | Hello,
The tests for load_real_data fail: since there is no default language subset to download, they look for a file that does not exist. This bug does not happen when using the load_dataset function, which asks you to specify a language if you do not, so I am submitting this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/429/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/429/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/428/comments | https://api.github.com/repos/huggingface/datasets/issues/428/events | https://github.com/huggingface/datasets/pull/428 | 664,367,086 | MDExOlB1bGxSZXF1ZXN0NDU1NjE3Nzcy | 428 | fix concatenate_datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-23T10:30:59Z | 2020-07-23T10:35:00Z | 2020-07-23T10:34:58Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/428.diff",
"html_url": "https://github.com/huggingface/datasets/pull/428",
"merged_at": "2020-07-23T10:34:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/428.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/428"
} | `concatenate_datasets` used to check that the different `nlp.Dataset.schema` attributes match, but this attribute was removed in #423. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/428/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/428/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/427/comments | https://api.github.com/repos/huggingface/datasets/issues/427/events | https://github.com/huggingface/datasets/pull/427 | 664,341,623 | MDExOlB1bGxSZXF1ZXN0NDU1NTk1Nzc3 | 427 | Allow sequence features for beam + add processed Natural Questions | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-23T09:52:41Z | 2020-07-23T13:09:30Z | 2020-07-23T13:09:29Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/427.diff",
"html_url": "https://github.com/huggingface/datasets/pull/427",
"merged_at": "2020-07-23T13:09:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/427.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/427"
} | ## Allow Sequence features for Beam Datasets + add Natural Questions
### The issue
The steps of beam datasets processing is the following:
- download the source files and send them in a remote storage (gcs)
- process the files using a beam runner (dataflow)
- save output in remote storage (gcs)
- convert output to arrow in remote storage (gcs)
However, it wasn't possible to process `natural_questions` because Apache Beam's processing outputs parquet files, and it's not yet possible to read parquet files with list features.
### The proposed solution
To allow sequence features for beam, I added a workaround that serializes the values using `json.dumps`, so that we end up with strings instead of the original features. Then, when the arrow file is created, the serialized objects are transformed back to normal with `json.loads`. Not sure if there's a better way to do it.
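In spirit (a simplified sketch, not the actual implementation), the round trip looks like this:
```python
import json

# before writing with beam: flatten nested sequence values into plain strings
example = {"tokens": ["R.H.", "Saunders"], "ner_tags": ["B-ORG", "I-ORG"]}
serialized = {k: json.dumps(v) for k, v in example.items()}

# when building the arrow file: restore the original nested structure
restored = {k: json.loads(v) for k, v in serialized.items()}
assert restored == example
```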
### Natural Questions
I was able to process NQ with it, and so I added the json infos file in this PR too.
The processed arrow files are also stored in gcs.
It allows you to load NQ with
```python
from nlp import load_dataset
nq = load_dataset("natural_questions") # download the 90GB arrow files from gcs and return the dataset
```
### Tests
I added a test case to make sure it works as expected.
Note that the CI will fail because I am updating `natural_questions.py`: it's not synced with the script on S3. It will be synced as soon as this PR is merged.
```
=========================== short test summary info ============================
FAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_natural_questions/default
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 3,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/427/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/427/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/426/comments | https://api.github.com/repos/huggingface/datasets/issues/426/events | https://github.com/huggingface/datasets/issues/426 | 664,203,897 | MDU6SXNzdWU2NjQyMDM4OTc= | 426 | [FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter | {
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timothyjlaurent",
"id": 2000204,
"login": "timothyjlaurent",
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timothyjlaurent"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [] | 2020-07-23T05:00:41Z | 2021-03-12T09:34:12Z | 2020-09-07T14:48:04Z | NONE | null | null | null | It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset, sending each shard to a process/thread/dask pool, and using the new `nlp.concatenate_datasets()` function to join them all together? | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/426/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/426/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/425 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/425/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/425/comments | https://api.github.com/repos/huggingface/datasets/issues/425/events | https://github.com/huggingface/datasets/issues/425 | 664,029,848 | MDU6SXNzdWU2NjQwMjk4NDg= | 425 | Correct data structure for PAN-X task in XTREME dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | [] | 2020-07-22T20:29:20Z | 2020-08-02T13:30:34Z | 2020-08-02T13:30:34Z | MEMBER | null | null | null | Hi 🤗 team!
## Description of the problem
Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows:
```python
from nlp import load_dataset
# AmazonPhotos.zip is located in data/
dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
dataset_train = dataset['train']
```
However, I am not sure that `load_dataset()` is returning the correct data structure for NER.
Currently, every row in `dataset_train` is of the form
```python
{'word': str, 'ner_tag': str, 'lang': str}
```
but I think we actually want something like
```python
{'words': List[str], 'ner_tags': List[str], 'langs': List[str]}
```
so that each row corresponds to a _sequence_ of words associated with each example. With the current data structure I do not think it is possible to transform `dataset_train` into a form suitable for training because we do not know the boundaries between examples.
Indeed, [this line](https://github.com/google-research/xtreme/blob/522434d1aece34131d997a97ce7e9242a51a688a/third_party/utils_tag.py#L58) in the XTREME repo, processes the texts as lists of sentences, tags, and languages.
## Proposed solution
Replace
```python
with open(filepath) as f:
    data = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for id_, row in enumerate(data):
        if row:
            lang, word = row[0].split(":")[0], row[0].split(":")[1]
            tag = row[1]
            yield id_, {"word": word, "ner_tag": tag, "lang": lang}
```
from [these lines](https://github.com/huggingface/nlp/blob/ce7d3a1d630b78fe27188d1706f3ea980e8eec43/datasets/xtreme/xtreme.py#L881-L887) of the `_generate_examples()` function with something like
```python
guid_index = 1
with open(filepath, encoding="utf-8") as f:
    words = []
    ner_tags = []
    langs = []
    for line in f:
        if line.startswith("-DOCSTART-") or line == "" or line == "\n":
            if words:
                yield guid_index, {"words": words, "ner_tags": ner_tags, "langs": langs}
                guid_index += 1
                words = []
                ner_tags = []
                langs = []
        else:
            # pan-x data is tab separated
            splits = line.split("\t")
            # strip out the en: prefix
            langs.append(splits[0][:2])
            words.append(splits[0][3:])
            if len(splits) > 1:
                ner_tags.append(splits[-1].replace("\n", ""))
            else:
                # examples have no label in test set
                ner_tags.append("O")
```
If you agree, @lvwerra or I would be happy to implement this and create a PR. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/425/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/425/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/424/comments | https://api.github.com/repos/huggingface/datasets/issues/424/events | https://github.com/huggingface/datasets/pull/424 | 663,858,552 | MDExOlB1bGxSZXF1ZXN0NDU1MTk4MTY0 | 424 | Web of science | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-07-22T15:38:31Z | 2020-07-23T14:27:58Z | 2020-07-23T14:27:56Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/424.diff",
"html_url": "https://github.com/huggingface/datasets/pull/424",
"merged_at": "2020-07-23T14:27:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/424.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/424"
} | This PR adds the Web of Science dataset.
#353 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/424/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/424/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/423 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/423/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/423/comments | https://api.github.com/repos/huggingface/datasets/issues/423/events | https://github.com/huggingface/datasets/pull/423 | 663,079,359 | MDExOlB1bGxSZXF1ZXN0NDU0NTU4OTA0 | 423 | Change features vs schema logic | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-21T14:52:47Z | 2020-07-25T09:08:34Z | 2020-07-23T10:15:17Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/423.diff",
"html_url": "https://github.com/huggingface/datasets/pull/423",
"merged_at": "2020-07-23T10:15:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/423.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/423"
} | ## New logic for `nlp.Features` in datasets
Previously, it was confusing to have `features` and pyarrow's `schema` in `nlp.Dataset`.
However `features` is supposed to be the front-facing object to define the different fields of a dataset, while `schema` is only used to write arrow files.
Changes:
- Remove `schema` field in `nlp.Dataset`
- Make `features` the source of truth to read/write examples
- `features` can no longer be `None` in `nlp.Dataset`
- Update `features` after each dataset transform such as `nlp.Dataset.map`
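To make the change concrete, here is a sketch of the intended user-facing behaviour (illustrative, not taken from the PR):
```python
from nlp import load_dataset

ds = load_dataset("squad", split="train")
print(ds.features)  # the single source of truth for the columns; there is no ds.schema anymore

ds = ds.map(lambda ex: {"title_len": len(ex["title"])})
print(ds.features.keys())  # now also contains 'title_len', updated by map
```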
Todo: change the tests to take these changes into account | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/423/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/423/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/422/comments | https://api.github.com/repos/huggingface/datasets/issues/422/events | https://github.com/huggingface/datasets/pull/422 | 663,028,497 | MDExOlB1bGxSZXF1ZXN0NDU0NTE3MDU2 | 422 | - Corrected encoding for IMDB. | {
"avatar_url": "https://avatars.githubusercontent.com/u/25091538?v=4",
"events_url": "https://api.github.com/users/ghazi-f/events{/privacy}",
"followers_url": "https://api.github.com/users/ghazi-f/followers",
"following_url": "https://api.github.com/users/ghazi-f/following{/other_user}",
"gists_url": "https://api.github.com/users/ghazi-f/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghazi-f",
"id": 25091538,
"login": "ghazi-f",
"node_id": "MDQ6VXNlcjI1MDkxNTM4",
"organizations_url": "https://api.github.com/users/ghazi-f/orgs",
"received_events_url": "https://api.github.com/users/ghazi-f/received_events",
"repos_url": "https://api.github.com/users/ghazi-f/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghazi-f/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghazi-f/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghazi-f"
} | [] | closed | false | null | [] | null | [] | 2020-07-21T13:46:59Z | 2020-07-22T16:02:53Z | 2020-07-22T16:02:53Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/422.diff",
"html_url": "https://github.com/huggingface/datasets/pull/422",
"merged_at": "2020-07-22T16:02:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/422.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/422"
} | The preparation phase (after the download phase) crashed on Windows because the charmap encoding could not decode certain characters. This change, suggested in issue #347, fixes it for the IMDB dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/422/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/422/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/421/comments | https://api.github.com/repos/huggingface/datasets/issues/421/events | https://github.com/huggingface/datasets/pull/421 | 662,213,864 | MDExOlB1bGxSZXF1ZXN0NDUzNzkzMzQ1 | 421 | Style change | {
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lordtt13",
"id": 35500534,
"login": "lordtt13",
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lordtt13"
} | [] | closed | false | null | [] | null | [] | 2020-07-20T20:08:29Z | 2020-07-22T16:08:40Z | 2020-07-22T16:08:39Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/421.diff",
"html_url": "https://github.com/huggingface/datasets/pull/421",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/421.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/421"
} | `make quality` and `make style` were run on the dataset scripts. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/421/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/421/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/420 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/420/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/420/comments | https://api.github.com/repos/huggingface/datasets/issues/420/events | https://github.com/huggingface/datasets/pull/420 | 662,029,782 | MDExOlB1bGxSZXF1ZXN0NDUzNjI5OTk2 | 420 | Better handle nested features | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-20T16:44:13Z | 2020-07-21T08:20:49Z | 2020-07-21T08:09:52Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/420.diff",
"html_url": "https://github.com/huggingface/datasets/pull/420",
"merged_at": "2020-07-21T08:09:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/420.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/420"
} | Changes:
- added arrow schema to features conversion (it's going to be useful to fix #342 )
- make flatten handle deep features (useful for tfrecords conversion in #339 )
- add tests for flatten and features conversions
- the reader now returns the kwargs to instantiate a Dataset (fix circular dependencies) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/420/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/420/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/419/comments | https://api.github.com/repos/huggingface/datasets/issues/419/events | https://github.com/huggingface/datasets/pull/419 | 661,974,747 | MDExOlB1bGxSZXF1ZXN0NDUzNTgxNzQz | 419 | EmoContext dataset add | {
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lordtt13",
"id": 35500534,
"login": "lordtt13",
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lordtt13"
} | [] | closed | false | null | [] | null | [] | 2020-07-20T15:48:45Z | 2020-07-24T08:22:01Z | 2020-07-24T08:22:00Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/419.diff",
"html_url": "https://github.com/huggingface/datasets/pull/419",
"merged_at": "2020-07-24T08:22:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/419.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/419"
} | EmoContext Dataset add
Signed-off-by: lordtt13 <[email protected]> | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/419/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/419/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/418/comments | https://api.github.com/repos/huggingface/datasets/issues/418/events | https://github.com/huggingface/datasets/issues/418 | 661,914,873 | MDU6SXNzdWU2NjE5MTQ4NzM= | 418 | Addition of google drive links to dl_manager | {
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lordtt13",
"id": 35500534,
"login": "lordtt13",
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lordtt13"
} | [] | closed | false | null | [] | null | [] | 2020-07-20T14:52:02Z | 2020-07-20T15:39:32Z | 2020-07-20T15:39:32Z | CONTRIBUTOR | null | null | null | Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to bypass the dl_manager (it downloaded nothing from the Google Drive links) and use gdown instead.
Here is my script:
```python
import json
import os

import gdown  # used instead of the dl_manager for the Google Drive downloads
import nlp

# _DESCRIPTION and _CITATION are defined earlier in the script (omitted here)

class EmoConfig(nlp.BuilderConfig):
    """BuilderConfig for EmoContext."""

    def __init__(self, **kwargs):
        """BuilderConfig for EmoContext.
        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(EmoConfig, self).__init__(**kwargs)

_TEST_URL = "https://drive.google.com/file/d/1Hn5ytHSSoGOC4sjm3wYy0Dh0oY_oXBbb/view?usp=sharing"
_TRAIN_URL = "https://drive.google.com/file/d/12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X/view?usp=sharing"

class EmoDataset(nlp.GeneratorBasedBuilder):
    """SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text. Version 1.0.0"""

    VERSION = nlp.Version("1.0.0")
    force = False

    def _info(self):
        return nlp.DatasetInfo(
            description=_DESCRIPTION,
            features=nlp.Features(
                {
                    "text": nlp.Value("string"),
                    "label": nlp.features.ClassLabel(names=["others", "happy", "sad", "angry"]),
                }
            ),
            supervised_keys=None,
            homepage="https://www.aclweb.org/anthology/S19-2005/",
            citation=_CITATION,
        )

    def _get_drive_url(self, url):
        # turn a Google Drive share link into a direct-download link
        base_url = "https://drive.google.com/uc?id="
        split_url = url.split("/")
        return base_url + split_url[5]

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        if not os.path.exists("emo-train.json") or self.force:
            gdown.download(self._get_drive_url(_TRAIN_URL), "emo-train.json", quiet=True)
        if not os.path.exists("emo-test.json") or self.force:
            gdown.download(self._get_drive_url(_TEST_URL), "emo-test.json", quiet=True)
        return [
            nlp.SplitGenerator(
                name=nlp.Split.TRAIN,
                gen_kwargs={
                    "filepath": "emo-train.json",
                    "split": "train",
                },
            ),
            nlp.SplitGenerator(
                name=nlp.Split.TEST,
                gen_kwargs={"filepath": "emo-test.json", "split": "test"},
            ),
        ]

    def _generate_examples(self, filepath, split):
        """Yields examples."""
        with open(filepath, "rb") as f:
            data = json.load(f)
            for id_, text, label in zip(data["text"].keys(), data["text"].values(), data["Label"].values()):
                yield id_, {
                    "text": text,
                    "label": label,
                }
```
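What I would like to be able to write instead is something like this (hypothetical, this is the requested behaviour, not something that works today):
```python
def _split_generators(self, dl_manager):
    # desired: the default dl_manager understands Google Drive share links itself
    train_path = dl_manager.download_and_extract(_TRAIN_URL)
    test_path = dl_manager.download_and_extract(_TEST_URL)
    return [
        nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": train_path, "split": "train"}),
        nlp.SplitGenerator(name=nlp.Split.TEST, gen_kwargs={"filepath": test_path, "split": "test"}),
    ]
```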
Can someone help me add support for Google Drive links to the default dl_manager, or add gdown as an alternative download manager? I'd like to add this dataset to nlp's official database. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/418/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/418/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/417/comments | https://api.github.com/repos/huggingface/datasets/issues/417/events | https://github.com/huggingface/datasets/pull/417 | 661,804,054 | MDExOlB1bGxSZXF1ZXN0NDUzNDMyODE5 | 417 | Fix docstrings multiple metrics instances | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-20T13:08:59Z | 2020-07-22T09:51:00Z | 2020-07-22T09:50:59Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/417.diff",
"html_url": "https://github.com/huggingface/datasets/pull/417",
"merged_at": "2020-07-22T09:50:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/417.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/417"
} | We change the docstrings of `nlp.Metric.compute`, `nlp.Metric.add` and `nlp.Metric.add_batch` depending on which metric is instantiated. However we had issues when instantiating multiple metrics (docstrings were duplicated).
This should fix #304 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/417/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/417/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/416/comments | https://api.github.com/repos/huggingface/datasets/issues/416/events | https://github.com/huggingface/datasets/pull/416 | 661,635,393 | MDExOlB1bGxSZXF1ZXN0NDUzMjg1NTM4 | 416 | Fix xtreme panx directory | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-20T10:09:17Z | 2020-07-21T08:15:46Z | 2020-07-21T08:15:44Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/416.diff",
"html_url": "https://github.com/huggingface/datasets/pull/416",
"merged_at": "2020-07-21T08:15:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/416.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/416"
} | Fix #412 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/416/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/416/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/415/comments | https://api.github.com/repos/huggingface/datasets/issues/415/events | https://github.com/huggingface/datasets/issues/415 | 660,687,076 | MDU6SXNzdWU2NjA2ODcwNzY= | 415 | Something is wrong with WMT 19 kk-en dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/32014649?v=4",
"events_url": "https://api.github.com/users/ChenghaoMou/events{/privacy}",
"followers_url": "https://api.github.com/users/ChenghaoMou/followers",
"following_url": "https://api.github.com/users/ChenghaoMou/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenghaoMou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ChenghaoMou",
"id": 32014649,
"login": "ChenghaoMou",
"node_id": "MDQ6VXNlcjMyMDE0NjQ5",
"organizations_url": "https://api.github.com/users/ChenghaoMou/orgs",
"received_events_url": "https://api.github.com/users/ChenghaoMou/received_events",
"repos_url": "https://api.github.com/users/ChenghaoMou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ChenghaoMou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenghaoMou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ChenghaoMou"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | open | false | null | [] | null | [] | 2020-07-19T08:18:51Z | 2020-07-20T09:54:26Z | null | NONE | null | null | null | The translation in the `train` set does not look right:
```
>>>import nlp
>>>from nlp import load_dataset
>>>dataset = load_dataset('wmt19', 'kk-en')
>>>dataset["train"]["translation"][0]
{'kk': 'Trumpian Uncertainty', 'en': 'Трамптық белгісіздік'}
>>>dataset["validation"]["translation"][0]
{'kk': 'Ақша-несие саясатының сценарийін қайта жазсақ', 'en': 'Rewriting the Monetary-Policy Script'}
``` | {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/415/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/415/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/414 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/414/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/414/comments | https://api.github.com/repos/huggingface/datasets/issues/414/events | https://github.com/huggingface/datasets/issues/414 | 660,654,013 | MDU6SXNzdWU2NjA2NTQwMTM= | 414 | from_dict delete? | {
"avatar_url": "https://avatars.githubusercontent.com/u/22817243?v=4",
"events_url": "https://api.github.com/users/hackerxiaobai/events{/privacy}",
"followers_url": "https://api.github.com/users/hackerxiaobai/followers",
"following_url": "https://api.github.com/users/hackerxiaobai/following{/other_user}",
"gists_url": "https://api.github.com/users/hackerxiaobai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hackerxiaobai",
"id": 22817243,
"login": "hackerxiaobai",
"node_id": "MDQ6VXNlcjIyODE3MjQz",
"organizations_url": "https://api.github.com/users/hackerxiaobai/orgs",
"received_events_url": "https://api.github.com/users/hackerxiaobai/received_events",
"repos_url": "https://api.github.com/users/hackerxiaobai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hackerxiaobai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hackerxiaobai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hackerxiaobai"
} | [] | closed | false | null | [] | null | [] | 2020-07-19T07:08:36Z | 2020-07-21T02:21:17Z | 2020-07-21T02:21:17Z | NONE | null | null | null | AttributeError: type object 'Dataset' has no attribute 'from_dict' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/414/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/414/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/413/comments | https://api.github.com/repos/huggingface/datasets/issues/413/events | https://github.com/huggingface/datasets/issues/413 | 660,063,655 | MDU6SXNzdWU2NjAwNjM2NTU= | 413 | Is there a way to download only NQ dev? | {
"avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4",
"events_url": "https://api.github.com/users/tholor/events{/privacy}",
"followers_url": "https://api.github.com/users/tholor/followers",
"following_url": "https://api.github.com/users/tholor/following{/other_user}",
"gists_url": "https://api.github.com/users/tholor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tholor",
"id": 1563902,
"login": "tholor",
"node_id": "MDQ6VXNlcjE1NjM5MDI=",
"organizations_url": "https://api.github.com/users/tholor/orgs",
"received_events_url": "https://api.github.com/users/tholor/received_events",
"repos_url": "https://api.github.com/users/tholor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tholor"
} | [] | closed | false | null | [] | null | [] | 2020-07-18T10:28:23Z | 2022-02-11T09:50:21Z | 2022-02-11T09:50:21Z | NONE | null | null | null | Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)?
As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data.
I tried
```
dataset = nlp.load_dataset('natural_questions', split="validation", beam_runner="DirectRunner")
```
But this still triggered a big download of presumably the whole dataset. Is there any way of doing this or are splits / slicing options only available after downloading?
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/413/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/413/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/412/comments | https://api.github.com/repos/huggingface/datasets/issues/412/events | https://github.com/huggingface/datasets/issues/412 | 660,047,139 | MDU6SXNzdWU2NjAwNDcxMzk= | 412 | Unable to load XTREME dataset from disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | [] | 2020-07-18T09:55:00Z | 2020-07-21T08:15:44Z | 2020-07-21T08:15:44Z | MEMBER | null | null | null | Hi 🤗 team!
## Description of the problem
Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark.
I have manually downloaded the `AmazonPhotos.zip` file from [here](https://www.amazon.com/clouddrive/share/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1) and am running into a `FileNotFoundError` when I point to the location of the dataset.
As far as I can tell, the problem is that `AmazonPhotos.zip` decompresses to `panx_dataset` and `load_dataset()` is not looking in the correct path:
```
# path where load_dataset is looking for fr.tar.gz
/root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/
# path where it actually exists
/root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/panx_dataset/
```
## Steps to reproduce the problem
1. Manually download the XTREME benchmark from [here](https://www.amazon.com/clouddrive/share/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1)
2. Run the following code snippet
```python
from nlp import load_dataset
# AmazonPhotos.zip is in the root of the folder
dataset = load_dataset("xtreme", "PAN-X.fr", data_dir='./')
```
3. Here is the stack trace
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-4-26786bb5fa93> in <module>
----> 1 dataset = load_dataset("xtreme", "PAN-X.fr", data_dir='./')
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
431 self._download_and_prepare(
--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
433 )
434 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
464 split_dict = SplitDict(dataset_name=self.name)
465 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 466 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
467 # Checksums verification
468 if verify_infos:
/usr/local/lib/python3.6/dist-packages/nlp/datasets/xtreme/b8c2ed3583a7a7ac60b503576dfed3271ac86757628897e945bd329c43b8a746/xtreme.py in _split_generators(self, dl_manager)
725 panx_dl_dir = dl_manager.extract(panx_path)
726 lang = self.config.name.split(".")[1]
--> 727 lang_folder = dl_manager.extract(os.path.join(panx_dl_dir, lang + ".tar.gz"))
728 return [
729 nlp.SplitGenerator(
/usr/local/lib/python3.6/dist-packages/nlp/utils/download_manager.py in extract(self, path_or_paths)
196 """
197 return map_nested(
--> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
199 )
200
/usr/local/lib/python3.6/dist-packages/nlp/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_tuple)
170 return tuple(mapped)
171 # Singleton
--> 172 return function(data_struct)
173
174
/usr/local/lib/python3.6/dist-packages/nlp/utils/download_manager.py in <lambda>(path)
196 """
197 return map_nested(
--> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
199 )
200
/usr/local/lib/python3.6/dist-packages/nlp/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
203 elif urlparse(url_or_filename).scheme == "":
204 # File, but it doesn't exist.
--> 205 raise FileNotFoundError("Local file {} doesn't exist".format(url_or_filename))
206 else:
207 # Something unknown
FileNotFoundError: Local file /root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/fr.tar.gz doesn't exist
```
## OS and hardware
```
- `nlp` version: 0.3.0
- Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/412/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/412/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/411/comments | https://api.github.com/repos/huggingface/datasets/issues/411/events | https://github.com/huggingface/datasets/pull/411 | 659,393,398 | MDExOlB1bGxSZXF1ZXN0NDUxMjQxOTQy | 411 | Sbf | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-07-17T16:19:45Z | 2020-07-21T09:13:46Z | 2020-07-21T09:13:45Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/411.diff",
"html_url": "https://github.com/huggingface/datasets/pull/411",
"merged_at": "2020-07-21T09:13:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/411.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/411"
} | This PR adds the Social Bias Frames Dataset (ACL 2020) .
dataset homepage: https://homes.cs.washington.edu/~msap/social-bias-frames/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/411/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/411/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/410/comments | https://api.github.com/repos/huggingface/datasets/issues/410/events | https://github.com/huggingface/datasets/pull/410 | 659,242,871 | MDExOlB1bGxSZXF1ZXN0NDUxMTEzMTI3 | 410 | 20newsgroup | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-07-17T13:07:57Z | 2020-07-20T07:05:29Z | 2020-07-20T07:05:28Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/410.diff",
"html_url": "https://github.com/huggingface/datasets/pull/410",
"merged_at": "2020-07-20T07:05:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/410.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/410"
} | Add 20Newsgroup dataset.
#353 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/410/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/410/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/409/comments | https://api.github.com/repos/huggingface/datasets/issues/409/events | https://github.com/huggingface/datasets/issues/409 | 659,128,611 | MDU6SXNzdWU2NTkxMjg2MTE= | 409 | train_test_split error: 'dict' object has no attribute 'deepcopy' | {
"avatar_url": "https://avatars.githubusercontent.com/u/20516801?v=4",
"events_url": "https://api.github.com/users/morganmcg1/events{/privacy}",
"followers_url": "https://api.github.com/users/morganmcg1/followers",
"following_url": "https://api.github.com/users/morganmcg1/following{/other_user}",
"gists_url": "https://api.github.com/users/morganmcg1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/morganmcg1",
"id": 20516801,
"login": "morganmcg1",
"node_id": "MDQ6VXNlcjIwNTE2ODAx",
"organizations_url": "https://api.github.com/users/morganmcg1/orgs",
"received_events_url": "https://api.github.com/users/morganmcg1/received_events",
"repos_url": "https://api.github.com/users/morganmcg1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/morganmcg1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morganmcg1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/morganmcg1"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | 2020-07-17T10:36:28Z | 2020-07-21T14:34:52Z | 2020-07-21T14:34:52Z | NONE | null | null | null | `train_test_split` is giving me an error when I try and call it:
`'dict' object has no attribute 'deepcopy'`
## To reproduce
```
dataset = load_dataset('glue', 'mrpc', split='train')
dataset = dataset.train_test_split(test_size=0.2)
```
## Full Stacktrace
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-12-feb740dbec9a> in <module>
1 dataset = load_dataset('glue', 'mrpc', split='train')
----> 2 dataset = dataset.train_test_split(test_size=0.2)
~/anaconda3/envs/fastai2_me/lib/python3.7/site-packages/nlp/arrow_dataset.py in train_test_split(self, test_size, train_size, shuffle, seed, generator, keep_in_memory, load_from_cache_file, train_cache_file_name, test_cache_file_name, writer_batch_size)
1032 "writer_batch_size": writer_batch_size,
1033 }
-> 1034 train_kwargs = cache_kwargs.deepcopy()
1035 train_kwargs["split"] = "train"
1036 test_kwargs = cache_kwargs.deepcopy()
AttributeError: 'dict' object has no attribute 'deepcopy'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/409/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/409/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/408/comments | https://api.github.com/repos/huggingface/datasets/issues/408/events | https://github.com/huggingface/datasets/pull/408 | 659,064,144 | MDExOlB1bGxSZXF1ZXN0NDUwOTU1MTE0 | 408 | Add tests datasets gcp | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-17T09:23:27Z | 2020-07-17T09:26:57Z | 2020-07-17T09:26:56Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/408.diff",
"html_url": "https://github.com/huggingface/datasets/pull/408",
"merged_at": "2020-07-17T09:26:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/408.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/408"
} | Some datasets are available on our google cloud storage in arrow format, so that the users don't need to process the data.
These tests make sure that they're always available. It also makes sure that their scripts are in sync between S3 and the repo.
This should avoid future issues like #407 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/408/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/408/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/407/comments | https://api.github.com/repos/huggingface/datasets/issues/407/events | https://github.com/huggingface/datasets/issues/407 | 658,672,736 | MDU6SXNzdWU2NTg2NzI3MzY= | 407 | MissingBeamOptions for Wikipedia 20200501.en | {
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mitchellgordon95",
"id": 7490438,
"login": "mitchellgordon95",
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mitchellgordon95"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | 2020-07-16T23:48:03Z | 2021-01-12T11:41:16Z | 2020-07-17T14:24:28Z | CONTRIBUTOR | null | null | null | There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available):
```
nlp.load_dataset('wikipedia', "20200501.en", split='train')
```
And now, having pulled master, I get:
```
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, total: 34.06 GiB) to /home/hltcoe/mgordon/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/76b0b2747b679bb0ee7a1621e50e5a6378477add0c662668a324a5bc07d516dd...
Traceback (most recent call last):
File "scripts/download.py", line 11, in <module>
fire.Fire(download_pretrain)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 468, in _Fire
target=component.__name__)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "scripts/download.py", line 6, in download_pretrain
nlp.load_dataset('wikipedia', "20200501.en", split='train')
File "/exp/mgordon/nlp/src/nlp/load.py", line 534, in load_dataset
save_infos=save_infos,
File "/exp/mgordon/nlp/src/nlp/builder.py", line 460, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/exp/mgordon/nlp/src/nlp/builder.py", line 870, in _download_and_prepare
"\n\t`{}`".format(usage_example)
nlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, S
park, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20200501.en', beam_runner='DirectRunner')`
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/407/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/407/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/406/comments | https://api.github.com/repos/huggingface/datasets/issues/406/events | https://github.com/huggingface/datasets/issues/406 | 658,581,764 | MDU6SXNzdWU2NTg1ODE3NjQ= | 406 | Faster Shuffling? | {
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mitchellgordon95",
"id": 7490438,
"login": "mitchellgordon95",
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mitchellgordon95"
} | [] | closed | false | null | [] | null | [] | 2020-07-16T21:21:53Z | 2020-09-07T14:45:26Z | 2020-09-07T14:45:25Z | CONTRIBUTOR | null | null | null | Consider shuffling bookcorpus:
```
dataset = nlp.load_dataset('bookcorpus', split='train')
dataset.shuffle()
```
According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`.
But I can also just write the lines to a text file:
```
batch_size = 100000
with open('tmp.txt', 'w+') as out_f:
    for i in tqdm(range(0, len(dataset), batch_size)):
        batch = dataset[i:i+batch_size]['text']
        print("\n".join(batch), file=out_f)
```
Which completes in a couple minutes, followed by `shuf tmp.txt > tmp2.txt` which completes in under a minute. And finally,
```
dataset = nlp.load_dataset('text', data_files='tmp2.txt')
```
Which completes in under 10 minutes. I read up on Apache Arrow this morning, and it seems like the columnar data format is not especially well-suited to shuffling rows, since moving items around requires a lot of book-keeping.
Is shuffle inherently slow, or am I just using it wrong? And if it is slow, would it make sense to try converting the data to a row-based format on disk and then shuffling? (Instead of calling select with a random permutation, as is currently done.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/406/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/406/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/405/comments | https://api.github.com/repos/huggingface/datasets/issues/405/events | https://github.com/huggingface/datasets/pull/405 | 658,580,192 | MDExOlB1bGxSZXF1ZXN0NDUwNTI1MTc3 | 405 | Make select() faster by batching reads | {
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mitchellgordon95",
"id": 7490438,
"login": "mitchellgordon95",
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mitchellgordon95"
} | [] | closed | false | null | [] | null | [] | 2020-07-16T21:19:45Z | 2020-07-17T17:05:44Z | 2020-07-17T16:51:26Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/405.diff",
"html_url": "https://github.com/huggingface/datasets/pull/405",
"merged_at": "2020-07-17T16:51:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/405.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/405"
} | Here's a benchmark:
```
dataset = nlp.load_dataset('bookcorpus', split='train')
start = time.time()
dataset.select(np.arange(1000), reader_batch_size=1, load_from_cache_file=False)
end = time.time()
print(f'{end - start}')
start = time.time()
dataset.select(np.arange(1000), reader_batch_size=1000, load_from_cache_file=False)
end = time.time()
print(f'{end - start}')
```
Without batching, select takes around 1.27 seconds. With batching, it takes around 0.01 seconds. The slowness was upsetting me because dataset.shuffle() was supposed to take ~27 hours for bookcorpus. Now with the fix it takes ~2.5 hours (which still is pretty slow, but I'll open a separate issue for that). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/405/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/405/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/404/comments | https://api.github.com/repos/huggingface/datasets/issues/404/events | https://github.com/huggingface/datasets/pull/404 | 658,400,987 | MDExOlB1bGxSZXF1ZXN0NDUwMzY4Mjg4 | 404 | Add seed in metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-16T17:27:05Z | 2020-07-20T10:12:35Z | 2020-07-20T10:12:34Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/404.diff",
"html_url": "https://github.com/huggingface/datasets/pull/404",
"merged_at": "2020-07-20T10:12:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/404.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/404"
} | With #361 we noticed that some metrics were not deterministic.
In this PR I allow the user to specify numpy's seed when instantiating a metric with `load_metric`.
The seed is set only when `compute` is called, and reset afterwards.
Moreover, when calling `compute` with the same metric instance (i.e. same experiment_id), the metric will always return the same results given the same inputs. This is the case even if the seed was not specified by the user, as the previous seed is going to be reused.
However, instantiating twice a metric (two different experiments) without specifying a seed can create different results. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/404/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/404/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/403/comments | https://api.github.com/repos/huggingface/datasets/issues/403/events | https://github.com/huggingface/datasets/pull/403 | 658,325,756 | MDExOlB1bGxSZXF1ZXN0NDUwMzAzNjI2 | 403 | return python objects instead of arrays by default | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-07-16T15:51:52Z | 2020-07-17T11:37:01Z | 2020-07-17T11:37:00Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/403.diff",
"html_url": "https://github.com/huggingface/datasets/pull/403",
"merged_at": "2020-07-17T11:37:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/403.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/403"
} | We were using to_pandas() to convert from arrow types, however it returns numpy arrays instead of python lists.
I fixed it by using to_pydict/to_pylist instead.
Fix #387
It was mentioned in https://github.com/huggingface/transformers/issues/5729
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/403/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/403/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/402/comments | https://api.github.com/repos/huggingface/datasets/issues/402/events | https://github.com/huggingface/datasets/pull/402 | 658,001,288 | MDExOlB1bGxSZXF1ZXN0NDUwMDI2NTE0 | 402 | Search qa | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-07-16T09:00:10Z | 2020-07-16T14:27:00Z | 2020-07-16T14:26:59Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/402.diff",
"html_url": "https://github.com/huggingface/datasets/pull/402",
"merged_at": "2020-07-16T14:26:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/402.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/402"
} | add SearchQA dataset
#336 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/402/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/402/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/401/comments | https://api.github.com/repos/huggingface/datasets/issues/401/events | https://github.com/huggingface/datasets/pull/401 | 657,996,252 | MDExOlB1bGxSZXF1ZXN0NDUwMDIyNTc0 | 401 | add web_questions | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-07-16T08:54:59Z | 2020-08-06T06:16:20Z | 2020-08-06T06:16:19Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/401.diff",
"html_url": "https://github.com/huggingface/datasets/pull/401",
"merged_at": "2020-08-06T06:16:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/401.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/401"
} | add Web Question dataset
#336
Maybe @patrickvonplaten you can help with the dummy_data structure? It's still broken. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/401/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/401/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/400/comments | https://api.github.com/repos/huggingface/datasets/issues/400/events | https://github.com/huggingface/datasets/pull/400 | 657,975,600 | MDExOlB1bGxSZXF1ZXN0NDUwMDA1MDU5 | 400 | Web questions | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-07-16T08:28:29Z | 2020-07-16T08:50:51Z | 2020-07-16T08:42:54Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/400.diff",
"html_url": "https://github.com/huggingface/datasets/pull/400",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/400.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/400"
} | add the WebQuestion dataset
#336 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/400/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/400/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/399/comments | https://api.github.com/repos/huggingface/datasets/issues/399/events | https://github.com/huggingface/datasets/pull/399 | 657,841,433 | MDExOlB1bGxSZXF1ZXN0NDQ5ODkxNTEy | 399 | Spelling mistake | {
"avatar_url": "https://avatars.githubusercontent.com/u/9410067?v=4",
"events_url": "https://api.github.com/users/BlancRay/events{/privacy}",
"followers_url": "https://api.github.com/users/BlancRay/followers",
"following_url": "https://api.github.com/users/BlancRay/following{/other_user}",
"gists_url": "https://api.github.com/users/BlancRay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BlancRay",
"id": 9410067,
"login": "BlancRay",
"node_id": "MDQ6VXNlcjk0MTAwNjc=",
"organizations_url": "https://api.github.com/users/BlancRay/orgs",
"received_events_url": "https://api.github.com/users/BlancRay/received_events",
"repos_url": "https://api.github.com/users/BlancRay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BlancRay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BlancRay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BlancRay"
} | [] | closed | false | null | [] | null | [] | 2020-07-16T04:37:58Z | 2020-07-16T06:49:48Z | 2020-07-16T06:49:37Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/399.diff",
"html_url": "https://github.com/huggingface/datasets/pull/399",
"merged_at": "2020-07-16T06:49:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/399.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/399"
} | In the "Formatting the dataset" part, "The two toehr modifications..." should be "The two other modifications...": the word "other" is misspelled as "toehr". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/399/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/399/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/398/comments | https://api.github.com/repos/huggingface/datasets/issues/398/events | https://github.com/huggingface/datasets/pull/398 | 657,511,962 | MDExOlB1bGxSZXF1ZXN0NDQ5NjE1OTk1 | 398 | Add inline links | {
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"events_url": "https://api.github.com/users/Bharat123rox/events{/privacy}",
"followers_url": "https://api.github.com/users/Bharat123rox/followers",
"following_url": "https://api.github.com/users/Bharat123rox/following{/other_user}",
"gists_url": "https://api.github.com/users/Bharat123rox/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Bharat123rox",
"id": 13381361,
"login": "Bharat123rox",
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"organizations_url": "https://api.github.com/users/Bharat123rox/orgs",
"received_events_url": "https://api.github.com/users/Bharat123rox/received_events",
"repos_url": "https://api.github.com/users/Bharat123rox/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Bharat123rox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bharat123rox/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Bharat123rox"
} | [] | closed | false | null | [] | null | [] | 2020-07-15T17:04:04Z | 2020-07-22T10:14:22Z | 2020-07-22T10:14:22Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/398.diff",
"html_url": "https://github.com/huggingface/datasets/pull/398",
"merged_at": "2020-07-22T10:14:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/398.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/398"
} | Add inline links to `Contributing.md` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/398/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/398/timeline | null | null | true |