url (string, len 61) | repository_url (string, 1 class) | labels_url (string, len 75) | comments_url (string, len 70) | events_url (string, len 68) | html_url (string, len 49-51) | id (int64, 778M-1.87B) | node_id (string, len 18-32) | number (int64, 1.68k-6.18k) | title (string, len 1-290) | user (dict) | labels (list, len 0-4) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list, len 0-4) | milestone (dict) | comments (sequence, len 0-30) | created_at (unknown) | updated_at (unknown) | closed_at (unknown) | author_association (string, 3 classes) | active_lock_reason (float64) | body (string, len 0-228k, nullable) | reactions (dict) | timeline_url (string, len 70) | performed_via_github_app (float64) | state_reason (string, 3 classes) | draft (float64, 0-1, nullable) | pull_request (dict) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1878/comments | https://api.github.com/repos/huggingface/datasets/issues/1878/events | https://github.com/huggingface/datasets/pull/1878 | 808,526,883 | MDExOlB1bGxSZXF1ZXN0NTczNTUyODk3 | 1,878 | Add LJ Speech dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anton-l",
"id": 26864830,
"login": "anton-l",
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"repos_url": "https://api.github.com/users/anton-l/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anton-l"
} | [] | closed | false | null | [] | null | [
"Hey @anton-l,\r\n\r\nThanks a lot for the very clean integration!\r\n\r\n1) I think we should now start having \"automatic-speech-recognition\" as a label in the dataset tagger (@yjernite is it easy to add?). But we can surely add this dataset with the tag you've added and then later change the label to `asr` \r\n\r\n2) That's perfect! Yeah good question - we're currently thinking about a better design with @lhoestq \r\n\r\n3) Again tagging @yjernite & @lhoestq here - guess we should add this license though!",
"Thanks @anton-l for adding this one :)\r\nAbout the points you mentioned:\r\n1. Sure as soon as we've updated the tag sets in https://github.com/huggingface/datasets-tagging/blob/main/task_set.json, we can update the tags in this dataset card and also in the other audio dataset card.\r\n2. For now we just try to have them as small as possible but we may switch to S3/LFS at one point indeed\r\n3. If it's not part of the license set at https://github.com/huggingface/datasets-tagging/blob/main/license_set.json we can add it to this license set\r\n\r\nFor now it's ok to have the other-* tags but we'll update them very soon",
"Let's merge this one and then we'll update the tags for the audio datasets. We'll probably also add something like this:\r\n```\r\ntype:\r\n- text\r\n- audio\r\n```\r\n\r\nThank you so much for adding this one, good job !"
] | "2021-02-15T13:10:42Z" | "2021-02-15T19:39:41Z" | "2021-02-15T14:18:09Z" | MEMBER | null | This PR adds the LJ Speech dataset (https://keithito.com/LJ-Speech-Dataset/)
As requested by #1841
The ASR format is based on #1767
There are a couple of quirks that should be addressed:
- I tagged this dataset as `other-other-automatic-speech-recognition` and `other-other-text-to-speech` (as classified by paperswithcode). Since the number of speech datasets is about to grow, maybe these categories should be added to the main list?
- Similarly to #1767 this dataset uses only a single dummy sample to reduce the zip size (`wav`s are quite heavy). Is there a plan to allow LFS or S3 usage for dummy data in the repo?
- The dataset is distributed under the Public Domain license, which is not used anywhere else in the repo, AFAIK. Do you think Public Domain is worth adding to the tagger app as well?
Pinging @patrickvonplaten to review | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1878/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1878/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1878.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1878",
"merged_at": "2021-02-15T14:18:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1878.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1878"
} | true |
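A quick loading sketch for the dataset added by the PR in the record above. The PR description does not spell out the feature names, so the snippet only assumes the Hub id `lj_speech` and inspects the schema at runtime instead of hard-coding columns.

```python
from datasets import load_dataset

# Load the ASR-formatted LJ Speech dataset added by this PR.
lj_speech = load_dataset("lj_speech", split="train")

print(lj_speech.features)  # inspect the schema rather than assuming column names
print(lj_speech[0])        # one transcription plus the path to its wav file
```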
https://api.github.com/repos/huggingface/datasets/issues/1877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1877/comments | https://api.github.com/repos/huggingface/datasets/issues/1877/events | https://github.com/huggingface/datasets/issues/1877 | 808,462,272 | MDU6SXNzdWU4MDg0NjIyNzI= | 1,877 | Allow concatenation of both in-memory and on-disk datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"I started working on this. My idea is to first add the pyarrow Table wrappers InMemoryTable and MemoryMappedTable that both implement what's necessary regarding copy/pickle. Then have another wrapper that takes the concatenation of InMemoryTable/MemoryMappedTable objects.\r\n\r\nWhat's important here is that concatenating two tables into one doesn't double the memory used (`total_allocated_bytes()` stays the same).",
"Hi @lhoestq @albertvillanova,\r\n\r\nI checked the linked issues and PR, this seems like a great idea. Would you mind elaborating on the in-memory and memory-mapped datasets? \r\nBased on my understanding, it is something like this, please correct me if I am wrong:\r\n1. For in-memory datasets, we don't have any dataset files so the entire dataset is pickled to the cache during loading, and then whenever required it is unpickled .\r\n2. For on-disk/memory-mapped datasets, we have the data files provided, so they can be re-loaded from the paths, and only the file-paths are stored while pickling.\r\n\r\nIf this is correct, will the feature also handle pickling/unpickling of a concatenated dataset? Will this be cached?\r\n\r\nThis also leads me to ask whether datasets are chunked during pickling? \r\n\r\nThanks,\r\nGunjan",
"Hi ! Yes you're totally right about your two points :)\r\n\r\nAnd in the case of a concatenated dataset, then we should reload each sub-table depending on whether it's in-memory or memory mapped. That means the dataset will be made of several blocks in order to keep track of what's from memory and what's memory mapped. This allows to pickle/unpickle concatenated datasets",
"Hi @lhoestq\r\n\r\nThanks, that sounds nice. Can you explain where the issue of the double memory may arise? Also, why is the existing `concatenate_datasets` not sufficient for this purpose?",
"Hi @lhoestq,\r\n\r\nWill the `add_item` feature also help with lazy writing (or no caching) during `map`/`filter`?",
"> Can you explain where the issue of the double memory may arise?\r\n\r\nWe have to keep each block (in-memory vs memory mapped) separated in order to be able to reload them with pickle.\r\nOn the other hand we also need to have the full table from mixed in-memory and memory mapped data in order to iterate or extract data conveniently. That means that each block is accessible twice: once in the full table, and once in the separated blocks. But since pyarrow tables concatenation doesn't double the memory, then building the full table doesn't cost memory which is what we want :)\r\n\r\n> Also, why is the existing concatenate_datasets not sufficient for this purpose?\r\n\r\nThe existing `concatenate_datasets` doesn't support having both in-memory and memory mapped data together (there's no fancy block separation logic). It works for datasets fully in-memory or fully memory mapped but not a mix of the two.\r\n\r\n> Will the add_item feature also help with lazy writing (or no caching) during map/filter?\r\n\r\nIt will enable the implementation of the fast, masked filter from this discussion: https://github.com/huggingface/datasets/issues/1949\r\nHowever I don't think this will affect map."
] | "2021-02-15T11:39:46Z" | "2021-03-26T16:51:58Z" | "2021-03-26T16:51:58Z" | MEMBER | null | This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
Maybe let's have a design that allows a Dataset to have a Table that can be rebuilt from heterogeneous sources like in-memory tables or on-disk tables? This could also be further extended in the future.
One idea would be to define a list of sources, where each source implements a way to reload its corresponding pyarrow Table.
Then the dataset would be the concatenation of all these tables.
Depending on the source type, the serialization using pickle would be different. In-memory data would be copied, while on-disk data would simply be replaced by the path to the data.
If you have some ideas you would like to share about the design/API feel free to do so :)
cc @albertvillanova | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1877/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1877/timeline | null | completed | null | null | false |
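A minimal sketch of the block-based design discussed in issue 1877 above, reusing the `InMemoryTable` / `MemoryMappedTable` names from the first comment plus an illustrative `ConcatenationTable`. This is not the code that was eventually merged; it only shows the pickling behaviour described in the thread (in-memory blocks copy their data, memory-mapped blocks keep just the file path), assuming on-disk files are in Arrow streaming format.

```python
import pyarrow as pa


class InMemoryTable:
    """Block whose data lives in RAM: pickling copies the table."""

    def __init__(self, table: pa.Table):
        self.table = table

    def __reduce__(self):
        # The table itself goes into the pickle payload (data is copied).
        return (InMemoryTable, (self.table,))


class MemoryMappedTable:
    """Block backed by an Arrow file on disk: pickling keeps only the path."""

    def __init__(self, path: str):
        self.path = path
        # Memory-map the file so nothing is copied into RAM
        # (assumes the Arrow streaming format used for on-disk datasets).
        self.table = pa.ipc.open_stream(pa.memory_map(path)).read_all()

    def __reduce__(self):
        # Only the path travels through pickle; data is reloaded on unpickling.
        return (MemoryMappedTable, (self.path,))


class ConcatenationTable:
    """Concatenation of heterogeneous blocks.

    pa.concat_tables builds a table over the chunks of its inputs, so keeping
    both the separated blocks and the combined table around does not double
    the memory footprint, which is the property discussed in the thread.
    """

    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.table = pa.concat_tables([block.table for block in self.blocks])

    def __reduce__(self):
        # Delegate to the blocks: in-memory ones serialize their data,
        # memory-mapped ones serialize only their paths.
        return (ConcatenationTable, (self.blocks,))
```

Unpickling a `ConcatenationTable` rebuilds each block from whatever was serialized for it, which is what makes a mixed in-memory/on-disk dataset picklable.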
https://api.github.com/repos/huggingface/datasets/issues/1876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1876/comments | https://api.github.com/repos/huggingface/datasets/issues/1876/events | https://github.com/huggingface/datasets/issues/1876 | 808,025,859 | MDU6SXNzdWU4MDgwMjU4NTk= | 1,876 | load_dataset("multi_woz_v22") NonMatchingChecksumError | {
"avatar_url": "https://avatars.githubusercontent.com/u/5945326?v=4",
"events_url": "https://api.github.com/users/Vincent950129/events{/privacy}",
"followers_url": "https://api.github.com/users/Vincent950129/followers",
"following_url": "https://api.github.com/users/Vincent950129/following{/other_user}",
"gists_url": "https://api.github.com/users/Vincent950129/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Vincent950129",
"id": 5945326,
"login": "Vincent950129",
"node_id": "MDQ6VXNlcjU5NDUzMjY=",
"organizations_url": "https://api.github.com/users/Vincent950129/orgs",
"received_events_url": "https://api.github.com/users/Vincent950129/received_events",
"repos_url": "https://api.github.com/users/Vincent950129/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Vincent950129/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vincent950129/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Vincent950129"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting !\r\nThis is due to the changes made in the data files in the multiwoz repo: https://github.com/budzianowski/multiwoz/pull/59\r\nI'm opening a PR to update the checksums of the data files.",
"I just merged the fix. It will be available in the new release of `datasets` later today.\r\nYou'll be able to get the new version with\r\n```\r\npip install --upgrade datasets\r\n```",
"Hi, I still meet the error when loading the datasets after upgradeing datasets.\r\n\r\nraise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json']",
"This must be related to https://github.com/budzianowski/multiwoz/pull/72\r\nThose files have changed, let me update the checksums for this dataset.\r\n\r\nFor now you can use `ignore_verifications=True` in `load_dataset` to skip the checksum verification."
] | "2021-02-14T19:14:48Z" | "2021-08-04T18:08:00Z" | "2021-08-04T18:08:00Z" | NONE | null | Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError.
To reproduce:
`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`
This will give the following error:
```
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_003.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_004.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_005.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_006.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_007.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_008.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_009.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_010.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_012.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_013.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_014.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_015.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_016.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_017.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_002.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_002.json']
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1876/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1876/timeline | null | completed | null | null | false |
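The workaround named in the last comment of issue 1876 above, as a runnable sketch. `ignore_verifications=True` is the flag mentioned in that comment for the `datasets` version discussed here; later releases may expose it under a different name.

```python
from datasets import load_dataset

# Fails with NonMatchingChecksumError while the recorded checksums are stale:
# load_dataset("multi_woz_v22", "v2.2_active_only", split="train")

# Workaround from the maintainer's comment: skip the checksum verification.
dataset = load_dataset(
    "multi_woz_v22",
    "v2.2_active_only",
    split="train",
    ignore_verifications=True,
)
print(dataset)
```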
https://api.github.com/repos/huggingface/datasets/issues/1875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1875/comments | https://api.github.com/repos/huggingface/datasets/issues/1875/events | https://github.com/huggingface/datasets/pull/1875 | 807,887,267 | MDExOlB1bGxSZXF1ZXN0NTczMDM2NzE0 | 1,875 | Adding sari metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/6061911?v=4",
"events_url": "https://api.github.com/users/ddhruvkr/events{/privacy}",
"followers_url": "https://api.github.com/users/ddhruvkr/followers",
"following_url": "https://api.github.com/users/ddhruvkr/following{/other_user}",
"gists_url": "https://api.github.com/users/ddhruvkr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ddhruvkr",
"id": 6061911,
"login": "ddhruvkr",
"node_id": "MDQ6VXNlcjYwNjE5MTE=",
"organizations_url": "https://api.github.com/users/ddhruvkr/orgs",
"received_events_url": "https://api.github.com/users/ddhruvkr/received_events",
"repos_url": "https://api.github.com/users/ddhruvkr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ddhruvkr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddhruvkr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ddhruvkr"
} | [] | closed | false | null | [] | null | [] | "2021-02-14T04:38:35Z" | "2021-02-17T15:56:27Z" | "2021-02-17T15:56:27Z" | CONTRIBUTOR | null | Adding SARI metric that is used in evaluation of text simplification. This is required as part of the GEM benchmark. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1875/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1875/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1875.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1875",
"merged_at": "2021-02-17T15:56:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1875.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1875"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1874/comments | https://api.github.com/repos/huggingface/datasets/issues/1874/events | https://github.com/huggingface/datasets/pull/1874 | 807,786,094 | MDExOlB1bGxSZXF1ZXN0NTcyOTYzMjAy | 1,874 | Adding Europarl Bilingual dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4",
"events_url": "https://api.github.com/users/lucadiliello/events{/privacy}",
"followers_url": "https://api.github.com/users/lucadiliello/followers",
"following_url": "https://api.github.com/users/lucadiliello/following{/other_user}",
"gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lucadiliello",
"id": 23355969,
"login": "lucadiliello",
"node_id": "MDQ6VXNlcjIzMzU1OTY5",
"organizations_url": "https://api.github.com/users/lucadiliello/orgs",
"received_events_url": "https://api.github.com/users/lucadiliello/received_events",
"repos_url": "https://api.github.com/users/lucadiliello/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lucadiliello"
} | [] | closed | false | null | [] | null | [
"is there a way to check errors without subscribing to CircleCI? Because they want access to private repositories when logging.",
"I think you need to be logged in to check the errors unfortunately. Feel free to create an account with bitbucket maybe if you don't want it to access your private github repos",
"I've resolved some requirements, but I cannot create dummy data. The dataset works as follows: for each language pair `<lang1>-<lang2>` 3 files are downloaded:\r\n- dataset for `<lang1>`\r\n- dataset for `<lang2>`\r\n- alignments between `<lang1>` and `<lang2>`\r\n\r\nSuppose we work with the `bg-cs` language pair. Then, the dataset will download three `gzip` files which should be decompressed. I do not understand the relation between the folders created by the script to create dummy data and the original data provided by the download manager.",
"Hi ! Indeed the data files structure of this dataset looks very specific.\r\nThe command `datasets-cli dummy_data ./datasets/europarl_bilingual` shows some instructions for each split but let me add more details.\r\n\r\nFirst things to know is that the dummy data files need to be uncompressed data, so for example for the file `bg.zip` you should actually have one folder with all the xml files in it instead. In the same way, `bg-cs.xml.gz` must be replaced by an actual uncompressed xml file.\r\n\r\nLet's take the bg-cs config as an example. To make the dummy data you need to:\r\n- go to `./datasets/europarl_bilingual/dummy/bg-cs/8.0.0` and create a folder named `dummy_data`. Then go inside this folder\r\n- create a text file named `bg-cs.xml.gz` containing xml content (so without .gz compression). The xml content must have the same structure as the original `bg-cs.zml` but only include 1 `linkGrp` entry. You can pick one entry from the original `bg-cs.xml` file. Let's say this entry is about this file: `ep-06-01-16-003.xml`\r\n- create a folder named `bg.zip` and inside this folder add one file Europarl/raw/bg/ep-06-01-16-003.xml. You can pick the xml file from the original `bg.zip` archive.\r\n- create a folder named `cs.zip` and inside this folder add one file Europarl/raw/cs/ep-06-01-16-003.xml. You can pick the xml file from the original `cs.zip` archive.\r\n- zip the `dummy_data` into `dummy_data.zip`\r\n\r\nAt this point you have dummy data files to generate 1 example which is what we want to be able to test the dataset script `europarl_bilingual.py` with pytest. \r\n\r\nIn particular this will make this test pass:\r\n```\r\npytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_europarl_bilingual\r\n```\r\n\r\nIdeally it would be awesome to have dummy data for all the different configs so if we manage to make a script that generates all of it automatically that would be perfect. However since the structure is not trivial, another option would be to only have the dummy data for only 1 or 2 configs, like what we do for [bible_para](https://github.com/huggingface/datasets/blob/master/datasets/bible_para/bible_para.py) for example. In `bible_para` only a few configurations are tested. As you can see there is only 6 configs in the `BUILDER_CONFIGS` attribute. All the other configs can still be used, here is what is said inside the dataset card of bible_para:\r\n```\r\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\r\nYou can find the valid pairs in Homepage section of Dataset Description: http://opus.nlpl.eu/bible-uedin.php\r\nE.g.\r\n\r\n`dataset = load_dataset(\"bible_para\", lang1=\"fi\", lang2=\"hi\")`\r\n```\r\nIn this case the configuration \"fi-hi\" is simply created on the fly, instead of being picked from the `BUILDER_CONFIGS` list.\r\n\r\nI hope this helps, let me know if you have questions or if I can help",
"I already created the scripts to create reduced versions of the data. What I didn't understand was how to put files in the dummy_data folder because, as you noticed, some file decompress to a nested tree structure. I will now try again with your suggestions!",
"Is there something else I should do? If not can this be integrated?",
"Thanks a lot !!\r\nSince the set of all the dummy data files is quite big I only kept a few of them. If we had kept them all the size of the `datasets` repo would have increased too much :/\r\nSo I did the same as for `bible_para`: only keep a few configurations in BUILDER_CONFIGS and have all the other pairs loadable with the lang1 and lang2 parameters like this:\r\n\r\n`dataset = load_dataset(\"europarl_bilingual\", lang1=\"fi\", lang2=\"fr\")`"
] | "2021-02-13T17:02:04Z" | "2021-03-04T10:38:22Z" | "2021-03-04T10:38:22Z" | CONTRIBUTOR | null | Implementation of Europarl bilingual dataset from described [here](https://opus.nlpl.eu/Europarl.php).
This dataset allows using every language pair detailed in the original dataset. The loading script also handles the small errors contained in the original dataset (in very rare cases, about 1 in 10M, some keys reference nonexistent sentences).
I chose to follow the style of a similar dataset available in this repository: `multi_para_crawl`.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1874/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1874/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1874.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1874",
"merged_at": "2021-03-04T10:38:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1874.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1874"
} | true |
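The on-the-fly configuration pattern described in the last comment of PR 1874 above, shown from the user side. Only the `lang1`/`lang2` call is taken from the thread; split names and the exact feature layout are not spelled out there, so the snippet inspects them rather than assuming them.

```python
from datasets import load_dataset

# Pairs kept in BUILDER_CONFIGS load by config name; any other pair can be
# requested on the fly, as described in the PR discussion:
fi_fr = load_dataset("europarl_bilingual", lang1="fi", lang2="fr")

print(fi_fr)  # available splits and sizes
print(next(iter(fi_fr.values())).features)  # sentence-pair feature layout
```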
https://api.github.com/repos/huggingface/datasets/issues/1873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1873/comments | https://api.github.com/repos/huggingface/datasets/issues/1873/events | https://github.com/huggingface/datasets/pull/1873 | 807,750,745 | MDExOlB1bGxSZXF1ZXN0NTcyOTM4MTYy | 1,873 | add iapp_wiki_qa_squad | {
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cstorm125",
"id": 15519308,
"login": "cstorm125",
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cstorm125"
} | [] | closed | false | null | [] | null | [] | "2021-02-13T13:34:27Z" | "2021-02-16T14:21:58Z" | "2021-02-16T14:21:58Z" | CONTRIBUTOR | null | `iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles.
It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset)
to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in
5761/742/739 questions from 1529/191/192 articles. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1873/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1873/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1873.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1873",
"merged_at": "2021-02-16T14:21:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1873.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1873"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1872/comments | https://api.github.com/repos/huggingface/datasets/issues/1872/events | https://github.com/huggingface/datasets/issues/1872 | 807,711,935 | MDU6SXNzdWU4MDc3MTE5MzU= | 1,872 | Adding a new column to the dataset after set_format was called | {
"avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4",
"events_url": "https://api.github.com/users/villmow/events{/privacy}",
"followers_url": "https://api.github.com/users/villmow/followers",
"following_url": "https://api.github.com/users/villmow/following{/other_user}",
"gists_url": "https://api.github.com/users/villmow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/villmow",
"id": 2743060,
"login": "villmow",
"node_id": "MDQ6VXNlcjI3NDMwNjA=",
"organizations_url": "https://api.github.com/users/villmow/orgs",
"received_events_url": "https://api.github.com/users/villmow/received_events",
"repos_url": "https://api.github.com/users/villmow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/villmow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/villmow"
} | [] | closed | false | null | [] | null | [
"Hi ! Indeed if you add a column to a formatted dataset, then the new dataset gets a new formatting in which:\r\n```\r\nnew formatted columns = (all columns - previously unformatted columns)\r\n```\r\nTherefore the new column is going to be formatted using the `torch` formatting.\r\n\r\nIf you want your new column to be unformatted you can re-run this line:\r\n```python\r\ndata.set_format(\"torch\", columns=[\"some_integer_column1\", \"some_integer_column2\"], output_all_columns=True)\r\n```",
"Hi, thanks that solved my problem. Maybe mention that in the documentation. ",
"Ok cool :) \r\nAlso I just did a PR to mention this behavior in the documentation",
"Closed by #1888"
] | "2021-02-13T09:14:35Z" | "2021-03-30T14:01:45Z" | "2021-03-30T14:01:45Z" | NONE | null | Hi,
thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if it's a problem on my side.
I load some lists of strings and integers, then call `data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"], output_all_columns=True)`. This converts the integer columns into tensors, but keeps the lists of strings as they are. I then call `map` to add a new column to my dataset, which is a **list of strings**. Once I iterate through my dataset, I get an error that the new column can't be converted into a tensor (which is probably caused by `set_format`).
Below is some pseudocode:
```python
def augment_func(sample: Dict) -> Dict:
# do something
return {
"some_integer_column1" : augmented_data["some_integer_column1"], # <-- tensor
"some_integer_column2" : augmented_data["some_integer_column2"], # <-- tensor
"NEW_COLUMN": targets, # <-- list of strings
}
data = datasets.load_dataset(__file__, data_dir="...", split="train")
data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"], output_all_columns=True)
augmented_dataset = data.map(augment_func, batched=False)
for sample in augmented_dataset:
print(sample) # fails
```
and the exception:
```python
Traceback (most recent call last):
File "dataset.py", line 487, in <module>
main()
File "dataset.py", line 471, in main
for sample in augmented_dataset:
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 697, in __iter__
yield self._getitem(
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1069, in _getitem
outputs = self._convert_outputs(
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 890, in _convert_outputs
v = map_nested(command, v, **map_nested_kwargs)
File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
return function(data_struct)
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in command
return [map_nested(command, i, **map_nested_kwargs) for i in x]
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in <listcomp>
return [map_nested(command, i, **map_nested_kwargs) for i in x]
File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
return function(data_struct)
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in command
return [map_nested(command, i, **map_nested_kwargs) for i in x]
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in <listcomp>
return [map_nested(command, i, **map_nested_kwargs) for i in x]
File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
return function(data_struct)
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 851, in command
return torch.tensor(x, **format_kwargs)
TypeError: new(): invalid data type 'str'
```
Thanks!
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1872/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1872/timeline | null | completed | null | null | false |
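The fix from the first comment of issue 1872 above, condensed into a runnable sketch with a toy dataset; the column names are placeholders mirroring the pseudocode in the issue, not anything from a real dataset.

```python
import datasets

# Toy dataset standing in for the user's custom one.
data = datasets.Dataset.from_dict(
    {"some_integer_column1": [1, 2], "some_integer_column2": [3, 4]}
)

# Only the integer columns go through the torch conversion; the rest pass through.
data.set_format(
    "torch",
    columns=["some_integer_column1", "some_integer_column2"],
    output_all_columns=True,
)

# map() adds a new list-of-strings column, which picks up the torch formatting...
augmented = data.map(lambda sample: {"NEW_COLUMN": ["a", "b"]}, batched=False)

# ...so re-apply the format, as suggested in the issue, to keep the new
# column out of the tensor conversion.
augmented.set_format(
    "torch",
    columns=["some_integer_column1", "some_integer_column2"],
    output_all_columns=True,
)

for sample in augmented:
    print(sample)  # tensors for the integer columns, a plain list for NEW_COLUMN
```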
https://api.github.com/repos/huggingface/datasets/issues/1871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1871/comments | https://api.github.com/repos/huggingface/datasets/issues/1871/events | https://github.com/huggingface/datasets/pull/1871 | 807,697,671 | MDExOlB1bGxSZXF1ZXN0NTcyODk5Nzgz | 1,871 | Add newspop dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"events_url": "https://api.github.com/users/frankier/events{/privacy}",
"followers_url": "https://api.github.com/users/frankier/followers",
"following_url": "https://api.github.com/users/frankier/following{/other_user}",
"gists_url": "https://api.github.com/users/frankier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/frankier",
"id": 299380,
"login": "frankier",
"node_id": "MDQ6VXNlcjI5OTM4MA==",
"organizations_url": "https://api.github.com/users/frankier/orgs",
"received_events_url": "https://api.github.com/users/frankier/received_events",
"repos_url": "https://api.github.com/users/frankier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/frankier"
} | [] | closed | false | null | [] | null | [
"Thanks for the changes :)\r\nmerging"
] | "2021-02-13T07:31:23Z" | "2021-03-08T10:12:45Z" | "2021-03-08T10:12:45Z" | CONTRIBUTOR | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1871/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1871/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1871.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1871",
"merged_at": "2021-03-08T10:12:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1871.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1871"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1870/comments | https://api.github.com/repos/huggingface/datasets/issues/1870/events | https://github.com/huggingface/datasets/pull/1870 | 807,306,564 | MDExOlB1bGxSZXF1ZXN0NTcyNTc4Mjc4 | 1,870 | Implement Dataset add_item | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-05-31T16:20:53Z",
"closed_issues": 3,
"created_at": "2021-04-09T13:16:31Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-05-14T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/3",
"id": 6644287,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==",
"number": 3,
"open_issues": 0,
"state": "closed",
"title": "1.7",
"updated_at": "2021-05-31T16:20:53Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/3"
} | [
"Thanks @lhoestq for your remarks. Yes, I agree there are still many issues to be tackled... This PR is just a starting point, so that we can discuss how Dataset should be generalized.",
"Sure ! I opened an issue #1877 so we can discuss this specific aspect :)",
"I am going to implement this consolidation step in #2151.",
"Sounds good !",
"I retake this PR once the consolidation step is already implemented by #2151."
] | "2021-02-12T15:03:46Z" | "2021-04-23T10:01:31Z" | "2021-04-23T10:01:31Z" | MEMBER | null | Implement `Dataset.add_item`.
Close #1854. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1870/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1870/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1870.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1870",
"merged_at": "2021-04-23T10:01:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1870.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1870"
} | true |
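A small usage sketch for the `Dataset.add_item` feature implemented by PR 1870 above, assuming the method takes a dict for one example and returns a new dataset; check the merged code for the exact signature.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})

# Assumed usage: append a single example and get a new Dataset back.
ds = ds.add_item({"text": "a new example", "label": 1})

print(len(ds))  # 3
print(ds[-1])   # {'text': 'a new example', 'label': 1}
```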
https://api.github.com/repos/huggingface/datasets/issues/1869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1869/comments | https://api.github.com/repos/huggingface/datasets/issues/1869/events | https://github.com/huggingface/datasets/pull/1869 | 807,159,835 | MDExOlB1bGxSZXF1ZXN0NTcyNDU0NTMy | 1,869 | Remove outdated commands in favor of huggingface-cli | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-12T11:28:10Z" | "2021-02-12T16:13:09Z" | "2021-02-12T16:13:08Z" | MEMBER | null | Removing the old user commands since `huggingface_hub` is going to be used instead.
cc @julien-c | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1869/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1869/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1869.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1869",
"merged_at": "2021-02-12T16:13:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1869.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1869"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1868/comments | https://api.github.com/repos/huggingface/datasets/issues/1868/events | https://github.com/huggingface/datasets/pull/1868 | 807,138,159 | MDExOlB1bGxSZXF1ZXN0NTcyNDM2MjA0 | 1,868 | Update oscar sizes | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-12T10:55:35Z" | "2021-02-12T11:03:07Z" | "2021-02-12T11:03:06Z" | MEMBER | null | This commit https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383 removed empty lines from the oscar deduplicated datasets. This PR updates the size of each deduplicated dataset to fix possible `NonMatchingSplitsSizesError` errors. cc @cahya-wirawan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1868/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1868/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1868.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1868",
"merged_at": "2021-02-12T11:03:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1868.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1868"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1867/comments | https://api.github.com/repos/huggingface/datasets/issues/1867/events | https://github.com/huggingface/datasets/issues/1867 | 807,127,181 | MDU6SXNzdWU4MDcxMjcxODE= | 1,867 | ERROR WHEN USING SET_TRANSFORM() | {
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avacaondata",
"id": 35173563,
"login": "avacaondata",
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avacaondata"
} | [] | closed | false | null | [] | null | [
"Hi @alejandrocros it looks like an incompatibility with the current Trainer @sgugger \r\nIndeed currently the Trainer of `transformers` doesn't support a dataset with a transform\r\n\r\nIt looks like it comes from this line: https://github.com/huggingface/transformers/blob/f51188cbe74195c14c5b3e2e8f10c2f435f9751a/src/transformers/trainer.py#L442\r\n\r\nThis line sets the format to not return certain unused columns. But this has two issues:\r\n1. it forgets to also set the format_kwargs (this causes the error you got):\r\n```python\r\ndataset.set_format(type=dataset.format[\"type\"], columns=columns, format_kwargs=dataset.format[\"format_kwargs\"])\r\n```\r\n2. the Trainer wants to keep only the fields that are used as input for a model. However for a dataset with a transform, the output fields are often different from the columns fields. For example from a column \"text\" in the dataset, the strings can be transformed on-the-fly into \"input_ids\". If you want your dataset to only output certain fields and not other you must change your transform function.\r\n",
"FYI that option can be removed with `remove_unused_columns = False` in your `TrainingArguments`, so there is a workaround @alexvaca0 while the fix in `Trainer` is underway.\r\n\r\n@lhoestq I think I will just use the line you suggested and if someone is using the columns that are removed in their transform they will need to change `remove_unused_columns` to `False`. We might switch the default of that argument in the next version if that proves too bug-proof.",
"I've tried your solutions @sgugger @lhoestq and the good news is that it throws no error. However, TPU training is taking forever, in 1 hour it has only trained 1 batch of 8192 elements, which doesn't make much sense... Is it possible that \"on the fly\" tokenization of batches is slowing down TPU training to that extent?",
"I'm pretty sure this is because of padding but @sgugger might know better",
"I don't know what the value of `padding` is in your lines of code pasted above so I can't say for sure. The first batch will be very slow on TPU since it compiles everything, so that's normal (1 hour is long but 8192 elements is also large). Then if your batches are not of the same lengths, it will recompile everything at each step instead of using the same graph, which will be very slow, so you should double check you are using padding to make everything the exact same shape. ",
"I have tried now on a GPU and it goes smooth! Amazing feature .set_transform() instead of .map()! Now I can pre-train my model without the hard disk limitation. Thanks for your work all HuggingFace team!! :clap: ",
"In the end, to make it work I turned to A-100 gpus instead of TPUS, among other changes. Set_transform doesn't work as expected and slows down training very much even in GPUs, and applying map destroys the disk, as it multiplies by 100 the size of the data passed to it (due to inefficient implementation converting strings to int64 floats I guess). For that reason, I chose to use datasets to load the data as text, and then edit the Collator from Transformers to tokenize every batch it receives before processing it. That way, I'm being able to train fast, without memory breaks, without the disk being unnecessarily filled, while making use of GPUs almost all the time I'm paying for them (the map function over the whole dataset took ~15hrs, in which you're not training at all). I hope this info helps others that are looking for training a language model from scratch cheaply, I'm going to close the issue as the optimal solution I found after many experiments to the problem posted in it is explained above. ",
"Great comment @alexvaca0 . I think that we could re-open the issue as a reformulation of why it takes so much space to save the arrow. Saving a 1% of oscar corpus takes more thank 600 GB (it breaks when it pass 600GB because it is the free memory that I have at this moment) when the full dataset is 1,3 TB. I have a 1TB M.2 NVMe disk that I can not train on because the saved .arrow files goes crazily big. If you can share your Collator I will be grateful. "
] | "2021-02-12T10:38:31Z" | "2021-03-01T14:04:24Z" | "2021-02-24T12:00:43Z" | NONE | null | Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such a dataset, it throws an error:
```
TypeError: __init__() missing 1 required positional argument: 'transform'
[INFO|trainer.py:357] 2021-02-12 10:18:09,893 >> The following columns in the training set don't have a corresponding argument in `AlbertForMaskedLM.forward` and have been ignored: text.
Exception in device=TPU:0: __init__() missing 1 required positional argument: 'transform'
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm_wwm.py", line 368, in _mp_fn
main()
File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm_wwm.py", line 332, in main
data_collator=data_collator,
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 286, in __init__
self._remove_unused_columns(self.train_dataset, description="training")
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 359, in _remove_unused_columns
dataset.set_format(type=dataset.format["type"], columns=columns)
File "/home/alejandro_vaca/datasets/src/datasets/fingerprint.py", line 312, in wrapper
out = func(self, *args, **kwargs)
File "/home/alejandro_vaca/datasets/src/datasets/arrow_dataset.py", line 818, in set_format
_ = get_formatter(type, **format_kwargs)
File "/home/alejandro_vaca/datasets/src/datasets/formatting/__init__.py", line 112, in get_formatter
return _FORMAT_TYPES[format_type](**format_kwargs)
TypeError: __init__() missing 1 required positional argument: 'transform'
```
The code I'm using:
```python
def tokenize_function(examples):
# Remove empty lines
examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
return tokenizer(examples["text"], padding=padding, truncation=True, max_length=data_args.max_seq_length)
datasets.set_transform(tokenize_function)
data_collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=data_args.mlm_probability)
# Initialize our Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=datasets["train"] if training_args.do_train else None,
eval_dataset=datasets["val"] if training_args.do_eval else None,
tokenizer=tokenizer,
data_collator=data_collator,
)
```
I've installed from source, master branch.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1867/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1867/timeline | null | completed | null | null | false |
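A sketch of the tokenize-in-the-collator approach described near the end of issue 1867 above. The checkpoint name, the `text` field and the wrapper class below are illustrative assumptions, and masking is delegated to the stock `DataCollatorForLanguageModeling` rather than the whole-word variant used in the original script.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")  # assumed checkpoint
mlm_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)


class TokenizingCollator:
    """Tokenize raw-text examples at batch time, then apply MLM masking."""

    def __init__(self, tokenizer, mlm_collator, max_length=512):
        self.tokenizer = tokenizer
        self.mlm_collator = mlm_collator
        self.max_length = max_length

    def __call__(self, examples):
        # `examples` is a list of dicts like {"text": "..."} coming straight
        # from a dataset that was only loaded as text, never map()-tokenized.
        texts = [ex["text"] for ex in examples]
        encoded = self.tokenizer(
            texts,
            truncation=True,
            max_length=self.max_length,
            padding="max_length",  # fixed shapes, cf. the TPU padding remark above
        )
        features = [{"input_ids": ids} for ids in encoded["input_ids"]]
        # Delegate dynamic masking (and conversion to tensors) to the stock collator.
        return self.mlm_collator(features)


data_collator = TokenizingCollator(tokenizer, mlm_collator)
# Pass `data_collator` to Trainer(...) in place of the original collator.
```

This keeps the heavy tokenization out of the on-disk cache entirely, which is the point made in the closing comments of the issue.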
https://api.github.com/repos/huggingface/datasets/issues/1866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1866/comments | https://api.github.com/repos/huggingface/datasets/issues/1866/events | https://github.com/huggingface/datasets/pull/1866 | 807,017,816 | MDExOlB1bGxSZXF1ZXN0NTcyMzM3NDQ1 | 1,866 | Add dataset for Financial PhraseBank | {
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"events_url": "https://api.github.com/users/frankier/events{/privacy}",
"followers_url": "https://api.github.com/users/frankier/followers",
"following_url": "https://api.github.com/users/frankier/following{/other_user}",
"gists_url": "https://api.github.com/users/frankier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/frankier",
"id": 299380,
"login": "frankier",
"node_id": "MDQ6VXNlcjI5OTM4MA==",
"organizations_url": "https://api.github.com/users/frankier/orgs",
"received_events_url": "https://api.github.com/users/frankier/received_events",
"repos_url": "https://api.github.com/users/frankier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/frankier"
} | [] | closed | false | null | [] | null | [
"Thanks for the feedback. All accepted and metadata regenerated."
] | "2021-02-12T07:30:56Z" | "2021-02-17T14:22:36Z" | "2021-02-17T14:22:36Z" | CONTRIBUTOR | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1866/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1866/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1866.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1866",
"merged_at": "2021-02-17T14:22:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1866.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1866"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1865 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1865/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1865/comments | https://api.github.com/repos/huggingface/datasets/issues/1865/events | https://github.com/huggingface/datasets/pull/1865 | 806,388,290 | MDExOlB1bGxSZXF1ZXN0NTcxODE2ODI2 | 1,865 | Updated OPUS Open Subtitles Dataset with metadata information | {
"avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4",
"events_url": "https://api.github.com/users/Valahaar/events{/privacy}",
"followers_url": "https://api.github.com/users/Valahaar/followers",
"following_url": "https://api.github.com/users/Valahaar/following{/other_user}",
"gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Valahaar",
"id": 19476123,
"login": "Valahaar",
"node_id": "MDQ6VXNlcjE5NDc2MTIz",
"organizations_url": "https://api.github.com/users/Valahaar/orgs",
"received_events_url": "https://api.github.com/users/Valahaar/received_events",
"repos_url": "https://api.github.com/users/Valahaar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Valahaar"
} | [] | closed | false | null | [] | null | [
"Hi !\r\nAbout the problems you mentioned:\r\n- Saving the infos is only done for the configurations inside the BUILDER_CONFIGS. Otherwise you would need to run the scripts on ALL language pairs, which is not what we want.\r\n- Moreover when you're on your branch, please specify the path to your local version of the dataset script, like \"./datasets/open_subtitles\". Otherwise the dataset is loaded from the master branch on github.\r\nHope that clarifies things a bit\r\n\r\nAnd of course feel free to add methods or classmethods to your builder.\r\n",
"Great! Thank you :)\r\nI'll close the issue as well."
] | "2021-02-11T13:26:26Z" | "2021-02-19T12:38:09Z" | "2021-02-12T16:59:44Z" | CONTRIBUTOR | null | Close #1844
Problems:
- I ran `python datasets-cli test datasets/open_subtitles --save_infos --all_configs`, hence the change in `dataset_infos.json`, but it appears that the metadata features have not been added for all pairs. Any idea why that might be?
- Possibly related to the above, I tried doing `pip uninstall datasets && pip install -e ".[dev]"` after the changes, and loading the dataset via `load_dataset("open_subtitles", lang1='hi', lang2='it')` to check if the update worked, but the loaded dataset did not contain the metadata fields (neither in the features nor doing `next(iter(dataset['train']))`). What step(s) did I miss?
Questions:
- Is it ok to have a `classmethod` in there? I have not seen any in the few other datasets I have checked. I could make it a local method of the `_generate_examples` method, but I'd rather not duplicate the logic... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1865/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1865/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1865.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1865",
"merged_at": "2021-02-12T16:59:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1865.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1865"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1864 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1864/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1864/comments | https://api.github.com/repos/huggingface/datasets/issues/1864/events | https://github.com/huggingface/datasets/issues/1864 | 806,172,843 | MDU6SXNzdWU4MDYxNzI4NDM= | 1,864 | Add Winogender Schemas | {
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NielsRogge",
"id": 48327001,
"login": "NielsRogge",
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NielsRogge"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
"Nevermind, this one is already available on the hub under the name `'wino_bias'`: https://huggingface.co/datasets/wino_bias"
] | "2021-02-11T08:18:38Z" | "2021-02-11T08:19:51Z" | "2021-02-11T08:19:51Z" | CONTRIBUTOR | null | ## Adding a Dataset
- **Name:** Winogender Schemas
- **Description:** Winogender Schemas (inspired by Winograd Schemas) are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias in automated coreference resolution systems.
- **Paper:** https://arxiv.org/abs/1804.09301
- **Data:** https://github.com/rudinger/winogender-schemas (see data directory)
- **Motivation:** Testing gender bias in automated coreference resolution systems, improve coreference resolution in general.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1864/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1864/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1863/comments | https://api.github.com/repos/huggingface/datasets/issues/1863/events | https://github.com/huggingface/datasets/issues/1863 | 806,171,311 | MDU6SXNzdWU4MDYxNzEzMTE= | 1,863 | Add WikiCREM | {
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NielsRogge",
"id": 48327001,
"login": "NielsRogge",
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NielsRogge"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | [] | null | [
"Hi @NielsRogge I would like to work on this dataset.\r\n\r\nThanks!",
"Hi @udapy, are you working on this?"
] | "2021-02-11T08:16:00Z" | "2021-03-07T07:27:13Z" | null | CONTRIBUTOR | null | ## Adding a Dataset
- **Name:** WikiCREM
- **Description:** A large unsupervised corpus for coreference resolution.
- **Paper:** https://arxiv.org/abs/1905.06290
- **Github repo:**: https://github.com/vid-koci/bert-commonsense
- **Data:** https://ora.ox.ac.uk/objects/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3
- **Motivation:** Coreference resolution, common sense reasoning
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1863/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1863/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1862 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1862/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1862/comments | https://api.github.com/repos/huggingface/datasets/issues/1862/events | https://github.com/huggingface/datasets/pull/1862 | 805,722,293 | MDExOlB1bGxSZXF1ZXN0NTcxMjc2ODAx | 1,862 | Fix writing GPU Faiss index | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-10T17:32:03Z" | "2021-02-10T18:17:48Z" | "2021-02-10T18:17:47Z" | MEMBER | null | As reported in by @corticalstack there is currently an error when we try to save a faiss index on GPU.
I fixed that by checking the index `getDevice()` method before calling `index_gpu_to_cpu`
Close #1859 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1862/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1862/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1862.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1862",
"merged_at": "2021-02-10T18:17:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1862.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1862"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1861/comments | https://api.github.com/repos/huggingface/datasets/issues/1861/events | https://github.com/huggingface/datasets/pull/1861 | 805,631,215 | MDExOlB1bGxSZXF1ZXN0NTcxMjAwNjA1 | 1,861 | Fix Limit url | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-10T15:44:56Z" | "2021-02-10T16:15:00Z" | "2021-02-10T16:14:59Z" | MEMBER | null | The test.json file of the Literal-Motion-in-Text (LiMiT) dataset was removed recently on the master branch of the repo at https://github.com/ilmgut/limit_dataset
This PR uses the previous commit sha to download the file instead, as suggested by @Paethon
Close #1836 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1861/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1861/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1861.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1861",
"merged_at": "2021-02-10T16:14:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1861.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1861"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1860/comments | https://api.github.com/repos/huggingface/datasets/issues/1860/events | https://github.com/huggingface/datasets/pull/1860 | 805,510,037 | MDExOlB1bGxSZXF1ZXN0NTcxMDk4OTIz | 1,860 | Add loading from the Datasets Hub + add relative paths in download manager | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"I just added the steps to share a dataset on the datasets hub. It's highly inspired by the steps to share a model in the `transformers` doc.\r\n\r\nMoreover once the new huggingface_hub is released we can update the version in the setup.py. We also need to update the command to create a dataset repo in the documentation\r\n\r\nI added a few more tests with the \"lhoestq/test\" dataset I added on the hub and it works fine :) ",
"Here is the PR adding support for datasets repos in `huggingface_hub`: https://github.com/huggingface/huggingface_hub/pull/14"
] | "2021-02-10T13:24:11Z" | "2021-02-12T19:13:30Z" | "2021-02-12T19:13:29Z" | MEMBER | null | With the new Datasets Hub on huggingface.co it's now possible to have a dataset repo with your own script and data.
For example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main contains one script and two json files.
You can load it using
```python
from datasets import load_dataset
d = load_dataset("lhoestq/custom_squad")
```
To be able to use the data files that live right next to the dataset script on the repo in the hub, I added relative paths support for the DownloadManager. For example in the repo mentioned above, there are two json files that can be downloaded via
```python
_URLS = {
"train": "train-v1.1.json",
"dev": "dev-v1.1.json",
}
downloaded_files = dl_manager.download_and_extract(_URLS)
```
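To illustrate how those relative file names would typically be wired into the splits, here is a sketch following the usual `GeneratorBasedBuilder` pattern (the class name is illustrative, not copied from the actual `lhoestq/custom_squad` script):
```python
import datasets

class CustomSquad(datasets.GeneratorBasedBuilder):  # illustrative name
    def _split_generators(self, dl_manager):
        # The relative file names above are resolved against the repo hosting the script.
        downloaded_files = dl_manager.download_and_extract(_URLS)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
        ]
```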
To make it work, I set the `base_path` of the DownloadManager to be the parent path of the dataset script (which comes from either a local path or a remote url).
I also had to add the auth header to the requests to huggingface.co for private dataset repos. The token is fetched from [huggingface_hub](https://github.com/huggingface/huggingface_hub). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1860/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1860/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1860.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1860",
"merged_at": "2021-02-12T19:13:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1860.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1860"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1859/comments | https://api.github.com/repos/huggingface/datasets/issues/1859/events | https://github.com/huggingface/datasets/issues/1859 | 805,479,025 | MDU6SXNzdWU4MDU0NzkwMjU= | 1,859 | Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU) | {
"avatar_url": "https://avatars.githubusercontent.com/u/3995321?v=4",
"events_url": "https://api.github.com/users/corticalstack/events{/privacy}",
"followers_url": "https://api.github.com/users/corticalstack/followers",
"following_url": "https://api.github.com/users/corticalstack/following{/other_user}",
"gists_url": "https://api.github.com/users/corticalstack/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/corticalstack",
"id": 3995321,
"login": "corticalstack",
"node_id": "MDQ6VXNlcjM5OTUzMjE=",
"organizations_url": "https://api.github.com/users/corticalstack/orgs",
"received_events_url": "https://api.github.com/users/corticalstack/received_events",
"repos_url": "https://api.github.com/users/corticalstack/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/corticalstack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/corticalstack/subscriptions",
"type": "User",
"url": "https://api.github.com/users/corticalstack"
} | [] | closed | false | null | [] | null | [
"Hi @corticalstack ! Thanks for reporting. Indeed in the recent versions of Faiss we must use `getDevice` to check if the index in on GPU.\r\n\r\nI'm opening a PR",
"I fixed this issue. It should work fine now.\r\nFeel free to try it out by installing `datasets` from source.\r\nOtherwise you can wait for the next release of `datasets` (in a few days)",
"Thanks for such a quick fix and merge to master, pip installed git master, tested all OK"
] | "2021-02-10T12:41:00Z" | "2021-02-10T18:32:12Z" | "2021-02-10T18:17:47Z" | NONE | null | Error serializing faiss index. Error as follows:
`Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index`
Note:
`torch.cuda.is_available()` reports:
```
Cuda is available
cuda:0
```
Adding index, device=0 for GPU.
`dataset.add_faiss_index(column='embeddings', index_name='idx_embeddings', device=0)`
However, during a quick debug, `self.faiss_index` has no attr "device" when checked in `search.py`, method `save`, so it fails to transform the GPU index to a CPU index. If I add the index without a device, the index is saved OK.
```
def save(self, file: str):
"""Serialize the FaissIndex on disk"""
import faiss # noqa: F811
if (
hasattr(self.faiss_index, "device")
and self.faiss_index.device is not None
and self.faiss_index.device > -1
):
index = faiss.index_gpu_to_cpu(self.faiss_index)
else:
index = self.faiss_index
faiss.write_index(index, file)
```
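For reference, a minimal sketch of the corrected check (assuming a recent `faiss-gpu` build, where GPU indexes expose `getDevice()` instead of a `device` attribute):
```python
import faiss  # assumes faiss-gpu is installed

def save(self, file: str):
    """Serialize the FaissIndex on disk (sketch of the fixed device check)."""
    index = self.faiss_index
    # GPU indexes report their device id (>= 0) via getDevice(); move them to CPU before writing.
    if hasattr(index, "getDevice") and index.getDevice() > -1:
        index = faiss.index_gpu_to_cpu(index)
    faiss.write_index(index, file)
```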
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1859/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1859/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1858 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1858/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1858/comments | https://api.github.com/repos/huggingface/datasets/issues/1858/events | https://github.com/huggingface/datasets/pull/1858 | 805,477,774 | MDExOlB1bGxSZXF1ZXN0NTcxMDcxNzIx | 1,858 | Clean config getenvs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-10T12:39:14Z" | "2021-02-10T15:52:30Z" | "2021-02-10T15:52:29Z" | MEMBER | null | Following #1848
Remove double getenv calls and fix one issue with rarfile
cc @albertvillanova | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1858/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1858/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1858.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1858",
"merged_at": "2021-02-10T15:52:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1858.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1858"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1857/comments | https://api.github.com/repos/huggingface/datasets/issues/1857/events | https://github.com/huggingface/datasets/issues/1857 | 805,391,107 | MDU6SXNzdWU4MDUzOTExMDc= | 1,857 | Unable to upload "community provided" dataset - 400 Client Error | {
"avatar_url": "https://avatars.githubusercontent.com/u/1376337?v=4",
"events_url": "https://api.github.com/users/mwrzalik/events{/privacy}",
"followers_url": "https://api.github.com/users/mwrzalik/followers",
"following_url": "https://api.github.com/users/mwrzalik/following{/other_user}",
"gists_url": "https://api.github.com/users/mwrzalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mwrzalik",
"id": 1376337,
"login": "mwrzalik",
"node_id": "MDQ6VXNlcjEzNzYzMzc=",
"organizations_url": "https://api.github.com/users/mwrzalik/orgs",
"received_events_url": "https://api.github.com/users/mwrzalik/received_events",
"repos_url": "https://api.github.com/users/mwrzalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mwrzalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mwrzalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mwrzalik"
} | [] | closed | false | null | [] | null | [
"Hi ! We're in the process of switching the community datasets to git repos, exactly like what we're doing for models.\r\nYou can find an example here:\r\nhttps://huggingface.co/datasets/lhoestq/custom_squad/tree/main\r\n\r\nWe'll update the CLI in the coming days and do a new release :)\r\n\r\nAlso cc @julien-c maybe we can make improve the error message ?"
] | "2021-02-10T10:39:01Z" | "2021-08-03T05:06:13Z" | "2021-08-03T05:06:13Z" | CONTRIBUTOR | null | Hi,
I'm trying to upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens:
```
$ datasets-cli login
$ datasets-cli upload_dataset my_dataset
About to upload file /path/to/my_dataset/dataset_infos.json to S3 under filename my_dataset/dataset_infos.json and namespace username
About to upload file /path/to/my_dataset/my_dataset.py to S3 under filename my_dataset/my_dataset.py and namespace username
Proceed? [Y/n] Y
Uploading... This might take a while if files are large
400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/presign
huggingface.co migrated to a new model hosting system.
You need to upgrade to transformers v3.5+ to upload new models.
More info at https://discuss.hugginface.co or https://twitter.com/julien_c. Thank you!
```
I'm using the latest releases of datasets and transformers. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1857/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1857/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1856/comments | https://api.github.com/repos/huggingface/datasets/issues/1856/events | https://github.com/huggingface/datasets/issues/1856 | 805,360,200 | MDU6SXNzdWU4MDUzNjAyMDA= | 1,856 | load_dataset("amazon_polarity") NonMatchingChecksumError | {
"avatar_url": "https://avatars.githubusercontent.com/u/19946372?v=4",
"events_url": "https://api.github.com/users/yanxi0830/events{/privacy}",
"followers_url": "https://api.github.com/users/yanxi0830/followers",
"following_url": "https://api.github.com/users/yanxi0830/following{/other_user}",
"gists_url": "https://api.github.com/users/yanxi0830/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yanxi0830",
"id": 19946372,
"login": "yanxi0830",
"node_id": "MDQ6VXNlcjE5OTQ2Mzcy",
"organizations_url": "https://api.github.com/users/yanxi0830/orgs",
"received_events_url": "https://api.github.com/users/yanxi0830/received_events",
"repos_url": "https://api.github.com/users/yanxi0830/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yanxi0830/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanxi0830/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yanxi0830"
} | [] | closed | false | null | [] | null | [
"Hi ! This issue may be related to #996 \r\nThis comes probably from the Quota Exceeded error from Google Drive.\r\nCan you try again tomorrow and see if you still have the error ?\r\n\r\nOn my side I didn't get any error today with `load_dataset(\"amazon_polarity\")`",
"+1 encountering this issue as well",
"@lhoestq Hi! I encounter the same error when loading `yelp_review_full`.\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset_yp = load_dataset(\"yelp_review_full\")\r\n```\r\n\r\nWhen you say the \"Quota Exceeded from Google drive\". Is this a quota from the dataset owner? or the quota from our (the runner) Google Drive?",
"+1 Also encountering this issue",
"> When you say the \"Quota Exceeded from Google drive\". Is this a quota from the dataset owner? or the quota from our (the runner) Google Drive?\r\n\r\nEach file on Google Drive can be downloaded only a certain amount of times per day because of a quota. The quota is reset every day. So if too many people download the dataset the same day, then the quota is likely to exceed.\r\nThat's a really bad limitations of Google Drive and we should definitely find another host for these dataset than Google Drive.\r\nFor now I would suggest to wait and try again later..\r\n\r\nSo far the issue happened with CNN DailyMail, Amazon Polarity and Yelp Reviews. \r\nAre you experiencing the issue with other datasets ? @calebchiam @dtch1997 ",
"@lhoestq Gotcha, that is quite problematic...for what it's worth, I've had no issues with the other datasets I tried, such as `yelp_reviews_full` and `amazon_reviews_multi`.",
"Same issue today with \"big_patent\", though the symptoms are slightly different.\r\n\r\nWhen running\r\n\r\n```py\r\nfrom datasets import load_dataset\r\nload_dataset(\"big_patent\", split=\"validation\")\r\n```\r\n\r\nI get the following\r\n`FileNotFoundError: Local file \\huggingface\\datasets\\downloads\\6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5\\bigPatentData\\train.tar.gz doesn't exist`\r\n\r\nI had to look into `6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5` (which is a file instead of a folder) and got the following:\r\n\r\n`<!DOCTYPE html><html><head><title>Google Drive - Quota exceeded</title><meta http-equiv=\"content-type\" content=\"text/html; charset=utf-8\"/><link href=/static/doclist/client/css/4033072956-untrustedcontent.css rel=\"stylesheet\" nonce=\"JV0t61Smks2TEKdFCGAUFA\"><link rel=\"icon\" href=\"//ssl.gstatic.com/images/branding/product/1x/drive_2020q4_32dp.png\"/><style nonce=\"JV0t61Smks2TEKdFCGAUFA\">#gbar,#guser{font-size:13px;padding-top:0px !important;}#gbar{height:22px}#guser{padding-bottom:7px !important;text-align:right}.gbh,.gbd{border-top:1px solid #c9d7f1;font-size:1px}.gbh{height:0;position:absolute;top:24px;width:100%}@media all{.gb1{height:22px;margin-right:.5em;vertical-align:top}#gbar{float:left}}a.gb1,a.gb4{text-decoration:underline !important}a.gb1,a.gb4{color:#00c !important}.gbi .gb4{color:#dd8e27 !important}.gbf .gb4{color:#900 !important}\r\n</style><script nonce=\"iNUHigT+ENVQ3UZrLkFtRw\"></script></head><body><div id=gbar><nobr><a target=_blank class=gb1 href=\"https://www.google.fr/webhp?tab=ow\">Search</a> <a target=_blank class=gb1 href=\"http://www.google.fr/imghp?hl=en&tab=oi\">Images</a> <a target=_blank class=gb1 href=\"https://maps.google.fr/maps?hl=en&tab=ol\">Maps</a> <a target=_blank class=gb1 href=\"https://play.google.com/?hl=en&tab=o8\">Play</a> <a target=_blank class=gb1 href=\"https://www.youtube.com/?gl=FR&tab=o1\">YouTube</a> <a target=_blank class=gb1 href=\"https://news.google.com/?tab=on\">News</a> <a target=_blank class=gb1 href=\"https://mail.google.com/mail/?tab=om\">Gmail</a> <b class=gb1>Drive</b> <a target=_blank class=gb1 style=\"text-decoration:none\" href=\"https://www.google.fr/intl/en/about/products?tab=oh\"><u>More</u> »</a></nobr></div><div id=guser width=100%><nobr><span id=gbn class=gbi></span><span id=gbf class=gbf></span><span id=gbe></span><a target=\"_self\" href=\"/settings?hl=en_US\" class=gb4>Settings</a> | <a target=_blank href=\"//support.google.com/drive/?p=web_home&hl=en_US\" class=gb4>Help</a> | <a target=_top id=gb_70 href=\"https://accounts.google.com/ServiceLogin?hl=en&passive=true&continue=https://drive.google.com/uc%3Fexport%3Ddownload%26id%3D1J3mucMFTWrgAYa3LuBZoLRR3CzzYD3fa&service=writely&ec=GAZAMQ\" class=gb4>Sign in</a></nobr></div><div class=gbh style=left:0></div><div class=gbh style=right:0></div><div class=\"uc-main\"><div id=\"uc-text\"><p class=\"uc-error-caption\">Sorry, you can't view or download this file at this time.</p><p class=\"uc-error-subcaption\">Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. 
If you still can't access a file after 24 hours, contact your domain administrator.</p></div></div><div class=\"uc-footer\"><hr class=\"uc-footer-divider\">© 2021 Google - <a class=\"goog-link\" href=\"//support.google.com/drive/?p=web_home\">Help</a> - <a class=\"goog-link\" href=\"//support.google.com/drive/bin/answer.py?hl=en_US&answer=2450387\">Privacy & Terms</a></div></body></html>`",
"A similar issue arises when trying to stream the dataset\r\n\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> iter_dset = load_dataset(\"amazon_polarity\", split=\"test\", streaming=True)\r\n>>> iter(iter_dset).__next__()\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n~\\lib\\tarfile.py in nti(s)\r\n 186 s = nts(s, \"ascii\", \"strict\")\r\n--> 187 n = int(s.strip() or \"0\", 8)\r\n 188 except ValueError:\r\n\r\nValueError: invalid literal for int() with base 8: 'e nonce='\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nInvalidHeaderError Traceback (most recent call last)\r\n~\\lib\\tarfile.py in next(self)\r\n 2288 try:\r\n-> 2289 tarinfo = self.tarinfo.fromtarfile(self)\r\n 2290 except EOFHeaderError as e:\r\n\r\n~\\lib\\tarfile.py in fromtarfile(cls, tarfile)\r\n 1094 buf = tarfile.fileobj.read(BLOCKSIZE)\r\n-> 1095 obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)\r\n 1096 obj.offset = tarfile.fileobj.tell() - BLOCKSIZE\r\n\r\n~\\lib\\tarfile.py in frombuf(cls, buf, encoding, errors)\r\n 1036\r\n-> 1037 chksum = nti(buf[148:156])\r\n 1038 if chksum not in calc_chksums(buf):\r\n\r\n~\\lib\\tarfile.py in nti(s)\r\n 188 except ValueError:\r\n--> 189 raise InvalidHeaderError(\"invalid header\")\r\n 190 return n\r\n\r\nInvalidHeaderError: invalid header\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nReadError Traceback (most recent call last)\r\n<ipython-input-5-6b9058341b2b> in <module>\r\n----> 1 iter(iter_dset).__next__()\r\n\r\n~\\lib\\site-packages\\datasets\\iterable_dataset.py in __iter__(self)\r\n 363\r\n 364 def __iter__(self):\r\n--> 365 for key, example in self._iter():\r\n 366 if self.features:\r\n 367 # we encode the example for ClassLabel feature types for example\r\n\r\n~\\lib\\site-packages\\datasets\\iterable_dataset.py in _iter(self)\r\n 360 else:\r\n 361 ex_iterable = self._ex_iterable\r\n--> 362 yield from ex_iterable\r\n 363\r\n 364 def __iter__(self):\r\n\r\n~\\lib\\site-packages\\datasets\\iterable_dataset.py in __iter__(self)\r\n 77\r\n 78 def __iter__(self):\r\n---> 79 yield from self.generate_examples_fn(**self.kwargs)\r\n 80\r\n 81 def shuffle_data_sources(self, seed: Optional[int]) -> \"ExamplesIterable\":\r\n\r\n~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\amazon_polarity\\56923eeb72030cb6c4ea30c8a4e1162c26b25973475ac1f44340f0ec0f2936f4\\amazon_polarity.py in _generate_examples(self, filepath, files)\r\n 114 def _generate_examples(self, filepath, files):\r\n 115 \"\"\"Yields examples.\"\"\"\r\n--> 116 for path, f in files:\r\n 117 if path == filepath:\r\n 118 lines = (line.decode(\"utf-8\") for line in f)\r\n\r\n~\\lib\\site-packages\\datasets\\utils\\streaming_download_manager.py in __iter__(self)\r\n 616\r\n 617 def __iter__(self):\r\n--> 618 yield from self.generator(*self.args, **self.kwargs)\r\n 619\r\n 620\r\n\r\n~\\lib\\site-packages\\datasets\\utils\\streaming_download_manager.py in _iter_from_urlpath(cls, urlpath, use_auth_token)\r\n 644 ) -> Generator[Tuple, None, None]:\r\n 645 with xopen(urlpath, \"rb\", use_auth_token=use_auth_token) as f:\r\n--> 646 yield from cls._iter_from_fileobj(f)\r\n 647\r\n 648 @classmethod\r\n\r\n~\\lib\\site-packages\\datasets\\utils\\streaming_download_manager.py in _iter_from_fileobj(cls, f)\r\n 624 @classmethod\r\n 625 def _iter_from_fileobj(cls, f) -> Generator[Tuple, None, None]:\r\n--> 626 stream = tarfile.open(fileobj=f, 
mode=\"r|*\")\r\n 627 for tarinfo in stream:\r\n 628 file_path = tarinfo.name\r\n\r\n~\\lib\\tarfile.py in open(cls, name, mode, fileobj, bufsize, **kwargs)\r\n 1603 stream = _Stream(name, filemode, comptype, fileobj, bufsize)\r\n 1604 try:\r\n-> 1605 t = cls(name, filemode, stream, **kwargs)\r\n 1606 except:\r\n 1607 stream.close()\r\n\r\n~\\lib\\tarfile.py in __init__(self, name, mode, fileobj, format, tarinfo, dereference, ignore_zeros, encoding, errors, pax_headers, debug, errorlevel, copybufsize)\r\n 1484 if self.mode == \"r\":\r\n 1485 self.firstmember = None\r\n-> 1486 self.firstmember = self.next()\r\n 1487\r\n 1488 if self.mode == \"a\":\r\n\r\n~\\lib\\tarfile.py in next(self)\r\n 2299 continue\r\n 2300 elif self.offset == 0:\r\n-> 2301 raise ReadError(str(e))\r\n 2302 except EmptyHeaderError:\r\n 2303 if self.offset == 0:\r\n\r\nReadError: invalid header\r\n\r\n```",
"This error still happens, but for a different reason now: Google Drive returns a warning instead of the dataset.",
"Met the same issue +1",
"Hi ! Thanks for reporting. Google Drive changed the way to bypass the warning message recently.\r\n\r\nThe latest release `1.18.4` fixes this for datasets loaded in a regular way.\r\n\r\nWe opened a PR to fix this recently for streaming mode at #3843 - we'll do a new release once the fix is merged :)",
"Fixed by:\r\n- #3787 \r\n- #3843"
] | "2021-02-10T10:00:56Z" | "2022-03-15T13:55:24Z" | "2022-03-15T13:55:23Z" | NONE | null | Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError.
To reproduce:
```python
from datasets import load_dataset

load_dataset("amazon_polarity")
```
This will give the following error:
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-3-8559a03fe0f8> in <module>()
----> 1 dataset = load_dataset("amazon_polarity")
3 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM&export=download']
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1856/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1856/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1855/comments | https://api.github.com/repos/huggingface/datasets/issues/1855/events | https://github.com/huggingface/datasets/pull/1855 | 805,256,579 | MDExOlB1bGxSZXF1ZXN0NTcwODkzNDY3 | 1,855 | Minor fix in the docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-02-10T07:27:43Z" | "2021-02-10T12:33:09Z" | "2021-02-10T12:33:09Z" | MEMBER | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1855/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1855/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1855.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1855",
"merged_at": "2021-02-10T12:33:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1855.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1855"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1854 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1854/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1854/comments | https://api.github.com/repos/huggingface/datasets/issues/1854/events | https://github.com/huggingface/datasets/issues/1854 | 805,204,397 | MDU6SXNzdWU4MDUyMDQzOTc= | 1,854 | Feature Request: Dataset.add_item | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Hi @sshleifer.\r\n\r\nI am not sure of understanding the need of the `add_item` approach...\r\n\r\nBy just reading your \"Desired API\" section, I would say you could (nearly) get it with a 1-column Dataset:\r\n```python\r\ndata = {\"input_ids\": [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])]}\r\nds = Dataset.from_dict(data)\r\nassert (ds[\"input_ids\"][0] == np.array([4,4,2])).all()\r\n```",
"Hi @sshleifer :) \r\n\r\nWe don't have methods like `Dataset.add_batch` or `Dataset.add_entry/add_item` yet.\r\nBut that's something we'll add pretty soon. Would an API that looks roughly like this help ? Do you have suggestions ?\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset\r\n\r\ntokenized = [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])\r\n\r\n# API suggestion (not available yet)\r\nd = Dataset()\r\nfor input_ids in tokenized:\r\n d.add_item({\"input_ids\": input_ids})\r\n\r\nprint(d[0][\"input_ids\"])\r\n# [4, 4, 2]\r\n```\r\n\r\nCurrently you can define a dataset with what @albertvillanova suggest, or via a generator using dataset builders. It's also possible to [concatenate datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets).",
"Your API looks perfect @lhoestq, thanks!"
] | "2021-02-10T06:06:00Z" | "2021-04-23T10:01:30Z" | "2021-04-23T10:01:30Z" | CONTRIBUTOR | null | I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the text into arrow, and then `dataset.map(binarizer)`.
Is this possible at the moment? Is there an example? I'm happy to use raw `pa.Table` but not sure whether it will support uneven length entries.
### Desired API
```python
import numpy as np
tokenized: List[np.NDArray[np.int64]] = [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])]
def build_dataset_from_tokenized(tokenized: List[np.NDArray[int]]) -> Dataset:
"""FIXME"""
dataset = EmptyDataset()
for t in tokenized: dataset.append(t)
return dataset
ds = build_dataset_from_tokenized(tokenized)
assert (ds[0] == np.array([4,4,2])).all()
```
### What I tried
grep, google for "add one entry at a time", "datasets.append"
### Current Code
This code achieves the same result but doesn't fit into the `add_item` abstraction.
```python
from datasets import load_dataset
from transformers import RobertaTokenizerFast

dataset = load_dataset('text', data_files={'train': 'train.txt'})
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_length=4096)

def tokenize_function(examples):
    ids = tokenizer(examples['text'], return_attention_mask=False)['input_ids']
    return {'input_ids': [x[1:] for x in ids]}

ds = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=['text'], load_from_cache_file=not overwrite_cache)
print(ds['train'][0])  # => np array
```
Thanks in advance! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1854/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1854/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1853/comments | https://api.github.com/repos/huggingface/datasets/issues/1853/events | https://github.com/huggingface/datasets/pull/1853 | 804,791,166 | MDExOlB1bGxSZXF1ZXN0NTcwNTAwMjc4 | 1,853 | Configure library root logger at the module level | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-02-09T18:11:12Z" | "2021-02-10T12:32:34Z" | "2021-02-10T12:32:34Z" | MEMBER | null | Configure library root logger at the datasets.logging module level (singleton-like).
By doing it this way:
- we are sure configuration is done only once: module-level code is only run once
- no need for a global variable
- no need for a threading lock | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1853/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1853/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1853.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1853",
"merged_at": "2021-02-10T12:32:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1853.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1853"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1852/comments | https://api.github.com/repos/huggingface/datasets/issues/1852/events | https://github.com/huggingface/datasets/pull/1852 | 804,633,033 | MDExOlB1bGxSZXF1ZXN0NTcwMzY3NTU1 | 1,852 | Add Arabic Speech Corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zaidalyafeai",
"id": 15667714,
"login": "zaidalyafeai",
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"organizations_url": "https://api.github.com/users/zaidalyafeai/orgs",
"received_events_url": "https://api.github.com/users/zaidalyafeai/received_events",
"repos_url": "https://api.github.com/users/zaidalyafeai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zaidalyafeai"
} | [] | closed | false | null | [] | null | [] | "2021-02-09T15:02:26Z" | "2021-02-11T10:18:55Z" | "2021-02-11T10:18:55Z" | CONTRIBUTOR | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1852/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1852/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1852.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1852",
"merged_at": "2021-02-11T10:18:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1852.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1852"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1851/comments | https://api.github.com/repos/huggingface/datasets/issues/1851/events | https://github.com/huggingface/datasets/pull/1851 | 804,523,174 | MDExOlB1bGxSZXF1ZXN0NTcwMjc2MTk5 | 1,851 | set bert_score version dependency | {
"avatar_url": "https://avatars.githubusercontent.com/u/3596?v=4",
"events_url": "https://api.github.com/users/pvl/events{/privacy}",
"followers_url": "https://api.github.com/users/pvl/followers",
"following_url": "https://api.github.com/users/pvl/following{/other_user}",
"gists_url": "https://api.github.com/users/pvl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pvl",
"id": 3596,
"login": "pvl",
"node_id": "MDQ6VXNlcjM1OTY=",
"organizations_url": "https://api.github.com/users/pvl/orgs",
"received_events_url": "https://api.github.com/users/pvl/received_events",
"repos_url": "https://api.github.com/users/pvl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pvl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pvl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pvl"
} | [] | closed | false | null | [] | null | [] | "2021-02-09T12:51:07Z" | "2021-02-09T14:21:48Z" | "2021-02-09T14:21:48Z" | CONTRIBUTOR | null | Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1851/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1851/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1851.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1851",
"merged_at": "2021-02-09T14:21:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1851.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1851"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1850/comments | https://api.github.com/repos/huggingface/datasets/issues/1850/events | https://github.com/huggingface/datasets/pull/1850 | 804,412,249 | MDExOlB1bGxSZXF1ZXN0NTcwMTg0MDAx | 1,850 | Add cord 19 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4",
"events_url": "https://api.github.com/users/ggdupont/events{/privacy}",
"followers_url": "https://api.github.com/users/ggdupont/followers",
"following_url": "https://api.github.com/users/ggdupont/following{/other_user}",
"gists_url": "https://api.github.com/users/ggdupont/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ggdupont",
"id": 5583410,
"login": "ggdupont",
"node_id": "MDQ6VXNlcjU1ODM0MTA=",
"organizations_url": "https://api.github.com/users/ggdupont/orgs",
"received_events_url": "https://api.github.com/users/ggdupont/received_events",
"repos_url": "https://api.github.com/users/ggdupont/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ggdupont/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggdupont/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ggdupont"
} | [] | closed | false | null | [] | null | [
"Cleaned-up version of previous PR: https://github.com/huggingface/datasets/pull/1129",
"@lhoestq FYI",
"Before merging I might tweak a little bit the dummy data to avoid having to check if the `document_parses` and `embeddings` directories exist or not. I'll do that later today",
"Looks all good now ! Thanks a lot @ggdupont :)\r\nMerging"
] | "2021-02-09T10:22:08Z" | "2021-02-09T15:16:26Z" | "2021-02-09T15:16:26Z" | CONTRIBUTOR | null | Initial version only reading the metadata in CSV.
### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files so that the dataset script can be tested and so that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template and at least fill the tags
- [x] Both tests for the real data and the dummy data pass.
### Extras:
- [x] add more metadata
- [x] add full text
- [x] add pre-computed document embedding | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1850/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1850/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1850.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1850",
"merged_at": "2021-02-09T15:16:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1850.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1850"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1849/comments | https://api.github.com/repos/huggingface/datasets/issues/1849/events | https://github.com/huggingface/datasets/issues/1849 | 804,292,971 | MDU6SXNzdWU4MDQyOTI5NzE= | 1,849 | Add TIMIT | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | closed | false | null | [] | null | [
"@patrickvonplaten Could you please help me with how the output text has to be represented in the data? TIMIT has Words, Phonemes and texts. Also has lot on info on the speaker and the dialect. Could you please help me? An example of how to arrange it would be super helpful!\r\n\r\n",
"Hey @vrindaprabhu - sure I'll help you :-) Could you open a first PR for TIMIT where you copy-paste more or less the `librispeech_asr` script: https://github.com/huggingface/datasets/blob/28be129db862ec89a87ac9349c64df6b6118aff4/datasets/librispeech_asr/librispeech_asr.py#L93 (obviously replacing all the naming and links correctly...) and then you can list all possible outputs in the features dict: https://github.com/huggingface/datasets/blob/28be129db862ec89a87ac9349c64df6b6118aff4/datasets/librispeech_asr/librispeech_asr.py#L104 (words, phonemes should probably be of kind `datasets.Sequence(datasets.Value(\"string\"))` and texts I think should be of type `\"text\": datasets.Value(\"string\")`.\r\n\r\nWhen you've opened a first PR, I think it'll be much easier for us to take a look together :-) ",
"I am sorry! I created the PR [#1903](https://github.com/huggingface/datasets/pull/1903#). Requesting your comments! CircleCI tests are failing, will address them along with your comments!"
] | "2021-02-09T07:29:41Z" | "2021-03-15T05:59:37Z" | "2021-03-15T05:59:37Z" | MEMBER | null | ## Adding a Dataset
- **Name:** *TIMIT*
- **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems*
- **Paper:** *Homepage*: http://groups.inf.ed.ac.uk/ami/corpus/ / *Wikipedia*: https://en.wikipedia.org/wiki/TIMIT
- **Data:** *https://deepai.org/dataset/timit*
- **Motivation:** Important speech dataset
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1849/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1849/timeline | null | completed | null | null | false |
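Following the guidance in the comments above (listing words, phonemes and texts in the features dict), a minimal sketch is shown below; the field names are illustrative assumptions only, not the schema of the merged TIMIT loader.

```python
# Illustrative sketch only: field names are assumptions, not the final TIMIT schema.
import datasets

timit_features = datasets.Features(
    {
        "file": datasets.Value("string"),                        # path to the audio clip
        "text": datasets.Value("string"),                        # full transcription
        "words": datasets.Sequence(datasets.Value("string")),    # word-level annotation
        "phonemes": datasets.Sequence(datasets.Value("string")), # phoneme-level annotation
        "speaker_id": datasets.Value("string"),
        "dialect_region": datasets.Value("string"),
    }
)

print(timit_features)
```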
https://api.github.com/repos/huggingface/datasets/issues/1848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1848/comments | https://api.github.com/repos/huggingface/datasets/issues/1848/events | https://github.com/huggingface/datasets/pull/1848 | 803,826,506 | MDExOlB1bGxSZXF1ZXN0NTY5Njg5ODU1 | 1,848 | Refactoring: Create config module | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-02-08T18:43:51Z" | "2021-02-10T12:29:35Z" | "2021-02-10T12:29:35Z" | MEMBER | null | Refactorize configuration settings into their own module.
This could be seen as a Pythonic singleton-like approach. Eventually a config instance class might be created. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1848/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1848/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1848.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1848",
"merged_at": "2021-02-10T12:29:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1848.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1848"
} | true |
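As a rough illustration of the "configuration settings in their own module" (singleton-like) idea from this PR, a toy sketch follows; the names are invented for the example and are not the actual attributes of `datasets.config`.

```python
# config.py -- toy sketch of module-level configuration. Python caches imported
# modules, so every `import config` sees the same shared settings ("singleton-like").
# The attribute names below are invented for illustration only.
import os

CACHE_DIR = os.getenv("MYLIB_CACHE", os.path.expanduser("~/.cache/mylib"))
MAX_RETRIES = int(os.getenv("MYLIB_MAX_RETRIES", "3"))

if __name__ == "__main__":
    # Any other module doing `import config` reads these same values.
    print(CACHE_DIR, MAX_RETRIES)
```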
https://api.github.com/repos/huggingface/datasets/issues/1847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1847/comments | https://api.github.com/repos/huggingface/datasets/issues/1847/events | https://github.com/huggingface/datasets/pull/1847 | 803,824,694 | MDExOlB1bGxSZXF1ZXN0NTY5Njg4NDY0 | 1,847 | [Metrics] Add word error metric metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | [
"Feel free to merge once the CI is all green ;)"
] | "2021-02-08T18:41:15Z" | "2021-02-09T17:53:21Z" | "2021-02-09T17:53:21Z" | MEMBER | null | This PR adds the word error rate metric to datasets.
WER (https://en.wikipedia.org/wiki/Word_error_rate) is the main metric used in automatic speech recognition (ASR).
`jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1847/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1847/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1847.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1847",
"merged_at": "2021-02-09T17:53:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1847.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1847"
} | true |
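As a hedged usage sketch (assuming the metric is registered under the name "wer" and follows the usual predictions/references interface once this PR is merged — check the merged metric's docs for the exact API):

```python
# Hedged sketch: assumes the metric is loadable as "wer" with the usual
# predictions/references interface; verify against the merged metric's documentation.
from datasets import load_metric

wer_metric = load_metric("wer")

predictions = ["hello world", "good night moon"]
references = ["hello duck", "good night moon"]

score = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {score:.3f}")  # 0.0 is a perfect match; higher is worse
```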
https://api.github.com/repos/huggingface/datasets/issues/1846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1846/comments | https://api.github.com/repos/huggingface/datasets/issues/1846/events | https://github.com/huggingface/datasets/pull/1846 | 803,806,380 | MDExOlB1bGxSZXF1ZXN0NTY5NjczMzcy | 1,846 | Make DownloadManager downloaded/extracted paths accessible | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"First I was thinking of the dict, which makes sense for .download, mapping URL to downloaded path. However does this make sense for .extract, mapping the downloaded path to the extracted path? I ask this because the user did not chose the downloaded path, so this is completely unknown for them...",
"There could be several situations:\r\n- download a file with no extraction\r\n- download a file and extract it\r\n- download a file, extract it and then inside the output folder extract some more files\r\n- extract a local file (for datasets with data that are manually downloaded for example)\r\n- extract a local file, and then inside the output folder extract some more files\r\n\r\nSo I think it's ok to have `downloaded_paths` as a dict url -> downloaded_path and `extracted_paths` as a dict local_path -> extracted_path.",
"OK. I am refactoring this. I have opened #1879, as an intermediate step..."
] | "2021-02-08T18:14:42Z" | "2021-02-25T14:10:18Z" | "2021-02-25T14:10:18Z" | MEMBER | null | Make accessible the file paths downloaded/extracted by DownloadManager.
Close #1831.
The approach:
- I set these paths as DownloadManager attributes: these are DownloadManager's concerns
- To access these from DatasetBuilder, I set the DownloadManager instance as a DatasetBuilder attribute: object composition | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1846/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1846/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1846.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1846",
"merged_at": "2021-02-25T14:10:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1846.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1846"
} | true |
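A toy sketch of the design discussed in this PR — the manager records `downloaded_paths` (url -> downloaded path) and `extracted_paths` (local path -> extracted path), and the builder reaches them through composition. The class and attribute names follow the discussion above, but this is not the real `datasets` implementation.

```python
# Toy sketch of the attribute + composition design discussed in this PR;
# not the actual datasets.DownloadManager / DatasetBuilder code.

class ToyDownloadManager:
    def __init__(self):
        self.downloaded_paths = {}  # url -> downloaded path
        self.extracted_paths = {}   # local path -> extracted path

    def download(self, url: str) -> str:
        path = "/tmp/cache/" + url.rsplit("/", 1)[-1]
        self.downloaded_paths[url] = path
        return path

    def extract(self, path: str) -> str:
        extracted = path + ".extracted"
        self.extracted_paths[path] = extracted
        return extracted


class ToyDatasetBuilder:
    def __init__(self, dl_manager: ToyDownloadManager):
        self.dl_manager = dl_manager  # object composition: builder holds the manager

    def record_of_downloads(self):
        return self.dl_manager.downloaded_paths, self.dl_manager.extracted_paths


dm = ToyDownloadManager()
builder = ToyDatasetBuilder(dm)
dm.extract(dm.download("https://example.com/data.tar.gz"))
print(builder.record_of_downloads())
```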
https://api.github.com/repos/huggingface/datasets/issues/1845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1845/comments | https://api.github.com/repos/huggingface/datasets/issues/1845/events | https://github.com/huggingface/datasets/pull/1845 | 803,714,493 | MDExOlB1bGxSZXF1ZXN0NTY5NTk2MTIz | 1,845 | Enable logging propagation and remove logging handler | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Thank you @lhoestq. This logging configuration makes more sense to me.\r\n\r\nOnce propagation is allowed, the end-user can customize logging behavior and add custom handlers to the proper top logger in the hierarchy.\r\n\r\nAnd I also agree with following the best practices and removing any custom handlers:\r\n- it is the end user who has to implement any custom handlers\r\n- indeed, the previous logging problem with TensorFlow was due to the fact that absl did not follow best practices and had implemented a custom handler\r\n\r\nOur errors/warnings will be displayed anyway, even if we do not implement any custom handler. Since Python 3.2, logging has a built-in \"default\" handler (logging.lastResort) with the expected default behavior (sending error/warning messages to sys.stderr), which is used only if the end user has not configured any custom handler."
] | "2021-02-08T16:22:13Z" | "2021-02-09T14:22:38Z" | "2021-02-09T14:22:37Z" | MEMBER | null | We used to have logging propagation disabled because of this issue: https://github.com/tensorflow/tensorflow/issues/26691
But since it's now fixed we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures as asked in #1826
I also removed the handler that was added since, according to the logging [documentation](https://docs.python.org/3/howto/logging.html#configuring-logging-for-a-library):
> It is strongly advised that you do not add any handlers other than NullHandler to your library’s loggers. This is because the configuration of handlers is the prerogative of the application developer who uses your library. The application developer knows their target audience and what handlers are most appropriate for their application: if you add handlers ‘under the hood’, you might well interfere with their ability to carry out unit tests and deliver logs which suit their requirements.
It could have been useful if we wanted a custom formatter for the logging, but I think it's more important to keep the default logging behavior and not interfere with the users' logging management.
Therefore I also removed the two methods `datasets.logging.enable_default_handler` and `datasets.logging.disable_default_handler`.
cc @albertvillanova this should let you use capsys/caplog in pytest
cc @LysandreJik @sgugger if you want to do the same in `transformers` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1845/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1845/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1845.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1845",
"merged_at": "2021-02-09T14:22:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1845.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1845"
} | true |
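The setup described in this PR follows the standard-library guidance quoted in the body: the library attaches at most a NullHandler and leaves propagation on, so the application decides how log records are shown. A generic sketch of that pattern (plain `logging` usage, not the actual datasets code):

```python
# Generic sketch of the library-logging pattern described above: NullHandler on the
# library logger, propagation left enabled, handlers configured by the application.
import logging

# --- library side ---
lib_logger = logging.getLogger("my_library")
lib_logger.addHandler(logging.NullHandler())  # never force a handler on users
# lib_logger.propagate stays True by default, so records reach the root logger

# --- application side ---
logging.basicConfig(level=logging.INFO)  # the application chooses handlers/format
lib_logger.info("shown (or captured by pytest's caplog) via the app's configuration")
```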
https://api.github.com/repos/huggingface/datasets/issues/1844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1844/comments | https://api.github.com/repos/huggingface/datasets/issues/1844/events | https://github.com/huggingface/datasets/issues/1844 | 803,588,125 | MDU6SXNzdWU4MDM1ODgxMjU= | 1,844 | Update Open Subtitles corpus with original sentence IDs | {
"avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4",
"events_url": "https://api.github.com/users/Valahaar/events{/privacy}",
"followers_url": "https://api.github.com/users/Valahaar/followers",
"following_url": "https://api.github.com/users/Valahaar/following{/other_user}",
"gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Valahaar",
"id": 19476123,
"login": "Valahaar",
"node_id": "MDQ6VXNlcjE5NDc2MTIz",
"organizations_url": "https://api.github.com/users/Valahaar/orgs",
"received_events_url": "https://api.github.com/users/Valahaar/received_events",
"repos_url": "https://api.github.com/users/Valahaar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Valahaar"
} | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | [
"Hi ! You're right this can can useful.\r\nThis should be easy to add, so feel free to give it a try if you want to contribute :)\r\nI think we just need to add it to the _generate_examples method of the OpenSubtitles dataset builder [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles/open_subtitles.py#L103)",
"Hey @lhoestq , absolutely yes! Just one question before I start implementing. The ids found in the zip file have this format: \r\n(the following is line `22497315` of the `ids` file of the `de-en` dump)\r\n\r\n\r\n`de/2017/7006210/7063319.xml.gz en/2017/7006210/7050201.xml.gz 335 339 340` (every space is actually a tab, aside from the space between `339` and `340`)\r\n\r\n\r\nWhere filenames encode the information like this: `lang/year/imdb_id/opensubtitles_id.xml.gz` whereas the numbers correspond to the sentence ids which are linked together (i.e. sentence `335` of the German subtitle corresponds to lines `339` and `340` of the English file)\r\n\r\nThat being said, do you think I should stick to the raw sentence id (and replace the current sequential id) or should I include more detailed metadata (or both things maybe)?\r\n\r\nGoing with raw ID is surely simpler, but including `year`, `imdbId` and `subtitleId` should save space as they're just integers; besides, any operation (like filtering or grouping) will be much easier if users don't have to manually parse the ids every time.\r\nAs for the language-specific sentenceIds, what could be the best option? A list of integers or a comma-separated string?\r\n\r\n**Note:** I did not find any official information about this encoding, but it appears to check out:\r\nhttps://www.imdb.com/title/tt7006210/, https://www.opensubtitles.org/en/subtitles/7063319 and https://www.opensubtitles.org/en/subtitles/7050201 all link to the same episode, so I guess (I hope!) it's correct.\r\n\r\n",
"I like the idea of having `year`, `imdbId` and `subtitleId` as columns for filtering for example.\r\nAnd for the `sentenceIds` a list of integers is fine.",
"Thanks for improving it @Valahaar :) ",
"Something like this? (adapted from [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles/open_subtitles.py#L114))\r\n\r\n```python\r\nresult = (\r\n sentence_counter,\r\n {\r\n \"id\": str(sentence_counter),\r\n \"meta\": {\r\n \"year\": year,\r\n \"imdbId\": imdb_id,\r\n \"subtitleId\": {l1: l1_sub_id, l2: l2_sub_id},\r\n \"sentenceIds\": {l1: [... source_sids ...], l2: [... target_sids ...]},\r\n # or maybe src/tgt? I'd go with the first one for consistency with 'translation'\r\n \"subtitleId\": {\"src\": l1_sub_id, \"tgt\": l2_sub_id},\r\n \"sentenceIds\": {\"src\": [... source_sids ...], \"tgt\": [... target_sids ...]},\r\n },\r\n \"translation\": {l1: x, l2: y},\r\n },\r\n )\r\n```\r\nOr at top level, avoiding nesting into 'meta'?",
"Merged in #1865, closing. Thanks :)"
] | "2021-02-08T13:55:13Z" | "2021-02-12T17:38:58Z" | "2021-02-12T17:38:58Z" | CONTRIBUTOR | null | Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles).
I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat allowing for document-level machine translation (and other document-level stuff which could be cool to have); second, it's possible to have parallel sentences in multiple languages, as they share the same ids across bitexts.
I think I should tag @abhishekkrthakur as he's the one who added it in the first place.
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1844/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1844/timeline | null | completed | null | null | false |
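Based on the `ids` file format described in the discussion above (tab-separated subtitle paths of the form `lang/year/imdb_id/opensubtitles_id.xml.gz`, followed by space-separated sentence ids per side), a hedged parsing sketch might look like this — the output keys are illustrative, not the final dataset schema:

```python
# Hedged sketch of parsing one line of the OpenSubtitles `ids` file, following the
# format described in the discussion above; the returned keys are illustrative only.
def parse_ids_line(line: str) -> dict:
    src_path, tgt_path, src_ids, tgt_ids = line.rstrip("\n").split("\t")

    def parse_path(path: str) -> dict:
        lang, year, imdb_id, sub_file = path.split("/")
        return {
            "lang": lang,
            "year": int(year),
            "imdbId": int(imdb_id),
            "subtitleId": int(sub_file.split(".")[0]),
        }

    return {
        "source": parse_path(src_path),
        "target": parse_path(tgt_path),
        "sourceSentenceIds": [int(i) for i in src_ids.split()],
        "targetSentenceIds": [int(i) for i in tgt_ids.split()],
    }


example = "de/2017/7006210/7063319.xml.gz\ten/2017/7006210/7050201.xml.gz\t335\t339 340"
print(parse_ids_line(example))
```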
https://api.github.com/repos/huggingface/datasets/issues/1843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1843/comments | https://api.github.com/repos/huggingface/datasets/issues/1843/events | https://github.com/huggingface/datasets/issues/1843 | 803,565,393 | MDU6SXNzdWU4MDM1NjUzOTM= | 1,843 | MustC Speech Translation | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | open | false | null | [] | null | [
"Hi @patrickvonplaten I would like to work on this dataset. \r\n\r\nThanks! ",
"That's awesome! Actually, I just noticed that this dataset might become a bit too big!\r\n\r\nMuST-C is the main dataset used for IWSLT19 and should probably be added as a standalone dataset. Would you be interested also in adding `datasets/MuST-C` instead?\r\n\r\nDescription: \r\n_MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems for speech translation from English into several languages. For each target language, MuST-C comprises several hundred hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations._\r\n\r\nPaper: https://www.aclweb.org/anthology/N19-1202.pdf\r\n\r\nDataset: https://ict.fbk.eu/must-c/ (One needs to fill out a short from to download the data, but it's very easy).\r\n\r\nIt would be awesome if you're interested in adding this datates. I'm very happy to guide you through the PR! I think the easiest way to start would probably be to read [this README on how to add a dataset](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md) and open a PR. Think you can copy & paste some code from:\r\n\r\n- Librispeech_asr: https://github.com/huggingface/datasets/blob/master/datasets/librispeech_asr/librispeech_asr.py\r\n- Flores Translation: https://github.com/huggingface/datasets/blob/master/datasets/flores/flores.py\r\n\r\nThink all the rest can be handled on the PR :-) ",
"Hi @patrickvonplaten \r\nI have tried downloading this dataset, but the connection seems to reset all the time. I have tried it via the browser, wget, and using gdown . But it gives me an error message. _\"The server is busy or down, pls try again\"_ (rephrasing the message here)\r\n\r\nI have completed adding 4 datasets in the previous data sprint (including the IWSLT dataset #1676 ) ...so just checking if you are able to download it at your end. Otherwise will write to the dataset authors to update the links. \r\n\r\n\r\n\r\n\r\n",
"Let me check tomorrow! Thanks for leaving this message!",
"cc @patil-suraj for notification ",
"@skyprince999, I think I'm getting the same error you're getting :-/\r\n\r\n```\r\nSorry, you can't view or download this file at this time.\r\n\r\nToo many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.\r\n```\r\n\r\nIt would be great if you could write the authors to see whether they can fix it.\r\nAlso cc @lhoestq - do you think we could mirror the dataset? ",
"Also there are huge those datasets. Think downloading MuST-C v1.2 amounts to ~ 1000GB... because there are 14 possible configs each around 60-70GB. I think users mostly will only use one of the 14 configs so that they would only need, in theory, will have to download ~60GB which is ok. But I think this functionality doesn't exist yet in `datasets` no? cc @lhoestq ",
"> Also cc @lhoestq - do you think we could mirror the dataset?\r\n\r\nYes we can mirror it if the authors are fine with it. You can create a dataset repo on huggingface.co (possibly under the relevant org) and add the mirrored data files.\r\n\r\n> I think users mostly will only use one of the 14 configs so that they would only need, in theory, will have to download ~60GB which is ok. But I think this functionality doesn't exist yet in datasets no? cc @lhoestq\r\n\r\nIf there are different download links for each configuration we can make the dataset builder download only the files related to the requested configuration.",
"I have written to the dataset authors, highlighting this issue. Waiting for their response. \r\n\r\nUpdate on 25th Feb: \r\nThe authors have replied back, they are updating the download link and will revert back shortly! \r\n\r\n```\r\nfirst of all thanks a lot for being interested in MuST-C and for building the data-loader.\r\n\r\nBefore answering your request, I'd like to clarify that the creation, maintenance, and expansion of MuST-c are not supported by any funded project, so this means that we need to find economic support for all these activities. This also includes permanently moving all the data to AWS or GCP. We are working at this with the goal of facilitating the use of MuST-C, but this is not something that can happen today. We hope to have some news ASAP and you will be among the first to be informed.\r\n\r\nI hope you understand our situation.\r\n```\r\n\r\n",
"Awesome, actually @lhoestq let's just ask the authors if we should host the dataset no? They could just use our links then as well for their website - what do you think? Is it fine to use our AWS dataset storage also as external links? ",
"Yes definitely. Shall we suggest them to create a dataset repository under their org on huggingface.co ? @julien-c \r\nThe dataset is around 1TB",
"Sounds good! \r\n\r\nOrder of magnitude is storage costs ~$20 per TB per month (not including bandwidth). \r\n\r\nHappy to provide this to the community as I feel this is an important dataset. Let us know what the authors want to do!\r\n\r\n",
"Great! @skyprince999, do you think you could ping the authors here or link to this thread? I think it could be a cool idea to host the dataset on our side then",
"Done. They replied back, and they want to have a call over a meet/ skype. Is that possible ? \r\nBtw @patrickvonplaten you are looped in that email (_pls check you gmail account_) ",
"Hello! Any news on this?",
"@gegallego there were some concerns regarding dataset usage & attribution by a for-profit company, so couldn't take it forward. Also the download links were unstable. \r\nBut I guess if you want to test the fairseq benchmarks, you can connect with them directly for downloading the dataset. ",
"Yes, that dataset is not easy to download... I had to copy it to my Google Drive and use `rsync` to be able to download it.\r\nHowever, we could add the dataset with a manual download, right?",
"yes that is possible. I couldn't unfortunately complete this PR, If you would like to add it, please feel free to do it. "
] | "2021-02-08T13:27:45Z" | "2021-05-14T14:53:34Z" | null | MEMBER | null | ## Adding a Dataset
- **Name:** *IWSLT19*
- **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.*
- **Homepage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation*
- **Data:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - all data under "Allowed Training Data" and "Development and Evaluation Data for TED/How2"
- **Motivation:** Important speech dataset
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1843/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1843/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1842/comments | https://api.github.com/repos/huggingface/datasets/issues/1842/events | https://github.com/huggingface/datasets/issues/1842 | 803,563,149 | MDU6SXNzdWU4MDM1NjMxNDk= | 1,842 | Add AMI Corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | closed | false | null | [] | null | [
"Available here: ~https://huggingface.co/datasets/ami~ https://huggingface.co/datasets/edinburghcstr/ami",
"@mariosasko actually the \"official\" AMI dataset can be found here: https://huggingface.co/datasets/edinburghcstr/ami -> the old one under `datasets/ami` doesn't work and should be deleted. \r\n\r\nThe new one was tested by fine-tuning a Wav2Vec2 model on it + we uploaded all the processed audio directly into it",
"@patrickvonplaten Thanks for correcting me! I've updated the link."
] | "2021-02-08T13:25:00Z" | "2023-02-28T16:29:22Z" | "2023-02-28T16:29:22Z" | MEMBER | null | ## Adding a Dataset
- **Name:** *AMI*
- **Description:** *The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Detailed information can be found in the documentation section.*
- **Paper:** *Homepage*: http://groups.inf.ed.ac.uk/ami/corpus/
- **Data:** *http://groups.inf.ed.ac.uk/ami/download/* - Select all cases in 1) and select "Individual Headsets" & "Microphone array" for 2)
- **Motivation:** Important speech dataset
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1842/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1842/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1841/comments | https://api.github.com/repos/huggingface/datasets/issues/1841/events | https://github.com/huggingface/datasets/issues/1841 | 803,561,123 | MDU6SXNzdWU4MDM1NjExMjM= | 1,841 | Add ljspeech | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | closed | false | null | [] | null | [] | "2021-02-08T13:22:26Z" | "2021-03-15T05:59:02Z" | "2021-03-15T05:59:02Z" | MEMBER | null | ## Adding a Dataset
- **Name:** *ljspeech*
- **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours.
The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded in 2016-17 by the LibriVox project and is also in the public domain.*
- **Paper:** *Homepage*: https://keithito.com/LJ-Speech-Dataset/
- **Data:** *https://keithito.com/LJ-Speech-Dataset/*
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/ljspeech
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1841/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1841/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1840/comments | https://api.github.com/repos/huggingface/datasets/issues/1840/events | https://github.com/huggingface/datasets/issues/1840 | 803,560,039 | MDU6SXNzdWU4MDM1NjAwMzk= | 1,840 | Add common voice | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | closed | false | null | [] | null | [
"I have started working on adding this dataset.",
"Hey @BirgerMoell - awesome that you started working on Common Voice. Common Voice is a bit special since, there is no direct download link to download the data. In these cases we usually consider two options:\r\n\r\n1) Find a hacky solution to extract the download link somehow from the XLM tree of the website \r\n2) If this doesn't work we force the user to download the data himself and add a `\"data_dir\"` as an input parameter. E.g. you can take a look at how it is done for [this](https://github.com/huggingface/datasets/blob/66f2a7eece98d2778bd22bb5034cb7c2376032d4/datasets/arxiv_dataset/arxiv_dataset.py#L66) \r\n\r\nAlso the documentation here: https://huggingface.co/docs/datasets/add_dataset.html?highlight=data_dir#downloading-data-files-and-organizing-splits (especially the \"note\") might be helpful.",
"Let me know if you have any other questions",
"I added a Work in Progress pull request (hope that is ok). I've made a card for the dataset and filled out the common_voice.py file with information about the datset (not completely).\r\n\r\nI didn't manage to get the tagging tool working locally on my machine but will look into that later.\r\n\r\nLeft to do.\r\n\r\n- Tag the dataset\r\n- Add missing information and update common_voice.py\r\n\r\nhttps://github.com/huggingface/datasets/pull/1886",
"Awesome! I left a longer comment on the PR :-)",
"I saw that this current datasets package holds common voice version 6.1, how to add the new version 7.0 that is already available?",
"Will me merged next week - we're working on it :-)",
"Common voice still appears to be a 6.1. Is the plan still to upgrade to 7.0?",
"We actually already have the code and everything ready to add Common Voice 7.0 to `datasets` but are still waiting for the common voice authors to give us the green light :-) \r\n\r\nAlso gently pinging @phirework and @milupo here",
"Common Voice 7.0 is available here now: https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0",
"For anyone else stumbling upon this thread, the 8.0 version is also available now: https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0"
] | "2021-02-08T13:21:05Z" | "2022-03-20T15:23:40Z" | "2021-03-15T05:56:21Z" | MEMBER | null | ## Adding a Dataset
- **Name:** *common voice*
- **Description:** *Mozilla Common Voice Dataset*
- **Paper:** Homepage: https://voice.mozilla.org/en/datasets
- **Data:** https://voice.mozilla.org/en/datasets
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/common_voice
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1840/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1840/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1839/comments | https://api.github.com/repos/huggingface/datasets/issues/1839/events | https://github.com/huggingface/datasets/issues/1839 | 803,559,164 | MDU6SXNzdWU4MDM1NTkxNjQ= | 1,839 | Add Voxforge | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | open | false | null | [] | null | [] | "2021-02-08T13:19:56Z" | "2021-02-08T13:28:31Z" | null | MEMBER | null | ## Adding a Dataset
- **Name:** *voxforge*
- **Description:** *VoxForge is a language classification dataset. It consists of user-submitted audio clips submitted to the website. In this release, data from 6 languages is collected - English, Spanish, French, German, Russian, and Italian. Since the website is constantly updated, and for the sake of reproducibility, this release contains only recordings submitted prior to 2020-01-01. The samples are split between train, validation and testing so that samples from each speaker belong to exactly one split.*
- **Paper:** *Homepage*: http://www.voxforge.org/
- **Data:** *http://www.voxforge.org/home/downloads*
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/voxforge
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1839/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1839/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1838/comments | https://api.github.com/repos/huggingface/datasets/issues/1838/events | https://github.com/huggingface/datasets/issues/1838 | 803,557,521 | MDU6SXNzdWU4MDM1NTc1MjE= | 1,838 | Add tedlium | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | closed | false | null | [] | null | [
"Hi @patrickvonplaten \r\nI can have a look to this dataset later since I am trying to add the OpenSLR dataset https://github.com/huggingface/datasets/pull/2173\r\nHopefully I have enough space since the compressed file is 21GB. The release 3 is even bigger: 54GB :-0",
"Resolved via https://github.com/huggingface/datasets/pull/4309"
] | "2021-02-08T13:17:52Z" | "2022-10-04T14:34:12Z" | "2022-10-04T14:34:12Z" | MEMBER | null | ## Adding a Dataset
- **Name:** *tedlium*
- **Description:** *The TED-LIUM 1-3 corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.*
- **Paper:** Homepage: http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/ &, https://www.openslr.org/51/
- **Data:** http://www.openslr.org/7/
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/tedlium
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1838/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1838/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1837/comments | https://api.github.com/repos/huggingface/datasets/issues/1837/events | https://github.com/huggingface/datasets/issues/1837 | 803,555,650 | MDU6SXNzdWU4MDM1NTU2NTA= | 1,837 | Add VCTK | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | closed | false | null | [] | null | [
"@patrickvonplaten I'd like to take this, if nobody has already done it. I have added datasets before through the datasets sprint, but I feel rusty on the details, so I'll look at the guide as well as similar audio PRs (#1878 in particular comes to mind). If there is any detail I should be aware of please, let me know! Otherwise, I'll try to write up a PR in the coming days.",
"That sounds great @jaketae - let me know if you need any help i.e. feel free to ping me on a first PR :-)"
] | "2021-02-08T13:15:28Z" | "2021-12-28T15:05:08Z" | "2021-12-28T15:05:08Z" | MEMBER | null | ## Adding a Dataset
- **Name:** *VCTK*
- **Description:** *The CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, selected from a newspaper, the Rainbow Passage, and an elicitation paragraph used for the Speech Accent Archive.*
- **Paper:** Homepage: https://datashare.ed.ac.uk/handle/10283/3443
- **Data:** https://datashare.ed.ac.uk/handle/10283/3443
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/vctk
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1837/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1837/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1836/comments | https://api.github.com/repos/huggingface/datasets/issues/1836/events | https://github.com/huggingface/datasets/issues/1836 | 803,531,837 | MDU6SXNzdWU4MDM1MzE4Mzc= | 1,836 | test.json has been removed from the limit dataset repo (breaks dataset) | {
"avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4",
"events_url": "https://api.github.com/users/Paethon/events{/privacy}",
"followers_url": "https://api.github.com/users/Paethon/followers",
"following_url": "https://api.github.com/users/Paethon/following{/other_user}",
"gists_url": "https://api.github.com/users/Paethon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Paethon",
"id": 237550,
"login": "Paethon",
"node_id": "MDQ6VXNlcjIzNzU1MA==",
"organizations_url": "https://api.github.com/users/Paethon/orgs",
"received_events_url": "https://api.github.com/users/Paethon/received_events",
"repos_url": "https://api.github.com/users/Paethon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Paethon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Paethon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Paethon"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | [
"Thanks for the heads up ! I'm opening a PR to fix that"
] | "2021-02-08T12:45:53Z" | "2021-02-10T16:14:58Z" | "2021-02-10T16:14:58Z" | NONE | null | https://github.com/huggingface/datasets/blob/16042b233dbff2a7585110134e969204c69322c3/datasets/limit/limit.py#L51
The URL is not valid anymore since test.json has been removed in master for some reason. Directly referencing the last commit works:
`https://raw.githubusercontent.com/ilmgut/limit_dataset/0707d3989cd8848f0f11527c77dcf168fefd2b23/data` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1836/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1836/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1835/comments | https://api.github.com/repos/huggingface/datasets/issues/1835/events | https://github.com/huggingface/datasets/issues/1835 | 803,524,790 | MDU6SXNzdWU4MDM1MjQ3OTA= | 1,835 | Add CHiME4 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | open | false | null | [] | null | [] | "2021-02-08T12:36:38Z" | "2021-02-08T13:13:31Z" | null | MEMBER | null | ## Adding a Dataset
- **Name:** Chime4
- **Description:** CHiME-4 is a dataset for automatic speech recognition. It is especially useful for evaluating models in noisy environments and for multi-channel ASR.
- **Paper:** The dataset comes from the CHiME challenge: http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/ . Results paper:
- **Data:** http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/download.html
- **Motivation:** So far there are very few speech datasets in `datasets`: only `librispeech_asr`.
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1835/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1835/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1834/comments | https://api.github.com/repos/huggingface/datasets/issues/1834/events | https://github.com/huggingface/datasets/pull/1834 | 803,517,094 | MDExOlB1bGxSZXF1ZXN0NTY5NDMzNDA4 | 1,834 | Fixes base_url of limit dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4",
"events_url": "https://api.github.com/users/Paethon/events{/privacy}",
"followers_url": "https://api.github.com/users/Paethon/followers",
"following_url": "https://api.github.com/users/Paethon/following{/other_user}",
"gists_url": "https://api.github.com/users/Paethon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Paethon",
"id": 237550,
"login": "Paethon",
"node_id": "MDQ6VXNlcjIzNzU1MA==",
"organizations_url": "https://api.github.com/users/Paethon/orgs",
"received_events_url": "https://api.github.com/users/Paethon/received_events",
"repos_url": "https://api.github.com/users/Paethon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Paethon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Paethon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Paethon"
} | [] | closed | false | null | [] | null | [
"OK, apparently it is a lot more complicated than simply changing the URL? Going to make an issue."
] | "2021-02-08T12:26:35Z" | "2021-02-08T12:42:50Z" | "2021-02-08T12:42:50Z" | NONE | null | `test.json` is not available in the master branch of the repository anymore. Linking to a specific commit. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1834/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1834/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1834.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1834",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1834.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1834"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1833 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1833/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1833/comments | https://api.github.com/repos/huggingface/datasets/issues/1833/events | https://github.com/huggingface/datasets/pull/1833 | 803,120,978 | MDExOlB1bGxSZXF1ZXN0NTY5MDk5MTUx | 1,833 | Add OSCAR dataset card | {
"avatar_url": "https://avatars.githubusercontent.com/u/635220?v=4",
"events_url": "https://api.github.com/users/pjox/events{/privacy}",
"followers_url": "https://api.github.com/users/pjox/followers",
"following_url": "https://api.github.com/users/pjox/following{/other_user}",
"gists_url": "https://api.github.com/users/pjox/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pjox",
"id": 635220,
"login": "pjox",
"node_id": "MDQ6VXNlcjYzNTIyMA==",
"organizations_url": "https://api.github.com/users/pjox/orgs",
"received_events_url": "https://api.github.com/users/pjox/received_events",
"repos_url": "https://api.github.com/users/pjox/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pjox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjox/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pjox"
} | [] | closed | false | null | [] | null | [
"@lhoestq Thanks for the suggestions! I agree with all of them. Should I accept them one by one or can I accept them all at once? When I try to load the whole diff GitHub is complaining and it does no render them well (probably my browser?) 😅 ",
"I just merged the tables as suggested 😄 . However I noticed something weird, the train sizes are identical for both the original and deduplicated files ... This is not normal, in general the original files are almost twice as big as the deduplicated ones 🤔 ",
"Good catch @pjox ! I just checked and this is because the scripts doesn't handle having several blank lines in a row.\r\nBlank lines introduced by deduplication are currently not ignored so we end up with the same number of examples in the dataset as the original version (but with empty examples...)\r\nI fixed that in this [commit](https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383). I'm re-running the metadata generation for deduplicated configs.",
"I got the new sizes today, will update the dataset_infos.json and the dataset card tomorrow",
"> I got the new sizes today, will update the dataset_infos.json and the dataset card tomorrow\r\n\r\ngreat, I just wanted to report that I got error message \"NonMatchingSplitsSizesError\" when I tried to load one of the oscar dataset.",
"Hi @cahya-wirawan, which configuration of oscar do you have this issue with ?",
"Ok I see you're having this issue because I haven't updated the sizes yet ! I'm opening a PR\r\n\r\nI just checked and indeed there's an issue with the `deduplicated` configurations since the commit I mentioned above.\r\nI'm fixing this by using the new sizes I got yesterday :) \r\n",
"I just updated the size in the table @pjox it should be good now :) \r\nI also updated the sizes in the dataset_infos.json in https://github.com/huggingface/datasets/pull/1868 (merged)",
"Thanks @lhoestq for fixing the issue, it works now",
"Thank you so much @lhoestq !"
] | "2021-02-08T01:39:49Z" | "2021-02-12T14:09:25Z" | "2021-02-12T14:08:24Z" | CONTRIBUTOR | null | I added more information and completed the dataset card for OSCAR which was started by @lhoestq in his previous [PR](https://github.com/huggingface/datasets/pull/1824). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1833/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1833/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1833.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1833",
"merged_at": "2021-02-12T14:08:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1833.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1833"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1832/comments | https://api.github.com/repos/huggingface/datasets/issues/1832/events | https://github.com/huggingface/datasets/issues/1832 | 802,880,897 | MDU6SXNzdWU4MDI4ODA4OTc= | 1,832 | Looks like nokogumbo is up-to-date now, so this is no longer needed. | {
"avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4",
"events_url": "https://api.github.com/users/JimmyJim1/events{/privacy}",
"followers_url": "https://api.github.com/users/JimmyJim1/followers",
"following_url": "https://api.github.com/users/JimmyJim1/following{/other_user}",
"gists_url": "https://api.github.com/users/JimmyJim1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JimmyJim1",
"id": 68724553,
"login": "JimmyJim1",
"node_id": "MDQ6VXNlcjY4NzI0NTUz",
"organizations_url": "https://api.github.com/users/JimmyJim1/orgs",
"received_events_url": "https://api.github.com/users/JimmyJim1/received_events",
"repos_url": "https://api.github.com/users/JimmyJim1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JimmyJim1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JimmyJim1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JimmyJim1"
} | [] | closed | false | null | [] | null | [] | "2021-02-07T06:52:07Z" | "2021-02-08T17:27:29Z" | "2021-02-08T17:27:29Z" | NONE | null | Looks like nokogumbo is up-to-date now, so this is no longer needed.
__Originally posted by @dependabot in https://github.com/discourse/discourse/pull/11373#issuecomment-738993432__ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1832/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1832/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1831/comments | https://api.github.com/repos/huggingface/datasets/issues/1831/events | https://github.com/huggingface/datasets/issues/1831 | 802,868,854 | MDU6SXNzdWU4MDI4Njg4NTQ= | 1,831 | Some question about raw dataset download info in the project . | {
"avatar_url": "https://avatars.githubusercontent.com/u/27874014?v=4",
"events_url": "https://api.github.com/users/svjack/events{/privacy}",
"followers_url": "https://api.github.com/users/svjack/followers",
"following_url": "https://api.github.com/users/svjack/following{/other_user}",
"gists_url": "https://api.github.com/users/svjack/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/svjack",
"id": 27874014,
"login": "svjack",
"node_id": "MDQ6VXNlcjI3ODc0MDE0",
"organizations_url": "https://api.github.com/users/svjack/orgs",
"received_events_url": "https://api.github.com/users/svjack/received_events",
"repos_url": "https://api.github.com/users/svjack/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/svjack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/svjack/subscriptions",
"type": "User",
"url": "https://api.github.com/users/svjack"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Hi ! The `dl_manager` is a `DownloadManager` object and is responsible for downloading the raw data files.\r\nIt is used by dataset builders in their `_split_generators` method to download the raw data files that are necessary to build the datasets splits.\r\n\r\nThe `Conll2003` class is a dataset builder, and so you can download all the raw data files by calling `_split_generators` with a download manager:\r\n```python\r\nfrom datasets import DownloadManager\r\nfrom datasets.load import import_main_class\r\n\r\nconll2003_builder = import_main_class(...)\r\n\r\ndl_manager = DownloadManager()\r\nsplis_generators = conll2003_builder._split_generators(dl_manager)\r\n```\r\n\r\nThen you can see what files have been downloaded with\r\n```python\r\ndl_manager.get_recorded_sizes_checksums()\r\n```\r\nIt returns a dictionary with the format {url: {num_bytes: int, checksum: str}}\r\n\r\nThen you can get the actual location of the downloaded files with\r\n```python\r\nfrom datasets import cached_path\r\n\r\nlocal_path_to_downloaded_file = cached_path(url)\r\n```\r\n\r\n------------------\r\n\r\nNote that you can also get the urls from the Dataset object:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nconll2003 = load_dataset(\"conll2003\")\r\nprint(conll2003[\"train\"].download_checksums)\r\n```\r\nIt returns the same dictionary with the format {url: {num_bytes: int, checksum: str}}",
"I am afraid that there is not a very straightforward way to get that location.\r\n\r\nAnother option, from _split_generators would be to use:\r\n- `dl_manager._download_config.cache_dir` to get the directory where all the raw downloaded files are:\r\n ```python\r\n download_dir = dl_manager._download_config.cache_dir\r\n ```\r\n- the function `datasets.utils.file_utils.hash_url_to_filename` to get the filenames of the raw downloaded files:\r\n ```python\r\n filenames = [hash_url_to_filename(url) for url in urls_to_download.values()]\r\n ```\r\nTherefore the complete path to the raw downloaded files would be the join of both:\r\n```python\r\ndownloaded_paths = [os.path.join(download_dir, filename) for filename in filenames]\r\n```\r\n\r\nMaybe it would be interesting to make these paths accessible more easily. I could work on this. What do you think, @lhoestq ?",
"Sure it would be nice to have an easier access to these paths !\r\nThe dataset builder could have a method to return those, what do you think ?\r\nFeel free to work on this @albertvillanova , it would be a nice addition :) \r\n\r\nYour suggestion does work as well @albertvillanova if you complete it by specifying `etag=` to `hash_url_to_filename`.\r\n\r\nThe ETag is obtained by a HEAD request and is used to know if the file on the remote host has changed. Therefore if a file is updated on the remote host, then the hash returned by `hash_url_to_filename` is different.",
"Once #1846 will be merged, the paths to the raw downloaded files will be accessible as:\r\n```python\r\nbuilder_instance.dl_manager.downloaded_paths\r\n``` "
] | "2021-02-07T05:33:36Z" | "2021-02-25T14:10:18Z" | "2021-02-25T14:10:18Z" | NONE | null | Hi , i review the code in
https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py
The `_split_generators` function there contains the actual logic for downloading the raw dataset with `dl_manager`,
and the `Conll2003` class is used via `import_main_class` inside the `load_dataset` function.
My question is that, with this logic, it seems I cannot get hold of the raw dataset download location
stored in the `downloaded_files` variable inside `_split_generators`.
If someone also wants to use Hugging Face `datasets` as a raw dataset downloader,
how can they retrieve the raw dataset download path from the attributes of
`datasets.dataset_dict.DatasetDict`? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1831/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1831/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1830/comments | https://api.github.com/repos/huggingface/datasets/issues/1830/events | https://github.com/huggingface/datasets/issues/1830 | 802,790,075 | MDU6SXNzdWU4MDI3OTAwNzU= | 1,830 | using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? | {
"avatar_url": "https://avatars.githubusercontent.com/u/7662740?v=4",
"events_url": "https://api.github.com/users/wumpusman/events{/privacy}",
"followers_url": "https://api.github.com/users/wumpusman/followers",
"following_url": "https://api.github.com/users/wumpusman/following{/other_user}",
"gists_url": "https://api.github.com/users/wumpusman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wumpusman",
"id": 7662740,
"login": "wumpusman",
"node_id": "MDQ6VXNlcjc2NjI3NDA=",
"organizations_url": "https://api.github.com/users/wumpusman/orgs",
"received_events_url": "https://api.github.com/users/wumpusman/received_events",
"repos_url": "https://api.github.com/users/wumpusman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wumpusman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wumpusman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wumpusman"
} | [] | open | false | null | [] | null | [
"Hi @wumpusman \r\n`datasets` has a caching mechanism that allows to cache the results of `.map` so that when you want to re-run it later it doesn't recompute it again.\r\nSo when you do `.map`, what actually happens is:\r\n1. compute the hash used to identify your `map` for the cache\r\n2. apply your function on every batch\r\n\r\nThis can explain the time difference between your different experiments.\r\n\r\nThe hash computation time depends of how complex your function is. For a tokenizer, the hash computation scans the lists of the words in the tokenizer to identify this tokenizer. Usually it takes 2-3 seconds.\r\n\r\nAlso note that you can disable caching though using\r\n```python\r\nimport datasets\r\n\r\ndatasets.set_caching_enabled(False)\r\n```",
"Hi @lhoestq ,\r\n\r\nThanks for the reply. It's entirely possible that is the issue. Since it's a side project I won't be looking at it till later this week, but, I'll verify it by disabling caching and hopefully I'll see the same runtime. \r\n\r\nAppreciate the reference,\r\n\r\nMichael",
"I believe this is an actual issue, tokenizing a ~4GB txt file went from an hour and a half to ~10 minutes when I switched from my pre-trained tokenizer(on the same dataset) to the default gpt2 tokenizer.\r\nBoth were loaded using:\r\n```\r\nAutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)\r\n```\r\nI trained the tokenizer using ByteLevelBPETokenizer from the Tokenizers library and save it to a tokenizer.json file.\r\n\r\nI have tested the caching ideas above, changing the number of process, the TOKENIZERS_PARALLELISM env variable, keep_in_memory=True and batching with different sizes.\r\n\r\nApologies I can't really upload much code, but wanted to back up the finding and hopefully a fix/the problem can be found.\r\nI will comment back if I find a fix as well.",
"Hi @johncookds do you think this can come from one tokenizer being faster than the other one ? Can you try to compare their speed without using `datasets` just to make sure ?",
"Hi yes, I'm closing the loop here with some timings below. The issue seems to be at least somewhat/mainly with the tokenizer's themselves. Moreover legacy saves of the trainer tokenizer perform faster but differently than the new tokenizer.json saves(note nothing about the training process/adding of special tokens changed between the top two trained tokenizer tests, only the way it was saved). This is only a 3x slowdown vs like a 10x but I think the slowdown is most likely due to this.\r\n\r\n```\r\ntrained tokenizer - tokenizer.json save (same results for AutoTokenizer legacy_format=False):\r\nTokenizer time(seconds): 0.32767510414123535\r\nTokenized avg. length: 323.01\r\n\r\ntrained tokenizer - AutoTokenizer legacy_format=True:\r\nTokenizer time(seconds): 0.09258866310119629\r\nTokenized avg. length: 301.01\r\n\r\nGPT2 Tokenizer from huggingface\r\nTokenizer time(seconds): 0.1010282039642334\r\nTokenized avg. length: 461.21\r\n```",
"@lhoestq ,\r\n\r\nHi, which version of datasets has datasets.set_caching_enabled(False)? I get \r\nmodule 'datasets' has no attribute 'set_caching_enabled'. To hopefully get around this, I reran my code on a new set of data, and did so only once.\r\n\r\n@johncookds , thanks for chiming in, it looks this might be an issue of Tokenizer.\r\n\r\n**Tokenizer**: The runtime of GPT2TokenizerFast.from_pretrained(\"gpt2\") on 1000 chars is: **143 ms**\r\n**SlowTokenizer**: The runtime of a locally saved and loaded Tokenizer using the same vocab on 1000 chars is: **4.43 s**\r\n\r\nThat being said, I compared performance on the map function:\r\n\r\nRunning Tokenizer versus using it in the map function for 1000 chars goes from **141 ms** to **356 ms** \r\nRunning SlowTokenizer versus using it in the map function for 1000 chars with a single element goes from **4.43 s** to **9.76 s**\r\n\r\nI'm trying to figure out why the overhead of map would increase the time by double (figured it would be a fixed increase in time)? Though maybe this is expected behavior.\r\n\r\n@lhoestq, do you by chance know how I can redirect this issue to Tokenizer?\r\n\r\nRegards,\r\n\r\nMichael",
"Thanks for the experiments @johncookds and @wumpusman ! \r\n\r\n> Hi, which version of datasets has datasets.set_caching_enabled(False)?\r\n\r\nCurrently you have to install `datasets` from source to have this feature, but this will be available in the next release in a few days.\r\n\r\n> I'm trying to figure out why the overhead of map would increase the time by double (figured it would be a fixed increase in time)? Though maybe this is expected behavior.\r\n\r\nCould you also try with double the number of characters ? This should let us have an idea of the fixed cost (hashing) and the dynamic cost (actual tokenization, grows with the size of the input)\r\n\r\n> @lhoestq, do you by chance know how I can redirect this issue to Tokenizer?\r\n\r\nFeel free to post an issue on the `transformers` repo. Also I'm sure there should be related issues so you can also look for someone with the same concerns on the `transformers` repo.",
"@lhoestq,\r\n\r\nI just checked that previous run time was actually 3000 chars. I increased it to 6k chars, again, roughly double.\r\n\r\nSlowTokenizer **7.4 s** to **15.7 s**\r\nTokenizer: **276 ms** to **616 ms**\r\n\r\nI'll post this issue on Tokenizer, seems it hasn't quite been raised (albeit I noticed a similar issue that might relate).\r\n\r\nRegards,\r\n\r\nMichael",
"Hi, \r\nI'm following up here as I found my exact issue. It was with saving and re-loading the tokenizer. When I trained then processed the data without saving and reloading it, it was 10x-100x faster than when I saved and re-loaded it.\r\nBoth resulted in the exact same tokenized datasets as well. \r\nThere is additionally a bug where the older legacy tokenizer save does not preserve a learned tokenizing behavior if trained from scratch.\r\nUnderstand its not exactly Datasets related but hope it can help someone if they have the same issue.\r\nThanks!"
] | "2021-02-06T21:00:26Z" | "2021-02-24T21:56:14Z" | null | NONE | null | This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower:
````
import os
from datasets import Dataset
from transformers import GPT2Tokenizer

def save_tokenizer(original_tokenizer, text, path="simpledata/tokenizer"):
    # add every unique whitespace-separated word as a new token, then save to disk
    words_unique = set(text.split(" "))
    for i in words_unique:
        original_tokenizer.add_tokens(i)
    original_tokenizer.save_pretrained(path)

# experiment_path, experiment_name and train_set are defined earlier (not shown)
tokenizer2 = GPT2Tokenizer.from_pretrained(os.path.join(experiment_path, experiment_name, "tokenizer_squad"))
train_set_baby = Dataset.from_dict({"text": [train_set["text"][0][0:50]]})
````
I then applied the dataset map function on a fairly small set of text:
```
%%time
train_set_baby = train_set_baby.map(lambda d:tokenizer2(d["text"]),batched=True)
```
The run time for train_set_baby.map was 6 seconds, and the batch itself was 2.6 seconds
**100% 1/1 [00:02<00:00, 2.60s/ba] CPU times: user 5.96 s, sys: 36 ms, total: 5.99 s Wall time: 5.99 s**
In comparison, using the default fast tokenizer (even after adding additional tokens):
`tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")`
```
%%time
train_set_baby = train_set_baby.map(lambda d: tokenizer(d["text"]), batched=True)
```
The time is
**100% 1/1 [00:00<00:00, 34.09ba/s] CPU times: user 68.1 ms, sys: 16 µs, total: 68.1 ms Wall time: 62.9 ms**
It seems this might relate to the tokenizer save or load function; however, the issue only shows up when I apply the loaded tokenizer inside the map function.
I should also add that playing around with the number of words I add to the tokenizer before saving it to disk and loading it into memory appears to impact the time it takes to run the map function.
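For completeness, here is how I can check whether the slowdown comes from the tokenizer itself rather than from `map` (a minimal sketch reusing the objects defined above; the timing loop is mine and not part of the snippets shown earlier):
```
import time

sample_text = train_set_baby["text"][0]

# time each tokenizer directly, outside of map, to separate tokenizer cost from map/caching overhead
for name, tok in [("loaded tokenizer", tokenizer2), ("default fast tokenizer", tokenizer)]:
    start = time.time()
    tok(sample_text)
    print(name, "took", time.time() - start, "seconds")
```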
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1830/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1830/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1829 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1829/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1829/comments | https://api.github.com/repos/huggingface/datasets/issues/1829/events | https://github.com/huggingface/datasets/pull/1829 | 802,693,600 | MDExOlB1bGxSZXF1ZXN0NTY4NzgzNjA5 | 1,829 | Add Tweet Eval Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | [] | "2021-02-06T12:36:25Z" | "2021-02-08T13:17:54Z" | "2021-02-08T13:17:53Z" | CONTRIBUTOR | null | Closes Draft PR #1407.
Notes:
1. I have excluded `mapping.txt` from the dataset as it only contained the name mappings, which are already present in the ClassLabels.
2. I have also excluded the textual names for the emojis mentioned in the [mapping](https://github.com/cardiffnlp/tweeteval/blob/main/datasets/emoji/mapping.txt).
3. I do not understand @abhishekkrthakur's example generator on #1407. Maybe he was trying to build on code from some other dataset.
Requesting @lhoestq to review. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1829/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1829/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1829.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1829",
"merged_at": "2021-02-08T13:17:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1829.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1829"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1828/comments | https://api.github.com/repos/huggingface/datasets/issues/1828/events | https://github.com/huggingface/datasets/pull/1828 | 802,449,234 | MDExOlB1bGxSZXF1ZXN0NTY4NTkwNDM2 | 1,828 | Add CelebA Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | [
"Hi @gchhablani! Thanks for all the contributions! We definitely want more image datasets, but Face datasets are tricky in general, in this one includes predicting attributes such as Attractiveness, Gender, or Race, which can be pretty problematic.\r\n\r\nWould you be up for starting with only object classification or object detection datasets instead? (Your CIFAR-100 contribution will be super useful for example!)",
"Hi @yjernite, You're welcome. I am enjoying adding new datasets :)\r\nBy \"pretty problematic\", are you referring to the ethical issues? I used TFDS's [CelebA](https://github.com/tensorflow/datasets/blob/5ef7861470896acb6f74dacba85036001e4f1b8c/tensorflow_datasets/image/celeba.py#L91) as a reference. Here they mention in a \"Note\" that CelebA \"may contain potential bias\". Can we not do the same? I skipped the note for now, and we can add it. However, if you feel this isn't the right time, then I won't pursue this further. \r\n\r\nBut, can this issue be handled at a later stage? Does this also apply for my Hateful Memes Issue #1810?\r\n\r\nAlso, how can I \r\n1. load a part of the dataset? since `load_dataset(<>,split='train[10:20]')` still loads all the examples.\r\n2. make `datasets_infos.json` for huge datasets which have a single configuration?\r\n\r\nI will ofcourse be looking for other datasets to add regardless. \r\n",
"It's definitely a thorny question. The short answer is: Hateful Memes and hate speech detection datasets are different since their use case is specifically to train systems to identify and hopefully remove hateful content, whereas the purpose of a dataset that has an Attractiveness score as output is implicitly to train more models to rate \"Attractiveness\". \r\n\r\nAs far as warning about the \"potential biases\", I do not think it is quite enough, especially because it is hard to guarantee that every potential user will read the documentation (it is also an insufficient warning.)\r\n\r\nNote that we do have higher standards for the dataset cards of hate speech and hateful memes datasets, so if you do choose to add that one yourself we will ask that you summarize the relevant literature in the Social Impact section.\r\n\r\nIf you really need to add this dataset for your own research for the explicit purpose of studying these biases, you can add it as a community provided dataset following https://huggingface.co/docs/datasets/master/share_dataset.html#sharing-a-community-provided-dataset but I'd recommend just skipping it for now.",
"So currently you do need to download the whole dataset when using it, we are working on making it easier to stream parts of it from a remote host. You can also use the filesystem integration if local storage is an issue:\r\nhttps://huggingface.co/docs/datasets/master/filesystems.html\r\n",
"I don't think we have a great solution for `dataset_infos.json` with a single very large config when storage space is an issue, but it should be solved by the same upcoming feature mentioned above",
"Okay, then I won't pursue this one further. I'll keep this branch on my repository just in case the possibility of adding this dataset comes up in the future.\r\n\r\n> So currently you do need to download the whole dataset when using it, we are working on making it easier to stream parts of it from a remote host. You can also use the filesystem integration if local storage is an issue:\r\n> https://huggingface.co/docs/datasets/master/filesystems.html\r\n\r\nAfter downloading the whole dataset (around 1.4GB), it still loads all the examples despite using `split='train[:10%]'` or `split='train[10:20]'`. \r\n\r\nEDIT: I think this would happen only when the examples are generated for the first time and saved to the cache. Streaming parts of the data from a remote host sounds amazing! But, would that also allow for streaming examples of the data from the local cache? (without saving all the examples the first time).\r\n\r\nWhat I used:\r\n`d = load_dataset('./datasets/celeb_a',split='train[:10]')`\r\nOutput:\r\n`570 examples [01:33, 6.25 examples/s]` and it keeps going. \r\n\r\nEDIT 2: After a few thousand images, I get the following error:\r\n```python\r\nOSError: [Errno 24] Too many open files: '~/.cache/huggingface/datasets/celeb_a/default/1.1.0/01f9dca66039ab7c40b91b09af47a5fa8c3e49dc8d55df50da55b14116229207.incomplete'\r\n```\r\nI understand this is because of the way I load the images :\r\n```python\r\nImage.open(<path>)\r\n```\r\nWhat could be better alternative? I am only asking in case I face the same issues in the future.",
"Just some addition about loading only a subset of the data:\r\nCurrently if even you specify `split='train[:10]'`, it downloads and generate the full dataset, so that you can pick another part afterward if you want to. We may change that in the future and use streaming.\r\n\r\nAnd about your open files issue, you can try to close each image file after reading its content.",
"Hi @lhoestq,\r\nThanks for your response.\r\n\r\nI used `gc.collect()` inside the loop and that worked for me. I think since we are using a generator, and if I have something like `train[100000:100002]`, we will need to generate the first 1000001 examples and store. Ofcourse, this feature isn't a necessity right now, I suppose.",
"Closing this PR."
] | "2021-02-05T20:20:55Z" | "2021-02-18T14:17:07Z" | "2021-02-18T14:17:07Z" | CONTRIBUTOR | null | Trying to add CelebA Dataset.
Need help with testing. Loading examples takes a lot of time so I am unable to generate the `dataset_infos.json` and unable to test. Also, need help with creating `dummy_data.zip`.
Additionally, trying to load a few examples using `load_dataset('./datasets/celeb_a',split='train[10:20]')` still loads all the examples (doesn't stop at 10). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1828/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1828/timeline | null | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1828.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1828",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1828.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1828"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1827 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1827/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1827/comments | https://api.github.com/repos/huggingface/datasets/issues/1827/events | https://github.com/huggingface/datasets/issues/1827 | 802,353,974 | MDU6SXNzdWU4MDIzNTM5NzQ= | 1,827 | Regarding On-the-fly Data Loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | [
"Possible duplicate\r\n\r\n#1776 https://github.com/huggingface/datasets/issues/\r\n\r\nreally looking PR for this feature",
"Hi @acul3 \r\n\r\nIssue #1776 talks about doing on-the-fly data pre-processing, which I think is solved in the next release as mentioned in the issue #1825. I also look forward to using this feature, though :)\r\n\r\nI wanted to ask about on-the-fly data loading from the cache (before pre-processing).",
"Hi ! Currently when you load a dataset via `load_dataset` for example, then the dataset is memory-mapped from an Arrow file on disk. Therefore there's almost no RAM usage even if your dataset contains TB of data.\r\nUsually at training time only one batch of data at a time is loaded in memory.\r\n\r\nDoes that answer your question or were you thinking about something else ?",
"Hi @lhoestq,\r\n\r\nI apologize for the late response. This answers my question. Thanks a lot."
] | "2021-02-05T17:43:48Z" | "2021-02-18T13:55:16Z" | "2021-02-18T13:55:16Z" | CONTRIBUTOR | null | Hi,
I was wondering if it is possible to load images/texts as a batch during the training process, without loading the entire dataset into RAM at any point.
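For context, what I have in mind is roughly the following (a minimal sketch assuming a PyTorch `DataLoader`; the dataset name and column are just placeholders):
```
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("imdb", split="train")  # placeholder dataset

def collate(examples):
    # tokenization / image decoding would happen here, one batch at a time
    return [ex["text"] for ex in examples]

loader = DataLoader(dataset, batch_size=32, collate_fn=collate)
for batch in loader:
    pass  # ideally only this batch is materialized in RAM
```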
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1827/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1827/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1826/comments | https://api.github.com/repos/huggingface/datasets/issues/1826/events | https://github.com/huggingface/datasets/pull/1826 | 802,074,744 | MDExOlB1bGxSZXF1ZXN0NTY4Mjc4OTI2 | 1,826 | Print error message with filename when malformed CSV | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-02-05T11:07:59Z" | "2021-02-09T17:39:27Z" | "2021-02-09T17:39:27Z" | MEMBER | null | Print error message specifying filename when malformed CSV file.
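The idea, roughly (a sketch of the intended behaviour rather than the exact code changed in this PR; the helper name is made up):
```
import pandas as pd

def read_csv_with_filename(file, **kwargs):
    try:
        return pd.read_csv(file, **kwargs)
    except ValueError as e:
        # re-raise with the offending filename so the user knows which CSV is malformed
        raise ValueError(f"Error while reading csv file {file}: {e}") from e
```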
Close #1821 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1826/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1826/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1826.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1826",
"merged_at": "2021-02-09T17:39:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1826.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1826"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1825/comments | https://api.github.com/repos/huggingface/datasets/issues/1825/events | https://github.com/huggingface/datasets/issues/1825 | 802,073,925 | MDU6SXNzdWU4MDIwNzM5MjU= | 1,825 | Datasets library not suitable for huge text datasets. | {
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avacaondata",
"id": 35173563,
"login": "avacaondata",
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avacaondata"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Hi ! Looks related to #861 \r\n\r\nYou are right: tokenizing a dataset using map takes a lot of space since it can store `input_ids` but also `token_type_ids`, `attention_mask` and `special_tokens_mask`. Moreover if your tokenization function returns python integers then by default they'll be stored as int64 which can take a lot of space. Padding can also increase the size of the tokenized dataset.\r\n\r\nTo make things more convenient, we recently added a \"lazy map\" feature that allows to tokenize each batch at training time as you mentioned. For example you'll be able to do\r\n```python\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\ndef encode(batch):\r\n return tokenizer(batch[\"text\"], padding=\"longest\", truncation=True, max_length=512, return_tensors=\"pt\")\r\n\r\ndataset.set_transform(encode)\r\nprint(dataset.format)\r\n# {'type': 'custom', 'format_kwargs': {'transform': <function __main__.encode(batch)>}, 'columns': ['idx', 'label', 'sentence1', 'sentence2'], 'output_all_columns': False}\r\nprint(dataset[:2])\r\n# {'input_ids': tensor([[ 101, 2572, 3217, ... 102]]), 'token_type_ids': tensor([[0, 0, 0, ... 0]]), 'attention_mask': tensor([[1, 1, 1, ... 1]])}\r\n\r\n```\r\nIn this example the `encode` transform is applied on-the-fly on the \"text\" column.\r\n\r\nThis feature will be available in the next release 2.0 which will happen in a few days.\r\nYou can already play with it by installing `datasets` from source if you want :)\r\n\r\nHope that helps !",
"How recently was `set_transform` added? I am actually trying to implement it and getting an error:\r\n\r\n`AttributeError: 'Dataset' object has no attribute 'set_transform'\r\n`\r\n\r\nI'm on v.1.2.1.\r\n\r\nEDIT: Oh, wait I see now it's in the v.2.0. Whoops! This should be really useful.",
"Yes indeed it was added a few days ago. The code is available on master\r\nWe'll do a release next week :)\r\n\r\nFeel free to install `datasets` from source to try it out though, I would love to have some feedbacks",
"For information: it's now available in `datasets` 1.3.0.\r\nThe 2.0 is reserved for even cooler features ;)",
"Hi @alexvaca0 , we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs."
] | "2021-02-05T11:06:50Z" | "2021-03-30T14:04:01Z" | "2021-03-16T09:44:00Z" | NONE | null | Hi,
I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really thought for datasets this big, but for fine-tuning datasets, as this process alone takes so much time, usually in expensive machines (due to the need of tpus - gpus) which is not being used for training. It would possibly be more efficient in such cases to tokenize each batch at training time (receive batch - tokenize batch - train with batch), so that the whole time the machine is up it's being used for training.
Moreover, the PyArrow objects created from a 187 GB dataset are huge: we always receive OOM or "No space left on device" errors when only 10-12% of the dataset has been processed, and that part alone occupies 2.1 TB on disk, which is many times the disk usage of the pure text (and this doesn't make sense, as tokenized texts should be lighter than pure texts).
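For reference, one mitigation I am considering (only a sketch, not yet verified on the full corpus; it continues the snippet above, so `raw` is the text-only dataset and `tokenizer` the GPT-2 tokenizer, and it assumes `text` is the only original column) is to keep nothing but `input_ids` in the cache and store them as int32 instead of the default int64:

```python
from datasets import Features, Sequence, Value

tokenized = raw.map(
    lambda batch: {"input_ids": tokenizer(batch["text"])["input_ids"]},
    batched=True,
    remove_columns=["text"],  # drop the raw text from the Arrow cache
    features=Features({"input_ids": Sequence(Value("int32"))}),  # int32 instead of the default int64
)
```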
Any suggestions?? | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1825/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1825/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1824/comments | https://api.github.com/repos/huggingface/datasets/issues/1824/events | https://github.com/huggingface/datasets/pull/1824 | 802,048,281 | MDExOlB1bGxSZXF1ZXN0NTY4MjU3MTU3 | 1,824 | Add OSCAR dataset card | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq! When are you planning to release the version with this dataset?\r\n\r\nBTW: What a huge README file :astonished:",
"Next week !",
"Closing in favor of #1833"
] | "2021-02-05T10:30:26Z" | "2021-05-05T18:24:14Z" | "2021-02-08T11:30:33Z" | MEMBER | null | I started adding the dataset card for OSCAR !
For now it's just basic info for all the different configurations in `Dataset Structure`.
In particular the Data Splits section tells how many samples there are for each config. The Data Instances section shows an example for each config, and it also shows the size in MB. Since the Data Instances section is very long, the user has to click to expand the info. I was able to generate it thanks to the tools made by @madlag and @yjernite :D
Cc @pjox could you help me with the other sections ? (Dataset Description, Dataset Creation, Considerations for Using the Data, Additional Information)
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1824/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1824/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1824.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1824",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1824.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1824"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1823/comments | https://api.github.com/repos/huggingface/datasets/issues/1823/events | https://github.com/huggingface/datasets/pull/1823 | 802,042,181 | MDExOlB1bGxSZXF1ZXN0NTY4MjUyMjIx | 1,823 | Add FewRel Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq,\r\n\r\nSorry for the late response. What do you mean when you say \"adding names to default config\"? Should I handle \"pid2name\" in the same config as \"default\"?",
"Yes I was thinking of having the pid2name field available in the default configuration (and therefore only have one config). What do you think ?",
"Hi @lhoestq,\r\n\r\nSorry again, the last couple of weeks were a bit busy for me. I am wondering how do you want me to achieve that. Using a custom BuilderConfig which takes in whether it is the regular data or \"pid2name\"? \"pid2name\" is only useful for \"train_wiki\", \"val_nyt\" and \"val_wiki\". So, based on my understanding, it would look like this:\r\n\r\n```python\r\nwiki_data = load_dataset('few_rel','train_wiki')\r\nid2name = load_dataset('few_rel','pid2name')\r\n```\r\nand this will be handled in the multiple configs.\r\n\r\n\r\nA better alternative could be providing name of the relationship in only \"train_wiki\", \"val_nyt\" and \"val_wiki\" as an extra feature in the dataset, and doing away with \"pid2name\" entirely. I'll only download pid2name if any of those datasets are requested, and then during generation I'll return the list with the dataset under \"names\" feature. How does this sound?\r\n\r\nEDIT:\r\nThere is one issue with the second approach, the entire pid2name is saved with all three datasets - \"train_wiki\", \"val_nyt\" and \"val_wiki\" ([see code below](https://github.com/huggingface/datasets/pull/1823#issuecomment-786402026)). In dummy data, I can address this by manually editing the pid2name to contain only a few id-name pairs, those matching with the examples in the corresponding example file. But this seems to be inefficient for the entire dataset - storing the same file in multiple places.",
"Okay, I apologize, I guess I finally understand what is required.\r\n\r\nBasically, using:\r\n\r\n```python\r\nfew_rel = load_dataset('few_rel')\r\n```\r\nshould give all the files. This seems difficult since \"pid2name\" has a different format. Any suggestions on this?",
"Yes that's it, sorry if that wasn't clear !",
"Hi @lhoestq,\n\nSince pid2name has different features from the rest of the files, how will I add them to the same config?\n\nDo we want to exclude pid2name totally and add \"names\" to every example?",
"If I understand correctly each sample in the \"default\" config has one relation, and each relation has corresponding names in pid2name.\r\nWould it be possible to also include the names in the \"default\" configuration for each sample ? The names of one sample can be retrieved using the relation id no ?",
"Yes, that can be done. But for some files, the name is already given instead of ID. Only \"train_wiki\", \"val_wiki\", \"val_nyc\" have IDs. For others, I can set the names equal to a list of key.",
"I think that's fine as long as we mention this processing explicitly in the dataset card.",
"Hi @lhoestq,\r\n\r\nI have added the changes. Please let me know in case of any remaining issues.\r\n\r\nThanks,\r\nGunjan",
"Hi @lhoestq,\r\n\r\nThanks for fixing it and approving :)"
] | "2021-02-05T10:22:03Z" | "2021-03-01T11:56:20Z" | "2021-03-01T10:21:39Z" | CONTRIBUTOR | null | Hi,
This PR closes this [Card](https://github.com/huggingface/datasets/projects/1#card-53285184) and Issue #1757.
I wasn't sure how to add `pid2name` along with the dataset so I added it as a separate configuration. For each (head, tail, tokens) triplet, I have created one example. I have added the dictionary key as `"relation"` in the dataset. Additionally, for `pubmed_unsupervised`, I kept `"relation":""` in the dictionary.
Please recommend better alternatives, if any.
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1823/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1823/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1823.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1823",
"merged_at": "2021-03-01T10:21:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1823.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1823"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1822 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1822/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1822/comments | https://api.github.com/repos/huggingface/datasets/issues/1822/events | https://github.com/huggingface/datasets/pull/1822 | 802,003,835 | MDExOlB1bGxSZXF1ZXN0NTY4MjIxMzIz | 1,822 | Add Hindi Discourse Analysis Natural Language Inference Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/33565881?v=4",
"events_url": "https://api.github.com/users/avinsit123/events{/privacy}",
"followers_url": "https://api.github.com/users/avinsit123/followers",
"following_url": "https://api.github.com/users/avinsit123/following{/other_user}",
"gists_url": "https://api.github.com/users/avinsit123/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avinsit123",
"id": 33565881,
"login": "avinsit123",
"node_id": "MDQ6VXNlcjMzNTY1ODgx",
"organizations_url": "https://api.github.com/users/avinsit123/orgs",
"received_events_url": "https://api.github.com/users/avinsit123/received_events",
"repos_url": "https://api.github.com/users/avinsit123/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avinsit123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avinsit123/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avinsit123"
} | [] | closed | false | null | [] | null | [
"Could you also run `make style` to fix the CI check on code formatting ?",
"@lhoestq completed and resolved all comments."
] | "2021-02-05T09:30:54Z" | "2021-02-15T09:57:39Z" | "2021-02-15T09:57:39Z" | CONTRIBUTOR | null | # Dataset Card for Hindi Discourse Analysis Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- HomePage : https://github.com/midas-research/hindi-nli-data
- Paper : https://www.aclweb.org/anthology/2020.aacl-main.71
- Point of Contact : https://github.com/midas-research/hindi-nli-data
### Dataset Summary
- A dataset for Natural Language Inference in the Hindi language. The Hindi Discourse Analysis (HDA) dataset consists of textual-entailment pairs.
- Each row of the dataset is made up of 4 columns - Premise, Hypothesis, Label and Topic.
- Premise and Hypothesis are written in Hindi, while Entailment_Label is in English.
- Entailment_Label is of 2 types - entailed and not-entailed.
- Entailed means that the hypothesis can be inferred from the premise; not-entailed means that it cannot.
- The dataset can be used to train models for Natural Language Inference tasks in Hindi.
### Supported Tasks and Leaderboards
- Natural Language Inference for Hindi
### Languages
- Dataset is in Hindi
## Dataset Structure
- Data is structured in TSV format.
- The train, test and dev splits are provided as separate files.
### Data Instances
An example of 'train' looks as follows.
```
{'hypothesis': 'यह एक वर्णनात्मक कथन है।', 'label': 1, 'premise': 'जैसे उस का सारा चेहरा अपना हो और आँखें किसी दूसरे की जो चेहरे पर पपोटों के पीछे महसूर कर दी गईं।', 'topic': 1}
```
### Data Fields
- Each row contains 4 columns - premise, hypothesis, label and topic.
### Data Splits
- Train : 31892
- Valid : 9460
- Test : 9970
## Dataset Creation
- We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available Hindi Discourse Analysis classification datasets in Hindi and pose them as TE problems.
- In this recasting process, we build template hypotheses for each class in the label taxonomy.
- Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples (a toy sketch of this pairing is shown below).
- For more information on the recasting process, refer to the paper https://www.aclweb.org/anthology/2020.aacl-main.71
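A toy sketch of the pairing step mentioned above (the class-to-template mapping here is only illustrative; see the paper for the actual templates):

```python
# Hypothetical templates, one per discourse-mode class (illustration only).
TEMPLATES = {
    "Descriptive": "यह एक वर्णनात्मक कथन है।",
    "Argumentative": "यह एक तर्कसंगत कथन है।",
}

def recast(sentence, true_class):
    # Pair the sentence with every template hypothesis; it entails only the template of its own class.
    return [
        {"premise": sentence, "hypothesis": hypothesis, "label": int(cls == true_class)}
        for cls, hypothesis in TEMPLATES.items()
    ]
```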
### Source Data
The source dataset for the recasting process is the BBC Hindi Headlines Dataset (https://github.com/NirantK/hindi2vec/releases/tag/bbc-hindi-v0.1).
#### Initial Data Collection and Normalization
- The initial data was collected by members of MIDAS Lab from Hindi websites. They crowdsourced the data annotation process: two random stories were selected from the corpus and three annotators worked on them independently, classifying each sentence based on its discourse mode.
- Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/
- The discourse is further classified into 5 classes - "Argumentative", "Descriptive", "Dialogic", "Informative" and "Narrative".
#### Who are the source language producers?
Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/
### Annotations
#### Annotation process
The annotation process is described in the Dataset Creation section.
#### Who are the annotators?
Annotation is done automatically by machine, via the recasting process described above.
### Personal and Sensitive Information
No personal or sensitive information is mentioned in the dataset.
## Considerations for Using the Data
Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Discussion of Biases
No known biases exist in the dataset.
Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Other Known Limitations
No other known limitations, although the size of the data may not be enough to train large models.
## Additional Information
Please refer to this link: https://github.com/midas-research/hindi-nli-data
### Dataset Curators
It is written in the repo https://github.com/midas-research/hindi-nli-data that:
- This corpus can be used freely for research purposes.
- The paper listed below provide details of the creation and use of the corpus. If you use the corpus, then please cite the paper.
- If interested in commercial use of the corpus, send email to [email protected].
- If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus.
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications.
- Rather than redistributing the corpus, please direct interested parties to this page
- Please feel free to send us an email:
- with feedback regarding the corpus.
- with information on how you have used the corpus.
- if interested in having us analyze your data for natural language inference.
- if interested in a collaborative research project.
### Licensing Information
Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi).
Please contact the authors for any information on the dataset.
### Citation Information
```
@inproceedings{uppal-etal-2020-two,
title = "Two-Step Classification using Recasted Data for Low Resource Settings",
author = "Uppal, Shagun and
Gupta, Vivek and
Swaminathan, Avinash and
Zhang, Haimin and
Mahata, Debanjan and
Gosangi, Rakesh and
Shah, Rajiv Ratn and
Stent, Amanda",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.aacl-main.71",
pages = "706--719",
abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.",
}
```
### Contributions
Thanks to [@avinsit123](https://github.com/avinsit123) for adding this dataset.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1822/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1822/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1822.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1822",
"merged_at": "2021-02-15T09:57:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1822.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1822"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1821 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1821/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1821/comments | https://api.github.com/repos/huggingface/datasets/issues/1821/events | https://github.com/huggingface/datasets/issues/1821 | 801,747,647 | MDU6SXNzdWU4MDE3NDc2NDc= | 1,821 | Provide better exception message when one of many files results in an exception | {
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/david-waterworth",
"id": 5028974,
"login": "david-waterworth",
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/david-waterworth"
} | [] | closed | false | null | [] | null | [
"Hi!\r\n\r\nThank you for reporting this issue. I agree that the information about the exception should be more clear and explicit.\r\n\r\nI could take on this issue.\r\n\r\nOn the meantime, as you can see from the exception stack trace, HF Datasets uses pandas to read the CSV files. You can pass arguments to `pandas.read_csv` by passing additional keyword arguments to `load_dataset`. For example, you may find useful this argument:\r\n- `error_bad_lines` : bool, default True\r\n Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. If False, then these “bad lines” will be dropped from the DataFrame that is returned.\r\n\r\nYou could try:\r\n```python\r\ndatasets = load_dataset(\"csv\", data_files=dict(train=train_files, validation=validation_files), error_bad_lines=False)\r\n```\r\n"
] | "2021-02-05T00:49:03Z" | "2021-02-09T17:39:27Z" | "2021-02-09T17:39:27Z" | NONE | null | I find when I process many files, i.e.
```
import glob
from datasets import load_dataset

train_files = glob.glob('train*.csv')
validation_files = glob.glob('validation*.csv')
datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files))
```
I sometimes encounter an error due to one of the files being malformed (i.e. no data, or a comma in a field that isn't quoted, etc).
For example, this is the tail of an exception which I suspect is due to a stray comma.
> File "pandas/_libs/parsers.pyx", line 756, in pandas._libs.parsers.TextReader.read
> File "pandas/_libs/parsers.pyx", line 783, in pandas._libs.parsers.TextReader._read_low_memory
> File "pandas/_libs/parsers.pyx", line 827, in pandas._libs.parsers.TextReader._read_rows
> File "pandas/_libs/parsers.pyx", line 814, in pandas._libs.parsers.TextReader._tokenize_rows
> File "pandas/_libs/parsers.pyx", line 1951, in pandas._libs.parsers.raise_parser_error
> pandas.errors.ParserError: Error tokenizing data. C error: Expected 2 fields in line 559, saw 3
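As a workaround I can locate the offending file by pre-scanning every file with pandas first (a rough sketch, assuming each file should parse on its own):

```python
import glob

import pandas as pd

for path in sorted(glob.glob("train*.csv") + glob.glob("validation*.csv")):
    try:
        pd.read_csv(path)
    except Exception as error:  # report which file is malformed and why
        print(f"{path}: {error}")
```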
It would be nice if the exception trace contained the name of the file being processed (I have 250 separate files!) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1821/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1821/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1820/comments | https://api.github.com/repos/huggingface/datasets/issues/1820/events | https://github.com/huggingface/datasets/pull/1820 | 801,529,936 | MDExOlB1bGxSZXF1ZXN0NTY3ODI4OTg1 | 1,820 | Add metrics usage examples and tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-04T18:23:50Z" | "2021-02-05T14:00:01Z" | "2021-02-05T14:00:00Z" | MEMBER | null | All metrics finally have usage examples and proper fast + slow tests :)
I added examples of usage for every metric, and I use doctest to make sure they all work as expected.
For "slow" metrics such as bert_score or bleurt which require to download + run a transformer model, the download + forward pass are only done in the slow test.
In the fast test on the other hand, the download + forward pass are monkey patched.
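The general pattern is sketched below; this is an illustration of the idea, not the actual test code in this PR:

```python
class Scorer:
    def predict(self, predictions, references):
        raise RuntimeError("would download and run a large model")  # the slow path

def compute_metric(predictions, references, scorer=None):
    scorer = scorer or Scorer()
    return scorer.predict(predictions, references)

def test_compute_metric_fast(monkeypatch):
    # Replace the expensive forward pass with a deterministic stub so the fast test never downloads anything.
    monkeypatch.setattr(Scorer, "predict", lambda self, predictions, references: [0.5] * len(predictions))
    assert compute_metric(["a", "b"], ["x", "y"]) == [0.5, 0.5]
```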
Metrics that need to be installed from github are not added to setup.py because it prevents uploading the `datasets` package to pypi.
An additional-test-requirements.txt file is used instead. This file also includes `comet`, in order not to have to resolve its *impossible* dependencies.
Also `comet` is not tested on windows because one of its dependencies (fairseq) can't be installed in the CI for some reason. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1820/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1820/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1820.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1820",
"merged_at": "2021-02-05T14:00:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1820.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1820"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1819/comments | https://api.github.com/repos/huggingface/datasets/issues/1819/events | https://github.com/huggingface/datasets/pull/1819 | 801,448,670 | MDExOlB1bGxSZXF1ZXN0NTY3NzYyMzI2 | 1,819 | Fixed spelling `S3Fileystem` to `S3FileSystem` | {
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/philschmid",
"id": 32632186,
"login": "philschmid",
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"repos_url": "https://api.github.com/users/philschmid/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"type": "User",
"url": "https://api.github.com/users/philschmid"
} | [] | closed | false | null | [] | null | [] | "2021-02-04T16:36:46Z" | "2021-02-04T16:52:27Z" | "2021-02-04T16:52:26Z" | MEMBER | null | Fixed documentation spelling errors.
Wrong `S3Fileystem`
Right `S3FileSystem` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1819/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1819/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1819.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1819",
"merged_at": "2021-02-04T16:52:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1819.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1819"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1818/comments | https://api.github.com/repos/huggingface/datasets/issues/1818/events | https://github.com/huggingface/datasets/issues/1818 | 800,958,776 | MDU6SXNzdWU4MDA5NTg3NzY= | 1,818 | Loading local dataset raise requests.exceptions.ConnectTimeout | {
"avatar_url": "https://avatars.githubusercontent.com/u/15032072?v=4",
"events_url": "https://api.github.com/users/Alxe1/events{/privacy}",
"followers_url": "https://api.github.com/users/Alxe1/followers",
"following_url": "https://api.github.com/users/Alxe1/following{/other_user}",
"gists_url": "https://api.github.com/users/Alxe1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Alxe1",
"id": 15032072,
"login": "Alxe1",
"node_id": "MDQ6VXNlcjE1MDMyMDcy",
"organizations_url": "https://api.github.com/users/Alxe1/orgs",
"received_events_url": "https://api.github.com/users/Alxe1/received_events",
"repos_url": "https://api.github.com/users/Alxe1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Alxe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alxe1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Alxe1"
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting. This was indeed a bug introduced when we moved the `json` dataset loader inside the `datasets` package (before that, the `json` loader was fetched online, as all the other dataset scripts).\r\n\r\nThis should be fixed on master now. Feel free to install `datasets` from source to try it out.\r\nThe fix will be available in the next release of `datasets` in a few days"
] | "2021-02-04T05:55:23Z" | "2022-06-01T15:38:42Z" | "2022-06-01T15:38:42Z" | NONE | null | Load local dataset:
```
from datasets import load_dataset

dataset = load_dataset('json', data_files=["../../data/json.json"])
train = dataset["train"]
print(train.features)
train1 = train.map(lambda x: {"labels": 1})
print(train1[:2])
```
but it raised requests.exceptions.ConnectTimeout:
```
/Users/littlely/myvirtual/tf2/bin/python3.7 /Users/littlely/projects/python_projects/pytorch_learning/nlp/dataset/transformers_datasets.py
Traceback (most recent call last):
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connection.py", line 160, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/util/connection.py", line 84, in create_connection
raise err
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/util/connection.py", line 74, in create_connection
sock.connect(sa)
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 381, in _make_request
self._validate_conn(conn)
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn
conn.connect()
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connection.py", line 309, in connect
conn = self._new_conn()
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connection.py", line 167, in _new_conn
% (self.host, self.timeout),
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 727, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/util/retry.py", line 439, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/json/json.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/littlely/projects/python_projects/pytorch_learning/nlp/dataset/transformers_datasets.py", line 12, in <module>
dataset = load_dataset('json', data_files=["../../data/json.json"])
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/load.py", line 591, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/load.py", line 263, in prepare_module
head_hf_s3(path, filename=name, dataset=dataset, max_retries=download_config.max_retries)
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 232, in head_hf_s3
max_retries=max_retries,
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 523, in http_head
max_retries=max_retries,
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 458, in _request_with_retry
raise err
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 454, in _request_with_retry
response = requests.request(verb.upper(), url, **params)
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/adapters.py", line 504, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/json/json.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)'))
Process finished with exit code 1
```
Why does it want to connect to a remote URL when I load local datasets, and how can I fix it? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1818/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1818/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1817/comments | https://api.github.com/repos/huggingface/datasets/issues/1817/events | https://github.com/huggingface/datasets/issues/1817 | 800,870,652 | MDU6SXNzdWU4MDA4NzA2NTI= | 1,817 | pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 599 but got length 1500 | {
"avatar_url": "https://avatars.githubusercontent.com/u/9610770?v=4",
"events_url": "https://api.github.com/users/LuCeHe/events{/privacy}",
"followers_url": "https://api.github.com/users/LuCeHe/followers",
"following_url": "https://api.github.com/users/LuCeHe/following{/other_user}",
"gists_url": "https://api.github.com/users/LuCeHe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LuCeHe",
"id": 9610770,
"login": "LuCeHe",
"node_id": "MDQ6VXNlcjk2MTA3NzA=",
"organizations_url": "https://api.github.com/users/LuCeHe/orgs",
"received_events_url": "https://api.github.com/users/LuCeHe/received_events",
"repos_url": "https://api.github.com/users/LuCeHe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LuCeHe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LuCeHe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LuCeHe"
} | [] | closed | false | null | [] | null | [
"Hi !\r\nThe error you have is due to the `input_ids` column not having the same number of examples as the other columns.\r\nIndeed you're concatenating the `input_ids` at this line:\r\n\r\nhttps://github.com/LuCeHe/GenericTools/blob/431835d8e13ec24dceb5ee4dc4ae58f0e873b091/KerasTools/lm_preprocessing.py#L134\r\n\r\nHowever the other columns are kept unchanged, and therefore you end up with an `input_ids` column with 599 elements while the others columns like `attention_mask` have 1500.\r\n\r\nTo fix that you can instead concatenate them all using\r\n```python\r\nconcatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\n```\r\n\r\nAlso you may need to drop the \"text\" column before applying `group_texts` since strings can't be concatenated with lists. You can drop it at the tokenization step:\r\n```python\r\ndset = dset.map(\r\n tokenize_function,\r\n batched=True,\r\n remove_columns=[\"text\"]\r\n)\r\n```",
"You saved my life."
] | "2021-02-04T02:30:23Z" | "2022-10-05T12:42:57Z" | "2022-10-05T12:42:57Z" | NONE | null | I am trying to preprocess any dataset in this package with GPT-2 tokenizer, so I need to structure the datasets as long sequences of text without padding. I've been following a couple of your tutorials and here you can find the script that is failing right at the end
https://github.com/LuCeHe/GenericTools/blob/master/KerasTools/lm_preprocessing.py
In the last iteration of the last dset.map, it gives the error that I copied in the title. Another issue I have is that if I leave batch_size set to 1000 in the last .map, I'm afraid it's going to lose most of the text, so I'm considering setting both writer_batch_size and batch_size to 300K, but I'm not sure that's the best way to go.
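For reference, a version of the grouping step that keeps every column the same length, following the suggestion in the comment above (`dset` and `tokenize_function` are the objects from the linked script), would be something like:

```python
block_size = 1024

def group_texts(examples):
    # Concatenate every column, not just input_ids, so all columns keep the same number of rows.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    return {
        k: [values[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, values in concatenated.items()
    }

dset = dset.map(tokenize_function, batched=True, remove_columns=["text"])
dset = dset.map(group_texts, batched=True)
```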
Can you help me?
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1817/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1817/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1816/comments | https://api.github.com/repos/huggingface/datasets/issues/1816/events | https://github.com/huggingface/datasets/pull/1816 | 800,660,995 | MDExOlB1bGxSZXF1ZXN0NTY3MTExMjEx | 1,816 | Doc2dial rc update to latest version | {
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/songfeng",
"id": 2062185,
"login": "songfeng",
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"repos_url": "https://api.github.com/users/songfeng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/songfeng"
} | [] | closed | false | null | [] | null | [
"- update data loader and readme for latest version 1.0.1"
] | "2021-02-03T20:08:54Z" | "2021-02-15T15:15:24Z" | "2021-02-15T15:04:33Z" | CONTRIBUTOR | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1816/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1816/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1816.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1816",
"merged_at": "2021-02-15T15:04:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1816.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1816"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1815/comments | https://api.github.com/repos/huggingface/datasets/issues/1815/events | https://github.com/huggingface/datasets/pull/1815 | 800,610,017 | MDExOlB1bGxSZXF1ZXN0NTY3MDY3NjU1 | 1,815 | Add CCAligned Multilingual Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | [
"Hi !\r\n\r\nWe already have some datasets that can have many many configurations possible.\r\nTo be able to support that, we allow to subclass BuilderConfig to add as many additional parameters as you may need.\r\nThis way users can load any language they want. For example the [bible_para](https://github.com/huggingface/datasets/blob/master/datasets/bible_para/bible_para.py) dataset is a dataset for translation and therefore users should be able to provide any language pair. You can check how the subclass of BuilderConfig is defined [here](https://github.com/huggingface/datasets/blob/master/datasets/bible_para/bible_para.py#L49).\r\n\r\nFor testing, only the configurations defined in the `BUILDER_CONFIGS` class attribute are used.\r\nAll the other configs combinations are not tested, but they can be used by users. If a config doesn't already exist in `BUILDER_CONFIGS`, then it is created on the fly.\r\nFor example in [bible_para](https://github.com/huggingface/datasets/blob/master/datasets/bible_para/bible_para.py#L61), only 6 configs are defined in `BUILDER_CONFIGS`.\r\n\r\nSo what I would do in your case is have something like\r\n```python\r\n\r\nclass CCAlignedConfig(datasets.BuilderConfig):\r\n def __init__(self, *args, documents_or_sentences=None, language_code=None, **kwargs):\r\n super().__init__(\r\n *args,\r\n name=f\"{documents_or_sentences}-{language_code}\",\r\n **kwargs,\r\n )\r\n self.documents_or_sentences = documents_or_sentences\r\n self.language_code = language_code\r\n```\r\nAnd of course, feel free to change/rename things if you want to. In particular I think we can improve the name of the parameter `documents_or_sentences`",
"Hi @lhoestq,\r\n\r\nThanks a lot! I don't know why I didn't think about that. :P \r\nI'll make these changes and update.",
"Hi @lhoestq,\r\n\r\nI have tested and added dummy files. Request you to review.\r\n\r\nAlso, does this mean BUILDER_CONFIGS is only needed while testing?",
"Hi @lhoestq,\r\n\r\nAny changes required on this one?\r\n\r\nThanks,\r\nGunjan",
"Hi @lhoestq,\r\n\r\nSorry for the delay, I have added the changes from the review. For the ISO format language codes, I just selected the first two characters from the names, hoping those are correct. Let me know if you want me to verify :P\r\n\r\nThanks for taking the time to add such a detailed review. I'll keep all these changes in mind the next time I'm adding a dataset.\r\n\r\nThanks,\r\nGunjan",
"Hi @lhoestq,\r\n\r\nI have changed the README, and added a single example per config. Even one example is long enough to make the files heavy. Hope that isn't an issue.\r\n\r\nThanks,\r\nGunjan",
"Hi @lhoestq,\r\n\r\nThanks for approving."
] | "2021-02-03T18:59:52Z" | "2021-03-01T12:33:03Z" | "2021-03-01T10:36:21Z" | CONTRIBUTOR | null | Hello,
I'm trying to add [CCAligned Multilingual Dataset](http://www.statmt.org/cc-aligned/). This has the potential to close #1756.
This dataset has two types - Document-Pairs, and Sentence-Pairs.
The datasets are huge, so I won't be able to test all of them. At the same time, a user might only want to download one particular language and not all. To provide this feature, `load_dataset`'s `**config_kwargs` should allow some random keyword args, in this case -`language_code`. This will be needed before the dataset is downloaded and extracted.
I'm expecting the usage to be something like -
`load_dataset('ccaligned_multilingual','documents',language_code='en_XX-af_ZA')`. Of course, at a later stage we can provide just two-character language codes. This also has an issue where one language has multiple files (`my_MM` and `my_MM_zaw` on the link), but before that the required functionality must be added to `load_dataset`.
It would be great if someone could either tell me an alternative way to do this, or point me to where changes need to be made, if any, apart from the `BuilderConfig` definition.
Additionally, I believe the tests will also have to be modified if this change is made, since it would not be possible to test for any random keyword arguments.
A decent way to go about this would be to provide all the options in a list/dictionary for `language_code` and use that to test the arguments. In essence, this is similar to the pre-trained checkpoint dictionary in `transformers`. That means writing dataset-specific tests, or adding something new to the dataset generation script to make it easier for everyone to add keyword arguments without having to worry about the tests.
Thanks,
Gunjan
Requesting @lhoestq / @yjernite to review. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1815/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1815/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1815.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1815",
"merged_at": "2021-03-01T10:36:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1815.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1815"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1814/comments | https://api.github.com/repos/huggingface/datasets/issues/1814/events | https://github.com/huggingface/datasets/pull/1814 | 800,516,236 | MDExOlB1bGxSZXF1ZXN0NTY2OTg4NTI1 | 1,814 | Add Freebase QA Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq \r\n\r\nThanks for approving. Request you to close PR #1435 as well."
] | "2021-02-03T16:57:49Z" | "2021-02-04T19:47:51Z" | "2021-02-04T16:21:48Z" | CONTRIBUTOR | null | Closes PR #1435. Fixed issues with PR #1809.
Requesting @lhoestq to review. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1814/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1814/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1814.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1814",
"merged_at": "2021-02-04T16:21:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1814.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1814"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1813/comments | https://api.github.com/repos/huggingface/datasets/issues/1813/events | https://github.com/huggingface/datasets/pull/1813 | 800,435,973 | MDExOlB1bGxSZXF1ZXN0NTY2OTIxNDcz | 1,813 | Support future datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-02-03T15:26:49Z" | "2021-02-05T10:33:48Z" | "2021-02-05T10:33:47Z" | MEMBER | null | If a dataset is available at the version of the local installation of `datasets` (e.g. 1.2.0), then loading this dataset means loading the script at this version.
However when trying to load a dataset that is only available on master, currently users have to specify `script_version="master"` in `load_dataset` to make it work.
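For example, loading such a dataset currently requires something like:

```python
from datasets import load_dataset

dataset = load_dataset("silicone", "dyda_da", script_version="master")
```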
However we could automatically get the dataset from master instead in this case.
I added this feature in this PR.
I also added a warning if a dataset is not available at the version of the local installation of `datasets` but is loaded from master:
```python
>>> load_dataset("silicone", "dyda_da")
Couldn't find file locally at silicone/silicone.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/silicone/silicone.py.
The file was picked from the master branch on github instead at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/silicone/silicone.py.
Downloading and preparing dataset silicone/dyda_da (download: 8.46 MiB, generated: 9.39 MiB, post-processed: Unknown size, total: 17.86 MiB) to /Users/quentinlhoest/.cache/huggingface/datasets/silicone/dyda_da/1.0.0/d41d8c0b73c6df035b1369c45774418f0051163ea689b5502b8bda783adf6342...
...
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1813/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1813/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1813.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1813",
"merged_at": "2021-02-05T10:33:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1813.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1813"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1812/comments | https://api.github.com/repos/huggingface/datasets/issues/1812/events | https://github.com/huggingface/datasets/pull/1812 | 799,379,178 | MDExOlB1bGxSZXF1ZXN0NTY2MDMxODIy | 1,812 | Add CIFAR-100 Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq,\r\nI have updated with the changes from the review.",
"Thanks for approving :)"
] | "2021-02-02T15:22:59Z" | "2021-02-08T11:10:18Z" | "2021-02-08T10:39:06Z" | CONTRIBUTOR | null | Adding CIFAR-100 Dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1812/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1812/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1812.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1812",
"merged_at": "2021-02-08T10:39:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1812.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1812"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1811/comments | https://api.github.com/repos/huggingface/datasets/issues/1811/events | https://github.com/huggingface/datasets/issues/1811 | 799,211,060 | MDU6SXNzdWU3OTkyMTEwNjA= | 1,811 | Unable to add Multi-label Datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | [
"Thanks for adding this dataset! As far as I know `supervised_keys` is mostly a holdover from TFDS, but isn't really used, so feel free to drop it (@lhoestq or @thomwolf correct me if I'm wrong). It definitely shouldn't be blocking :) ",
"I can confirm that it comes from TFDS and is not used at the moment.",
"Thanks @yjernite @lhoestq \r\n\r\nThe template for new dataset makes it slightly confusing. I suppose the comment suggesting its update can be removed.",
"Closing this issue since it was answered."
] | "2021-02-02T11:50:56Z" | "2021-02-18T14:16:31Z" | "2021-02-18T14:16:31Z" | CONTRIBUTOR | null | I am trying to add [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. The dataset contains two labels per image - `fine label` and `coarse label`. Using just one label in supervised keys as
`supervised_keys=("img", "fine_label")` raises no issue. But trying `supervised_keys=("img", "fine_label", "coarse_label")` leads to this error:
```python
Traceback (most recent call last):
File "test_script.py", line 2, in <module>
d = load_dataset('./datasets/cifar100')
File "~/datasets/src/datasets/load.py", line 668, in load_dataset
**config_kwargs,
File "~/datasets/src/datasets/builder.py", line 896, in __init__
super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
File "~/datasets/src/datasets/builder.py", line 247, in __init__
info.update(self._info())
File "~/.cache/huggingface/modules/datasets_modules/datasets/cifar100/61d2489b2d4a4abc34201432541b7380984ec714e290817d9a1ee318e4b74e0f/cifar100.py", line 79, in _info
citation=_CITATION,
File "<string>", line 19, in __init__
File "~/datasets/src/datasets/info.py", line 136, in __post_init__
self.supervised_keys = SupervisedKeysData(*self.supervised_keys)
TypeError: __init__() takes from 1 to 3 positional arguments but 4 were given
```
Is there a way I can fix this?
Also, what does adding `supervised_keys` do? Is it necessary? How would I specify `supervised_keys` for a multi-input, multi-label dataset?
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1811/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1811/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1810/comments | https://api.github.com/repos/huggingface/datasets/issues/1810/events | https://github.com/huggingface/datasets/issues/1810 | 799,168,650 | MDU6SXNzdWU3OTkxNjg2NTA= | 1,810 | Add Hateful Memes Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | open | false | null | [] | null | [
"I am not sure, but would `datasets.Sequence(datasets.Sequence(datasets.Sequence(datasets.Value(\"int\")))` work?",
"Also, I found the information for loading only subsets of the data [here](https://github.com/huggingface/datasets/blob/master/docs/source/splits.rst).",
"Hi @lhoestq,\r\n\r\nRequest you to check this once.\r\n\r\nThanks,\r\nGunjan",
"Hi @gchhablani since Array2D doesn't support images of different sizes, I would suggest to store in the dataset the paths to the image file instead of the image data. This has the advantage of not decompressing the data (images are often compressed using jpeg, png etc.). Users can still apply `.map` to load the images if they want to. Though it would en up being Sequences features.\r\n\r\nIn the future we'll add support for ragged tensors for this case and update the relevant dataset with this feature."
] | "2021-02-02T10:53:59Z" | "2021-12-08T12:03:59Z" | null | CONTRIBUTOR | null | ## Add Hateful Memes Dataset
- **Name:** Hateful Memes
- **Description:** [https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set]( https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set)
- **Paper:** [https://arxiv.org/pdf/2005.04790.pdf](https://arxiv.org/pdf/2005.04790.pdf)
- **Data:** [This link](https://drivendata-competition-fb-hateful-memes-data.s3.amazonaws.com/XjiOc5ycDBRRNwbhRlgH.zip?AWSAccessKeyId=AKIARVBOBDCY4MWEDJKS&Signature=DaUuGgZWUgDHzEPPbyJ2PhSJ56Q%3D&Expires=1612816874)
- **Motivation:** Including multi-modal datasets to 🤗 datasets.
I will be adding this dataset. It requires the user to sign an agreement on DrivenData. So, it will be used with a manual download.
The issue with this dataset is that the images are of different sizes. The image datasets added so far (CIFAR-10 and MNIST) have a uniform shape throughout.
So something like
```python
datasets.Array2D(shape=(28, 28), dtype="uint8")
```
won't work for the images. How would I add image features then? I checked `datasets/features.py` but couldn't figure out the appropriate class for this. I'm assuming I would want to avoid re-sizing at all since we want the user to be able to access the original images.
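One direction I was considering (just a rough sketch, untested, and the feature names here are hypothetical) is to store the path to each image instead of the raw pixels, so that images of different sizes can live in the same dataset:
```python
from datasets import ClassLabel, Features, Value

# Hypothetical feature spec: keep each image on disk and only store its path,
# so there is no fixed-shape constraint and no re-sizing of the original images.
features = Features(
    {
        "img_path": Value("string"),  # path to the original image file
        "text": Value("string"),  # text written on the meme
        "label": ClassLabel(names=["not-hateful", "hateful"]),
    }
)
```
Users could then `map` a small loading function over `img_path` if they need the pixel data, but if there is a proper feature class for variable-sized images I would rather use that.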
Also, in case I want to load only a subset of the data, since the actual data is around 8.8GB, how would that be possible?
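For the subset part, what I have in mind is something along the lines of the split slicing from the docs (the dataset name below is hypothetical since the loader doesn't exist yet, and I realize the full archive may still need to be downloaded once):
```python
from datasets import load_dataset

# Hypothetical call: only materialize the first 10% of the training split.
# `data_dir` points to the manually downloaded archive (DrivenData agreement required).
subset = load_dataset("hateful_memes", split="train[:10%]", data_dir="/path/to/manual/download")
```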
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1810/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1810/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1809/comments | https://api.github.com/repos/huggingface/datasets/issues/1809/events | https://github.com/huggingface/datasets/pull/1809 | 799,059,141 | MDExOlB1bGxSZXF1ZXN0NTY1NzY4ODQz | 1,809 | Add FreebaseQA dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | [
"Hi ! It looks like this PR contains changes about other datasets than freebase_qa such as DuoRC.\r\n\r\nCan you remove these changes please ?",
"Hi @lhoestq,\r\n\r\nI think this happened because of rebasing. I'm unable to remove the duorc commit from the branch. GEM, Arabic sarcasm datasets are also there. I can't see any merge conflicts, however. Before commiting I always rebase (shouldn't have done that).\r\nCan you explain what is to be done? Should I create a clean PR?",
"Hi @gchhablani \r\nI think you can simply create another branch and another PR.\r\n\r\nIf I understand correctly the github diff is messed up because you rebased instead of merge.\r\nRebasing is supposed to be used only before pushing the branch the first time, or github messes up the diff.\r\nIf you want to include changes from master on a branch that is already push you need to use git merge.",
"Thanks @lhoestq.\r\n\r\nI understand the issue now. I missed the instructions on the template. Sorry for bothering you unnecessarily, I'm pretty new to contributing on GitHub. I'll make a fresh PR.\r\n",
"No problem, I'm not a big fan of this weird behavior tbh.\r\nThanks for making a new PR",
"@lhoestq Haha, well, it's not as weird as not reading the [instructions](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#open-a-pull-request-on-the-main-huggingface-repo-and-share-your-work).\r\nAlso, I'm enjoying adding new datasets so it's all cool :)"
] | "2021-02-02T08:35:53Z" | "2021-02-03T17:15:05Z" | "2021-02-03T16:43:06Z" | CONTRIBUTOR | null | Adding FreebaseQA dataset suggested in PR #1435 with minor edits. Also closes that PR.
Requesting @lhoestq to review. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1809/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1809/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1809.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1809",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1809.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1809"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1808 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1808/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1808/comments | https://api.github.com/repos/huggingface/datasets/issues/1808/events | https://github.com/huggingface/datasets/issues/1808 | 798,879,180 | MDU6SXNzdWU3OTg4NzkxODA= | 1,808 | writing Datasets in a human readable format | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghost",
"id": 10137,
"login": "ghost",
"node_id": "MDQ6VXNlcjEwMTM3",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"repos_url": "https://api.github.com/users/ghost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghost"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | [] | null | [
"AFAIK, there is currently no built-in method on the `Dataset` object to do this.\r\nHowever, a workaround is to directly use the Arrow table backing the dataset, **but it implies loading the whole dataset in memory** (correct me if I'm mistaken @lhoestq).\r\n\r\nYou can convert the Arrow table to a pandas dataframe to save the data as csv as follows:\r\n```python\r\narrow_table = dataset.data\r\ndataframe = arrow_table.to_pandas()\r\ndataframe.to_csv(\"/path/to/file.csv\")\r\n```\r\n\r\nSimilarly, you can convert the dataset to a Python dict and save it as JSON:\r\n```python\r\nimport json\r\narrow_table = dataset.data\r\npy_dict = arrow_table.to_pydict()\r\nwith open(\"/path/to/file.json\", \"w+\") as f:\r\n json.dump(py_dict, f)\r\n```",
"Indeed this works as long as you have enough memory.\r\nIt would be amazing to have export options like csv, json etc. !\r\n\r\nIt should be doable to implement something that iterates through the dataset batch by batch to write to csv for example.\r\nThere is already an `export` method but currently the only export type that is supported is `tfrecords`.",
"Hi! `datasets` now supports `Dataset.to_csv` and `Dataset.to_json` for saving data in a human readable format."
] | "2021-02-02T02:55:40Z" | "2022-06-01T15:38:13Z" | "2022-06-01T15:38:13Z" | NONE | null | Hi
I see there is a `save_to_disk` function to save data, but it is not a human-readable format. Is there a way I could save a Dataset object in a human-readable format, such as JSON, to a file? Thanks @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1808/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1808/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1807 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1807/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1807/comments | https://api.github.com/repos/huggingface/datasets/issues/1807/events | https://github.com/huggingface/datasets/pull/1807 | 798,823,591 | MDExOlB1bGxSZXF1ZXN0NTY1NTczNzU5 | 1,807 | Adding an aggregated dataset for the GEM benchmark | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [
"Nice !"
] | "2021-02-02T00:39:53Z" | "2021-02-02T22:48:41Z" | "2021-02-02T18:06:58Z" | MEMBER | null | This dataset gathers modified versions of several other conditional text generation datasets which together make up the shared task for the Generation Evaluation and Metrics workshop (think GLUE for text generation)
The changes from the original datasets are detailed in the Dataset Cards on the GEM website, which are linked to in this dataset card.
cc @sebastianGehrmann
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1807/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1807/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1807.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1807",
"merged_at": "2021-02-02T18:06:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1807.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1807"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1806/comments | https://api.github.com/repos/huggingface/datasets/issues/1806/events | https://github.com/huggingface/datasets/pull/1806 | 798,607,869 | MDExOlB1bGxSZXF1ZXN0NTY1Mzk0ODIz | 1,806 | Update details to MLSUM dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/15138872?v=4",
"events_url": "https://api.github.com/users/padipadou/events{/privacy}",
"followers_url": "https://api.github.com/users/padipadou/followers",
"following_url": "https://api.github.com/users/padipadou/following{/other_user}",
"gists_url": "https://api.github.com/users/padipadou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padipadou",
"id": 15138872,
"login": "padipadou",
"node_id": "MDQ6VXNlcjE1MTM4ODcy",
"organizations_url": "https://api.github.com/users/padipadou/orgs",
"received_events_url": "https://api.github.com/users/padipadou/received_events",
"repos_url": "https://api.github.com/users/padipadou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padipadou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padipadou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padipadou"
} | [] | closed | false | null | [] | null | [
"Thanks!"
] | "2021-02-01T18:35:12Z" | "2021-02-01T18:46:28Z" | "2021-02-01T18:46:21Z" | CONTRIBUTOR | null | Update details to MLSUM dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1806/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1806/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1806.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1806",
"merged_at": "2021-02-01T18:46:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1806.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1806"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1805/comments | https://api.github.com/repos/huggingface/datasets/issues/1805/events | https://github.com/huggingface/datasets/issues/1805 | 798,498,053 | MDU6SXNzdWU3OTg0OTgwNTM= | 1,805 | can't pickle SwigPyObject objects when calling dataset.get_nearest_examples from FAISS index | {
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"events_url": "https://api.github.com/users/abarbosa94/events{/privacy}",
"followers_url": "https://api.github.com/users/abarbosa94/followers",
"following_url": "https://api.github.com/users/abarbosa94/following{/other_user}",
"gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abarbosa94",
"id": 6608232,
"login": "abarbosa94",
"node_id": "MDQ6VXNlcjY2MDgyMzI=",
"organizations_url": "https://api.github.com/users/abarbosa94/orgs",
"received_events_url": "https://api.github.com/users/abarbosa94/received_events",
"repos_url": "https://api.github.com/users/abarbosa94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abarbosa94"
} | [] | closed | false | null | [] | null | [
"Hi ! Indeed we used to require mapping functions to be picklable with `pickle` or `dill` in order to cache the resulting datasets. And FAISS indexes are not picklable unfortunately.\r\n\r\nBut since #1703 this is no longer required (the caching will simply be disabled). This change will be available in the next release of `datasets`, or you can also install `datasets` from source.",
"I totally forgot to answer this issue, I'm so sorry. \r\n\r\nI was able to get it working by installing `datasets` from source. Huge thanks!"
] | "2021-02-01T16:14:17Z" | "2021-03-06T14:32:46Z" | "2021-03-06T14:32:46Z" | CONTRIBUTOR | null | So, I have the following instances in my dataset
```
{'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of
this increase in rotation?',
'answer': 'C',
'example_id': 'ARCCH_Mercury_7175875',
'options':[{'option_context': 'One effect of increased amperage in the planetary world (..)', 'option_id': 'A', 'option_text': 'Planetary density will decrease.'},
(...)]}
```
The `options` value is always a list with 4 options; each one is a dict with `option_context`, `option_id` and `option_text`.
I would like to overwrite the `option_context` of each instance of my dataset with a DPR result that I am developing. I have already trained a model and saved it in a FAISS index:
```
dpr_dataset = load_dataset(
"text",
data_files=ARC_CORPUS_TEXT,
cache_dir=CACHE_DIR,
split="train[:100%]",
)
dpr_dataset.load_faiss_index("embeddings", f"{ARC_CORPUS_FAISS}")
torch.set_grad_enabled(False)
```
Then, to process my dataset, I created a map function that calls the `dpr_dataset` for each _option_:
```
def generate_context(example):
question_text = example['question']
for option in example['options']:
question_with_option = question_text + " " + option['option_text']
tokenize_text = question_tokenizer(question_with_option, return_tensors="pt").to(device)
question_embed = (
question_encoder(**tokenize_text)
)[0][0].cpu().numpy()
_, retrieved_examples = dpr_dataset.get_nearest_examples(
"embeddings", question_embed, k=10
)
# option["option_context"] = retrieved_examples["text"]
# option["option_context"] = " ".join(option["option_context"]).strip()
#result_dict = {
# 'example_id': example['example_id'],
# 'answer': example['answer'],
# 'question': question_text,
#options': example['options']
# }
return example
```
I intentionally commented out this portion of the code.
But when I call the `map` method, `ds_with_context = dataset.map(generate_context, load_from_cache_file=False)`,
it raises the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-55-75a458ce205c> in <module>
----> 1 ds_with_context = dataset.map(generate_context,load_from_cache_file=False)
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc)
301 num_proc=num_proc,
302 )
--> 303 for k, dataset in self.items()
304 }
305 )
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0)
301 num_proc=num_proc,
302 )
--> 303 for k, dataset in self.items()
304 }
305 )
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1257 fn_kwargs=fn_kwargs,
1258 new_fingerprint=new_fingerprint,
-> 1259 update_data=update_data,
1260 )
1261 else:
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
155 }
156 # apply actual function
--> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
159 # re-apply format to the output
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name
157 kwargs[fingerprint_name] = update_fingerprint(
--> 158 self._fingerprint, transform, kwargs_for_fingerprint
159 )
160
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)
103 for key in sorted(transform_args):
104 hasher.update(key)
--> 105 hasher.update(transform_args[key])
106 return hasher.hexdigest()
107
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in update(self, value)
55 def update(self, value):
56 self.m.update(f"=={type(value)}==".encode("utf8"))
---> 57 self.m.update(self.hash(value).encode("utf-8"))
58
59 def hexdigest(self):
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in hash(cls, value)
51 return cls.dispatch[type(value)](cls, value)
52 else:
---> 53 return cls.hash_default(value)
54
55 def update(self, value):
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in hash_default(cls, value)
44 @classmethod
45 def hash_default(cls, value):
---> 46 return cls.hash_bytes(dumps(value))
47
48 @classmethod
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py in dumps(obj)
387 file = StringIO()
388 with _no_cache_fields(obj):
--> 389 dump(obj, file)
390 return file.getvalue()
391
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py in dump(obj, file)
359 def dump(obj, file):
360 """pickle an object to a file"""
--> 361 Pickler(file, recurse=True).dump(obj)
362 return
363
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in dump(self, obj)
452 raise PicklingError(msg)
453 else:
--> 454 StockPickler.dump(self, obj)
455 stack.clear() # clear record of 'recursion-sensitive' pickled objects
456 return
/usr/lib/python3.7/pickle.py in dump(self, obj)
435 if self.proto >= 4:
436 self.framer.start_framing()
--> 437 self.save(obj)
438 self.write(STOP)
439 self.framer.end_framing()
/usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py in save_function(pickler, obj)
554 dill._dill._create_function,
555 (obj.__code__, globs, obj.__name__, obj.__defaults__, obj.__closure__, obj.__dict__, fkwdefaults),
--> 556 obj=obj,
557 )
558 else:
/usr/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
636 else:
637 save(func)
--> 638 save(args)
639 write(REDUCE)
640
/usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/usr/lib/python3.7/pickle.py in save_tuple(self, obj)
784 write(MARK)
785 for element in obj:
--> 786 save(element)
787
788 if id(obj) in memo:
/usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
939 # we only care about session the first pass thru
940 pickler._session = False
--> 941 StockPickler.save_dict(pickler, obj)
942 log.info("# D2")
943 return
/usr/lib/python3.7/pickle.py in save_dict(self, obj)
854
855 self.memoize(obj)
--> 856 self._batch_setitems(obj.items())
857
858 dispatch[dict] = save_dict
/usr/lib/python3.7/pickle.py in _batch_setitems(self, items)
880 for k, v in tmp:
881 save(k)
--> 882 save(v)
883 write(SETITEMS)
884 elif n:
/usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
547
548 # Save the reduce() output and finally memoize the object
--> 549 self.save_reduce(obj=obj, *rv)
550
551 def persistent_id(self, obj):
/usr/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
660
661 if state is not None:
--> 662 save(state)
663 write(BUILD)
664
/usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
939 # we only care about session the first pass thru
940 pickler._session = False
--> 941 StockPickler.save_dict(pickler, obj)
942 log.info("# D2")
943 return
/usr/lib/python3.7/pickle.py in save_dict(self, obj)
854
855 self.memoize(obj)
--> 856 self._batch_setitems(obj.items())
857
858 dispatch[dict] = save_dict
/usr/lib/python3.7/pickle.py in _batch_setitems(self, items)
880 for k, v in tmp:
881 save(k)
--> 882 save(v)
883 write(SETITEMS)
884 elif n:
/usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
939 # we only care about session the first pass thru
940 pickler._session = False
--> 941 StockPickler.save_dict(pickler, obj)
942 log.info("# D2")
943 return
/usr/lib/python3.7/pickle.py in save_dict(self, obj)
854
855 self.memoize(obj)
--> 856 self._batch_setitems(obj.items())
857
858 dispatch[dict] = save_dict
/usr/lib/python3.7/pickle.py in _batch_setitems(self, items)
885 k, v = tmp[0]
886 save(k)
--> 887 save(v)
888 write(SETITEM)
889 # else tmp is empty, and we're done
/usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
547
548 # Save the reduce() output and finally memoize the object
--> 549 self.save_reduce(obj=obj, *rv)
550
551 def persistent_id(self, obj):
/usr/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
660
661 if state is not None:
--> 662 save(state)
663 write(BUILD)
664
/usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
939 # we only care about session the first pass thru
940 pickler._session = False
--> 941 StockPickler.save_dict(pickler, obj)
942 log.info("# D2")
943 return
/usr/lib/python3.7/pickle.py in save_dict(self, obj)
854
855 self.memoize(obj)
--> 856 self._batch_setitems(obj.items())
857
858 dispatch[dict] = save_dict
/usr/lib/python3.7/pickle.py in _batch_setitems(self, items)
880 for k, v in tmp:
881 save(k)
--> 882 save(v)
883 write(SETITEMS)
884 elif n:
/usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
547
548 # Save the reduce() output and finally memoize the object
--> 549 self.save_reduce(obj=obj, *rv)
550
551 def persistent_id(self, obj):
/usr/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
660
661 if state is not None:
--> 662 save(state)
663 write(BUILD)
664
/usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
939 # we only care about session the first pass thru
940 pickler._session = False
--> 941 StockPickler.save_dict(pickler, obj)
942 log.info("# D2")
943 return
/usr/lib/python3.7/pickle.py in save_dict(self, obj)
854
855 self.memoize(obj)
--> 856 self._batch_setitems(obj.items())
857
858 dispatch[dict] = save_dict
/usr/lib/python3.7/pickle.py in _batch_setitems(self, items)
885 k, v = tmp[0]
886 save(k)
--> 887 save(v)
888 write(SETITEM)
889 # else tmp is empty, and we're done
/usr/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
522 reduce = getattr(obj, "__reduce_ex__", None)
523 if reduce is not None:
--> 524 rv = reduce(self.proto)
525 else:
526 reduce = getattr(obj, "__reduce__", None)
TypeError: can't pickle SwigPyObject objects
```
I have no idea how to solve or deal with this.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1805/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1805/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1804/comments | https://api.github.com/repos/huggingface/datasets/issues/1804/events | https://github.com/huggingface/datasets/pull/1804 | 798,483,881 | MDExOlB1bGxSZXF1ZXN0NTY1MjkzMTc3 | 1,804 | Add SICK dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/36051308?v=4",
"events_url": "https://api.github.com/users/calpt/events{/privacy}",
"followers_url": "https://api.github.com/users/calpt/followers",
"following_url": "https://api.github.com/users/calpt/following{/other_user}",
"gists_url": "https://api.github.com/users/calpt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/calpt",
"id": 36051308,
"login": "calpt",
"node_id": "MDQ6VXNlcjM2MDUxMzA4",
"organizations_url": "https://api.github.com/users/calpt/orgs",
"received_events_url": "https://api.github.com/users/calpt/received_events",
"repos_url": "https://api.github.com/users/calpt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/calpt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/calpt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/calpt"
} | [] | closed | false | null | [] | null | [] | "2021-02-01T15:57:44Z" | "2021-02-05T17:46:28Z" | "2021-02-05T15:49:25Z" | CONTRIBUTOR | null | Adds the SICK dataset (http://marcobaroni.org/composes/sick.html).
Closes #1772.
Edit: also closes #1632, which is the original issue requesting the dataset. The newer one is a duplicate. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1804/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1804/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1804.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1804",
"merged_at": "2021-02-05T15:49:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1804.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1804"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1803/comments | https://api.github.com/repos/huggingface/datasets/issues/1803/events | https://github.com/huggingface/datasets/issues/1803 | 798,243,904 | MDU6SXNzdWU3OTgyNDM5MDQ= | 1,803 | Querying examples from big datasets is slower than small datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Hello, @lhoestq / @gaceladri : We have been seeing similar behavior with bigger datasets, where querying time increases. Are you folks aware of any solution that fixes this problem yet? ",
"Hi ! I'm pretty sure that it can be fixed by using the Arrow IPC file format instead of the raw streaming format but I haven't tested yet.\r\nI'll take a look at it soon and let you know",
"My workaround is to shard the dataset into splits in my ssd disk and feed the data in different training sessions. But it is a bit of a pain when we need to reload the last training session with the rest of the split with the Trainer in transformers.\r\n\r\nI mean, when I split the training and then reloads the model and optimizer, it not gets the correct global_status of the optimizer, so I need to hardcode some things. I'm planning to open an issue in transformers and think about it.\r\n```\r\nfrom datasets import load_dataset\r\n\r\nbook_corpus = load_dataset(\"bookcorpus\", split=\"train[:25%]\")\r\nwikicorpus = load_dataset(\"wikicorpus\", split=\"train[:25%]\")\r\nopenwebtext = load_dataset(\"openwebtext\", split=\"train[:25%]\")\r\n\r\nbig_dataset = datasets.concatenate_datasets([wikicorpus, openwebtext, book_corpus])\r\nbig_dataset.shuffle(seed=42)\r\nbig_dataset = big_dataset.map(encode, batched=True, num_proc=20, load_from_cache_file=True, writer_batch_size=5000)\r\nbig_dataset.set_format(type='torch', columns=[\"text\", \"input_ids\", \"attention_mask\", \"token_type_ids\"])\r\n\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./linear_bert\",\r\n overwrite_output_dir=True,\r\n per_device_train_batch_size=71,\r\n save_steps=500,\r\n save_total_limit=10,\r\n logging_first_step=True,\r\n logging_steps=100,\r\n gradient_accumulation_steps=9,\r\n fp16=True,\r\n dataloader_num_workers=20,\r\n warmup_steps=24000,\r\n learning_rate=0.000545205002870214,\r\n adam_epsilon=1e-6,\r\n adam_beta2=0.98,\r\n weight_decay=0.01,\r\n max_steps=138974, # the total number of steps after concatenating 100% datasets\r\n max_grad_norm=1.0,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=big_dataset,\r\n tokenizer=tokenizer))\r\n```\r\n\r\nI do one training pass with the total steps of this shard and I use len(bbig)/batchsize to stop the training (hardcoded in the trainer.py) when I pass over all the examples in this split.\r\n\r\nNow Im working, I will edit the comment with a more elaborated answer when I left the work.",
"I just tested and using the Arrow File format doesn't improve the speed... This will need further investigation.\r\n\r\nMy guess is that it has to iterate over the record batches or chunks of a ChunkedArray in order to retrieve elements.\r\n\r\nHowever if we know in advance in which chunk the element is, and at what index it is, then we can access it instantaneously. But this requires dealing with the chunked arrays instead of the pyarrow Table directly which is not practical.",
"I have a dataset with about 2.7 million rows (which I'm loading via `load_from_disk`), and I need to fetch around 300k (particular) rows of it, by index. Currently this is taking a really long time (~8 hours). I tried sharding the large dataset but overall it doesn't change how long it takes to fetch the desired rows.\r\n\r\nI actually have enough RAM that I could fit the large dataset in memory. Would having the large dataset in memory speed up querying? To find out, I tried to load (a column of) the large dataset into memory like this:\r\n```\r\ncolumn_data = large_ds['column_name']\r\n```\r\nbut in itself this takes a really long time.\r\n\r\nI'm pretty stuck - do you have any ideas what I should do? ",
"Hi ! Feel free to post a message on the [forum](https://discuss.huggingface.co/c/datasets/10). I'd be happy to help you with this.\r\n\r\nIn your post on the forum, feel free to add more details about your setup:\r\nWhat are column names and types of your dataset ?\r\nHow was the dataset constructed ?\r\nIs the dataset shuffled ?\r\nIs the dataset tokenized ?\r\nAre you on a SSD or an HDD ?\r\n\r\nI'm sure we can figure something out.\r\nFor example on my laptop I can access the 6 millions articles from wikipedia in less than a minute.",
"Thanks @lhoestq, I've [posted on the forum](https://discuss.huggingface.co/t/fetching-rows-of-a-large-dataset-by-index/4271?u=abisee).",
"Fixed by #2122."
] | "2021-02-01T11:08:23Z" | "2021-08-04T18:11:01Z" | "2021-08-04T18:10:42Z" | MEMBER | null | After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets.
For example
```python
from datasets import load_dataset
b1 = load_dataset("bookcorpus", split="train[:1%]")
b50 = load_dataset("bookcorpus", split="train[:50%]")
b100 = load_dataset("bookcorpus", split="train[:100%]")
%timeit _ = b1[-1]
# 12.2 µs ± 70.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit _ = b50[-1]
# 92.5 µs ± 1.24 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit _ = b100[-1]
# 177 µs ± 3.13 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
It looks like the time to fetch the example increases with the size of the dataset.
This may be due to the use of the Arrow streaming format to store the data on disk: I guess pyarrow needs to iterate through the file as a stream to find the queried sample.
Maybe switching to the Arrow IPC file format could help fix this issue.
Indeed according to the [documentation](https://arrow.apache.org/docs/format/Columnar.html?highlight=arrow1#ipc-file-format), it's identical to the streaming format except that it contains the memory offsets of each sample, which could fix the issue:
> We define a “file format” supporting random access that is build with the stream format. The file starts and ends with a magic string ARROW1 (plus padding). What follows in the file is identical to the stream format. At the end of the file, we write a footer containing a redundant copy of the schema (which is a part of the streaming format) plus memory offsets and sizes for each of the data blocks in the file. This enables random access any record batch in the file. See File.fbs for the precise details of the file footer.
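For reference, here is a minimal sketch (illustration only, not the `datasets` internals) of the random access that the IPC file format gives with pyarrow:
```python
import pyarrow as pa

batch = pa.RecordBatch.from_pydict({"text": ["foo", "bar", "baz"]})

# Write with the IPC *file* format: the footer stores the offset of each record batch
with pa.OSFile("data.arrow", "wb") as sink:
    writer = pa.ipc.new_file(sink, batch.schema)
    writer.write_batch(batch)
    writer.close()

# Read back: jump directly to a record batch instead of iterating through the stream
with pa.OSFile("data.arrow", "rb") as source:
    reader = pa.ipc.open_file(source)
    first_batch = reader.get_record_batch(0)
```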
cc @gaceladri since it can help speed up your training when this one is fixed. | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1803/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1803/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1802/comments | https://api.github.com/repos/huggingface/datasets/issues/1802/events | https://github.com/huggingface/datasets/pull/1802 | 797,924,468 | MDExOlB1bGxSZXF1ZXN0NTY0ODE4NDIy | 1,802 | add github of contributors | {
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thevasudevgupta",
"id": 53136577,
"login": "thevasudevgupta",
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thevasudevgupta"
} | [] | closed | false | null | [] | null | [
"@lhoestq Can you confirm if this format is fine? I will update cards based on your feedback.",
"On HuggingFace side we also have a mapping of hf user => github user (GitHub info used to be required when signing up until not long ago – cc @gary149 @beurkinger) so we can also add a link to HF profile",
"All the dataset cards have been updated with GitHub ids :)"
] | "2021-02-01T03:49:19Z" | "2021-02-03T10:09:52Z" | "2021-02-03T10:06:30Z" | CONTRIBUTOR | null | This PR will add contributors' GitHub IDs at the end of every dataset card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1802/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1802/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1802.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1802",
"merged_at": "2021-02-03T10:06:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1802.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1802"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1801 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1801/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1801/comments | https://api.github.com/repos/huggingface/datasets/issues/1801/events | https://github.com/huggingface/datasets/pull/1801 | 797,814,275 | MDExOlB1bGxSZXF1ZXN0NTY0NzMwODYw | 1,801 | [GEM] Updated the source link of the data to update correct tokenized version. | {
"avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4",
"events_url": "https://api.github.com/users/mounicam/events{/privacy}",
"followers_url": "https://api.github.com/users/mounicam/followers",
"following_url": "https://api.github.com/users/mounicam/following{/other_user}",
"gists_url": "https://api.github.com/users/mounicam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mounicam",
"id": 11708999,
"login": "mounicam",
"node_id": "MDQ6VXNlcjExNzA4OTk5",
"organizations_url": "https://api.github.com/users/mounicam/orgs",
"received_events_url": "https://api.github.com/users/mounicam/received_events",
"repos_url": "https://api.github.com/users/mounicam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mounicam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mounicam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mounicam"
} | [] | closed | false | null | [] | null | [
"@mounicam we'll keep the original version in the Turk dataset proper, and use the updated file in the GEM aggregated dataset which I'll add later today\r\n\r\n@lhoestq do not merge, I'll close when I've submitted the GEM dataset PR :) ",
"Closed by https://github.com/huggingface/datasets/pull/1807"
] | "2021-01-31T21:17:19Z" | "2021-02-02T13:17:38Z" | "2021-02-02T13:17:28Z" | CONTRIBUTOR | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1801/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1801/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1801.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1801",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1801.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1801"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1800/comments | https://api.github.com/repos/huggingface/datasets/issues/1800/events | https://github.com/huggingface/datasets/pull/1800 | 797,798,689 | MDExOlB1bGxSZXF1ZXN0NTY0NzE5MjA3 | 1,800 | Add DuoRC Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | [
"Thanks for approving @lhoestq!\r\nWill apply these changes for the other datasets I've added too."
] | "2021-01-31T20:01:59Z" | "2021-02-03T05:01:45Z" | "2021-02-02T22:49:26Z" | CONTRIBUTOR | null | Hi,
DuoRC SelfRC is one subset of the [DuoRC Dataset](https://duorc.github.io/): a crowdsourced abstractive/extractive question-answering dataset based on Wikipedia movie plots. It contains examples that may have answers in the movie plot, synthesized answers that are not present in the movie plot, or no answers. I have also added ParaphraseRC - the other DuoRC subset, where questions are based on Wikipedia movie plots and answers are based on the corresponding IMDb movie plots.
Paper: [https://arxiv.org/abs/1804.07927](https://arxiv.org/abs/1804.07927)
I want to add this to 🤗 datasets to make it more accessible to the community. I have added all the details that I could find. Please let me know if anything else is needed from my end.
Thanks,
Gunjan
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1800/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1800/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1800.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1800",
"merged_at": "2021-02-02T22:49:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1800.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1800"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1799/comments | https://api.github.com/repos/huggingface/datasets/issues/1799/events | https://github.com/huggingface/datasets/pull/1799 | 797,789,439 | MDExOlB1bGxSZXF1ZXN0NTY0NzEyMzUy | 1,799 | Update: SWDA - Fixed code to use all metadata features. Added comments and cleaned c… | {
"avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4",
"events_url": "https://api.github.com/users/gmihaila/events{/privacy}",
"followers_url": "https://api.github.com/users/gmihaila/followers",
"following_url": "https://api.github.com/users/gmihaila/following{/other_user}",
"gists_url": "https://api.github.com/users/gmihaila/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gmihaila",
"id": 22454783,
"login": "gmihaila",
"node_id": "MDQ6VXNlcjIyNDU0Nzgz",
"organizations_url": "https://api.github.com/users/gmihaila/orgs",
"received_events_url": "https://api.github.com/users/gmihaila/received_events",
"repos_url": "https://api.github.com/users/gmihaila/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gmihaila/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gmihaila/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gmihaila"
} | [] | closed | false | null | [] | null | [
"@yjernite Pushed all the changes you recommended. Thank you for your help!"
] | "2021-01-31T19:18:55Z" | "2021-02-09T22:06:13Z" | "2021-02-09T15:49:58Z" | CONTRIBUTOR | null | This is a dataset I currently use in my research, and I realized some features are not being returned.
The previous code was not using all available metadata and was kind of messy.
I fixed the code to use all metadata and made some modifications to be more efficient and better formatted.
Please let me know if I need to make any changes. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1799/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1799/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1799.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1799",
"merged_at": "2021-02-09T15:49:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1799.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1799"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1798/comments | https://api.github.com/repos/huggingface/datasets/issues/1798/events | https://github.com/huggingface/datasets/pull/1798 | 797,766,818 | MDExOlB1bGxSZXF1ZXN0NTY0Njk2NjE1 | 1,798 | Add Arabic sarcasm dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4",
"events_url": "https://api.github.com/users/mapmeld/events{/privacy}",
"followers_url": "https://api.github.com/users/mapmeld/followers",
"following_url": "https://api.github.com/users/mapmeld/following{/other_user}",
"gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mapmeld",
"id": 643918,
"login": "mapmeld",
"node_id": "MDQ6VXNlcjY0MzkxOA==",
"organizations_url": "https://api.github.com/users/mapmeld/orgs",
"received_events_url": "https://api.github.com/users/mapmeld/received_events",
"repos_url": "https://api.github.com/users/mapmeld/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mapmeld"
} | [] | closed | false | null | [] | null | [
"@lhoestq thanks for the comments - I believe these are now addressed. I re-generated the datasets_info.json and dummy data"
] | "2021-01-31T17:38:55Z" | "2021-02-10T20:39:13Z" | "2021-02-03T10:35:54Z" | CONTRIBUTOR | null | This MIT license dataset: https://github.com/iabufarha/ArSarcasm
Via https://sites.google.com/view/ar-sarcasm-sentiment-detection/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1798/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1798/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1798.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1798",
"merged_at": "2021-02-03T10:35:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1798.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1798"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1797/comments | https://api.github.com/repos/huggingface/datasets/issues/1797/events | https://github.com/huggingface/datasets/issues/1797 | 797,357,901 | MDU6SXNzdWU3OTczNTc5MDE= | 1,797 | Connection error | {
"avatar_url": "https://avatars.githubusercontent.com/u/46243662?v=4",
"events_url": "https://api.github.com/users/smile0925/events{/privacy}",
"followers_url": "https://api.github.com/users/smile0925/followers",
"following_url": "https://api.github.com/users/smile0925/following{/other_user}",
"gists_url": "https://api.github.com/users/smile0925/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/smile0925",
"id": 46243662,
"login": "smile0925",
"node_id": "MDQ6VXNlcjQ2MjQzNjYy",
"organizations_url": "https://api.github.com/users/smile0925/orgs",
"received_events_url": "https://api.github.com/users/smile0925/received_events",
"repos_url": "https://api.github.com/users/smile0925/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/smile0925/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smile0925/subscriptions",
"type": "User",
"url": "https://api.github.com/users/smile0925"
} | [] | closed | false | null | [] | null | [
"Hi ! For future references let me add a link to our discussion here : https://github.com/huggingface/datasets/issues/759#issuecomment-770684693\r\n\r\nLet me know if you manage to fix your proxy issue or if we can do something on our end to help you :)"
] | "2021-01-30T07:32:45Z" | "2021-08-04T18:09:37Z" | "2021-08-04T18:09:37Z" | NONE | null | Hi
I am hitting the error below; please help, and thanks.
`train_data = datasets.load_dataset("xsum", split="train")`
`ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/xsum/xsum.py` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1797/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1797/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1796 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1796/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1796/comments | https://api.github.com/repos/huggingface/datasets/issues/1796/events | https://github.com/huggingface/datasets/issues/1796 | 797,329,905 | MDU6SXNzdWU3OTczMjk5MDU= | 1,796 | Filter on dataset too much slowww | {
"avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4",
"events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}",
"followers_url": "https://api.github.com/users/ayubSubhaniya/followers",
"following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}",
"gists_url": "https://api.github.com/users/ayubSubhaniya/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ayubSubhaniya",
"id": 20911334,
"login": "ayubSubhaniya",
"node_id": "MDQ6VXNlcjIwOTExMzM0",
"organizations_url": "https://api.github.com/users/ayubSubhaniya/orgs",
"received_events_url": "https://api.github.com/users/ayubSubhaniya/received_events",
"repos_url": "https://api.github.com/users/ayubSubhaniya/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ayubSubhaniya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayubSubhaniya/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ayubSubhaniya"
} | [] | open | false | null | [] | null | [
"When I use the filter on the arrow table directly, it works like butter. But I can't find a way to update the table in `Dataset` object.\r\n\r\n```\r\nds_table = dataset.data.filter(mask=dataset['flag'])\r\n```",
"@thomwolf @lhoestq can you guys please take a look and recommend some solution.",
"Hi ! Currently the filter method reads the dataset batch by batch to write a new, filtered, arrow file on disk. Therefore all the reading + writing can take some time.\r\nUsing a mask directly on the arrow table doesn't do any read or write operation therefore it's way quicker.\r\n\r\nReplacing the old table by the new one should do the job:\r\n```python\r\ndataset._data = dataset._data.filter(...)\r\n```\r\n\r\nNote: this is a **workaround** and in general users shouldn't have to do that. In particular if you did some `shuffle` or `select` before that then it would not work correctly since the indices mapping (index from `__getitem__` -> index in the table) would not be valid anymore. But if you haven't done any `shuffle`, `select`, `shard`, `train_test_split` etc. then it should work.\r\n\r\nIdeally it would be awesome to update the filter function to allow masking this way !\r\nIf you would like to give it a shot I will be happy to help :) ",
"Yes, would be happy to contribute. Thanks",
"Hi @lhoestq @ayubSubhaniya,\r\n\r\nIf there's no progress on this one, can I try working on it?\r\n\r\nThanks,\r\nGunjan",
"Sure @gchhablani feel free to start working on it, this would be very appreciated :)\r\nThis feature is would be really awesome, especially since arrow allows to mask really quickly and without having to rewrite the dataset on disk"
] | "2021-01-30T04:09:19Z" | "2021-02-18T17:09:24Z" | null | NONE | null | I have a dataset with 50M rows.
For pre-processing, I need to tokenize this and filter out rows with overly long sequences.
My tokenization took roughly 12 minutes. I used `map()` with batch size 1024 and multi-processing with 96 processes.
When I applied the `filter()` function, it took too much time. I need to filter sequences based on a boolean column.
Below are the variants I tried.
1. filter() with batch size 1024, single process (takes roughly 3 hr)
2. filter() with batch size 1024, 96 processes (takes 5-6 hrs ¯\\\_(ツ)\_/¯)
3. filter() with loading all data in memory, only a single boolean column (never ends).
Can someone please help?
Below is sample code for a small dataset.
```python
import random

from datasets import load_dataset

dataset = load_dataset('glue', 'mrpc', split='train')
# Add a random boolean column to filter on.
dataset = dataset.map(lambda x: {'flag': random.randint(0, 1) == 1})

def _amplify(data):
    return data

dataset = dataset.filter(_amplify, batch_size=1024, keep_in_memory=False, input_columns=['flag'])
```
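For reference, here is a minimal sketch of the in-memory mask workaround discussed in the comments above. This is my addition (not part of the original report), and it assumes no `shuffle`, `select`, `shard` or `train_test_split` has been applied, since it bypasses the indices mapping:

```python
# Sketch only: filter through a pyarrow boolean mask instead of rewriting the arrow file on disk.
mask = dataset.data.column("flag")         # boolean ChunkedArray taken straight from the arrow table
dataset._data = dataset.data.filter(mask)  # pyarrow Table.filter: no batched read/write pass
print(len(dataset))                        # number of rows where flag is True
```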
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1796/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1796/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1795 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1795/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1795/comments | https://api.github.com/repos/huggingface/datasets/issues/1795/events | https://github.com/huggingface/datasets/pull/1795 | 797,021,730 | MDExOlB1bGxSZXF1ZXN0NTY0MDk5OTUz | 1,795 | Custom formatting for lazy map + arrow data extraction refactor | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"This PR is amazing!!!\r\n\r\nI only looked at `arrow_dataset.py` and `formatting/formatting.py` but those look good to me.\r\n\r\nMy only (tiny) concern is the name of the function: I don't think it's self-evident that `set_format` applies a generic transformation, and some people might not look too far into the doc.\r\n\r\nMaybe we could have an `apply_transform` or `process_columns` method which is called by `set_format` (to keep backward compatibility)?",
"What about something like `.set_format` and `.set_transform` ?\r\n- set_format would be the same as right now, i.e. defined by a format type.\r\n- set_transform would define the transformation that is applied on output batches on-the-fly.\r\n\r\nI was also thinking about `._with_format` and `.with_transform`. It could be their equivalent but would create a **new** dataset with the corresponding format or transform ? I know @sgugger was interested in something like that.",
"Yup, I think that would make all of these options very clear!",
"I like all those options as well (as long as the `_` in `_with_format` is a typo ;-) )",
"Yes it's a typo indeed ;)\r\n\r\nAlright I'll do the changes !",
"I took all your suggestions into account, thanks :)\r\nLet me know if you have more comments",
"Hi @lhoestq , thanks for offering the set_transform() function. It is very handy to process large datasets on the fly. But I ran into a problem when using it (error message shown below). Since we are working with a large collection, there's no way to filter all invalid data points beforehand. Those invalid data points will be problematic with the set_transform and I don't find a good work-around to ignore them. I wonder if you can offer some advice on dealing with invalid data points in this case. Thank you!\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py\", line 198, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1763, in __getitem__\r\n return self._getitem(\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1748, in _getitem\r\n formatted_output = format_table(\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 532, in format_table\r\n return formatter(pa_table, query_type=query_type)\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 281, in __call__\r\n return self.format_row(pa_table)\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 391, in format_row\r\n raise TypeError(\r\nTypeError: Custom formatting function must return a dict to be able to pick a row, but got None\r\n\r\n```\r\n",
"> Hi @lhoestq , thanks for offering the set_transform() function. It is very handy to process large datasets on the fly. But I ran into a problem when using it (error message shown below). Since we are working with a large collection, there's no way to filter all invalid data points beforehand. Those invalid data points will be problematic with the set_transform and I don't find a good work-around to ignore them. I wonder if you can offer some advice on dealing with invalid data points in this case. Thank you!\r\n> \r\n> ```\r\n> Traceback (most recent call last):\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py\", line 198, in _worker_loop\r\n> data = fetcher.fetch(index)\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n> data = [self.dataset[idx] for idx in possibly_batched_index]\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n> data = [self.dataset[idx] for idx in possibly_batched_index]\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1763, in __getitem__\r\n> return self._getitem(\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1748, in _getitem\r\n> formatted_output = format_table(\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 532, in format_table\r\n> return formatter(pa_table, query_type=query_type)\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 281, in __call__\r\n> return self.format_row(pa_table)\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 391, in format_row\r\n> raise TypeError(\r\n> TypeError: Custom formatting function must return a dict to be able to pick a row, but got None\r\n> ```\r\n\r\nI found this trick can be helpful: return an empty dict in exception:\r\n```\r\ndef transform_fn(example):\r\n try:\r\n process_your_data(example)\r\n except Exception as e:\r\n print(e)\r\n return {'input_ids': [[]], 'token_type_ids': [[]], 'attention_mask': [[]]}\r\ntrain_dataset = datasets.load_dataset(...)\r\ntrain_dataset = train_dataset.with_transform(parse_fn)\r\n```"
] | "2021-01-29T16:35:53Z" | "2022-07-30T09:50:11Z" | "2021-02-05T09:54:06Z" | MEMBER | null | Hi !
This PR refactors the way data are extracted from pyarrow tables to extend it to the use of custom formatting functions.
While the internal storage of the dataset is always the Apache Arrow format, by setting a specific format on a dataset, you can cast the output of `datasets.Dataset.__getitem__` in NumPy/pandas/PyTorch/TensorFlow, on-the-fly.
A specific format can be activated with `datasets.Dataset.set_format`. For example: `dataset.set_format(type='torch', columns=['label'])`.
### What's new:
You can now also define your own formatting function that is applied on-the-fly. To do so you can pass your formatting function in the `transform` parameter of `datasets.Dataset.set_format`, and keep `type` to `None`.
A formatting function is a callable that takes a batch (as a dict, formatted as python) as input and returns a batch.
Here is an example to tokenize and pad tokens on-the-fly when accessing the samples:
```python
from datasets import load_dataset
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
def encode(batch):
return tokenizer(batch["sentence1"], padding="longest", truncation=True, max_length=512, return_tensors="pt")
dataset = load_dataset("glue", "mrpc", split="train")
dataset.set_format(transform=encode)
dataset.format
# {'type': 'custom', 'format_kwargs': {'transform': <function __main__.encode(batch)>}, 'columns': ['idx', 'label', 'sentence1', 'sentence2'], 'output_all_columns': False}
dataset[:2]
# {'input_ids': tensor([[ 101, 2572, 3217, ... 102]]), 'token_type_ids': tensor([[0, 0, 0, ... 0]]), 'attention_mask': tensor([[1, 1, 1, ... 1]])}
```
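As a quick follow-up sketch (my addition, not part of the PR): the transform only runs when rows are accessed, so slicing directly returns the padded torch tensors produced by `encode`:

```python
# Sketch: accessing a slice triggers `encode` on the fly.
batch = dataset[:4]
print(type(batch["input_ids"]))  # <class 'torch.Tensor'>
print(batch["input_ids"].shape)  # torch.Size([4, <longest sequence in this slice>])
print(dataset.format["type"])    # 'custom'
```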
Let me know what you think of this API !
We can still change it if we want to.
Especially @sgugger since this may be useful when using `datasets` to train models.
EDIT: this was changed to `dataset.set_transform(encode)`
-------------------
Note:
I had to refactor the way data are extracted and formatted from pyarrow tables and I made it more robust and flexible. In particular I modularized it to be able to unit-test it properly. This was very helpful since I detected some bugs in the previous implementation and was able to fix them.
Some bugs I found and fixed:
- certain slices/ranges were not supported because negative ids were passed to pyarrow
- formatting a column as numpy/torch/tensorflow would make it lose its precision information (for example, a column typed as `Value("float32")` would be returned as a tensor of float64, the default behavior for numpy)
- on Windows, integers formatted as numpy/torch/tensorflow were not always int64 tensors by default but were int32
The unit tests for those are now really extensive :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1795/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1795/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1795.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1795",
"merged_at": "2021-02-05T09:54:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1795.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1795"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1794 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1794/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1794/comments | https://api.github.com/repos/huggingface/datasets/issues/1794/events | https://github.com/huggingface/datasets/pull/1794 | 796,975,588 | MDExOlB1bGxSZXF1ZXN0NTY0MDYyMTkw | 1,794 | Move silicone directory | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-01-29T15:33:15Z" | "2021-01-29T16:31:39Z" | "2021-01-29T16:31:38Z" | MEMBER | null | The dataset was added in #1761 but not in the right directory. I'm moving it to /datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1794/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1794/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1794.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1794",
"merged_at": "2021-01-29T16:31:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1794.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1794"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1793/comments | https://api.github.com/repos/huggingface/datasets/issues/1793/events | https://github.com/huggingface/datasets/pull/1793 | 796,940,299 | MDExOlB1bGxSZXF1ZXN0NTY0MDMzMjk0 | 1,793 | Minor fix the docstring of load_metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-01-29T14:47:35Z" | "2021-01-29T16:53:32Z" | "2021-01-29T16:53:32Z" | MEMBER | null | Minor fix:
- duplicated attributes
- format fix | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1793/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1793/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1793.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1793",
"merged_at": "2021-01-29T16:53:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1793.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1793"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1792/comments | https://api.github.com/repos/huggingface/datasets/issues/1792/events | https://github.com/huggingface/datasets/pull/1792 | 796,934,627 | MDExOlB1bGxSZXF1ZXN0NTY0MDI4NTk1 | 1,792 | Allow loading dataset in-memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"I am wondering how to test their difference...",
"> ring how to test their difference...\r\n\r\nHmm I don't think pyarrow exposes an API to check if a Table comes from a file that is memory-mapped. In particular since all the buffer/memory logic is in the C++ part of pyarrow.\r\n\r\nOtherwise we can still check the difference of RAM used when loading a big chunk of data.",
"> Hmm I don't think pyarrow exposes an API to check if a Table comes from a file that is memory-mapped. In particular since all the buffer/memory logic is in the C++ part of pyarrow.\r\n> \r\n> Otherwise we can still check the difference of RAM used when loading a big chunk of data.\r\n\r\n@lhoestq I think I found a way: `pa.total_allocated_bytes()` :smirk:"
] | "2021-01-29T14:39:50Z" | "2021-02-12T14:13:28Z" | "2021-02-12T14:13:28Z" | MEMBER | null | Allow loading datasets either from:
- a memory-mapped file (current implementation)
- a file descriptor, copying data to physical memory (see the usage sketch below)
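A usage sketch of how this is exposed (an assumption on my part: the flag name below, `keep_in_memory`, matches the public API that ships with this feature):

```python
from datasets import load_dataset

# Default behaviour: the arrow file stays on disk and is memory-mapped (low RAM usage).
ds_mmap = load_dataset("glue", "mrpc", split="train")

# Copy the arrow data into physical memory instead of memory-mapping it.
ds_in_memory = load_dataset("glue", "mrpc", split="train", keep_in_memory=True)
```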
Close #708 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1792/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1792/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1792.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1792",
"merged_at": "2021-02-12T14:13:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1792.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1792"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1791/comments | https://api.github.com/repos/huggingface/datasets/issues/1791/events | https://github.com/huggingface/datasets/pull/1791 | 796,924,519 | MDExOlB1bGxSZXF1ZXN0NTY0MDE5OTk3 | 1,791 | Small fix with corrected logging of train vectors | {
"avatar_url": "https://avatars.githubusercontent.com/u/7549587?v=4",
"events_url": "https://api.github.com/users/TezRomacH/events{/privacy}",
"followers_url": "https://api.github.com/users/TezRomacH/followers",
"following_url": "https://api.github.com/users/TezRomacH/following{/other_user}",
"gists_url": "https://api.github.com/users/TezRomacH/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TezRomacH",
"id": 7549587,
"login": "TezRomacH",
"node_id": "MDQ6VXNlcjc1NDk1ODc=",
"organizations_url": "https://api.github.com/users/TezRomacH/orgs",
"received_events_url": "https://api.github.com/users/TezRomacH/received_events",
"repos_url": "https://api.github.com/users/TezRomacH/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TezRomacH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TezRomacH/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TezRomacH"
} | [] | closed | false | null | [] | null | [] | "2021-01-29T14:26:06Z" | "2021-01-29T18:51:10Z" | "2021-01-29T17:05:07Z" | CONTRIBUTOR | null | Now you can set `train_size` to the whole dataset size via `train_size = -1` and login writes not `Training the index with the first -1 vectors` but (for example) `Training the index with the first 16123 vectors`. And maybe more than dataset length. Logging will be correct | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1791/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1791/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1791.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1791",
"merged_at": "2021-01-29T17:05:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1791.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1791"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1790/comments | https://api.github.com/repos/huggingface/datasets/issues/1790/events | https://github.com/huggingface/datasets/issues/1790 | 796,678,157 | MDU6SXNzdWU3OTY2NzgxNTc= | 1,790 | ModuleNotFoundError: No module named 'apache_beam', when specific languages. | {
"avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4",
"events_url": "https://api.github.com/users/miyamonz/events{/privacy}",
"followers_url": "https://api.github.com/users/miyamonz/followers",
"following_url": "https://api.github.com/users/miyamonz/following{/other_user}",
"gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/miyamonz",
"id": 6331508,
"login": "miyamonz",
"node_id": "MDQ6VXNlcjYzMzE1MDg=",
"organizations_url": "https://api.github.com/users/miyamonz/orgs",
"received_events_url": "https://api.github.com/users/miyamonz/received_events",
"repos_url": "https://api.github.com/users/miyamonz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/miyamonz"
} | [] | open | false | null | [] | null | [
"Hi !\r\n\r\nApache Beam is a framework used to define data transformation pipelines. These pipeline can then be run in many runtimes: DataFlow, Spark, Flink, etc. There also exist a local runner called the DirectRunner.\r\nWikipedia is a dataset that requires some parsing, so to allow the processing to be run on this kind of runtime we're using Apache Beam.\r\n\r\nAt Hugging Face we've already processed certain versions of wikipedia (the `20200501.en` one for example) so that users can directly download the processed version instead of using Apache Beam to process it.\r\nHowever for the japanese language we haven't processed it so you'll have to run the processing on your side.\r\nSo you do need Apache Beam to process `20200501.ja`.\r\n\r\nYou can install Apache Beam with\r\n```\r\npip install apache-beam\r\n```\r\n\r\nI think we can probably improve the error message to let users know of this subtlety.\r\nWhat #498 implied is that Apache Beam is not needed when you process a dataset that doesn't use Apache Beam.",
"Thanks for your reply! \r\nI understood.\r\n\r\nI tried again with installing apache-beam, add ` beam_runner=\"DirectRunner\"` and an anther `mwparserfromhell` is also required so I installed it.\r\nbut, it also failed. It exited 1 without error message.\r\n\r\n```py\r\nimport datasets\r\n# BTW, 20200501.ja doesn't exist at wikipedia, so I specified date argument\r\nwiki = datasets.load_dataset(\"wikipedia\", language=\"ja\", date=\"20210120\", cache_dir=\"./datasets\", beam_runner=\"DirectRunner\")\r\nprint(wiki)\r\n```\r\nand its log is below\r\n```\r\nUsing custom data configuration 20210120.ja\r\nDownloading and preparing dataset wikipedia/20210120.ja-date=20210120,language=ja (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to ./datasets/wikipedia/20210120.ja-date=20210120,language=ja/0.0.0/4021357e28509391eab2f8300d9b689e7e8f3a877ebb3d354b01577d497ebc63...\r\nKilled\r\n```\r\n\r\nI also tried on another machine because it may caused by insufficient resources.\r\n```\r\n$ python main.py\r\nUsing custom data configuration 20210120.ja\r\nDownloading and preparing dataset wikipedia/20210120.ja-date=20210120,language=ja (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to ./datasets/wikipedia/20210120.ja-date=20210120,language=ja/0.0.0/4021357e28509391eab2f8300d9b689e7e8f3a877ebb3d354b01577d497ebc63...\r\n\r\nTraceback (most recent call last):\r\n File \"main.py\", line 3, in <module>\r\n wiki = datasets.load_dataset(\"wikipedia\", language=\"ja\", date=\"20210120\", cache_dir=\"./datasets\", beam_runner=\"DirectRunner\")\r\n File \"/home/miyamonz/.cache/pypoetry/virtualenvs/try-datasets-4t4JWXxu-py3.8/lib/python3.8/site-packages/datasets/load.py\", line 609, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/miyamonz/.cache/pypoetry/virtualenvs/try-datasets-4t4JWXxu-py3.8/lib/python3.8/site-packages/datasets/builder.py\", line 526, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/miyamonz/.cache/pypoetry/virtualenvs/try-datasets-4t4JWXxu-py3.8/lib/python3.8/site-packages/datasets/builder.py\", line 1069, in _download_and_prepare\r\n pipeline_results = pipeline.run()\r\n File \"/home/miyamonz/.cache/pypoetry/virtualenvs/try-datasets-4t4JWXxu-py3.8/lib/python3.8/site-packages/apache_beam/pipeline.py\", line 561, in run\r\n return self.runner.run_pipeline(self, self._options)\r\n File \"/home/miyamonz/.cache/pypoetry/virtualenvs/try-datasets-4t4JWXxu-py3.8/lib/python3.8/site-packages/apache_beam/runners/direct/direct_runner.py\", line 126, in run_pipeline\r\n return runner.run_pipeline(pipeline, options)\r\n File \"/home/miyamonz/.cache/pypoetry/virtualenvs/try-datasets-4t4JWXxu-py3.8/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 182, in run_pipeline\r\n self._latest_run_result = self.run_via_runner_api(\r\n File \"/home/miyamonz/.cache/pypoetry/virtualenvs/try-datasets-4t4JWXxu-py3.8/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 193, in run_via_runner_api\r\n return self.run_stages(stage_context, stages)\r\n File \"/home/miyamonz/.cache/pypoetry/virtualenvs/try-datasets-4t4JWXxu-py3.8/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 358, in run_stages\r\n stage_results = self._run_stage(\r\n File 
\"/home/miyamonz/.cache/pypoetry/virtualenvs/try-datasets-4t4JWXxu-py3.8/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 549, in _run_stage\r\n last_result, deferred_inputs, fired_timers = self._run_bundle(\r\n File \"/home/miyamonz/.cache/pypoetry/virtualenvs/try-datasets-4t4JWXxu-py3.8/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 595, in _run_bundle\r\n result, splits = bundle_manager.process_bundle(\r\n File \"/home/miyamonz/.cache/pypoetry/virtualenvs/try-datasets-4t4JWXxu-py3.8/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 888, in process_bundle\r\n self._send_input_to_worker(process_bundle_id, transform_id, elements)\r\n File \"/home/miyamonz/.cache/pypoetry/virtualenvs/try-datasets-4t4JWXxu-py3.8/lib/python3.8/site-packages/apache_beam/runners/portability/fn_api_runner/fn_runner.py\", line 765, in _send_input_to_worker\r\n data_out.write(byte_stream)\r\n File \"apache_beam/coders/stream.pyx\", line 42, in apache_beam.coders.stream.OutputStream.write\r\n File \"apache_beam/coders/stream.pyx\", line 47, in apache_beam.coders.stream.OutputStream.write\r\n File \"apache_beam/coders/stream.pyx\", line 109, in apache_beam.coders.stream.OutputStream.extend\r\nAssertionError: OutputStream realloc failed.\r\n```\r\n\r\n",
"Hi @miyamonz,\r\n\r\nI tried replicating this issue using the same snippet used by you. I am able to download the dataset without any issues, although I stopped it in the middle because the dataset is huge.\r\n\r\nBased on a similar issue [here](https://github.com/google-research/fixmatch/issues/23), it could be related to your environment setup, although I am just guessing here. Can you share these details?",
"thanks for your reply and sorry for my late response.\r\n\r\n## environment\r\nmy local machine environment info\r\n- Ubuntu on WSL2\r\n\r\n`lsb_release -a`\r\n```\r\nNo LSB modules are available.\r\nDistributor ID: Ubuntu\r\nDescription: Ubuntu 20.04.2 LTS\r\nRelease: 20.04\r\nCodename: focal\r\n```\r\n\r\nRTX 2070 super\r\nInside WSL, there is no nvidia-msi command. I don't know why.\r\nBut, `torch.cuda.is_available()` is true and when I start something ML training code GPU usage is growing up, so I think it works.\r\n\r\nFrom PowerShell, there is nvidia-smi.exe and result is below.\r\n```\r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 470.05 Driver Version: 470.05 CUDA Version: 11.3 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n| | | MIG M. |\r\n|===============================+======================+======================|\r\n| 0 NVIDIA GeForce ... WDDM | 00000000:09:00.0 On | N/A |\r\n| 0% 30C P8 19W / 175W | 523MiB / 8192MiB | 3% Default |\r\n| | | N/A |\r\n+-------------------------------+----------------------+----------------------+\r\n\r\n+-----------------------------------------------------------------------------+\r\n| Processes: |\r\n| GPU GI CI PID Type Process name GPU Memory |\r\n| ID ID Usage |\r\n|=============================================================================|\r\n| 0 N/A N/A 1728 C+G Insufficient Permissions N/A |\r\n| 0 N/A N/A 3672 C+G ...ekyb3d8bbwe\\YourPhone.exe N/A |\r\n| 0 N/A N/A 6304 C+G ...2txyewy\\TextInputHost.exe N/A |\r\n| 0 N/A N/A 8648 C+G C:\\Windows\\explorer.exe N/A |\r\n| 0 N/A N/A 9536 C+G ...y\\ShellExperienceHost.exe N/A |\r\n| 0 N/A N/A 10668 C+G ...5n1h2txyewy\\SearchApp.exe N/A |\r\n| 0 N/A N/A 10948 C+G ...artMenuExperienceHost.exe N/A |\r\n| 0 N/A N/A 11988 C+G ...8wekyb3d8bbwe\\Cortana.exe N/A |\r\n| 0 N/A N/A 12464 C+G ...cw5n1h2txyewy\\LockApp.exe N/A |\r\n| 0 N/A N/A 13280 C+G ...upport\\CEF\\Max Helper.exe N/A |\r\n| 0 N/A N/A 15948 C+G ...t\\GoogleIMEJaRenderer.exe N/A |\r\n| 0 N/A N/A 16128 C+G ...ram Files\\Slack\\Slack.exe N/A |\r\n| 0 N/A N/A 19096 C+G ...8bbwe\\WindowsTerminal.exe N/A |\r\n+-----------------------------------------------------------------------------+\r\n```\r\n\r\nI don't know what should I show in such a case. If it's not enough, please tell me some commands.\r\n\r\n---\r\n## what I did\r\nI surveyed more and I found 2 issues.\r\n\r\nAbout the first one, I wrote it as a new issue.\r\nhttps://github.com/huggingface/datasets/issues/2031\r\n\r\nThe error I mentioned in the previous comment above, which occurred on my local machine, is no longer occurring.\r\n\r\nBut, it still failed. In the previous comment, I wrote `AssertionError: OutputStream realloc failed.` happen on another machine. 
It also happens on my local machine.\r\n\r\nHere's what I've tried.\r\n\r\nthe wikipedia.py downloads these xml.bz2 files based on dumpstatus.json\r\nIn Japanese Wikipedia dataset that I specified, it will download these 6 files.\r\n\r\n\r\n`https://dumps.wikimedia.org/jawiki/20210120/dumpstatus.json`\r\nand filtered json based on wikipedia.py is below.\r\n```json\r\n {\r\n \"jobs\": {\r\n \"articlesmultistreamdump\": {\r\n \"files\": {\r\n \"jawiki-20210120-pages-articles-multistream1.xml-p1p114794.bz2\": {\r\n \"url\": \"/jawiki/20210120/jawiki-20210120-pages-articles-multistream1.xml-p1p114794.bz2\"\r\n },\r\n \"jawiki-20210120-pages-articles-multistream2.xml-p114795p390428.bz2\": {\r\n \"url\": \"/jawiki/20210120/jawiki-20210120-pages-articles-multistream2.xml-p114795p390428.bz2\"\r\n },\r\n \"jawiki-20210120-pages-articles-multistream3.xml-p390429p902407.bz2\": {\r\n \"url\": \"/jawiki/20210120/jawiki-20210120-pages-articles-multistream3.xml-p390429p902407.bz2\"\r\n },\r\n \"jawiki-20210120-pages-articles-multistream4.xml-p902408p1721646.bz2\": {\r\n \"url\": \"/jawiki/20210120/jawiki-20210120-pages-articles-multistream4.xml-p902408p1721646.bz2\"\r\n },\r\n \"jawiki-20210120-pages-articles-multistream5.xml-p1721647p2807947.bz2\": {\r\n \"url\": \"/jawiki/20210120/jawiki-20210120-pages-articles-multistream5.xml-p1721647p2807947.bz2\"\r\n },\r\n \"jawiki-20210120-pages-articles-multistream6.xml-p2807948p4290013.bz2\": {\r\n \"url\": \"/jawiki/20210120/jawiki-20210120-pages-articles-multistream6.xml-p2807948p4290013.bz2\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n```\r\n\r\nSo, I tried running with fewer resources by modifying this line.\r\nhttps://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L524\r\nI changed it like this. just change filepaths list.\r\n` | \"Initialize\" >> beam.Create(filepaths[:1])`\r\n\r\nand I added a print line inside for the loop of _extract_content.\r\nlike this `if(i % 100000 == 0): print(i)`\r\n\r\nfirst, without modification, it always stops after all _extract_content is done.\r\n\r\n- `filepaths[:1]` then it succeeded.\r\n- `filepaths[:2]` then it failed.\r\nI don't try all patterns because each pattern takes a long time.\r\n\r\n### my opinion\r\nIt seems it's successful when the entire file size is small.\r\n \r\nso, at least it doesn't file-specific issue.\r\n\r\n\r\nI don't know it's true but I think when beam_writter writes into a file, it consumes memory depends on its entire file.\r\nbut It's correct Apache Beam's behavior? I'm not familiar with this library.\r\n",
"I don't know if this is related, but there is this issue on the wikipedia processing that you reported at #2031 (open PR is at #2037 ) .\r\nDoes the fix your proposed at #2037 helps in your case ?\r\n\r\nAnd for information, the DirectRunner of Apache Beam is not optimized for memory intensive tasks, so you must be right when you say that it uses the memory for the entire file.",
"the #2037 doesn't solve my problem directly, but I found the point!\r\n\r\nhttps://github.com/huggingface/datasets/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/datasets/wikipedia/wikipedia.py#L523\r\nthis `beam.transforms.Reshuffle()` cause the memory error.\r\n\r\nit makes sense if I consider the shuffle means. Beam's reshuffle seems need put all data in memory.\r\nPreviously I doubt that this line causes error, but at that time another bug showed in #2037 made error, so I can't found it.\r\n\r\nAnyway, I comment out this line, and run load_dataset, then it works!\r\n\r\n```python\r\nwiki = datasets.load_dataset(\r\n \"./wikipedia.py\",\r\n cache_dir=\"./datasets\",\r\n beam_runner=\"DirectRunner\",\r\n language=\"ja\",\r\n date=\"20210120\",\r\n)[\"train\"]\r\n```\r\n![image](https://user-images.githubusercontent.com/6331508/112283369-6a9f3300-8ccb-11eb-82e5-827bf7fddfb9.png)\r\n\r\nDataset has already shuffle function. https://github.com/huggingface/datasets/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/src/datasets/arrow_dataset.py#L2069\r\nSo, though I don't know it's difference correctly, but I think Beam's reshuffle isn't be needed. How do you think?",
"The reshuffle is needed when you use parallelism.\r\nThe objective is to redistribute the articles evenly on the workers, since the `_extract_content` step generated many articles per file. By using reshuffle, we can split the processing of the articles of one file into several workers. Without reshuffle, all the articles of one file would be processed on the same worker that read the file, making the whole process take a very long time.",
"Maybe the reshuffle step can be added only if the runner is not a DirectRunner ?"
] | "2021-01-29T08:17:24Z" | "2021-03-25T12:10:51Z" | null | CONTRIBUTOR | null | ```py
import datasets
wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets')
```
then `ModuleNotFoundError: No module named 'apache_beam'` happened.
The error doesn't appear when it's '20200501.en'.
I don't know Apache Beam, but according to #498 it isn't necessary when the dataset is saved locally. Is that correct? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1790/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1790/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1789/comments | https://api.github.com/repos/huggingface/datasets/issues/1789/events | https://github.com/huggingface/datasets/pull/1789 | 796,229,721 | MDExOlB1bGxSZXF1ZXN0NTYzNDQyMTc2 | 1,789 | [BUG FIX] typo in the import path for metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [] | "2021-01-28T18:01:37Z" | "2021-01-28T18:13:56Z" | "2021-01-28T18:13:56Z" | MEMBER | null | This tiny PR fixes a typo introduced in https://github.com/huggingface/datasets/pull/1726 which prevents loading new metrics | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1789/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1789/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1789.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1789",
"merged_at": "2021-01-28T18:13:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1789.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1789"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1788/comments | https://api.github.com/repos/huggingface/datasets/issues/1788/events | https://github.com/huggingface/datasets/pull/1788 | 795,544,422 | MDExOlB1bGxSZXF1ZXN0NTYyODc1NzA2 | 1,788 | Doc2dial rc | {
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/songfeng",
"id": 2062185,
"login": "songfeng",
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"repos_url": "https://api.github.com/users/songfeng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/songfeng"
} | [] | closed | false | null | [] | null | [] | "2021-01-27T23:51:00Z" | "2021-01-28T18:46:13Z" | "2021-01-28T18:46:13Z" | CONTRIBUTOR | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1788/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1788/timeline | null | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1788.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1788",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1788.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1788"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1787/comments | https://api.github.com/repos/huggingface/datasets/issues/1787/events | https://github.com/huggingface/datasets/pull/1787 | 795,485,842 | MDExOlB1bGxSZXF1ZXN0NTYyODI1NTI3 | 1,787 | Update the CommonGen citation information | {
"avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4",
"events_url": "https://api.github.com/users/yuchenlin/events{/privacy}",
"followers_url": "https://api.github.com/users/yuchenlin/followers",
"following_url": "https://api.github.com/users/yuchenlin/following{/other_user}",
"gists_url": "https://api.github.com/users/yuchenlin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuchenlin",
"id": 10104354,
"login": "yuchenlin",
"node_id": "MDQ6VXNlcjEwMTA0MzU0",
"organizations_url": "https://api.github.com/users/yuchenlin/orgs",
"received_events_url": "https://api.github.com/users/yuchenlin/received_events",
"repos_url": "https://api.github.com/users/yuchenlin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuchenlin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuchenlin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuchenlin"
} | [] | closed | false | null | [] | null | [] | "2021-01-27T22:12:47Z" | "2021-01-28T13:56:29Z" | "2021-01-28T13:56:29Z" | CONTRIBUTOR | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1787/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1787/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1787.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1787",
"merged_at": "2021-01-28T13:56:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1787.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1787"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1786/comments | https://api.github.com/repos/huggingface/datasets/issues/1786/events | https://github.com/huggingface/datasets/issues/1786 | 795,462,816 | MDU6SXNzdWU3OTU0NjI4MTY= | 1,786 | How to use split dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/78090287?v=4",
"events_url": "https://api.github.com/users/kkhan188/events{/privacy}",
"followers_url": "https://api.github.com/users/kkhan188/followers",
"following_url": "https://api.github.com/users/kkhan188/following{/other_user}",
"gists_url": "https://api.github.com/users/kkhan188/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kkhan188",
"id": 78090287,
"login": "kkhan188",
"node_id": "MDQ6VXNlcjc4MDkwMjg3",
"organizations_url": "https://api.github.com/users/kkhan188/orgs",
"received_events_url": "https://api.github.com/users/kkhan188/received_events",
"repos_url": "https://api.github.com/users/kkhan188/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kkhan188/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkhan188/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kkhan188"
} | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | [] | null | [
"By default, all 3 splits will be loaded if you run the following:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"lambada\")\r\nprint(dataset[\"train\"])\r\nprint(dataset[\"valid\"])\r\n\r\n```\r\n\r\nIf you wanted to do load this manually, you could do this:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndata_files = {\r\n \"train\": \"data/lambada/train.txt\",\r\n \"valid\": \"data/lambada/valid.txt\",\r\n \"test\": \"data/lambada/test.txt\",\r\n}\r\nds = load_dataset(\"text\", data_files=data_files)\r\n```",
"Thank you for the quick response! "
] | "2021-01-27T21:37:47Z" | "2021-04-23T15:17:39Z" | "2021-04-23T15:17:39Z" | NONE | null | ![Capture1](https://user-images.githubusercontent.com/78090287/106057436-cb6a1f00-6111-11eb-8c9c-3658065b1fdf.PNG)
Hey,
I want to split the lambada dataset into corpus, test, train, and valid txt files (like Penn Treebank), but I am not able to achieve this. What I am doing is executing the lambada.py file in my project, but it's not giving the desired results. Any help will be appreciated! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1786/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1786/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1785 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1785/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1785/comments | https://api.github.com/repos/huggingface/datasets/issues/1785/events | https://github.com/huggingface/datasets/issues/1785 | 795,458,856 | MDU6SXNzdWU3OTU0NTg4NTY= | 1,785 | Not enough disk space (Needed: Unknown size) when caching on a cluster | {
"avatar_url": "https://avatars.githubusercontent.com/u/4341867?v=4",
"events_url": "https://api.github.com/users/olinguyen/events{/privacy}",
"followers_url": "https://api.github.com/users/olinguyen/followers",
"following_url": "https://api.github.com/users/olinguyen/following{/other_user}",
"gists_url": "https://api.github.com/users/olinguyen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/olinguyen",
"id": 4341867,
"login": "olinguyen",
"node_id": "MDQ6VXNlcjQzNDE4Njc=",
"organizations_url": "https://api.github.com/users/olinguyen/orgs",
"received_events_url": "https://api.github.com/users/olinguyen/received_events",
"repos_url": "https://api.github.com/users/olinguyen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/olinguyen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olinguyen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/olinguyen"
} | [] | closed | false | null | [] | null | [
"Hi ! \r\n\r\nWhat do you mean by \"disk_usage(\".\").free` can't compute on the cluster's shared disk\" exactly ?\r\nDoes it return 0 ?",
"Yes, that's right. It shows 0 free space even though there is. I suspect it might have to do with permissions on the shared disk.\r\n\r\n```python\r\n>>> disk_usage(\".\")\r\nusage(total=999999, used=999999, free=0)\r\n```",
"That's an interesting behavior...\r\nDo you know any other way to get the free space that works in your case ?\r\nAlso if it's a permission issue could you try fix the permissions and let mus know if that helped ?",
"I think its an issue on the clusters end (unclear exactly why -- maybe something with docker containers?), will close the issue",
"Were you able to figure it out?",
"@philippnoah I had fixed it with a small hack where I patched `has_sufficient_disk_space` to always return `True`. you can do that with an import without having to modify the `datasets` package",
"@olinguyen Thanks for the suggestion, it works but I had to to edit builder.py in the installed package. Can you please explain how were you able to do this using import?",
"I was able to patch the builder code in my notebook before the load data call and it works. \r\n```\r\nimport datasets\r\ndatasets.builder.has_sufficient_disk_space = lambda needed_bytes, directory='.': True\r\n```"
] | "2021-01-27T21:30:59Z" | "2022-11-07T16:33:03Z" | "2021-01-30T01:07:56Z" | CONTRIBUTOR | null | I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk.
The exact error thrown:
```bash
>>> load_dataset(dataset, cache_dir="/path/to/cluster/shared/path")
OSError: Not enough disk space. Needed: Unknown size (download: Unknown size, generated: Unknown size, post-processed: Unknown size)
```
[`utils.has_sufficient_disk_space`](https://github.com/huggingface/datasets/blob/8a03ab7d123a76ee744304f21ce868c75f411214/src/datasets/utils/py_utils.py#L332) fails on each job because of how the cluster system is designed (`disk_usage(".").free` can't compute on the cluster's shared disk).
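For context, that helper roughly boils down to the sketch below (a simplified sketch, not necessarily the exact library code): it compares the needed size against the free space reported by `shutil.disk_usage`, so a shared mount that reports 0 free bytes fails the strict comparison even when the needed size is unknown (i.e. 0).
```python
import os
from shutil import disk_usage

def has_sufficient_disk_space(needed_bytes, directory="."):
    # Sketch: ask the OS how much free space the target directory has
    # and require strictly more free bytes than the build needs.
    try:
        free_bytes = disk_usage(os.path.abspath(directory)).free
    except OSError:
        return True
    return needed_bytes < free_bytes
```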
This is exactly where the error gets thrown:
https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L502
```python
if not utils.has_sufficient_disk_space(self.info.size_in_bytes or 0, directory=self._cache_dir_root):
raise IOError(
"Not enough disk space. Needed: {} (download: {}, generated: {}, post-processed: {})".format(
utils.size_str(self.info.size_in_bytes or 0),
utils.size_str(self.info.download_size or 0),
utils.size_str(self.info.dataset_size or 0),
utils.size_str(self.info.post_processing_size or 0),
)
)
```
What would be a good way to circumvent this? My current fix is to manually comment out that part, but that is not ideal.
Would it be possible to pass a flag to skip this check on disk space? | {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1785/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1785/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1784/comments | https://api.github.com/repos/huggingface/datasets/issues/1784/events | https://github.com/huggingface/datasets/issues/1784 | 794,659,174 | MDU6SXNzdWU3OTQ2NTkxNzQ= | 1,784 | JSONDecodeError on JSON with multiple lines | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | [
"Hi !\r\n\r\nThe `json` dataset script does support this format. For example loading a dataset with this format works on my side:\r\n```json\r\n{\"key1\":11, \"key2\":12, \"key3\":13}\r\n{\"key1\":21, \"key2\":22, \"key3\":23}\r\n```\r\n\r\nCan you show the full stacktrace please ? Also which version of datasets and pyarrow are you using ?\r\n\r\n",
"Hi Quentin!\r\n\r\nI apologize for bothering you. There was some issue with my pyarrow version as far as I understand. I don't remember the exact version I was using as I didn't check it.\r\n\r\nI repeated it with `datasets 1.2.1` and `pyarrow 2.0.0` and it worked.\r\n\r\nClosing this issue. Again, sorry for the bother.\r\n\r\nThanks,\r\nGunjan"
] | "2021-01-27T00:19:22Z" | "2021-01-31T08:47:18Z" | "2021-01-31T08:47:18Z" | CONTRIBUTOR | null | Hello :),
I have been trying to load data using a JSON file. Based on the [docs](https://huggingface.co/docs/datasets/loading_datasets.html#json-files), the following format is supported:
```json
{"key1":11, "key2":12, "key3":13}
{"key1":21, "key2":22, "key3":23}
```
But, when I try loading a dataset with the same format, I get a JSONDecodeError: `JSONDecodeError: Extra data: line 2 column 1 (char 7142)`. Now, this is expected when using `json` to load a JSON file. But I was wondering if there are any special arguments to pass when using `load_dataset`, as the docs suggest that this format is supported.
When I convert the JSON file to a list-of-dictionaries format, I get an `AttributeError: 'list' object has no attribute 'keys'`. So, I can't convert it to a list of dictionaries either.
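For reference, here is a minimal sketch of how a JSON Lines file in that format can be loaded through the generic `json` loading script (the path `data/my_file.json` is just a hypothetical example):
```python
from datasets import load_dataset

# one JSON object per line, e.g. {"key1":11, "key2":12, "key3":13}
dataset = load_dataset("json", data_files="data/my_file.json")
print(dataset["train"][0])
```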
Please let me know :)
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1784/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1784/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1783/comments | https://api.github.com/repos/huggingface/datasets/issues/1783/events | https://github.com/huggingface/datasets/issues/1783 | 794,544,495 | MDU6SXNzdWU3OTQ1NDQ0OTU= | 1,783 | Dataset Examples Explorer | {
"avatar_url": "https://avatars.githubusercontent.com/u/30875246?v=4",
"events_url": "https://api.github.com/users/ChewKokWah/events{/privacy}",
"followers_url": "https://api.github.com/users/ChewKokWah/followers",
"following_url": "https://api.github.com/users/ChewKokWah/following{/other_user}",
"gists_url": "https://api.github.com/users/ChewKokWah/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ChewKokWah",
"id": 30875246,
"login": "ChewKokWah",
"node_id": "MDQ6VXNlcjMwODc1MjQ2",
"organizations_url": "https://api.github.com/users/ChewKokWah/orgs",
"received_events_url": "https://api.github.com/users/ChewKokWah/received_events",
"repos_url": "https://api.github.com/users/ChewKokWah/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ChewKokWah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChewKokWah/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ChewKokWah"
} | [] | closed | false | null | [] | null | [
"Hi @ChewKokWah,\r\n\r\nWe're working on it! In the meantime, you can still find the dataset explorer at the following URL: https://huggingface.co/datasets/viewer/",
"Glad to see that it still exist, this existing one is more than good enough for me, it is feature rich, simple to use and concise. \r\nHope similar feature can be retain in the future version."
] | "2021-01-26T20:39:02Z" | "2021-02-01T13:58:44Z" | "2021-02-01T13:58:44Z" | NONE | null | In the Older version of the Dataset, there are a useful Dataset Explorer that allow user to visualize the examples (training, test and validation) of a particular dataset, it is no longer there in current version.
Hope HuggingFace can re-enable a feature that at least allows viewing of the first 20 examples of a particular dataset, or alternatively can extract 20 examples for each dataset and make those part of the Dataset Card documentation. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1783/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1783/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1782/comments | https://api.github.com/repos/huggingface/datasets/issues/1782/events | https://github.com/huggingface/datasets/pull/1782 | 794,167,920 | MDExOlB1bGxSZXF1ZXN0NTYxNzI5OTc3 | 1,782 | Update pyarrow import warning | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-01-26T11:47:11Z" | "2021-01-26T13:50:50Z" | "2021-01-26T13:50:49Z" | MEMBER | null | Update the minimum version to >=0.17.1 in the pyarrow version check and update the message.
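For context, a minimal sketch of what such an import-time version check could look like (a sketch only; the comparison helper and the exact warning message are assumptions, not the code of this PR):
```python
import pyarrow
from packaging import version  # assumption: using packaging for a robust comparison

# Fail fast at import time if the installed pyarrow is too old.
if version.parse(pyarrow.__version__) < version.parse("0.17.1"):
    raise ImportWarning(
        "To use `datasets`, the module `pyarrow>=0.17.1` is required, "
        "and the current version of pyarrow doesn't match this condition."
    )
```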
I also moved the check to the top of the __init__.py. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1782/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1782/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1782.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1782",
"merged_at": "2021-01-26T13:50:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1782.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1782"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1781/comments | https://api.github.com/repos/huggingface/datasets/issues/1781/events | https://github.com/huggingface/datasets/issues/1781 | 793,914,556 | MDU6SXNzdWU3OTM5MTQ1NTY= | 1,781 | AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import | {
"avatar_url": "https://avatars.githubusercontent.com/u/45964869?v=4",
"events_url": "https://api.github.com/users/PalaashAgrawal/events{/privacy}",
"followers_url": "https://api.github.com/users/PalaashAgrawal/followers",
"following_url": "https://api.github.com/users/PalaashAgrawal/following{/other_user}",
"gists_url": "https://api.github.com/users/PalaashAgrawal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PalaashAgrawal",
"id": 45964869,
"login": "PalaashAgrawal",
"node_id": "MDQ6VXNlcjQ1OTY0ODY5",
"organizations_url": "https://api.github.com/users/PalaashAgrawal/orgs",
"received_events_url": "https://api.github.com/users/PalaashAgrawal/received_events",
"repos_url": "https://api.github.com/users/PalaashAgrawal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PalaashAgrawal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PalaashAgrawal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PalaashAgrawal"
} | [] | closed | false | null | [] | null | [
"Hi ! I'm not able to reproduce the issue. Can you try restarting your runtime ?\r\n\r\nThe PyExtensionType is available in pyarrow starting 0.17.1 iirc. If restarting your runtime doesn't fix this, can you try updating pyarrow ?\r\n```\r\npip install pyarrow --upgrade\r\n```",
"We should bump up the version test of pyarrow maybe no?\r\n\r\nhttps://github.com/huggingface/datasets/blob/master/src/datasets/__init__.py#L60",
"Yes indeed.\r\n\r\nAlso it looks like Pyarrow 3.0.0 got released on pypi 10 hours ago. This might be related to the bug, I'll investigate\r\nEDIT: looks like the 3.0.0 release doesn't have unexpected breaking changes for us, so I don't think the issue comes from that",
"Maybe colab moved to pyarrow 0.16 by default (instead of 0.14 before)?",
"Installing datasets installs pyarrow>=0.17.1 so in theory it doesn't matter which version of pyarrow colab has by default (which is currently pyarrow 0.14.1).\r\n\r\nAlso now the colab runtime refresh the pyarrow version automatically after the update from pip (previously you needed to restart your runtime).\r\n\r\nI guess what happened is that Colab didn't refresh pyarrow for some reason, and the AttributeError was raised *before* the pyarrow version check from `datasets` at https://github.com/huggingface/datasets/blob/master/src/datasets/__init__.py#L60",
"Yes colab doesn’t reload preloaded library unless you restart the instance. Maybe we should move the check on top of the init ",
"Yes I'll do that :)",
"I updated the pyarrow version check in #1782"
] | "2021-01-26T04:18:35Z" | "2022-10-05T12:37:06Z" | "2022-10-05T12:37:06Z" | NONE | null | I'm using Colab. And suddenly this morning, there is this error. Have a look below!
![screenshot-colab research google com-2021 01 26-08-15-36](https://user-images.githubusercontent.com/45964869/105799890-fdaf3b80-5fae-11eb-8f06-11b65cdccc30.png)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1781/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1781/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1780/comments | https://api.github.com/repos/huggingface/datasets/issues/1780/events | https://github.com/huggingface/datasets/pull/1780 | 793,882,132 | MDExOlB1bGxSZXF1ZXN0NTYxNDkxNTgy | 1,780 | Update SciFact URL | {
"avatar_url": "https://avatars.githubusercontent.com/u/3091916?v=4",
"events_url": "https://api.github.com/users/dwadden/events{/privacy}",
"followers_url": "https://api.github.com/users/dwadden/followers",
"following_url": "https://api.github.com/users/dwadden/following{/other_user}",
"gists_url": "https://api.github.com/users/dwadden/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dwadden",
"id": 3091916,
"login": "dwadden",
"node_id": "MDQ6VXNlcjMwOTE5MTY=",
"organizations_url": "https://api.github.com/users/dwadden/orgs",
"received_events_url": "https://api.github.com/users/dwadden/received_events",
"repos_url": "https://api.github.com/users/dwadden/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dwadden/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwadden/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dwadden"
} | [] | closed | false | null | [] | null | [
"Hi ! The error you get is the result of some verifications the library is doing when loading a dataset that already has some metadata in the dataset_infos.json. You can ignore the verifications with \r\n```\r\npython datasets-cli test datasets/scifact --save_infos --all_configs --ignore_verifications\r\n```\r\nThis will update the dataset_infos.json :)",
"Nice, I ran that command and `dataset_infos` seems to have been updated appropriately; I added this to the PR. But when I try to load the dataset it still seems like it's getting a path to the old URL somehow. I `pip install -e`'d my fork of the repo, so I'm not sure why `load_dataset` is still looking for the old version of the file. Stack trace below.\r\n\r\n```\r\nIn [1]: import datasets\r\n\r\nIn [2]: ds = datasets.load_dataset(\"scifact\", \"claims\")\r\nDownloading: 7.34kB [00:00, 2.58MB/s]\r\nDownloading: 3.38kB [00:00, 1.36MB/s]\r\nDownloading and preparing dataset scifact/claims (download: 2.72 MiB, generated: 258.64 KiB, post-processed: Unknown size, total: 2.97 MiB) to /Users/dwadden/.cache/huggingface/datasets/scifact/claims/1.0.0/2bb675b2003716a061a4d8ce27fab32ab7f6d010016bab08ffaccea3c14ec6e7...\r\n---------------------------------------------------------------------------\r\nConnectionError Traceback (most recent call last)\r\n<ipython-input-2-9a50b954d89a> in <module>\r\n----> 1 ds = datasets.load_dataset(\"scifact\", \"claims\")\r\n\r\n~/proj/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 672\r\n 673 # Download and prepare data\r\n--> 674 builder_instance.download_and_prepare(\r\n 675 download_config=download_config,\r\n 676 download_mode=download_mode,\r\n\r\n~/proj/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 560 logger.warning(\"HF google storage unreachable. Downloading and preparing it from source\")\r\n 561 if not downloaded_from_gcs:\r\n--> 562 self._download_and_prepare(\r\n 563 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 564 )\r\n\r\n~/proj/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 616 split_dict = SplitDict(dataset_name=self.name)\r\n 617 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 618 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 619\r\n 620 # Checksums verification\r\n\r\n~/.cache/huggingface/modules/datasets_modules/datasets/scifact/2bb675b2003716a061a4d8ce27fab32ab7f6d010016bab08ffaccea3c14ec6e7/scifact.py in _split_generators(self, dl_manager)\r\n 92 # dl_manager is a datasets.download.DownloadManager that can be used to\r\n 93 # download and extract URLs\r\n---> 94 dl_dir = dl_manager.download_and_extract(_URL)\r\n 95\r\n 96 if self.config.name == \"corpus\":\r\n\r\n~/proj/datasets/src/datasets/utils/download_manager.py in download_and_extract(self, url_or_urls)\r\n 256 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 257 \"\"\"\r\n--> 258 return self.extract(self.download(url_or_urls))\r\n 259\r\n 260 def get_recorded_sizes_checksums(self):\r\n\r\n~/proj/datasets/src/datasets/utils/download_manager.py in download(self, url_or_urls)\r\n 177\r\n 178 start_time = datetime.now()\r\n--> 179 downloaded_path_or_paths = map_nested(\r\n 180 download_func,\r\n 181 url_or_urls,\r\n\r\n~/proj/datasets/src/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 223 # Singleton\r\n 224 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 225 return 
function(data_struct)\r\n 226\r\n 227 disable_tqdm = bool(logger.getEffectiveLevel() > INFO)\r\n\r\n~/proj/datasets/src/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 348 if is_remote_url(url_or_filename):\r\n 349 # URL, so get it from the cache (downloading if necessary)\r\n--> 350 output_path = get_from_cache(\r\n 351 url_or_filename,\r\n 352 cache_dir=cache_dir,\r\n\r\n~/proj/datasets/src/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries)\r\n 631 elif response is not None and response.status_code == 404:\r\n 632 raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\n--> 633 raise ConnectionError(\"Couldn't reach {}\".format(url))\r\n 634\r\n 635 # Try a second time\r\n\r\nConnectionError: Couldn't reach https://ai2-s2-scifact.s3-us-west-2.amazonaws.com/release/2020-05-01/data.tar.gz\r\n```",
"Hi ! This may be because you need to point `load_dataset` to the path of the dataset script that has the updated url:\r\n```python\r\nload_dataset(\"./datasets/scifact\", \"claims\")\r\n```\r\n\r\nIf you don't use a path to the updated script, then the old one is used by deffault",
"Nice, I did\r\n```\r\nload_dataset(\"./datasets/scifact\", \"claims\")\r\n```\r\nand it worked. ",
"One more question about the way the code is being preprocessed. The way I've formatted the data, each entry is a claim, which may be associated with multiple evidence documents (similar to FEVER):\r\n```\r\n# My way\r\n{'id': 70,\r\n 'claim': 'Activation of PPM1D suppresses p53 function.',\r\n 'evidence': {'5956380': [{'sentences': [5, 6], 'label': 'SUPPORT'}],\r\n '4414547': [{'sentences': [5], 'label': 'SUPPORT'}]},\r\n 'cited_doc_ids': [5956380, 4414547]}\r\n```\r\n\r\nIn the Hugginface data, each entry is a single claim / evidence document pair. So, the above entry is converted into two separate entries, like so:\r\n```\r\n# huggingface\r\n[{'cited_doc_ids': [5956380, 4414547],\r\n 'claim': 'Activation of PPM1D suppresses p53 function.',\r\n 'evidence_doc_id': '5956380',\r\n 'evidence_label': 'SUPPORT',\r\n 'evidence_sentences': [5, 6],\r\n 'id': 70},\r\n {'cited_doc_ids': [5956380, 4414547],\r\n 'claim': 'Activation of PPM1D suppresses p53 function.',\r\n 'evidence_doc_id': '4414547',\r\n 'evidence_label': 'SUPPORT',\r\n 'evidence_sentences': [5],\r\n 'id': 70}]\r\n```\r\n\r\nWas this done by design? If not, would you mind if I modify the Huggingface code so that it more closely matches the format that people will get if they download the data from the SciFact repo?",
"Yes if you think the format is not convenient for training or evaluation we can change it.\r\nAlso I think we're doing something similar for FEVER: one example = one (claim, sentence) pair.\r\n\r\nLet's merge this PR first and then feel free to open a new PR to change the format :) ",
"Thanks for merging!\r\n\r\nI don't have super-strong feelings one way or the other in terms of the data, I think it's probably fine. I may revisit later."
] | "2021-01-26T02:49:06Z" | "2021-01-28T18:48:00Z" | "2021-01-28T10:19:45Z" | CONTRIBUTOR | null | Hi,
I'm following up on this [issue](https://github.com/huggingface/datasets/issues/1717). I'm the SciFact dataset creator, and I'm trying to update the SciFact data url in your repo. Thanks again for adding the dataset!
Basically, I'd just like to change the `_URL` to `"https://scifact.s3-us-west-2.amazonaws.com/release/latest/data.tar.gz"`. I changed `scifact.py` appropriately and tried running
```
python datasets-cli test datasets/scifact --save_infos --all_configs
```
which I was hoping would update the `dataset_infos.json` for SciFact. But for some reason the code still seems to be looking for the old version of the dataset. Full stack trace below. I've tried to clear all my Huggingface-related caches, and I've `git grep`'d to make sure that the old path to the dataset isn't floating around somewhere. So I'm not sure why this is happening?
Can you help me switch the download URL?
```
(datasets) $ python datasets-cli test datasets/scifact --save_infos --all_configs
Checking datasets/scifact/scifact.py for additional imports.
Found main folder for dataset datasets/scifact/scifact.py at /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact
Found specific version folder for dataset datasets/scifact/scifact.py at /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534
Found script file from datasets/scifact/scifact.py to /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534/scifact.py
Found dataset infos file from datasets/scifact/dataset_infos.json to /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534/dataset_infos.json
Found metadata file for dataset datasets/scifact/scifact.py at /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534/scifact.json
Loading Dataset Infos from /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534
Testing builder 'corpus' (1/2)
Generating dataset scifact (/Users/dwadden/.cache/huggingface/datasets/scifact/corpus/1.0.0/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534)
Downloading and preparing dataset scifact/corpus (download: 2.72 MiB, generated: 7.63 MiB, post-processed: Unknown size, total: 10.35 MiB) to /Users/dwadden/.cache/huggingface/datasets/scifact/corpus/1.0.0/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534...
Downloading took 0.0 min
Checksum Computation took 0.0 min
Traceback (most recent call last):
File "/Users/dwadden/proj/datasets/datasets-cli", line 36, in <module>
service.run()
File "/Users/dwadden/proj/datasets/src/datasets/commands/test.py", line 139, in run
builder.download_and_prepare(
File "/Users/dwadden/proj/datasets/src/datasets/builder.py", line 562, in download_and_prepare
self._download_and_prepare(
File "/Users/dwadden/proj/datasets/src/datasets/builder.py", line 622, in _download_and_prepare
verify_checksums(
File "/Users/dwadden/proj/datasets/src/datasets/utils/info_utils.py", line 32, in verify_checksums
raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'https://ai2-s2-scifact.s3-us-west-2.amazonaws.com/release/2020-05-01/data.tar.gz'}
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1780/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1780/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1780.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1780",
"merged_at": "2021-01-28T10:19:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1780.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1780"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1779/comments | https://api.github.com/repos/huggingface/datasets/issues/1779/events | https://github.com/huggingface/datasets/pull/1779 | 793,539,703 | MDExOlB1bGxSZXF1ZXN0NTYxMjEwNjI5 | 1,779 | Ignore definition line number of functions for caching | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-01-25T16:42:29Z" | "2021-01-26T10:20:20Z" | "2021-01-26T10:20:19Z" | MEMBER | null | As noticed in #1718, when a function used for processing with `map` is moved within its python file, the change of line number causes the caching mechanism to consider it a different function. Therefore, in this case, it recomputes everything.
This is because we were not ignoring the definition line number for such functions (even though we're doing it for lambda functions).
For example this code currently prints False:
```python
from datasets.fingerprint import Hasher
# define once
def foo(x):
return x
h = Hasher.hash(foo)
# define a second time elsewhere
def foo(x):
return x
print(h == Hasher.hash(foo))
```
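For illustration, the definition line number lives on the function's code object, which is how moving a function can change its hash; below is one possible way to normalize it before hashing (a sketch assuming Python 3.8+ for `CodeType.replace`, and not necessarily the exact fix applied in this PR):
```python
def foo(x):
    return x

print(foo.__code__.co_firstlineno)  # changes whenever the definition moves in the file

# Normalize the stored line number so the hash no longer depends on where foo is defined.
normalized_code = foo.__code__.replace(co_firstlineno=1)
```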
I changed this by ignoring the line number for all functions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1779/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1779/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1779.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1779",
"merged_at": "2021-01-26T10:20:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1779.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1779"
} | true |