url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | active_lock_reason (null) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | draft (bool) | pull_request (dict) | is_pull_request (bool)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4241/comments | https://api.github.com/repos/huggingface/datasets/issues/4241/events | https://github.com/huggingface/datasets/issues/4241 | 1,217,423,686 | I_kwDODunzps5IkGlG | 4,241 | NonMatchingChecksumError when attempting to download GLUE | {
"login": "drussellmrichie",
"id": 9650729,
"node_id": "MDQ6VXNlcjk2NTA3Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9650729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drussellmrichie",
"html_url": "https://github.com/drussellmrichie",
"followers_url": "https://api.github.com/users/drussellmrichie/followers",
"following_url": "https://api.github.com/users/drussellmrichie/following{/other_user}",
"gists_url": "https://api.github.com/users/drussellmrichie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drussellmrichie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drussellmrichie/subscriptions",
"organizations_url": "https://api.github.com/users/drussellmrichie/orgs",
"repos_url": "https://api.github.com/users/drussellmrichie/repos",
"events_url": "https://api.github.com/users/drussellmrichie/events{/privacy}",
"received_events_url": "https://api.github.com/users/drussellmrichie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi :)\r\n\r\nI think your issue may be related to the older `nlp` library. I was able to download `glue` with the latest version of `datasets`. Can you try updating with:\r\n\r\n```py\r\npip install -U datasets\r\n```\r\n\r\nThen you can download:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"glue\", \"rte\")\r\n```",
"This appears to work. Thank you!\n\nOn Wed, Apr 27, 2022, 1:18 PM Steven Liu ***@***.***> wrote:\n\n> Hi :)\n>\n> I think your issue may be related to the older nlp library. I was able to\n> download glue with the latest version of datasets. Can you try updating\n> with:\n>\n> pip install -U datasets\n>\n> Then you can download:\n>\n> from datasets import load_datasetds = load_dataset(\"glue\", \"rte\")\n>\n> β\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/4241#issuecomment-1111267650>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACJUEKLUP2EL7ES3RRWJRPTVHFZHBANCNFSM5UPJBYXA>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n"
] | 2022-04-27T14:14:21 | 2022-04-28T07:45:27 | 2022-04-28T07:45:27 | NONE | null | ## Describe the bug
I am trying to download the GLUE dataset from the NLP module but get an error (see below).
## Steps to reproduce the bug
```python
import nlp
nlp.__version__ # '0.2.0'
nlp.load_dataset('glue', name="rte", download_mode="force_redownload")
```
## Expected results
I expect the dataset to download without an error.
## Actual results
```
INFO:nlp.load:Checking /home/richier/.cache/huggingface/datasets/5fe6ab0df8a32a3371b2e6a969d31d855a19563724fb0d0f163748c270c0ac60.2ea96febf19981fae5f13f0a43d4e2aa58bc619bc23acf06de66675f425a5538.py for additional imports.
INFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue
INFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4
INFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py to /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/glue.py
INFO:nlp.load:Found dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/dataset_infos.json to /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/dataset_infos.json
INFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/glue.json
INFO:nlp.info:Loading Dataset Infos from /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4
INFO:nlp.builder:Generating dataset glue (/home/richier/.cache/huggingface/datasets/glue/rte/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
INFO:nlp.utils.file_utils:Couldn't get ETag version for url https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb
INFO:nlp.utils.file_utils:https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb not found in cache or force_download set to True, downloading to /home/richier/.cache/huggingface/datasets/downloads/tmpldt3n805
Downloading and preparing dataset glue/rte (download: 680.81 KiB, generated: 1.83 MiB, total: 2.49 MiB) to /home/richier/.cache/huggingface/datasets/glue/rte/1.0.0...
Downloading: 100%|██████████| 73.0/73.0 [00:00<00:00, 73.9kB/s]
INFO:nlp.utils.file_utils:storing https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb in cache at /home/richier/.cache/huggingface/datasets/downloads/e8b62ee44e6f8b6aea761935928579ffe1aa55d161808c482e0725abbdcf9c64
INFO:nlp.utils.file_utils:creating metadata file for /home/richier/.cache/huggingface/datasets/downloads/e8b62ee44e6f8b6aea761935928579ffe1aa55d161808c482e0725abbdcf9c64
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-7-669a8343dcc1> in <module>
----> 1 nlp.load_dataset('glue', name="rte", download_mode="force_redownload")
~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
518 download_mode=download_mode,
519 ignore_verifications=ignore_verifications,
--> 520 save_infos=save_infos,
521 )
522
~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
418 verify_infos = not save_infos and not ignore_verifications
419 self._download_and_prepare(
--> 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
421 )
422 # Sync info
~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
458 # Checksums verification
459 if verify_infos:
--> 460 verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums())
461 for split_generator in split_generators:
462 if str(split_generator.split_info.name).lower() == "all":
~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums)
34 bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]]
35 if len(bad_urls) > 0:
---> 36 raise NonMatchingChecksumError(str(bad_urls))
37 logger.info("All the checksums matched successfully.")
38
NonMatchingChecksumError: ['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb']
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-redhat-8.5-Ootpa
- Python version: 3.6.13
- PyArrow version: 6.0.1
- Pandas version: 1.1.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4241/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4240 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4240/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4240/comments | https://api.github.com/repos/huggingface/datasets/issues/4240/events | https://github.com/huggingface/datasets/pull/4240 | 1,217,287,594 | PR_kwDODunzps423xRl | 4,240 | Fix yield for crd3 | {
"login": "shanyas10",
"id": 21066979,
"node_id": "MDQ6VXNlcjIxMDY2OTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/21066979?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shanyas10",
"html_url": "https://github.com/shanyas10",
"followers_url": "https://api.github.com/users/shanyas10/followers",
"following_url": "https://api.github.com/users/shanyas10/following{/other_user}",
"gists_url": "https://api.github.com/users/shanyas10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shanyas10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shanyas10/subscriptions",
"organizations_url": "https://api.github.com/users/shanyas10/orgs",
"repos_url": "https://api.github.com/users/shanyas10/repos",
"events_url": "https://api.github.com/users/shanyas10/events{/privacy}",
"received_events_url": "https://api.github.com/users/shanyas10/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I don't think you need to generate new dummy data, since they're in the same format as the original data.\r\n\r\nThe CI is failing because of this error:\r\n```python\r\n> turn[\"names\"] = turn[\"NAMES\"]\r\nE TypeError: tuple indices must be integers or slices, not str\r\n```\r\n\r\nDo you know what could cause this ? If I understand correctly, `turn` is supposed to be a list of dictionaries right ?",
"> ``` \r\n> \r\n> Do you know what could cause this ? If I understand correctly, turn is supposed to be a list of dictionaries right ?\r\n> ```\r\n\r\nThis is strange. Let me look into this. As per https://github.com/RevanthRameshkumar/CRD3/blob/master/data/aligned%20data/c%3D2/C1E001_2_0.json TURNS is a list of dictionaries. So when we iterate over `row[\"TURNS]\"` each `turn` is essentially a dictionary. Not sure why it's being considered a tuple here."
] | 2022-04-27T12:31:36 | 2022-04-29T12:41:41 | 2022-04-29T12:41:41 | CONTRIBUTOR | null | Modified the `_generate_examples` function to consider all the turns for a chunk id as a single example
Modified the features accordingly
```
"turns": [
{
"names": datasets.features.Sequence(datasets.Value("string")),
"utterances": datasets.features.Sequence(datasets.Value("string")),
"number": datasets.Value("int32"),
}
],
}
```
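A rough sketch of what the grouped yield could look like under these features (the uppercase `NAMES` key comes from the review comments above; `UTTERANCES` and the positional turn number are assumptions for illustration, not taken from the actual script):
```python
# Illustrative sketch only, not the actual crd3 script: yield one example per chunk,
# collecting every turn of that chunk into a single "turns" list.
import json

def _generate_examples(self, filepath):
    with open(filepath, encoding="utf-8") as f:
        rows = json.load(f)
    for id_, row in enumerate(rows):
        yield id_, {
            "turns": [
                {
                    "names": turn["NAMES"],            # list of speaker names
                    "utterances": turn["UTTERANCES"],  # assumed key: list of utterance strings
                    "number": idx,                     # assumed: positional turn number
                }
                for idx, turn in enumerate(row["TURNS"])
            ],
        }
```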
I wasn't able to run `datasets-cli dummy_data datasets` command. Is there a workaround for this? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4240/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4240",
"html_url": "https://github.com/huggingface/datasets/pull/4240",
"diff_url": "https://github.com/huggingface/datasets/pull/4240.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4240.patch",
"merged_at": "2022-04-29T12:41:41"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4239 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4239/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4239/comments | https://api.github.com/repos/huggingface/datasets/issues/4239/events | https://github.com/huggingface/datasets/pull/4239 | 1,217,269,689 | PR_kwDODunzps423tZr | 4,239 | Small fixes in ROC AUC docs | {
"login": "wschella",
"id": 9478856,
"node_id": "MDQ6VXNlcjk0Nzg4NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9478856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wschella",
"html_url": "https://github.com/wschella",
"followers_url": "https://api.github.com/users/wschella/followers",
"following_url": "https://api.github.com/users/wschella/following{/other_user}",
"gists_url": "https://api.github.com/users/wschella/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wschella/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wschella/subscriptions",
"organizations_url": "https://api.github.com/users/wschella/orgs",
"repos_url": "https://api.github.com/users/wschella/repos",
"events_url": "https://api.github.com/users/wschella/events{/privacy}",
"received_events_url": "https://api.github.com/users/wschella/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-27T12:15:50 | 2022-05-02T13:28:57 | 2022-05-02T13:22:03 | CONTRIBUTOR | null | The list of use cases did not render on GitHub with the prepended spacing.
Additionally, some typos were fixed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4239/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4239",
"html_url": "https://github.com/huggingface/datasets/pull/4239",
"diff_url": "https://github.com/huggingface/datasets/pull/4239.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4239.patch",
"merged_at": "2022-05-02T13:22:03"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4238/comments | https://api.github.com/repos/huggingface/datasets/issues/4238/events | https://github.com/huggingface/datasets/issues/4238 | 1,217,168,123 | I_kwDODunzps5IjIL7 | 4,238 | Dataset caching policy | {
"login": "loretoparisi",
"id": 163333,
"node_id": "MDQ6VXNlcjE2MzMzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loretoparisi",
"html_url": "https://github.com/loretoparisi",
"followers_url": "https://api.github.com/users/loretoparisi/followers",
"following_url": "https://api.github.com/users/loretoparisi/following{/other_user}",
"gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions",
"organizations_url": "https://api.github.com/users/loretoparisi/orgs",
"repos_url": "https://api.github.com/users/loretoparisi/repos",
"events_url": "https://api.github.com/users/loretoparisi/events{/privacy}",
"received_events_url": "https://api.github.com/users/loretoparisi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @loretoparisi, thanks for reporting.\r\n\r\nThere is an option to force the redownload of the data files (and thus not using previously download and cached data files): `load_dataset(..., download_mode=\"force_redownload\")`.\r\n\r\nPlease, let me know if this fixes your problem.\r\n\r\nI can confirm you that your dataset loads without any problem for me:\r\n```python\r\nIn [2]: ds = load_dataset(\"loretoparisi/tatoeba-sentences\", data_files={\"train\": \"train.csv\", \"test\": \"test.csv\"}, delimiter=\"\\t\", column_names=['label', 'text'])\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['label', 'text'],\r\n num_rows: 8256449\r\n })\r\n test: Dataset({\r\n features: ['label', 'text'],\r\n num_rows: 2061204\r\n })\r\n})\r\n``` ",
"@albertvillanova thank you, it seems it still does not work using:\r\n\r\n```python\r\nsentences = load_dataset(\r\n \"loretoparisi/tatoeba-sentences\",\r\n data_files=data_files,\r\n delimiter='\\t', \r\n column_names=['label', 'text'],\r\n download_mode=\"force_redownload\"\r\n)\r\n```\r\n[This](https://colab.research.google.com/drive/1EA6FWo5pHxU8rPHHRn24NlHqRPiOlPTr?usp=sharing) is my notebook!\r\n\r\nThe problem is that the download file's revision for `test.csv` is not correctly parsed\r\n\r\n![Schermata 2022-04-27 alle 18 09 41](https://user-images.githubusercontent.com/163333/165563507-0be53eb6-8f61-49b0-b959-306e59281de3.png)\r\n\r\nIf you download that file `test.csv` from the repo, the line `\\\\N` is not there anymore (it was there at the first file upload).\r\n\r\nMy impression is that the Apache Arrow file is still cached - so server side, despite of enabling a forced download. For what I can see I get those two arrow files, but I cannot grep the bad line (`\\\\N`) since are binary files:\r\n\r\n```\r\n!ls -l /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519\r\n!ls -l /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519/csv-test.arrow\r\n!head /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519/dataset_info.json\r\n```\r\n",
"SOLVED! The problem was the with the file itself, using caching parameter helped indeed.\r\nThanks for helping!"
] | 2022-04-27T10:42:11 | 2022-04-27T16:29:25 | 2022-04-27T16:28:50 | NONE | null | ## Describe the bug
I cannot clear the cache of my dataset files, despite having updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error
```
[/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values)
852 if value not in self._str2int:
853 value = str(value).strip()
--> 854 output.append(self._str2int[str(value)])
855 else:
856 # No names provided, try to integerize
KeyError: '\\N'
```
The file is now cleaned up, but I still get the error. This happens even if I inspect the local cached contents and clean up the files locally:
```python
from datasets import load_dataset_builder
dataset_builder = load_dataset_builder("loretoparisi/tatoeba-sentences")
print(dataset_builder.cache_dir)
print(dataset_builder.info.features)
print(dataset_builder.info.splits)
```
```
Using custom data configuration loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd
/root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519
None
None
```
and removing files located at `/root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-*`.
Is there any remote file caching policy in place? If so, is it possible to programmatically disable it?
Currently it seems that the file `test.csv` on the repo [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences/blob/main/test.csv) is cached remotely. In fact, if I download the file locally from the raw link, it is up to date; but if I use it within `datasets` as shown above, I always get the first revision of the file, not the latest one.
Thank you.
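For reference, the maintainer reply in the comments above points to `download_mode="force_redownload"` as the way to bypass previously cached files; a minimal sketch using the same repository and column setup as in this issue:
```python
from datasets import load_dataset

# Sketch: force datasets to re-download and re-process the CSV files instead of
# reusing the locally cached Arrow files (the fix suggested in the comments above).
sentences = load_dataset(
    "loretoparisi/tatoeba-sentences",
    data_files={"train": "train.csv", "test": "test.csv"},
    delimiter="\t",
    column_names=["label", "text"],
    download_mode="force_redownload",
)
```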
## Steps to reproduce the bug
```python
from datasets import load_dataset,Features,Value,ClassLabel
class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"]
features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')})
num_labels = features['label'].num_classes
data_files = { "train": "train.csv", "test": "test.csv" }
sentences = load_dataset(
"loretoparisi/tatoeba-sentences",
data_files=data_files,
delimiter='\t',
column_names=['label', 'text'],
)
# You can make this part faster with num_proc=<some int>
sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features)
sentences = sentences.shuffle()
```
## Expected results
Properly tokenize dataset file `test.csv` without issues.
## Actual results
Specify the actual results or traceback.
```
Downloading data files: 100%
2/2 [00:16<00:00, 7.34s/it]
Downloading data: 100%
391M/391M [00:12<00:00, 36.6MB/s]
Downloading data: 100%
92.4M/92.4M [00:02<00:00, 40.0MB/s]
Extracting data files: 100%
2/2 [00:00<00:00, 47.66it/s]
Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data.
100%
2/2 [00:00<00:00, 25.94it/s]
11%
942339/8256449 [01:55<13:11, 9245.85ex/s]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-3-6a9867fad8d6>](https://localhost:8080/#) in <module>()
12 )
13 # You can make this part faster with num_proc=<some int>
---> 14 sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features)
15 sentences = sentences.shuffle()
10 frames
[/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values)
852 if value not in self._str2int:
853 value = str(value).strip()
--> 854 output.append(self._str2int[str(value)])
855 else:
856 # No names provided, try to integerize
KeyError: '\\N'
```
## Environment info
```
- `datasets` version: 2.1.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
```
```
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4238/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4238/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4237/comments | https://api.github.com/repos/huggingface/datasets/issues/4237/events | https://github.com/huggingface/datasets/issues/4237 | 1,217,121,044 | I_kwDODunzps5Ii8sU | 4,237 | Common Voice 8 doesn't show datasets viewer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Thanks for reporting. I understand it's an error in the dataset script. To reproduce:\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> split_names = ds.get_dataset_split_names(\"mozilla-foundation/common_voice_8_0\", use_auth_token=\"**********\")\r\nDownloading builder script: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 10.9k/10.9k [00:00<00:00, 10.9MB/s]\r\nDownloading extra modules: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2.98k/2.98k [00:00<00:00, 3.36MB/s]\r\nDownloading extra modules: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 53.1k/53.1k [00:00<00:00, 650kB/s]\r\nNo config specified, defaulting to: common_voice/en\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-preview-backend/libs/libmodels/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 280, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_8_0/720589e6e5ad674019008b719053303a71716db1b27e63c9846df02fdf93f2f3/common_voice_8_0.py\", line 153, in _split_generators\r\n self._log_download(self.config.name, bundle_version, hf_auth_token)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_8_0/720589e6e5ad674019008b719053303a71716db1b27e63c9846df02fdf93f2f3/common_voice_8_0.py\", line 139, in _log_download\r\n email = HfApi().whoami(auth_token)[\"email\"]\r\nKeyError: 'email'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/libs/libmodels/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 323, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-preview-backend/libs/libmodels/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 285, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```",
"Thanks for reporting @patrickvonplaten and thanks for the investigation @severo.\r\n\r\nUnfortunately I'm not able to reproduce the error.\r\n\r\nI think the error has to do with authentication with `huggingface_hub`, because the exception is thrown from these code lines: https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/blob/main/common_voice_8_0.py#L137-L139\r\n```python\r\nfrom huggingface_hub import HfApi, HfFolder\r\n\r\nif isinstance(auth_token, bool):\r\n email = HfApi().whoami(auth_token)\r\nemail = HfApi().whoami(auth_token)[\"email\"]\r\n```\r\n\r\nCould you please verify the previous code with the `auth_token` you pass to `load_dataset(..., use_auth_token=auth_token,...`?",
"OK, thanks for digging a bit into it. Indeed, the error occurs with the dataset-viewer, but not with a normal user token, because we use an app token, and it does not have a related email!\r\n\r\n```python\r\n>>> from huggingface_hub import HfApi, HfFolder\r\n>>> auth_token = \"hf_app_******\"\r\n>>> t = HfApi().whoami(auth_token)\r\n>>> t\r\n{'type': 'app', 'name': 'dataset-preview-backend'}\r\n>>> t[\"email\"]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nKeyError: 'email'\r\n```\r\n\r\nNote also that the doc (https://huggingface.co/docs/huggingface_hub/package_reference/hf_api#huggingface_hub.HfApi.whoami) does not state that `whoami` should return an `email` key.\r\n\r\n@SBrandeis @julien-c: do you think the app token should have an email associated, like the users?",
"We can workaround this with\r\n```python\r\nemail = HfApi().whoami(auth_token).get(\"email\", \"[email protected]\")\r\n```\r\nin the common voice scripts",
"Hmmm, does this mean that any person who downloads the common voice dataset will be logged as \"[email protected]\"? If so, it would defeat the purpose of sending the user's email to the commonvoice API, right?",
"I agree with @severo: we cannot set our system email as default, allowing anybody not authenticated to by-pass the Common Voice usage policy.\r\n\r\nAdditionally, looking at the code, I think we should implement a more robust way to send user email to Common Voice: currently anybody can tweak the script and send somebody else email instead.\r\n\r\nCC: @patrickvonplaten @lhoestq @SBrandeis @julien-c ",
"Hmm I don't agree here. \r\n\r\nAnybody can always just bypass the system by setting whatever email. As soon as someone has access to the downloading script it's trivial to tweak the code to not send the \"correct\" email but to just whatever and it would work.\r\n\r\nNote that someone only has visibility on the code after having \"signed\" the access-mechanism so I think we can expect the users to have agreed to not do anything malicious. \r\n\r\nI'm fine with both @lhoestq's solution or we find a way that forces the user to be logged in + being able to load the data for the datasets viewer. Wdyt @lhoestq @severo @albertvillanova ?",
"> Additionally, looking at the code, I think we should implement a more robust way to send user email to Common Voice: currently anybody can tweak the script and send somebody else email instead.\r\n\r\nYes, I agree we can forget about this @patrickvonplaten. After having had a look at Common Voice website, I've seen they only require sending an email (no auth is inplace on their side, contrary to what I had previously thought). Therefore, currently we impose stronger requirements than them: we require the user having logged in and accepted the access mechanism.\r\n\r\nCurrently the script as it is already requires the user being logged in:\r\n```python\r\nHfApi().whoami(auth_token)\r\n```\r\nthrows an exception if None/invalid auth_token is passed.\r\n\r\nOn the other hand, we should agree on the way to allow the viewer to stream the data.",
"The preview is back now, thanks !"
] | 2022-04-27T10:05:20 | 2022-05-10T12:17:05 | 2022-05-10T12:17:04 | MEMBER | null | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4237/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4236/comments | https://api.github.com/repos/huggingface/datasets/issues/4236/events | https://github.com/huggingface/datasets/pull/4236 | 1,217,115,691 | PR_kwDODunzps423MOc | 4,236 | Replace data URL in big_patent dataset and support streaming | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I first uploaded the data files to the Hub: I think it is a good option because we have git lfs to track versions and changes. Moreover people will be able to make PRs to propose updates on the data files.\r\n- I would have preferred to upload it it to the \"data\" org namespace, but it is already taken (although not used): might be possible to take it?\r\n\r\nAs an alternative (and to be consistent with previous datasets), I also uploaded the data files to our AWS bucket.\r\n\r\nWe should decide which to use (now and for future datasets) and set it here before merging. We should remove the data files for the non-chosen option.\r\n\r\nCC: @lhoestq @mariosasko @polinaeterna ",
"Would it make sense to make the dataset a community one (so, create an organization for it) and store the script and the data in a single repository? Just as it is for most of the datasets. That way we can also access the data using a relative path inside the repo (that's not the point though). The point is that to me it seems a bit more straightforward to store everything in one place. \r\n\r\nI guess the strong argument against this logic is that in this case the canonical version won't work... But maybe there is some redirecting mechanism I don't know about? :)\r\n\r\nAnyway, I'm in favor of hosting data on the Hub instead of AWS :) ",
"I also think storing everything in one place/single repository is the best option.\r\n\r\n@polinaeterna Canonical datasets also support data files (see the [`red_caps` repo](https://huggingface.co/datasets/red_caps/tree/main) for instance) ",
"Thanks @polinaeterna and @mariosasko for your comments.\r\n\r\nYes, definitely it is much better to have everything in the same repo. \r\n\r\nI'm transferring their data files to the Hub under \"big_patent\" and deleting them from the other repo and AWS."
] | 2022-04-27T10:01:13 | 2022-06-10T08:10:55 | 2022-05-02T18:21:15 | MEMBER | null | This PR replaces the Google Drive URL with our Hub one, once the data owners have approved to host their data on the Hub.
Moreover, this PR makes the dataset streamable.
Fix #4217. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4236/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4236",
"html_url": "https://github.com/huggingface/datasets/pull/4236",
"diff_url": "https://github.com/huggingface/datasets/pull/4236.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4236.patch",
"merged_at": "2022-05-02T18:21:15"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4235 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4235/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4235/comments | https://api.github.com/repos/huggingface/datasets/issues/4235/events | https://github.com/huggingface/datasets/issues/4235 | 1,216,952,640 | I_kwDODunzps5IiTlA | 4,235 | How to load VERY LARGE dataset? | {
"login": "CaoYiqingT",
"id": 45160643,
"node_id": "MDQ6VXNlcjQ1MTYwNjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/45160643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaoYiqingT",
"html_url": "https://github.com/CaoYiqingT",
"followers_url": "https://api.github.com/users/CaoYiqingT/followers",
"following_url": "https://api.github.com/users/CaoYiqingT/following{/other_user}",
"gists_url": "https://api.github.com/users/CaoYiqingT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaoYiqingT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaoYiqingT/subscriptions",
"organizations_url": "https://api.github.com/users/CaoYiqingT/orgs",
"repos_url": "https://api.github.com/users/CaoYiqingT/repos",
"events_url": "https://api.github.com/users/CaoYiqingT/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaoYiqingT/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"The `Trainer` support `IterableDataset`, not just datasets."
] | 2022-04-27T07:50:13 | 2023-07-25T15:07:57 | 2023-07-25T15:07:57 | NONE | null | ### System Info
```shell
I am using the transformers Trainer when running into this issue.
The Trainer expects a torch.utils.data.Dataset as input, which loads the whole dataset into memory at once. Therefore, when the dataset is too large to load, there's nothing I can do except use an IterableDataset, which loads samples separately and results in low efficiency.
I wonder if there are any tricks like sharding in the Hugging Face Trainer.
Looking forward to your reply.
```
### Who can help?
Trainer: @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
None
### Expected behavior
```shell
I wonder if there are any tricks like fairseq's "Sharding very large datasets" (https://fairseq.readthedocs.io/en/latest/getting_started.html).
Thanks a lot!
```
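One way to obtain the `IterableDataset` mentioned in the reply above is to load data in streaming mode; a minimal sketch (the file name and format below are placeholders, not from this issue):
```python
from datasets import load_dataset

# Sketch: streaming=True returns an IterableDataset that yields examples lazily,
# so the full corpus never has to be materialized in memory. The resulting object
# can be passed to the transformers Trainer, which supports IterableDataset.
stream = load_dataset(
    "json",                                # hypothetical format; not from this issue
    data_files="very_large_corpus.jsonl",  # hypothetical file name
    split="train",
    streaming=True,
)

for i, example in enumerate(stream):
    print(example)
    if i >= 2:
        break
```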
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4235/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4234/comments | https://api.github.com/repos/huggingface/datasets/issues/4234/events | https://github.com/huggingface/datasets/pull/4234 | 1,216,818,846 | PR_kwDODunzps422Mwn | 4,234 | Autoeval config | {
"login": "nazneenrajani",
"id": 3278583,
"node_id": "MDQ6VXNlcjMyNzg1ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3278583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nazneenrajani",
"html_url": "https://github.com/nazneenrajani",
"followers_url": "https://api.github.com/users/nazneenrajani/followers",
"following_url": "https://api.github.com/users/nazneenrajani/following{/other_user}",
"gists_url": "https://api.github.com/users/nazneenrajani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nazneenrajani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nazneenrajani/subscriptions",
"organizations_url": "https://api.github.com/users/nazneenrajani/orgs",
"repos_url": "https://api.github.com/users/nazneenrajani/repos",
"events_url": "https://api.github.com/users/nazneenrajani/events{/privacy}",
"received_events_url": "https://api.github.com/users/nazneenrajani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Related to: https://github.com/huggingface/autonlp-backend/issues/414 and https://github.com/huggingface/autonlp-backend/issues/424",
"The tests are failing due to the changed metadata:\r\n\r\n```\r\ngot an unexpected keyword argument 'train-eval-index'\r\n```\r\n\r\nI think you can fix this by updating the `DatasetMetadata` class and implementing an appropriate `validate_train_eval_index()` function\r\n\r\n@lhoestq we are working with an arbitrary set of tags for `autoeval config`. See https://github.com/huggingface/autonlp-backend/issues/414\r\nI need to add a validator function though for the tests to pass. Our set is not well-defined as in the rest https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources. What's a workaround for this?",
"On the question of validating the `train-eval-index` metadata, I think the simplest approach would be to validate that the required fields exist and not worry about their values (which are open-ended).\r\n\r\nFor me, the required fields include:\r\n\r\n* `config`\r\n* `task`\r\n* `task_id`\r\n* `splits` (train / validation / eval)\r\n* `col_mapping`\r\n* `metrics` (checking that each one has `type`, `name`) \r\n\r\nHere I'm using the spec defined in https://github.com/huggingface/autonlp-backend/issues/414 as a guide.\r\n\r\nWDYT @lhoestq ?",
"Makes sense ! Currently the metadata type validator doesn't support subfields - let me open a PR to add it",
"I ended up improving the metadata validation in this PR x)\r\n\r\nIn particular:\r\n- I added support YAML keys with dashes instead of underscores for `train-eval-index`\r\n- I added `train-eval-index` validation with `validate_train_eval_index`. It does nothing fancy, it just checks that it is a list if it exists in the YAML, but feel free to improve it if you want\r\n\r\nLet me know if it sounds good to you ! I think we can improve `validate_train_eval_index` in another PR",
"Come on windows... I didn't do anything advanced...\r\n\r\nAnyway, will try to fix this when I get back home x)",
"> Come on windows... I didn't do anything advanced...\r\n> \r\n> Anyway, will try to fix this when I get back home x)\r\n\r\nHehe, thanks!",
"Thanks, @lhoestq this is great! ",
"Did I just fix it for windows and now it fails on linux ? xD",
"> Did I just fix it for windows and now it fails on linux ? xD\r\n\r\nLooks like the Heisenberg uncertainty principle is at play here - you cannot simultaneously have unit tests passing in both Linux and Windows π
",
"The worst is that the tests pass locally both on my windows and my linux x)",
"Ok fixed it, the issue came from python 3.6 that doesn't return the right `__origin__` for Dict and List types",
"> Alright thanks for adding the first Autoeval config ! :D\r\n\r\nWoohoo! Thank you so much π€ ",
"This is cool!"
] | 2022-04-27T05:32:10 | 2022-05-06T13:20:31 | 2022-05-05T18:20:58 | CONTRIBUTOR | null | Added autoeval config to imdb as pilot | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4234/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4234",
"html_url": "https://github.com/huggingface/datasets/pull/4234",
"diff_url": "https://github.com/huggingface/datasets/pull/4234.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4234.patch",
"merged_at": "2022-05-05T18:20:58"
} | true |
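As context for the `train-eval-index` validation discussed in the PR 4234 thread above, a minimal sketch of the kind of check described there: verify that the required fields exist without constraining their values (a hypothetical helper, not the actual `DatasetMetadata` implementation; required fields and the per-metric `type`/`name` check follow the list given in the comments).
```python
# Hypothetical helper, not the actual DatasetMetadata code: validate that each
# train-eval-index entry carries the required fields, without constraining values.
REQUIRED_FIELDS = {"config", "task", "task_id", "splits", "col_mapping", "metrics"}

def validate_train_eval_index(train_eval_index):
    if not isinstance(train_eval_index, list):
        raise TypeError("train-eval-index must be a list of mappings")
    for entry in train_eval_index:
        missing = REQUIRED_FIELDS - set(entry)
        if missing:
            raise ValueError(f"train-eval-index entry is missing fields: {sorted(missing)}")
        for metric in entry["metrics"]:
            if not {"type", "name"} <= set(metric):
                raise ValueError("each metric needs at least 'type' and 'name'")
```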
https://api.github.com/repos/huggingface/datasets/issues/4233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4233/comments | https://api.github.com/repos/huggingface/datasets/issues/4233/events | https://github.com/huggingface/datasets/pull/4233 | 1,216,665,044 | PR_kwDODunzps421r-6 | 4,233 | Autoeval | {
"login": "nazneenrajani",
"id": 3278583,
"node_id": "MDQ6VXNlcjMyNzg1ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3278583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nazneenrajani",
"html_url": "https://github.com/nazneenrajani",
"followers_url": "https://api.github.com/users/nazneenrajani/followers",
"following_url": "https://api.github.com/users/nazneenrajani/following{/other_user}",
"gists_url": "https://api.github.com/users/nazneenrajani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nazneenrajani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nazneenrajani/subscriptions",
"organizations_url": "https://api.github.com/users/nazneenrajani/orgs",
"repos_url": "https://api.github.com/users/nazneenrajani/repos",
"events_url": "https://api.github.com/users/nazneenrajani/events{/privacy}",
"received_events_url": "https://api.github.com/users/nazneenrajani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4233). All of your documentation changes will be reflected on that endpoint."
] | 2022-04-27T01:32:09 | 2022-04-27T05:29:30 | 2022-04-27T01:32:23 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4233/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4233",
"html_url": "https://github.com/huggingface/datasets/pull/4233",
"diff_url": "https://github.com/huggingface/datasets/pull/4233.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4233.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4232/comments | https://api.github.com/repos/huggingface/datasets/issues/4232/events | https://github.com/huggingface/datasets/pull/4232 | 1,216,659,444 | PR_kwDODunzps421qz4 | 4,232 | adding new tag to tasks.json and modified for existing datasets | {
"login": "nazneenrajani",
"id": 3278583,
"node_id": "MDQ6VXNlcjMyNzg1ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3278583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nazneenrajani",
"html_url": "https://github.com/nazneenrajani",
"followers_url": "https://api.github.com/users/nazneenrajani/followers",
"following_url": "https://api.github.com/users/nazneenrajani/following{/other_user}",
"gists_url": "https://api.github.com/users/nazneenrajani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nazneenrajani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nazneenrajani/subscriptions",
"organizations_url": "https://api.github.com/users/nazneenrajani/orgs",
"repos_url": "https://api.github.com/users/nazneenrajani/repos",
"events_url": "https://api.github.com/users/nazneenrajani/events{/privacy}",
"received_events_url": "https://api.github.com/users/nazneenrajani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"closing in favor of https://github.com/huggingface/datasets/pull/4244"
] | 2022-04-27T01:21:09 | 2022-05-03T14:23:56 | 2022-05-03T14:16:39 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4232/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4232",
"html_url": "https://github.com/huggingface/datasets/pull/4232",
"diff_url": "https://github.com/huggingface/datasets/pull/4232.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4232.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4231/comments | https://api.github.com/repos/huggingface/datasets/issues/4231/events | https://github.com/huggingface/datasets/pull/4231 | 1,216,651,960 | PR_kwDODunzps421pUX | 4,231 | Fix invalid url to CC-Aligned dataset | {
"login": "juntang-zhuang",
"id": 44451229,
"node_id": "MDQ6VXNlcjQ0NDUxMjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/44451229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juntang-zhuang",
"html_url": "https://github.com/juntang-zhuang",
"followers_url": "https://api.github.com/users/juntang-zhuang/followers",
"following_url": "https://api.github.com/users/juntang-zhuang/following{/other_user}",
"gists_url": "https://api.github.com/users/juntang-zhuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juntang-zhuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juntang-zhuang/subscriptions",
"organizations_url": "https://api.github.com/users/juntang-zhuang/orgs",
"repos_url": "https://api.github.com/users/juntang-zhuang/repos",
"events_url": "https://api.github.com/users/juntang-zhuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/juntang-zhuang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-27T01:07:01 | 2022-05-16T17:01:13 | 2022-05-16T16:53:12 | CONTRIBUTOR | null | The CC-Aligned dataset URL has changed to https://data.statmt.org/cc-aligned/; the old address http://www.statmt.org/cc-aligned/ is no longer valid. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4231/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4231",
"html_url": "https://github.com/huggingface/datasets/pull/4231",
"diff_url": "https://github.com/huggingface/datasets/pull/4231.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4231.patch",
"merged_at": "2022-05-16T16:53:12"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4230/comments | https://api.github.com/repos/huggingface/datasets/issues/4230/events | https://github.com/huggingface/datasets/issues/4230 | 1,216,643,661 | I_kwDODunzps5IhIJN | 4,230 | Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data? | {
"login": "beyondguo",
"id": 37113676,
"node_id": "MDQ6VXNlcjM3MTEzNjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/beyondguo",
"html_url": "https://github.com/beyondguo",
"followers_url": "https://api.github.com/users/beyondguo/followers",
"following_url": "https://api.github.com/users/beyondguo/following{/other_user}",
"gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions",
"organizations_url": "https://api.github.com/users/beyondguo/orgs",
"repos_url": "https://api.github.com/users/beyondguo/repos",
"events_url": "https://api.github.com/users/beyondguo/events{/privacy}",
"received_events_url": "https://api.github.com/users/beyondguo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Thanks for reporting @beyondguo.\r\n\r\nIndeed, we generate this dataset from this raw data file URL: https://data.deepai.org/conll2003.zip\r\nAnd that URL only contains the English version.",
"The German data requires payment\r\n\r\nThe [original task page](https://www.clips.uantwerpen.be/conll2003/ner/) states \"The German data is a collection of articles from the Frankfurter Rundschau. The named entities have been annotated by people of the University of Antwerp. Only the annotations are available here. In order to build these data sets you need access to the ECI Multilingual Text Corpus. It can be ordered from the Linguistic Data Consortium (2003 non-member price: US$ 35.00).\"\r\n\r\nInflation since 2003 has also affected LDC's prices, and today the dataset [LDC94T5](https://catalog.ldc.upenn.edu/LDC94T5) is available under license for $75 a copy. The [license](https://catalog.ldc.upenn.edu/license/eci-slash-mci-user-agreement.pdf) includes a non-distribution condition, which is probably why the data has not turned up openly.\r\n\r\nThe ACL hold copyright of this data; I'll mail them and anyone I can find at ECI to see if they'll open this up now. After all, it worked with Microsoft 3DMM, why not here too, after 28 years? :)\r\n",
"Closing this issue as we are not allowed to share publicly the German subset."
] | 2022-04-27T00:53:52 | 2023-07-25T15:10:15 | 2023-07-25T15:10:15 | NONE | null | ![image](https://user-images.githubusercontent.com/37113676/165416606-96b5db18-b16c-4b6b-928c-de8620fd943e.png)
But on huggingface datasets:
![image](https://user-images.githubusercontent.com/37113676/165416649-8fd77980-ca0d-43f0-935e-f398ba8323a4.png)
Where is the German data? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4230/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4229/comments | https://api.github.com/repos/huggingface/datasets/issues/4229/events | https://github.com/huggingface/datasets/pull/4229 | 1,216,638,968 | PR_kwDODunzps421mjM | 4,229 | new task tag | {
"login": "nazneenrajani",
"id": 3278583,
"node_id": "MDQ6VXNlcjMyNzg1ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3278583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nazneenrajani",
"html_url": "https://github.com/nazneenrajani",
"followers_url": "https://api.github.com/users/nazneenrajani/followers",
"following_url": "https://api.github.com/users/nazneenrajani/following{/other_user}",
"gists_url": "https://api.github.com/users/nazneenrajani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nazneenrajani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nazneenrajani/subscriptions",
"organizations_url": "https://api.github.com/users/nazneenrajani/orgs",
"repos_url": "https://api.github.com/users/nazneenrajani/repos",
"events_url": "https://api.github.com/users/nazneenrajani/events{/privacy}",
"received_events_url": "https://api.github.com/users/nazneenrajani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-04-27T00:47:08 | 2022-04-27T00:48:28 | 2022-04-27T00:48:17 | CONTRIBUTOR | null | multi-input-text-classification tag for classification datasets that take more than one input | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4229/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4229",
"html_url": "https://github.com/huggingface/datasets/pull/4229",
"diff_url": "https://github.com/huggingface/datasets/pull/4229.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4229.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4228/comments | https://api.github.com/repos/huggingface/datasets/issues/4228/events | https://github.com/huggingface/datasets/pull/4228 | 1,216,523,043 | PR_kwDODunzps421NKL | 4,228 | new task tag | {
"login": "nazneenrajani",
"id": 3278583,
"node_id": "MDQ6VXNlcjMyNzg1ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3278583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nazneenrajani",
"html_url": "https://github.com/nazneenrajani",
"followers_url": "https://api.github.com/users/nazneenrajani/followers",
"following_url": "https://api.github.com/users/nazneenrajani/following{/other_user}",
"gists_url": "https://api.github.com/users/nazneenrajani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nazneenrajani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nazneenrajani/subscriptions",
"organizations_url": "https://api.github.com/users/nazneenrajani/orgs",
"repos_url": "https://api.github.com/users/nazneenrajani/repos",
"events_url": "https://api.github.com/users/nazneenrajani/events{/privacy}",
"received_events_url": "https://api.github.com/users/nazneenrajani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-04-26T22:00:33 | 2022-04-27T00:48:31 | 2022-04-27T00:46:31 | CONTRIBUTOR | null | multi-input-text-classification tag for classification datasets that take more than one input | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4228/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4228",
"html_url": "https://github.com/huggingface/datasets/pull/4228",
"diff_url": "https://github.com/huggingface/datasets/pull/4228.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4228.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4227/comments | https://api.github.com/repos/huggingface/datasets/issues/4227/events | https://github.com/huggingface/datasets/pull/4227 | 1,216,455,316 | PR_kwDODunzps420-mc | 4,227 | Add f1 metric card, update docstring in py file | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-26T20:41:03 | 2022-05-03T12:50:23 | 2022-05-03T12:43:33 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4227/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4227",
"html_url": "https://github.com/huggingface/datasets/pull/4227",
"diff_url": "https://github.com/huggingface/datasets/pull/4227.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4227.patch",
"merged_at": "2022-05-03T12:43:33"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4226/comments | https://api.github.com/repos/huggingface/datasets/issues/4226/events | https://github.com/huggingface/datasets/pull/4226 | 1,216,331,073 | PR_kwDODunzps420kAv | 4,226 | Add pearsonr mc, update functionality to match the original docs | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"thank you @lhoestq!! :hugs: "
] | 2022-04-26T18:30:46 | 2022-05-03T17:09:24 | 2022-05-03T17:02:28 | CONTRIBUTOR | null | - adds pearsonr metric card
- adds ability to return p-value
- p-value was mentioned in the original docs as a return value, but there was no option to return it. I updated the _compute function slightly to have an option to return the p-value. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4226/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4226",
"html_url": "https://github.com/huggingface/datasets/pull/4226",
"diff_url": "https://github.com/huggingface/datasets/pull/4226.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4226.patch",
"merged_at": "2022-05-03T17:02:28"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4225 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4225/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4225/comments | https://api.github.com/repos/huggingface/datasets/issues/4225/events | https://github.com/huggingface/datasets/pull/4225 | 1,216,213,464 | PR_kwDODunzps420LNM | 4,225 | autoeval config | {
"login": "nazneenrajani",
"id": 3278583,
"node_id": "MDQ6VXNlcjMyNzg1ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3278583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nazneenrajani",
"html_url": "https://github.com/nazneenrajani",
"followers_url": "https://api.github.com/users/nazneenrajani/followers",
"following_url": "https://api.github.com/users/nazneenrajani/following{/other_user}",
"gists_url": "https://api.github.com/users/nazneenrajani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nazneenrajani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nazneenrajani/subscriptions",
"organizations_url": "https://api.github.com/users/nazneenrajani/orgs",
"repos_url": "https://api.github.com/users/nazneenrajani/repos",
"events_url": "https://api.github.com/users/nazneenrajani/events{/privacy}",
"received_events_url": "https://api.github.com/users/nazneenrajani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-04-26T16:38:34 | 2022-04-27T00:48:31 | 2022-04-26T22:00:26 | CONTRIBUTOR | null | add train eval index for autoeval | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4225/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4225",
"html_url": "https://github.com/huggingface/datasets/pull/4225",
"diff_url": "https://github.com/huggingface/datasets/pull/4225.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4225.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4224/comments | https://api.github.com/repos/huggingface/datasets/issues/4224/events | https://github.com/huggingface/datasets/pull/4224 | 1,216,209,667 | PR_kwDODunzps420KX2 | 4,224 | autoeval config | {
"login": "nazneenrajani",
"id": 3278583,
"node_id": "MDQ6VXNlcjMyNzg1ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3278583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nazneenrajani",
"html_url": "https://github.com/nazneenrajani",
"followers_url": "https://api.github.com/users/nazneenrajani/followers",
"following_url": "https://api.github.com/users/nazneenrajani/following{/other_user}",
"gists_url": "https://api.github.com/users/nazneenrajani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nazneenrajani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nazneenrajani/subscriptions",
"organizations_url": "https://api.github.com/users/nazneenrajani/orgs",
"repos_url": "https://api.github.com/users/nazneenrajani/repos",
"events_url": "https://api.github.com/users/nazneenrajani/events{/privacy}",
"received_events_url": "https://api.github.com/users/nazneenrajani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-04-26T16:35:19 | 2022-04-26T16:36:45 | 2022-04-26T16:36:45 | CONTRIBUTOR | null | add train eval index for autoeval | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4224/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4224",
"html_url": "https://github.com/huggingface/datasets/pull/4224",
"diff_url": "https://github.com/huggingface/datasets/pull/4224.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4224.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4223/comments | https://api.github.com/repos/huggingface/datasets/issues/4223/events | https://github.com/huggingface/datasets/pull/4223 | 1,216,107,082 | PR_kwDODunzps42z0YV | 4,223 | Add Accuracy Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-26T15:10:46 | 2022-05-03T14:27:45 | 2022-05-03T14:20:47 | CONTRIBUTOR | null | - adds accuracy metric card
- updates docstring in accuracy.py
- adds .json file with metric card and docstring information | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4223/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4223",
"html_url": "https://github.com/huggingface/datasets/pull/4223",
"diff_url": "https://github.com/huggingface/datasets/pull/4223.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4223.patch",
"merged_at": "2022-05-03T14:20:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4222/comments | https://api.github.com/repos/huggingface/datasets/issues/4222/events | https://github.com/huggingface/datasets/pull/4222 | 1,216,056,439 | PR_kwDODunzps42zpcd | 4,222 | Fix description links in dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Non passing tests are due to other pre-existing errors in dataset cards: not related to this PR."
] | 2022-04-26T14:36:25 | 2022-05-06T08:38:38 | 2022-04-26T16:52:29 | MEMBER | null | I noticed many links were not properly displayed (only text, no link) on the Hub because of wrong syntax, e.g.: https://huggingface.co/datasets/big_patent
This PR fixes all description links in dataset cards. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4222/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4222",
"html_url": "https://github.com/huggingface/datasets/pull/4222",
"diff_url": "https://github.com/huggingface/datasets/pull/4222.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4222.patch",
"merged_at": "2022-04-26T16:52:29"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4221/comments | https://api.github.com/repos/huggingface/datasets/issues/4221/events | https://github.com/huggingface/datasets/issues/4221 | 1,215,911,182 | I_kwDODunzps5IeVUO | 4,221 | Dictionary Feature | {
"login": "jordiae",
"id": 2944532,
"node_id": "MDQ6VXNlcjI5NDQ1MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2944532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jordiae",
"html_url": "https://github.com/jordiae",
"followers_url": "https://api.github.com/users/jordiae/followers",
"following_url": "https://api.github.com/users/jordiae/following{/other_user}",
"gists_url": "https://api.github.com/users/jordiae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jordiae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jordiae/subscriptions",
"organizations_url": "https://api.github.com/users/jordiae/orgs",
"repos_url": "https://api.github.com/users/jordiae/repos",
"events_url": "https://api.github.com/users/jordiae/events{/privacy}",
"received_events_url": "https://api.github.com/users/jordiae/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @jordiae,\r\n\r\nInstead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:\r\n```python\r\n\"list_of_dict_feature\": [\r\n {\r\n \"key1_in_dict\": datasets.Value(\"string\"),\r\n \"key2_in_dict\": datasets.Value(\"int32\"),\r\n ...\r\n }\r\n],\r\n```\r\n\r\nFeel free to re-open this issue if that does not work for your use case.",
"> Hi @jordiae,\r\n> \r\n> Instead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:\r\n> \r\n> ```python\r\n> \"list_of_dict_feature\": [\r\n> {\r\n> \"key1_in_dict\": datasets.Value(\"string\"),\r\n> \"key2_in_dict\": datasets.Value(\"int32\"),\r\n> ...\r\n> }\r\n> ],\r\n> ```\r\n> \r\n> Feel free to re-open this issue if that does not work for your use case.\r\n\r\nThank you"
] | 2022-04-26T12:50:18 | 2022-04-29T14:52:19 | 2022-04-28T17:04:58 | NONE | null | Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well with the values and structures supported by Value and Sequence. Is there any suggested workaround, or am I missing something?
Thank you in advance. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4221/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4220/comments | https://api.github.com/repos/huggingface/datasets/issues/4220/events | https://github.com/huggingface/datasets/pull/4220 | 1,215,225,802 | PR_kwDODunzps42w5YO | 4,220 | Altered faiss installation comment | {
"login": "vishalsrao",
"id": 36671559,
"node_id": "MDQ6VXNlcjM2NjcxNTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishalsrao",
"html_url": "https://github.com/vishalsrao",
"followers_url": "https://api.github.com/users/vishalsrao/followers",
"following_url": "https://api.github.com/users/vishalsrao/following{/other_user}",
"gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions",
"organizations_url": "https://api.github.com/users/vishalsrao/orgs",
"repos_url": "https://api.github.com/users/vishalsrao/repos",
"events_url": "https://api.github.com/users/vishalsrao/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishalsrao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi ! Can you explain why this change is needed ?",
"Facebook recommends installing FAISS using conda (https://github.com/facebookresearch/faiss/blob/main/INSTALL.md). pip does not seem to have the latest version of FAISS. The latest version of faiss is 1.7.2 (https://anaconda.org/conda-forge/faiss), but the latest one available through pip is 1.5.3 (https://pypi.org/project/faiss/). "
] | 2022-04-26T01:20:43 | 2022-05-09T17:29:34 | 2022-05-09T17:22:09 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4220/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4220",
"html_url": "https://github.com/huggingface/datasets/pull/4220",
"diff_url": "https://github.com/huggingface/datasets/pull/4220.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4220.patch",
"merged_at": "2022-05-09T17:22:09"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4219/comments | https://api.github.com/repos/huggingface/datasets/issues/4219/events | https://github.com/huggingface/datasets/pull/4219 | 1,214,934,025 | PR_kwDODunzps42v6rE | 4,219 | Add F1 Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-25T19:14:56 | 2022-04-26T20:44:18 | 2022-04-26T20:37:46 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4219/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4219",
"html_url": "https://github.com/huggingface/datasets/pull/4219",
"diff_url": "https://github.com/huggingface/datasets/pull/4219.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4219.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4218/comments | https://api.github.com/repos/huggingface/datasets/issues/4218/events | https://github.com/huggingface/datasets/pull/4218 | 1,214,748,226 | PR_kwDODunzps42vTA0 | 4,218 | Make code for image downloading from image urls cacheable | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-25T16:17:59 | 2022-04-26T17:00:24 | 2022-04-26T13:38:26 | CONTRIBUTOR | null | Fix #4199 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4218/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4218",
"html_url": "https://github.com/huggingface/datasets/pull/4218",
"diff_url": "https://github.com/huggingface/datasets/pull/4218.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4218.patch",
"merged_at": "2022-04-26T13:38:26"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4217/comments | https://api.github.com/repos/huggingface/datasets/issues/4217/events | https://github.com/huggingface/datasets/issues/4217 | 1,214,688,141 | I_kwDODunzps5IZquN | 4,217 | Big_Patent dataset broken | {
"login": "Matthew-Larsen",
"id": 54189843,
"node_id": "MDQ6VXNlcjU0MTg5ODQz",
"avatar_url": "https://avatars.githubusercontent.com/u/54189843?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Matthew-Larsen",
"html_url": "https://github.com/Matthew-Larsen",
"followers_url": "https://api.github.com/users/Matthew-Larsen/followers",
"following_url": "https://api.github.com/users/Matthew-Larsen/following{/other_user}",
"gists_url": "https://api.github.com/users/Matthew-Larsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Matthew-Larsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Matthew-Larsen/subscriptions",
"organizations_url": "https://api.github.com/users/Matthew-Larsen/orgs",
"repos_url": "https://api.github.com/users/Matthew-Larsen/repos",
"events_url": "https://api.github.com/users/Matthew-Larsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/Matthew-Larsen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4069435429,
"node_id": "LA_kwDODunzps7yjqgl",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hosted-on-google-drive",
"name": "hosted-on-google-drive",
"color": "8B51EF",
"default": false,
"description": ""
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting. The issue seems not to be directly related to the dataset viewer or the `datasets` library, but instead to it being hosted on Google Drive.\r\n\r\nSee related issues: https://github.com/huggingface/datasets/issues?q=is%3Aissue+is%3Aopen+drive.google.com\r\n\r\nTo quote [@lhoestq](https://github.com/huggingface/datasets/issues/4075#issuecomment-1087362551):\r\n\r\n> PS: if possible, please try to not use Google Drive links in your dataset script, since Google Drive has download quotas and is not always reliable.\r\n\r\n",
"We should find out if the dataset license allows redistribution and contact the data owners to propose them to host their data on our Hub.",
"The data owners have agreed on hosting their data on the Hub."
] | 2022-04-25T15:31:45 | 2022-05-26T06:29:43 | 2022-05-02T18:21:15 | NONE | null | ## Dataset viewer issue for '*big_patent*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)*
*Unable to view because it says FileNotFound; also cannot download it through the Python API*
Am I the one who added this dataset? No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4217/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4216 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4216/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4216/comments | https://api.github.com/repos/huggingface/datasets/issues/4216/events | https://github.com/huggingface/datasets/pull/4216 | 1,214,614,029 | PR_kwDODunzps42u1_w | 4,216 | Avoid recursion error in map if example is returned as dict value | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-25T14:40:32 | 2022-05-04T17:20:06 | 2022-05-04T17:12:52 | CONTRIBUTOR | null | I noticed this bug while answering [this question](https://discuss.huggingface.co/t/correct-way-to-create-a-dataset-from-a-csv-file/15686/11?u=mariosasko).
This code replicates the bug:
```python
from datasets import Dataset
dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]})
dset.map(lambda ex: {"translation": ex})
```
and this is the fix for it (before this PR):
```python
from datasets import Dataset
dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]})
dset.map(lambda ex: {"translation": dict(ex)})
```
Internally, this can be fixed by merging two dicts via dict unpacking (instead of `dict.update`) in `Dataset.map`, which avoids creating recursive dictionaries.
P.S. `{**a, **b}` is slightly more performant than `a.update(b)` in my benchmarks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4216/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4216",
"html_url": "https://github.com/huggingface/datasets/pull/4216",
"diff_url": "https://github.com/huggingface/datasets/pull/4216.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4216.patch",
"merged_at": "2022-05-04T17:12:52"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4215 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4215/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4215/comments | https://api.github.com/repos/huggingface/datasets/issues/4215/events | https://github.com/huggingface/datasets/pull/4215 | 1,214,579,162 | PR_kwDODunzps42uuhY | 4,215 | Add `drop_last_batch` to `IterableDataset.map` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-25T14:15:19 | 2022-05-03T15:56:07 | 2022-05-03T15:48:54 | CONTRIBUTOR | null | Addresses this comment: https://github.com/huggingface/datasets/pull/3801#pullrequestreview-901736921 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4215/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4215",
"html_url": "https://github.com/huggingface/datasets/pull/4215",
"diff_url": "https://github.com/huggingface/datasets/pull/4215.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4215.patch",
"merged_at": "2022-05-03T15:48:54"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4214/comments | https://api.github.com/repos/huggingface/datasets/issues/4214/events | https://github.com/huggingface/datasets/pull/4214 | 1,214,572,430 | PR_kwDODunzps42utC5 | 4,214 | Skip checksum computation in Imagefolder by default | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-25T14:10:41 | 2022-05-03T15:28:32 | 2022-05-03T15:21:29 | CONTRIBUTOR | null | Avoids having to set `ignore_verifications=True` in `load_dataset("imagefolder", ...)` to skip checksum verification and speed up loading.
The user can still pass `DownloadConfig(record_checksums=True)` to not skip this part. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4214/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4214",
"html_url": "https://github.com/huggingface/datasets/pull/4214",
"diff_url": "https://github.com/huggingface/datasets/pull/4214.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4214.patch",
"merged_at": "2022-05-03T15:21:29"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4213/comments | https://api.github.com/repos/huggingface/datasets/issues/4213/events | https://github.com/huggingface/datasets/pull/4213 | 1,214,510,010 | PR_kwDODunzps42uft_ | 4,213 | ETT time series dataset | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"thank you!\r\n"
] | 2022-04-25T13:26:18 | 2022-05-05T12:19:21 | 2022-05-05T12:10:35 | CONTRIBUTOR | null | Ready for review. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4213/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4213",
"html_url": "https://github.com/huggingface/datasets/pull/4213",
"diff_url": "https://github.com/huggingface/datasets/pull/4213.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4213.patch",
"merged_at": "2022-05-05T12:10:35"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4212/comments | https://api.github.com/repos/huggingface/datasets/issues/4212/events | https://github.com/huggingface/datasets/pull/4212 | 1,214,498,582 | PR_kwDODunzps42udRf | 4,212 | [Common Voice] Make sure bytes are correctly deleted if `path` exists | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"cool that you noticed that we store unnecessary bytes again :D "
] | 2022-04-25T13:18:26 | 2022-04-26T22:54:28 | 2022-04-26T22:48:27 | MEMBER | null | `path` should be set to local path inside audio feature if exist so that bytes can correctly be deleted. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4212/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4212",
"html_url": "https://github.com/huggingface/datasets/pull/4212",
"diff_url": "https://github.com/huggingface/datasets/pull/4212.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4212.patch",
"merged_at": "2022-04-26T22:48:27"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4211/comments | https://api.github.com/repos/huggingface/datasets/issues/4211/events | https://github.com/huggingface/datasets/issues/4211 | 1,214,361,837 | I_kwDODunzps5IYbDt | 4,211 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features | {
"login": "pietrolesci",
"id": 61748653,
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pietrolesci",
"html_url": "https://github.com/pietrolesci",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @pietrolesci, thanks for reporting.\r\n\r\nPlease note that this is a design purpose: a `DatasetDict` has the same features for all its datasets. Normally, a `DatasetDict` is composed of several sub-datasets each corresponding to a different **split**.\r\n\r\nTo handle sub-datasets with different features, we use another approach: use different **configurations** instead of **splits**.\r\n\r\nHowever, for the moment `push_to_hub` does not support specifying different configurations. IMHO, we should implement this.",
"Hi @albertvillanova,\r\n\r\nThanks a lot for your reply! I got it now. The strange thing for me was to have it correctly working (i.e., DatasetDict with different features in some datasets) locally and not on the Hub. It would be great to have configuration supported by `push_to_hub`. Personally, this latter functionality allowed me to iterate rather quickly on dataset curation.\r\n\r\nAgain, thanks for your time @albertvillanova!\r\n\r\nBest,\r\nPietro",
"Hi! Yes, we should override `DatasetDict.__setitem__` and throw an error if features dictionaries are different. `DatasetDict` is a subclass of `dict`, so `DatasetDict.{update/setdefault}` need to be overridden as well. We could avoid this by subclassing `UserDict`, but then we would get the name collision - `DatasetDict.data` vs. `UserDict.data`. This makes me think we should rename the `data` attribute of `DatasetDict`/`Dataset` for easier dict subclassing (would also simplify https://github.com/huggingface/datasets/pull/3997) and to follow good Python practices. Another option is to have a custom `UserDict` class in `py_utils`, but it can be hard to keep this class consistent with the built-in `UserDict`. \r\n\r\n@albertvillanova @lhoestq wdyt?",
"I would keep things simple and keep subclassing dict. Regarding the features check, I guess this can be done only for `push_to_hub` right ? It is the only function right now that requires the underlying datasets to be splits (e.g. train/test) and have the same features.\r\n\r\nNote that later you will be able to push datasets with different features as different dataset **configurations** (similarly to the [GLUE subsets](https://huggingface.co/datasets/glue) for example). We will work on this soon",
"Hi @lhoestq,\r\n\r\nReturning to this thread to ask whether the possibility to create `DatasetDict` with different configurations will be supported in the future.\r\n\r\nBest,\r\nPietro",
"DatasetDict is likely to always require the datasets to have the same columns and types, while different configurations may have different columns and types.\r\n\r\nWhy would you like to see that ?\r\nIf it's related to push_to_hub, we plan to allow pushing several configs, but not using DatasetDict",
"Hi @lhoestq and @pietrolesci,\r\n\r\nI have been curious about this question as well. I don't have experience working with different configurations, but I can give a bit more detail on the work flow that I have been using with `Dataset_dict`.\r\n\r\nAs @pietrolesci mentions, I have been using `push_to_hub` to quickly iterate on dataset curation for different ML experiments - locally I create a set of dataset splits e.g. `train/val/test/inference`, then convert them to `HF_Datasets` and finally a to `Dataset_Dict` to `push_to_hub`. Where I have run into issues is when I want to include different metadata for different splits. For example, I have situations where I only have meta-data for one of the splits (e.g. test) or situations where I am working with `inference` data that does not have labels. Currently I use a rather hacky work around by adding \"dummy\" columns for missing columns to avoid the error:\r\n\r\n```\r\nValueError: All datasets in `DatasetDict` should have the same features\r\n```\r\n\r\nI am curious why `DatasetDict` will likely not support this functionality? I don't know much about working with different configurations, but allowing for different columns between datasets / splits would be a very helpful use-case for me. Are there any docs for using different configuration OR a more info about incorporating it with `push_to_hub`.\r\n\r\nBest wishes,\r\nJonathan\r\n\r\n",
"+1",
"> I am curious why DatasetDict will likely not support this functionality?\r\n\r\nThere's a possibility we may merge the Dataset and DatasetDict classes. The DatasetDict purpose was to define a way to get the train/test splits of a dataset.\r\n\r\nsee the discussions at https://github.com/huggingface/datasets/issues/5189\r\n\r\n> Are there any docs for using different configuration OR a more info about incorporating it with push_to_hub.\r\n\r\nThere's a PR open to allow to upload a dataset with a certain configuration name. Then later you can reload this specific configuration using `load_dataset(ds_name, config_name)`\r\n\r\nsee the PR at https://github.com/huggingface/datasets/pull/5213",
"Hi, regarding the following information:\r\n\r\n> Please note that this is a design purpose: a `DatasetDict` has the same features for all its datasets. Normally, a `DatasetDict` is composed of several sub-datasets each corresponding to a different **split**.\r\n> \r\n> To handle sub-datasets with different features, we use another approach: use different **configurations** instead of **splits**.\r\n\r\nAltough this is often implied (such as how else would `DatasetDict` be able to process multiple splits in the same way?), I would expect it to be written somewhere in the docs plainly and maybe even in bold. Also I would expect to see it in multiple places such as:\r\n\r\n- in docstring of `DatasetDict`\r\n- in nlp/image/audio guides on how to create a dataset\r\n- [in conceptual guide on how to create a loading script](https://huggingface.co/docs/datasets/main/en/about_dataset_load)\r\n\r\n\r\nI think this addition would benefit the docs, especially when you guide a newbie (such as me) through the process of creating a dataset. As I said, you somehow suspect that this is in fact the case, but without reading it in the docs you cannot be sure."
] | 2022-04-25T11:22:54 | 2023-04-06T19:25:50 | 2022-05-20T15:15:30 | NONE | null | Hi there,
I am trying to push a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits, and some splits have a different `Features` mapping. Locally, the `DatasetDict` preserves the individual features, but if I `push_to_hub` and then `load_dataset`, the features are all the same.
Dataset and code to reproduce available [here](https://huggingface.co/datasets/pietrolesci/robust_nli).
In short:
I have 3 feature mappings:
```python
Tri_features = Features(
{
"idx": Value(dtype="int64"),
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
}
)
Ent_features = Features(
{
"idx": Value(dtype="int64"),
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=2, names=["non-entailment", "entailment"]),
}
)
Con_features = Features(
{
"idx": Value(dtype="int64"),
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]),
}
)
```
Then I create different datasets
```python
dataset_splits = {}
for split in df["split"].unique():
print(split)
df_split = df.loc[df["split"] == split].copy()
if split in Tri_dataset:
df_split["label"] = df_split["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
ds = Dataset.from_pandas(df_split, features=Tri_features)
elif split in Ent_bin_dataset:
df_split["label"] = df_split["label"].map({"non-entailment": 0, "entailment": 1})
ds = Dataset.from_pandas(df_split, features=Ent_features)
elif split in Con_bin_dataset:
df_split["label"] = df_split["label"].map({"non-contradiction": 0, "contradiction": 1})
ds = Dataset.from_pandas(df_split, features=Con_features)
else:
print("ERROR:", split)
dataset_splits[split] = ds
datasets = DatasetDict(dataset_splits)
```
I then push to hub
```python
datasets.push_to_hub("pietrolesci/robust_nli", token="<token>")
```
Finally, I load it from the hub
```python
datasets_loaded_from_hub = load_dataset("pietrolesci/robust_nli")
```
And I get that
```python
datasets["LI_TS"].features != datasets_loaded_from_hub["LI_TS"].features
```
since
```python
"label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"])
```
gets remapped to
```python
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"])
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4211/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4210/comments | https://api.github.com/repos/huggingface/datasets/issues/4210/events | https://github.com/huggingface/datasets/issues/4210 | 1,214,089,130 | I_kwDODunzps5IXYeq | 4,210 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' | {
"login": "loretoparisi",
"id": 163333,
"node_id": "MDQ6VXNlcjE2MzMzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loretoparisi",
"html_url": "https://github.com/loretoparisi",
"followers_url": "https://api.github.com/users/loretoparisi/followers",
"following_url": "https://api.github.com/users/loretoparisi/following{/other_user}",
"gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions",
"organizations_url": "https://api.github.com/users/loretoparisi/orgs",
"repos_url": "https://api.github.com/users/loretoparisi/repos",
"events_url": "https://api.github.com/users/loretoparisi/events{/privacy}",
"received_events_url": "https://api.github.com/users/loretoparisi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! Casting class labels from strings is currently not supported in the CSV loader, but you can get the same result with an additional map as follows:\r\n```python\r\nfrom datasets import load_dataset,Features,Value,ClassLabel\r\nclass_names = [\"cmn\",\"deu\",\"rus\",\"fra\",\"eng\",\"jpn\",\"spa\",\"ita\",\"kor\",\"vie\",\"nld\",\"epo\",\"por\",\"tur\",\"heb\",\"hun\",\"ell\",\"ind\",\"ara\",\"arz\",\"fin\",\"bul\",\"yue\",\"swe\",\"ukr\",\"bel\",\"que\",\"ces\",\"swh\",\"nno\",\"wuu\",\"nob\",\"zsm\",\"est\",\"kat\",\"pol\",\"lat\",\"urd\",\"sqi\",\"isl\",\"fry\",\"afr\",\"ron\",\"fao\",\"san\",\"bre\",\"tat\",\"yid\",\"uig\",\"uzb\",\"srp\",\"qya\",\"dan\",\"pes\",\"slk\",\"eus\",\"cycl\",\"acm\",\"tgl\",\"lvs\",\"kaz\",\"hye\",\"hin\",\"lit\",\"ben\",\"cat\",\"bos\",\"hrv\",\"tha\",\"orv\",\"cha\",\"mon\",\"lzh\",\"scn\",\"gle\",\"mkd\",\"slv\",\"frm\",\"glg\",\"vol\",\"ain\",\"jbo\",\"tok\",\"ina\",\"nds\",\"mal\",\"tlh\",\"roh\",\"ltz\",\"oss\",\"ido\",\"gla\",\"mlt\",\"sco\",\"ast\",\"jav\",\"oci\",\"ile\",\"ota\",\"xal\",\"tel\",\"sjn\",\"nov\",\"khm\",\"tpi\",\"ang\",\"aze\",\"tgk\",\"tuk\",\"chv\",\"hsb\",\"dsb\",\"bod\",\"sme\",\"cym\",\"mri\",\"ksh\",\"kmr\",\"ewe\",\"kab\",\"ber\",\"tpw\",\"udm\",\"lld\",\"pms\",\"lad\",\"grn\",\"mlg\",\"xho\",\"pnb\",\"grc\",\"hat\",\"lao\",\"npi\",\"cor\",\"nah\",\"avk\",\"mar\",\"guj\",\"pan\",\"kir\",\"myv\",\"prg\",\"sux\",\"crs\",\"ckt\",\"bak\",\"zlm\",\"hil\",\"cbk\",\"chr\",\"nav\",\"lkt\",\"enm\",\"arq\",\"lin\",\"abk\",\"pcd\",\"rom\",\"gsw\",\"tam\",\"zul\",\"awa\",\"wln\",\"amh\",\"bar\",\"hbo\",\"mhr\",\"bho\",\"mrj\",\"ckb\",\"osx\",\"pfl\",\"mgm\",\"sna\",\"mah\",\"hau\",\"kan\",\"nog\",\"sin\",\"glv\",\"dng\",\"kal\",\"liv\",\"vro\",\"apc\",\"jdt\",\"fur\",\"che\",\"haw\",\"yor\",\"crh\",\"pdc\",\"ppl\",\"kin\",\"shs\",\"mnw\",\"tet\",\"sah\",\"kum\",\"ngt\",\"nya\",\"pus\",\"hif\",\"mya\",\"moh\",\"wol\",\"tir\",\"ton\",\"lzz\",\"oar\",\"lug\",\"brx\",\"non\",\"mww\",\"hak\",\"nlv\",\"ngu\",\"bua\",\"aym\",\"vec\",\"ibo\",\"tkl\",\"bam\",\"kha\",\"ceb\",\"lou\",\"fuc\",\"smo\",\"gag\",\"lfn\",\"arg\",\"umb\",\"tyv\",\"kjh\",\"oji\",\"cyo\",\"urh\",\"kzj\",\"pam\",\"srd\",\"lmo\",\"swg\",\"mdf\",\"gil\",\"snd\",\"tso\",\"sot\",\"zza\",\"tsn\",\"pau\",\"som\",\"egl\",\"ady\",\"asm\",\"ori\",\"dtp\",\"cho\",\"max\",\"kam\",\"niu\",\"sag\",\"ilo\",\"kaa\",\"fuv\",\"nch\",\"hoc\",\"iba\",\"gbm\",\"sun\",\"war\",\"mvv\",\"pap\",\"ary\",\"kxi\",\"csb\",\"pag\",\"cos\",\"rif\",\"kek\",\"krc\",\"aii\",\"ban\",\"ssw\",\"tvl\",\"mfe\",\"tah\",\"bvy\",\"bcl\",\"hnj\",\"nau\",\"nst\",\"afb\",\"quc\",\"min\",\"tmw\",\"mad\",\"bjn\",\"mai\",\"cjy\",\"got\",\"hsn\",\"gan\",\"tzl\",\"dws\",\"ldn\",\"afh\",\"sgs\",\"krl\",\"vep\",\"rue\",\"tly\",\"mic\",\"ext\",\"izh\",\"sma\",\"jam\",\"cmo\",\"mwl\",\"kpv\",\"koi\",\"bis\",\"ike\",\"run\",\"evn\",\"ryu\",\"mnc\",\"aoz\",\"otk\",\"kas\",\"aln\",\"akl\",\"yua\",\"shy\",\"fkv\",\"gos\",\"fij\",\"thv\",\"zgh\",\"gcf\",\"cay\",\"xmf\",\"tig\",\"div\",\"lij\",\"rap\",\"hrx\",\"cpi\",\"tts\",\"gaa\",\"tmr\",\"iii\",\"ltg\",\"bzt\",\"syc\",\"emx\",\"gom\",\"chg\",\"osp\",\"stq\",\"frr\",\"fro\",\"nys\",\"toi\",\"new\",\"phn\",\"jpa\",\"rel\",\"drt\",\"chn\",\"pli\",\"laa\",\"bal\",\"hdn\",\"hax\",\"mik\",\"ajp\",\"xqa\",\"pal\",\"crk\",\"mni\",\"lut\",\"ayl\",\"ood\",\"sdh\",\"ofs\",\"nus\",\"kiu\",\"diq\",\"qxq\",\"alt\",\"bfz\",\"klj\",\"mus\",\"srn\",\"guc\",\"lim\",\"zea\",\"shi\",\"mnr\",\"bom\",\"sat\",\"szl\"]\r\nfeatures = Features({ 'label': ClassLabel(names=class_names), 'text': 
Value('string')})\r\nnum_labels = features['label'].num_classes\r\ndata_files = { \"train\": \"train.csv\", \"test\": \"test.csv\" }\r\nsentences = load_dataset(\r\n \"loretoparisi/tatoeba-sentences\",\r\n data_files=data_files,\r\n delimiter='\\t', \r\n column_names=['label', 'text'],\r\n)\r\n# You can make this part faster with num_proc=<some int>\r\nsentences = sentences.map(lambda ex: features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None, features=features)\r\n```\r\n\r\n@lhoestq IIRC, I suggested adding `cast_to_storage` to `ClassLabel` + `table_cast` to the packaged loaders if the `ClassLabel`/`Image`/`Audio` type is present in `features` to avoid this kind of error, but your concern was speed. IMO shouldn't be a problem if we do `table_cast` only when these features are present.",
"I agree packaged loaders should support `ClassLabel` feature without throwing an error.",
"@albertvillanova @mariosasko thank you, with that change now I get\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n[<ipython-input-9-eeb68eeb9bec>](https://localhost:8080/#) in <module>()\r\n 11 )\r\n 12 # You can make this part faster with num_proc=<some int>\r\n---> 13 sentences = sentences.map(lambda ex: features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None, features=features)\r\n 14 sentences = sentences.shuffle()\r\n\r\n8 frames\r\n[/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in validate_function_output(processed_inputs, indices)\r\n 2193 if processed_inputs is not None and not isinstance(processed_inputs, (Mapping, pa.Table)):\r\n 2194 raise TypeError(\r\n-> 2195 f\"Provided `function` which is applied to all elements of table returns a variable of type {type(processed_inputs)}. Make sure provided `function` returns a variable of type `dict` (or a pyarrow table) to update the dataset or `None` if you are only interested in side effects.\"\r\n 2196 )\r\n 2197 elif isinstance(indices, list) and isinstance(processed_inputs, Mapping):\r\n\r\nTypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'int'>. Make sure provided `function` returns a variable of type `dict` (or a pyarrow table) to update the dataset or `None` if you are only interested in side effects.\r\n```\r\n\r\nthe error is raised by [this](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L2221)\r\n\r\n```\r\n[/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in validate_function_output(processed_inputs, indices)\r\n```",
"@mariosasko changed it like\r\n\r\n```python\r\nsentences = sentences.map(lambda ex: {\"label\" : features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None}, features=features)\r\n```\r\n\r\nto avoid the above errorr.",
"Any update on this? Is this correct ?\r\n> @mariosasko changed it like\r\n> \r\n> ```python\r\n> sentences = sentences.map(lambda ex: {\"label\" : features[\"label\"].str2int(ex[\"label\"]) if ex[\"label\"] is not None else None}, features=features)\r\n> ```\r\n> \r\n> to avoid the above errorr.\r\n\r\n"
] | 2022-04-25T07:28:42 | 2022-05-31T12:16:31 | 2022-05-31T12:16:31 | NONE | null | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from datasets import load_dataset,Features,Value,ClassLabel
class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"]
features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')})
num_labels = features['label'].num_classes
data_files = { "train": "train.csv", "test": "test.csv" }
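# NOTE (per the maintainer comments above): the CSV loader cannot cast string labels
# to ClassLabel directly, so passing `features` here produces the error shown below.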
sentences = load_dataset(
    "loretoparisi/tatoeba-sentences",
    data_files=data_files,
    delimiter='\t',
    column_names=['label', 'text'],
    features=features,
)
```
ERROR:
```
ClassLabel(num_classes=403, names=['cmn', 'deu', 'rus', 'fra', 'eng', 'jpn', 'spa', 'ita', 'kor', 'vie', 'nld', 'epo', 'por', 'tur', 'heb', 'hun', 'ell', 'ind', 'ara', 'arz', 'fin', 'bul', 'yue', 'swe', 'ukr', 'bel', 'que', 'ces', 'swh', 'nno', 'wuu', 'nob', 'zsm', 'est', 'kat', 'pol', 'lat', 'urd', 'sqi', 'isl', 'fry', 'afr', 'ron', 'fao', 'san', 'bre', 'tat', 'yid', 'uig', 'uzb', 'srp', 'qya', 'dan', 'pes', 'slk', 'eus', 'cycl', 'acm', 'tgl', 'lvs', 'kaz', 'hye', 'hin', 'lit', 'ben', 'cat', 'bos', 'hrv', 'tha', 'orv', 'cha', 'mon', 'lzh', 'scn', 'gle', 'mkd', 'slv', 'frm', 'glg', 'vol', 'ain', 'jbo', 'tok', 'ina', 'nds', 'mal', 'tlh', 'roh', 'ltz', 'oss', 'ido', 'gla', 'mlt', 'sco', 'ast', 'jav', 'oci', 'ile', 'ota', 'xal', 'tel', 'sjn', 'nov', 'khm', 'tpi', 'ang', 'aze', 'tgk', 'tuk', 'chv', 'hsb', 'dsb', 'bod', 'sme', 'cym', 'mri', 'ksh', 'kmr', 'ewe', 'kab', 'ber', 'tpw', 'udm', 'lld', 'pms', 'lad', 'grn', 'mlg', 'xho', 'pnb', 'grc', 'hat', 'lao', 'npi', 'cor', 'nah', 'avk', 'mar', 'guj', 'pan', 'kir', 'myv', 'prg', 'sux', 'crs', 'ckt', 'bak', 'zlm', 'hil', 'cbk', 'chr', 'nav', 'lkt', 'enm', 'arq', 'lin', 'abk', 'pcd', 'rom', 'gsw', 'tam', 'zul', 'awa', 'wln', 'amh', 'bar', 'hbo', 'mhr', 'bho', 'mrj', 'ckb', 'osx', 'pfl', 'mgm', 'sna', 'mah', 'hau', 'kan', 'nog', 'sin', 'glv', 'dng', 'kal', 'liv', 'vro', 'apc', 'jdt', 'fur', 'che', 'haw', 'yor', 'crh', 'pdc', 'ppl', 'kin', 'shs', 'mnw', 'tet', 'sah', 'kum', 'ngt', 'nya', 'pus', 'hif', 'mya', 'moh', 'wol', 'tir', 'ton', 'lzz', 'oar', 'lug', 'brx', 'non', 'mww', 'hak', 'nlv', 'ngu', 'bua', 'aym', 'vec', 'ibo', 'tkl', 'bam', 'kha', 'ceb', 'lou', 'fuc', 'smo', 'gag', 'lfn', 'arg', 'umb', 'tyv', 'kjh', 'oji', 'cyo', 'urh', 'kzj', 'pam', 'srd', 'lmo', 'swg', 'mdf', 'gil', 'snd', 'tso', 'sot', 'zza', 'tsn', 'pau', 'som', 'egl', 'ady', 'asm', 'ori', 'dtp', 'cho', 'max', 'kam', 'niu', 'sag', 'ilo', 'kaa', 'fuv', 'nch', 'hoc', 'iba', 'gbm', 'sun', 'war', 'mvv', 'pap', 'ary', 'kxi', 'csb', 'pag', 'cos', 'rif', 'kek', 'krc', 'aii', 'ban', 'ssw', 'tvl', 'mfe', 'tah', 'bvy', 'bcl', 'hnj', 'nau', 'nst', 'afb', 'quc', 'min', 'tmw', 'mad', 'bjn', 'mai', 'cjy', 'got', 'hsn', 'gan', 'tzl', 'dws', 'ldn', 'afh', 'sgs', 'krl', 'vep', 'rue', 'tly', 'mic', 'ext', 'izh', 'sma', 'jam', 'cmo', 'mwl', 'kpv', 'koi', 'bis', 'ike', 'run', 'evn', 'ryu', 'mnc', 'aoz', 'otk', 'kas', 'aln', 'akl', 'yua', 'shy', 'fkv', 'gos', 'fij', 'thv', 'zgh', 'gcf', 'cay', 'xmf', 'tig', 'div', 'lij', 'rap', 'hrx', 'cpi', 'tts', 'gaa', 'tmr', 'iii', 'ltg', 'bzt', 'syc', 'emx', 'gom', 'chg', 'osp', 'stq', 'frr', 'fro', 'nys', 'toi', 'new', 'phn', 'jpa', 'rel', 'drt', 'chn', 'pli', 'laa', 'bal', 'hdn', 'hax', 'mik', 'ajp', 'xqa', 'pal', 'crk', 'mni', 'lut', 'ayl', 'ood', 'sdh', 'ofs', 'nus', 'kiu', 'diq', 'qxq', 'alt', 'bfz', 'klj', 'mus', 'srn', 'guc', 'lim', 'zea', 'shi', 'mnr', 'bom', 'sat', 'szl'], id=None)
Value(dtype='string', id=None)
Using custom data configuration loretoparisi--tatoeba-sentences-7b2c5e991f398f39
Downloading and preparing dataset csv/loretoparisi--tatoeba-sentences to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-7b2c5e991f398f39/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519...
Downloading data files: 100%
2/2 [00:18<00:00, 8.06s/it]
Downloading data: 100%
391M/391M [00:13<00:00, 35.3MB/s]
Downloading data: 100%
92.4M/92.4M [00:02<00:00, 36.5MB/s]
Failed to read file '/root/.cache/huggingface/datasets/downloads/933132df9905194ea9faeb30cabca8c49318795612f6495fcb941a290191dd5d' with error <class 'ValueError'>: invalid literal for int() with base 10: 'cmn'
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens()
TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
15 frames
/usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens()
ValueError: invalid literal for int() with base 10: 'cmn'
```
while loading without `features`, the dataset loads without errors:
```
sentences = load_dataset("loretoparisi/tatoeba-sentences",
data_files=data_files,
delimiter='\t',
column_names=['label', 'text']
)
```
but the `label` column has the wrong type (a plain string instead of the `ClassLabel` object):
```
sentences['train'].features
{'label': Value(dtype='string', id=None),
'text': Value(dtype='string', id=None)}
```
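As the comments above suggest, one workaround is to load without `features` and then encode the string labels afterwards; a minimal sketch (reusing `features` and `sentences` from the reproduction code above):

```python
# Map the string language codes to ClassLabel integer ids (workaround from the comments above).
sentences = sentences.map(
    lambda ex: {"label": features["label"].str2int(ex["label"]) if ex["label"] is not None else None},
    features=features,
)
```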
The dataset was https://huggingface.co/datasets/loretoparisi/tatoeba-sentences
Dataset format is:
```
ces Nechci vΔdΔt, co je tam uvnitΕ.
ces Kdo o tom chce slyΕ‘et?
deu Tom sagte, er fΓΌhle sich nicht wohl.
ber Mel-iyi-d anida-t tura ?
hun Gondom lesz rΓ‘ rΓΆgtΓΆn.
ber Mel-iyi-d anida-tt tura ?
deu Ich will dich nicht reden hΓΆren.
```
### Expected behavior
```shell
correctly load train and test files.
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4210/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4210/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4208/comments | https://api.github.com/repos/huggingface/datasets/issues/4208/events | https://github.com/huggingface/datasets/pull/4208 | 1,213,716,426 | PR_kwDODunzps42r7bW | 4,208 | Add CMU MoCap Dataset | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"- Updated the readme.\r\n- Added dummy_data.zip and ran the all the tests.\r\n\r\nThe dataset works for \"asf/amc\" and \"avi\" formats which have a single download link for the complete dataset. But \"c3d\" and \"mpg\" have multiple download links, can we combine and host these links on the Hub since the dataset is free to use ?",
"\"c3d\" and \"mpg\" have multiple download links (part archives) and dl_manager.download_and_extract() extracts the files to multiple paths, is there a way to extract these multiple archives into one folder ? Any other way to go about this ?\r\nCan we combine and host these links on the Hub since the dataset is free to use ?",
"> \"c3d\" and \"mpg\" have multiple download links (part archives) and dl_manager.download_and_extract() extracts the files to multiple paths, is there a way to extract these multiple archives into one folder ? Any other way to go about this ?\r\n\r\nWe store downloaded data under `~/.cache/huggingface/datasets/downloads` (by default), so these downloads are \"hidden\" and won't clutter one's filesystem in an \"obvious way\".",
"> We store downloaded data under ~/.cache/huggingface/datasets/downloads (by default), so these downloads are \"hidden\" and won't clutter one's filesystem in an \"obvious way\".\r\n\r\nYes, the filesystem won't be clustered, but the problem is processing the dataset becomes cumbersome. For eg, for the c3d format has 5 part-downloads, so the folders will be as follows : \r\n```\r\n['~/.cache/huggingface/datasets/downloads/extracted/0e6bf028f490bf18c23ce572d1437c4ef32a74f630e33c26a806250d35cfcdd1', '~/.cache/huggingface/datasets/downloads/extracted/1b44fc5c7a6e031c904545422d449fd964f8ee795b9d1dcb0b6a76d03b50ebe6', '~/.cache/huggingface/datasets/downloads/extracted/137595188e96187c24ce1aa5c78200c7f78816fbd9d6c62354c01b3e6ec550c7', '~/.cache/huggingface/datasets/downloads/extracted/6c0c893e435f36fd79aa0f199f58fe16f01985f039644a7cb094a8c43a15ffd4', '~/.cache/huggingface/datasets/downloads/extracted/45e4703354cbc975e6add66f1b17b716c882b56f44575b033c5926aa5fcfb17f']\r\n```\r\nEach of these folders have a given set of subjects, so we'll be need to write extra code to fetch data from each of these folders, and the mpg format has 12 part-downloads which will lead to 12 folders having certain set of subjects, so it is cumbersome to process them.",
"I have added all the changes that were suggested. We just need to handle the multi-part download for c3d and mpg formats. Easiest way would be to have just one zip for these formats.",
"But we can handle this with a simple mapping that stores the id ranges (for each config), no? And an actual file path is not important during processing.",
"I have added code to handle c3d, mpg formats as well. The data for the mpg format seems incomplete as it contains only 53 rows. I have added a note regarding this in the Data Splits section.",
"The real data test works fine and dummy_data test work fine. There were few missing files which was causing issues, I have fixed it now.\r\n",
"- Reduced the dummy_data size.\r\n- Added sample dataset preprocessing code, it is not complete though.\r\n- Added all changes suggested.\r\n\r\nLet me know if anything else is required. Thank you. :)",
"Thanks for your contribution, @dnaveenr.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] | 2022-04-24T17:31:08 | 2022-10-03T09:38:24 | 2022-10-03T09:36:30 | CONTRIBUTOR | null | Resolves #3457
Dataset Request : Add CMU Graphics Lab Motion Capture dataset [#3457](https://github.com/huggingface/datasets/issues/3457)
This PR adds the CMU MoCap Dataset.
The authors didn't respond even after multiple follow-ups, so I ended up crawling the website to get the categories, subcategories and description information. Some of the subjects do not have a category/subcategory/description either. I am using a subject-to-categories/subcategories/description map (metadata file).
Currently, loading the dataset works for the "asf/amc" and "avi" formats since they have a single download link. But "c3d" and "mpg" have multiple download links (part archives), and dl_manager.download_and_extract() extracts the files to multiple paths. Is there a way to extract these multiple archives into one folder? Is there any other way to go about this?
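One direction raised in the review comments is a simple mapping from each part archive to the subject IDs it contains, so each extracted folder can be matched back to its subjects; a rough sketch (the URLs and ID ranges below are purely illustrative):

```python
from datasets import DownloadManager

# Purely illustrative part archives for the "c3d" format (placeholder URLs).
_C3D_URLS = {
    "part1": "https://example.com/cmu-mocap-c3d-part1.zip",
    "part2": "https://example.com/cmu-mocap-c3d-part2.zip",
}
# Illustrative mapping from each part to the subject IDs it contains.
_C3D_SUBJECTS = {"part1": range(1, 40), "part2": range(40, 80)}


def download_c3d_parts(dl_manager: DownloadManager) -> dict:
    """Download and extract every part; returns {"part1": path1, "part2": path2}."""
    return dl_manager.download_and_extract(_C3D_URLS)
```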
Any suggestions/inputs on this would be helpful. Thank you.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4208/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4208",
"html_url": "https://github.com/huggingface/datasets/pull/4208",
"diff_url": "https://github.com/huggingface/datasets/pull/4208.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4208.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4207/comments | https://api.github.com/repos/huggingface/datasets/issues/4207/events | https://github.com/huggingface/datasets/pull/4207 | 1,213,604,615 | PR_kwDODunzps42rmbK | 4,207 | [Minor edit] Fix typo in class name | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-04-24T09:49:37 | 2022-05-05T13:17:47 | 2022-05-05T13:17:47 | CONTRIBUTOR | null | Typo: `datasets.DatsetDict` -> `datasets.DatasetDict` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4207/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4207",
"html_url": "https://github.com/huggingface/datasets/pull/4207",
"diff_url": "https://github.com/huggingface/datasets/pull/4207.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4207.patch",
"merged_at": "2022-05-05T13:17:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4206/comments | https://api.github.com/repos/huggingface/datasets/issues/4206/events | https://github.com/huggingface/datasets/pull/4206 | 1,212,715,581 | PR_kwDODunzps42pJQW | 4,206 | Add Nerval Metric | {
"login": "mdadda",
"id": 49372461,
"node_id": "MDQ6VXNlcjQ5MzcyNDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/49372461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mdadda",
"html_url": "https://github.com/mdadda",
"followers_url": "https://api.github.com/users/mdadda/followers",
"following_url": "https://api.github.com/users/mdadda/following{/other_user}",
"gists_url": "https://api.github.com/users/mdadda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mdadda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mdadda/subscriptions",
"organizations_url": "https://api.github.com/users/mdadda/orgs",
"repos_url": "https://api.github.com/users/mdadda/repos",
"events_url": "https://api.github.com/users/mdadda/events{/privacy}",
"received_events_url": "https://api.github.com/users/mdadda/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4190228726,
"node_id": "LA_kwDODunzps75wdD2",
"url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate",
"name": "transfer-to-evaluate",
"color": "E3165C",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate"
] | 2022-04-22T19:45:00 | 2023-07-11T09:34:56 | 2023-07-11T09:34:55 | NONE | null | This PR adds readme.md and ner_val.py to metrics.
Nerval is a Python package that helps evaluate NER models. It creates a classification report and a confusion matrix at the entity level. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4206/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4206",
"html_url": "https://github.com/huggingface/datasets/pull/4206",
"diff_url": "https://github.com/huggingface/datasets/pull/4206.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4206.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4205 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4205/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4205/comments | https://api.github.com/repos/huggingface/datasets/issues/4205/events | https://github.com/huggingface/datasets/pull/4205 | 1,212,466,138 | PR_kwDODunzps42oVFE | 4,205 | Fix `convert_file_size_to_int` for kilobits and megabits | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-22T14:56:21 | 2022-05-03T15:28:42 | 2022-05-03T15:21:48 | CONTRIBUTOR | null | Minor change to fully align this function with the recent change in Transformers (https://github.com/huggingface/transformers/pull/16891) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4205/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4205",
"html_url": "https://github.com/huggingface/datasets/pull/4205",
"diff_url": "https://github.com/huggingface/datasets/pull/4205.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4205.patch",
"merged_at": "2022-05-03T15:21:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4204 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4204/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4204/comments | https://api.github.com/repos/huggingface/datasets/issues/4204/events | https://github.com/huggingface/datasets/pull/4204 | 1,212,431,764 | PR_kwDODunzps42oN0j | 4,204 | Add Recall Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This looks good to me! "
] | 2022-04-22T14:24:26 | 2022-05-03T13:23:23 | 2022-05-03T13:16:24 | CONTRIBUTOR | null | What this PR mainly does:
- add metric card for recall metric
- update docs in recall python file
Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatted metric card, as well as the docs to go in the .py file. I figured I'd upload the .json because it could be useful, especially if I also make a PR with the script I'm using (let me know if that's something you think would be beneficial!) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4204/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4204",
"html_url": "https://github.com/huggingface/datasets/pull/4204",
"diff_url": "https://github.com/huggingface/datasets/pull/4204.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4204.patch",
"merged_at": "2022-05-03T13:16:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4203/comments | https://api.github.com/repos/huggingface/datasets/issues/4203/events | https://github.com/huggingface/datasets/pull/4203 | 1,212,431,067 | PR_kwDODunzps42oNrS | 4,203 | Add Precision Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-22T14:23:48 | 2022-05-03T14:23:40 | 2022-05-03T14:16:46 | CONTRIBUTOR | null | What this PR mainly does:
- add metric card for precision metric
- update docs in precision python file
Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatted metric card, as well as the docs to go in the .py file. I figured I'd upload the .json because it could be useful, especially if I also make a PR with the script I'm using (let me know if that's something you think would be beneficial!) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4203/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4203",
"html_url": "https://github.com/huggingface/datasets/pull/4203",
"diff_url": "https://github.com/huggingface/datasets/pull/4203.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4203.patch",
"merged_at": "2022-05-03T14:16:45"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4202/comments | https://api.github.com/repos/huggingface/datasets/issues/4202/events | https://github.com/huggingface/datasets/pull/4202 | 1,212,326,288 | PR_kwDODunzps42n278 | 4,202 | Fix some type annotation in doc | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-22T12:53:31 | 2022-04-22T15:03:00 | 2022-04-22T14:56:43 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4202/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4202",
"html_url": "https://github.com/huggingface/datasets/pull/4202",
"diff_url": "https://github.com/huggingface/datasets/pull/4202.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4202.patch",
"merged_at": "2022-04-22T14:56:43"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4201/comments | https://api.github.com/repos/huggingface/datasets/issues/4201/events | https://github.com/huggingface/datasets/pull/4201 | 1,212,086,420 | PR_kwDODunzps42nIRm | 4,201 | Update GH template for dataset viewer issues | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"You can see rendering at: https://github.com/huggingface/datasets/blob/6b48fedbdafe12a42c7b6edcecc32820af1a4822/.github/ISSUE_TEMPLATE/dataset-viewer.yml"
] | 2022-04-22T09:34:44 | 2022-05-06T08:38:43 | 2022-04-26T08:45:55 | MEMBER | null | Update template to use new issue forms instead.
With this PR we can check if this new feature is useful for us.
Once validated, we can update the other templates.
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4201/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4201",
"html_url": "https://github.com/huggingface/datasets/pull/4201",
"diff_url": "https://github.com/huggingface/datasets/pull/4201.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4201.patch",
"merged_at": "2022-04-26T08:45:55"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4200/comments | https://api.github.com/repos/huggingface/datasets/issues/4200/events | https://github.com/huggingface/datasets/pull/4200 | 1,211,980,110 | PR_kwDODunzps42mz0w | 4,200 | Add to docs how to load from local script | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-22T08:08:25 | 2022-05-06T08:39:25 | 2022-04-23T05:47:25 | MEMBER | null | This option was missing from the docs guide (it was only explained in the docstring of `load_dataset`). Although this is an infrequent use case, there might be some users interested in it.
Related to #4192
CC: @stevhliu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4200/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4200/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4200",
"html_url": "https://github.com/huggingface/datasets/pull/4200",
"diff_url": "https://github.com/huggingface/datasets/pull/4200.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4200.patch",
"merged_at": "2022-04-23T05:47:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4199/comments | https://api.github.com/repos/huggingface/datasets/issues/4199/events | https://github.com/huggingface/datasets/issues/4199 | 1,211,953,308 | I_kwDODunzps5IPPCc | 4,199 | Cache miss during reload for datasets using image fetch utilities through map | {
"login": "apsdehal",
"id": 3616806,
"node_id": "MDQ6VXNlcjM2MTY4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apsdehal",
"html_url": "https://github.com/apsdehal",
"followers_url": "https://api.github.com/users/apsdehal/followers",
"following_url": "https://api.github.com/users/apsdehal/following{/other_user}",
"gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions",
"organizations_url": "https://api.github.com/users/apsdehal/orgs",
"repos_url": "https://api.github.com/users/apsdehal/repos",
"events_url": "https://api.github.com/users/apsdehal/events{/privacy}",
"received_events_url": "https://api.github.com/users/apsdehal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! Maybe one of the objects in the function is not deterministic across sessions ? You can read more about it and how to investigate here: https://huggingface.co/docs/datasets/about_cache",
"Hi @apsdehal! Can you verify that replacing\r\n```python\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\r\n for _ in range(retries + 1):\r\n try:\r\n request = urllib.request.Request(\r\n image_url,\r\n data=None,\r\n headers={\"user-agent\": get_datasets_user_agent()},\r\n )\r\n with urllib.request.urlopen(request, timeout=timeout) as req:\r\n image = PIL.Image.open(io.BytesIO(req.read()))\r\n break\r\n except Exception:\r\n image = None\r\n return image\r\n```\r\nwith \r\n```python\r\nUSER_AGENT = get_datasets_user_agent()\r\n\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\r\n for _ in range(retries + 1):\r\n try:\r\n request = urllib.request.Request(\r\n image_url,\r\n data=None,\r\n headers={\"user-agent\": USER_AGENT},\r\n )\r\n with urllib.request.urlopen(request, timeout=timeout) as req:\r\n image = PIL.Image.open(io.BytesIO(req.read()))\r\n break\r\n except Exception:\r\n image = None\r\n return image\r\n```\r\nfixes the issue?",
"Thanks @mariosasko. That does fix the issue. In general, I think these image downloading utilities since they are being used by a lot of image dataset should be provided as a part of `datasets` library right to keep the logic consistent and READMEs smaller? If they already exists, that is also great, please point me to those. I saw that `http_get` does exist.",
"You can find my rationale (and a proposed solution) for why these utilities are not a part of `datasets` here: https://github.com/huggingface/datasets/pull/4100#issuecomment-1097994003.",
"Makes sense. But, I think as the number of image datasets as grow, more people are copying pasting original code from docs to work as it is while we make fixes to them later. I think we do need a central place for these to avoid that confusion as well as more easier access to image datasets. Should we restart that discussion, possible on slack?"
] | 2022-04-22T07:47:08 | 2022-04-26T17:00:32 | 2022-04-26T13:38:26 | CONTRIBUTOR | null | ## Describe the bug
It looks like the datasets resulting from a `.map` operation miss the cache when you reload the script, so everything always runs from scratch. In the same interpreter session, they are able to find the cache and reload it. But when you exit the interpreter and reload it, the downloading starts from scratch.
## Steps to reproduce the bug
Using the example provided in the `red_caps` dataset.
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import os  # needed by process_image_urls below
import re  # needed by process_image_urls below
import urllib
import PIL.Image
import datasets
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
def fetch_single_image(image_url, timeout=None, retries=0):
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": get_datasets_user_agent()},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
image = PIL.Image.open(io.BytesIO(req.read()))
break
except Exception:
image = None
return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image"] = list(executor.map(lambda image_urls: [fetch_single_image_with_args(image_url) for image_url in image_urls], batch["image_url"]))
return batch
def process_image_urls(batch):
processed_batch_image_urls = []
for image_url in batch["image_url"]:
processed_example_image_urls = []
image_url_splits = re.findall(r"http\S+", image_url)
for image_url_split in image_url_splits:
if "imgur" in image_url_split and "," in image_url_split:
for image_url_part in image_url_split.split(","):
if not image_url_part:
continue
image_url_part = image_url_part.strip()
root, ext = os.path.splitext(image_url_part)
if not root.startswith("http"):
root = "http://i.imgur.com/" + root
root = root.split("#")[0]
if not ext:
ext = ".jpg"
ext = re.split(r"[?%]", ext)[0]
image_url_part = root + ext
processed_example_image_urls.append(image_url_part)
else:
processed_example_image_urls.append(image_url_split)
processed_batch_image_urls.append(processed_example_image_urls)
batch["image_url"] = processed_batch_image_urls
return batch
dset = load_dataset("red_caps", "jellyfish")
dset = dset.map(process_image_urls, batched=True, num_proc=4)
features = dset["train"].features.copy()
features["image"] = datasets.Sequence(datasets.Image())
num_threads = 5
dset = dset.map(fetch_images, batched=True, batch_size=50, features=features, fn_kwargs={"num_threads": num_threads})
```
Run this in an interpreter or as a script twice and see that the cache is missed the second time.
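For reference, the workaround suggested in the comments on this issue is to hoist the `get_datasets_user_agent()` call out of the mapped function so that the function hashes the same way across sessions; a minimal sketch of that variant (everything else in the script above unchanged):
```python
# Fix suggested in the discussion: compute the user agent once at module level so the
# mapped function is identical (and hashes identically) in every session.
USER_AGENT = get_datasets_user_agent()

def fetch_single_image(image_url, timeout=None, retries=0):
    image = None
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},  # constant instead of a fresh call inside the function
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image
```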
## Expected results
At reload, there should not be any cache miss.
## Actual results
Every time the script is run, the cache is missed and the dataset is built from scratch.
## Environment info
- `datasets` version: 2.1.1.dev0
- Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4199/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4198/comments | https://api.github.com/repos/huggingface/datasets/issues/4198/events | https://github.com/huggingface/datasets/issues/4198 | 1,211,456,559 | I_kwDODunzps5INVwv | 4,198 | There is no dataset | {
"login": "wilfoderek",
"id": 1625647,
"node_id": "MDQ6VXNlcjE2MjU2NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1625647?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wilfoderek",
"html_url": "https://github.com/wilfoderek",
"followers_url": "https://api.github.com/users/wilfoderek/followers",
"following_url": "https://api.github.com/users/wilfoderek/following{/other_user}",
"gists_url": "https://api.github.com/users/wilfoderek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wilfoderek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wilfoderek/subscriptions",
"organizations_url": "https://api.github.com/users/wilfoderek/orgs",
"repos_url": "https://api.github.com/users/wilfoderek/repos",
"events_url": "https://api.github.com/users/wilfoderek/events{/privacy}",
"received_events_url": "https://api.github.com/users/wilfoderek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-04-21T19:19:26 | 2022-05-03T11:29:05 | 2022-04-22T06:12:25 | NONE | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4198/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4197/comments | https://api.github.com/repos/huggingface/datasets/issues/4197/events | https://github.com/huggingface/datasets/pull/4197 | 1,211,342,558 | PR_kwDODunzps42kyXD | 4,197 | Add remove_columns=True | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Any reason why we can't just do `[inputs.copy()]` in this line for in-place operations to not have effects anymore:\r\nhttps://github.com/huggingface/datasets/blob/bf432011ff9155a5bc16c03956bc63e514baf80d/src/datasets/arrow_dataset.py#L2232.\r\n\r\n(in the `batched` case, we can also copy the inputs' values (list objects) to ignore in-place modifications to the inputs' columns)\r\n\r\nI think `remove_columns=True` has no meaning, so I'm not a fan of this change.",
"@mariosasko copy does have a cost associated with it ... and plus you'll have to consider `deepcopy` Imagine columnds that are list of list of list of list .... Though I have to agree that `remove_columns=True` doesn't make sense (but, IMO, neither does it in its current use-case as it should refer to `input_columns`) ",
"Okay closing this PR for the following reasons:\r\n - `remove_columns=True` was expected to keep the `.update`-like operator for `.map`. I initially thought it would be a good way to ignore function side effects and only keep output of that function (cf. PR description).\r\n - expected `remove_columns=True` is a bad API according to @mariosasko and introduces unecessary changes for little gain (strictly equivalent to `remove_columns=dset.column_names`)"
] | 2022-04-21T17:28:13 | 2022-04-22T14:51:41 | 2022-04-22T14:45:30 | CONTRIBUTOR | null | This should fix all the issues we have with in-place operations in mapping functions. This is crucial, as we sometimes do weird things like:
```
def apply(batch):
batch_size = len(batch["id"])
batch["text"] = ["potato" for _ range(batch_size)]
return {}
# Columns are: {"id": int}
dset.map(apply, batched=True, remove_columns="text") # crashes because `text` is not in the original columns
dset.map(apply, batched=True) # mapped datasets has `text` column
```
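For context on the current API: per the closing comment, the behaviour this PR would add is strictly equivalent to dropping every original column by name, which already works today; a minimal sketch of that equivalent call:
```python
# Equivalent call with the existing API (per the closing comment): remove all original columns by name.
dset.map(apply, batched=True, remove_columns=dset.column_names)
```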
In this PR we suggest having `remove_columns=True` so that we ignore the input completely and just use the output to generate the mapped dataset. This means that in-place operations won't have any effect anymore. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4197/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4197",
"html_url": "https://github.com/huggingface/datasets/pull/4197",
"diff_url": "https://github.com/huggingface/datasets/pull/4197.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4197.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4196/comments | https://api.github.com/repos/huggingface/datasets/issues/4196/events | https://github.com/huggingface/datasets/issues/4196 | 1,211,271,261 | I_kwDODunzps5IMohd | 4,196 | Embed image and audio files in `save_to_disk` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-04-21T16:25:18 | 2022-12-14T18:22:59 | 2022-12-14T18:22:59 | MEMBER | null | Following https://github.com/huggingface/datasets/pull/4184, currently a dataset saved using `save_to_disk` doesn't actually contain the bytes of the image or audio files. Instead it stores the path to your local files.
Adding `embed_external_files` and setting it to True by default in `save_to_disk` would be kind of a breaking change, since some users will get bigger Arrow files when updating the lib, but the advantages are nice:
- the resulting dataset is self-contained, in case you want to delete your cache for example or share it with someone else
- users also upload these Arrow files to cloud storage via the fs parameter, and in this case they would expect to upload a self-contained dataset
- consistency with push_to_hub
This can be implemented at the same time as sharding for `save_to_disk` for efficiency, and reuse the helpers from `push_to_hub` to embed the external files.
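A hypothetical sketch of what the call could look like if the flag is added under the name proposed here (the parameter name is taken from this proposal and is not confirmed API):
```python
# Hypothetical usage of the proposed flag: embed image/audio bytes into the Arrow file
# so the saved dataset is self-contained and independent of the local cache.
ds.save_to_disk("path/to/local_dir", embed_external_files=True)
```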
cc @mariosasko | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4196/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4196/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4194/comments | https://api.github.com/repos/huggingface/datasets/issues/4194/events | https://github.com/huggingface/datasets/pull/4194 | 1,210,958,602 | PR_kwDODunzps42jjD3 | 4,194 | Support lists of multi-dimensional numpy arrays | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-21T12:22:26 | 2022-05-12T15:16:34 | 2022-05-12T15:08:40 | MEMBER | null | Fix #4191.
CC: @SaulLu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4194/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4194/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4194",
"html_url": "https://github.com/huggingface/datasets/pull/4194",
"diff_url": "https://github.com/huggingface/datasets/pull/4194.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4194.patch",
"merged_at": "2022-05-12T15:08:40"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4193/comments | https://api.github.com/repos/huggingface/datasets/issues/4193/events | https://github.com/huggingface/datasets/pull/4193 | 1,210,734,701 | PR_kwDODunzps42izQG | 4,193 | Document save_to_disk and push_to_hub on images and audio files | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Good catch, I updated the docstrings"
] | 2022-04-21T09:04:36 | 2022-04-22T09:55:55 | 2022-04-22T09:49:31 | MEMBER | null | Following https://github.com/huggingface/datasets/pull/4187, I explained in the documentation of `save_to_disk` and `push_to_hub` how they handle image and audio data. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4193/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4193",
"html_url": "https://github.com/huggingface/datasets/pull/4193",
"diff_url": "https://github.com/huggingface/datasets/pull/4193.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4193.patch",
"merged_at": "2022-04-22T09:49:31"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4192/comments | https://api.github.com/repos/huggingface/datasets/issues/4192/events | https://github.com/huggingface/datasets/issues/4192 | 1,210,692,554 | I_kwDODunzps5IKbPK | 4,192 | load_dataset can't load local dataset,Unable to find ... | {
"login": "ahf876828330",
"id": 33253979,
"node_id": "MDQ6VXNlcjMzMjUzOTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/33253979?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahf876828330",
"html_url": "https://github.com/ahf876828330",
"followers_url": "https://api.github.com/users/ahf876828330/followers",
"following_url": "https://api.github.com/users/ahf876828330/following{/other_user}",
"gists_url": "https://api.github.com/users/ahf876828330/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahf876828330/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahf876828330/subscriptions",
"organizations_url": "https://api.github.com/users/ahf876828330/orgs",
"repos_url": "https://api.github.com/users/ahf876828330/repos",
"events_url": "https://api.github.com/users/ahf876828330/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahf876828330/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! :)\r\n\r\nI believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json` isn't just metadata please?",
"Hi @ahf876828330, \r\n\r\nAs @stevhliu pointed out, the proper way to load a dataset is not trying to load its metadata file.\r\n\r\nIn your case, as the dataset script is local, you should better point to your local loading script:\r\n```python\r\ndataset = load_dataset(\"dataset/opus_books.py\")\r\n```\r\n\r\nPlease, feel free to re-open this issue if the previous code snippet does not work for you.",
"> Hi! :)\r\n> \r\n> I believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json` isn't just metadata please?\r\n\r\nYesοΌyou are right!So if I have a metadata dataset local,How can I turn it to a dataset that can be used by the load_dataset() functionοΌAre there some examples?",
"The metadata file isn't a dataset so you can't turn it into one. You should try @albertvillanova's code snippet above (now merged in the docs [here](https://huggingface.co/docs/datasets/master/en/loading#local-loading-script)), which uses your local loading script `opus_books.py` to:\r\n\r\n1. Download the actual dataset. \r\n2. Once the dataset is downloaded, `load_dataset` will load it for you."
] | 2022-04-21T08:28:58 | 2022-04-25T16:51:57 | 2022-04-22T07:39:53 | NONE | null |
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset
**config_kwargs,
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1496, in load_dataset_builder
data_files=data_files,
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1155, in dataset_module_factory
download_mode=download_mode,
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 800, in get_module
data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.downnload_config.use_auth_token)
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 582, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 544, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 194, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 144, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/home/gs603/ahf/pretrained/dataset/dataset_infos.json' at /home/gs603/ahf/pretrained
![image](https://user-images.githubusercontent.com/33253979/164413285-84ea65ac-9126-408f-9cd2-ce4751a5dd73.png)
![image](https://user-images.githubusercontent.com/33253979/164413338-4735142f-408b-41d9-ab87-8484de2be54f.png)
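For reference, the resolution from the comments is that `dataset_infos.json` is only metadata and not a dataset; pointing `load_dataset` at the local loading script works instead, e.g.:
```python
from datasets import load_dataset

# Load through the local dataset script rather than the metadata file.
dataset = load_dataset("dataset/opus_books.py")
```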
The code is in model.py. Why can't I use the load_dataset function to load my local dataset? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4192/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4191/comments | https://api.github.com/repos/huggingface/datasets/issues/4191/events | https://github.com/huggingface/datasets/issues/4191 | 1,210,028,090 | I_kwDODunzps5IH5A6 | 4,191 | feat: create an `Array3D` column from a list of arrays of dimension 2 | {
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @SaulLu, thanks for your proposal.\r\n\r\nJust I got a bit confused about the dimensions...\r\n- For the 2D case, you mention it is possible to create an `Array2D` from a list of arrays of dimension 1\r\n- However, you give an example of creating an `Array2D` from arrays of dimension 2:\r\n - the values of `data_map` are arrays of dimension 2\r\n - the outer list in `prepare_dataset_2D` should not be taken into account in the dimension counting, as it is used because in `map` you pass `batched=True`\r\n\r\nNote that for the 3D alternatives you mention:\r\n- In `prepare_dataset_3D_ter`, you create an `Array3D` from arrays of dimension 3:\r\n - the array `data_map[index][np.newaxis, :, :]` has dimension 3\r\n - the outer list in `prepare_dataset_3D_ter` is the one used by `batched=True`\r\n- In `prepare_dataset_3D_bis`, you create an `Array3D` from a list of list of lists:\r\n - the value of `data_map[index].tolist()` is a list of lists\r\n - it is enclosed by another list `[data_map[index].tolist()]`, thus giving a list of list of lists\r\n - the outer list is the one used by `batched=True`\r\n\r\nTherefore, if I understand correctly, your request would be to be able to create an `Array3D` from a list of an array of dimension 2:\r\n- In `prepare_dataset_3D`, `data_map[index]` is an array of dimension 2\r\n- it is enclosed by a list `[data_map[index]]`, thus giving a list of an array of dimension 2\r\n- the outer list is the one used by `batched=True`\r\n\r\nPlease, feel free to tell me if I did not understand you correctly.",
"Hi @albertvillanova ,\r\n\r\nIndeed my message was confusing and you guessed right :smile: : I think would be interesting to be able to create an Array3D from a list of an array of dimension 2. \r\n\r\nFor the 2D case I should have given as a \"similar\" example:\r\n```python\r\n\r\ndata_map_1D = {\r\n 1: np.array([0.2, 0.4]),\r\n 2: np.array([0.1, 0.4]),\r\n}\r\n\r\ndef prepare_dataset_2D(batch):\r\n batch[\"pixel_values\"] = [[data_map_1D[index]] for index in batch[\"id\"]]\r\n return batch\r\n \r\nds_2D = ds.map(\r\n prepare_dataset_2D, \r\n batched=True, \r\n remove_columns=ds.column_names, \r\n features=features.Features({\"pixel_values\": features.Array2D(shape=(1, 2), dtype=\"float32\")})\r\n)\r\n```"
] | 2022-04-20T18:04:32 | 2022-05-12T15:08:40 | 2022-05-12T15:08:40 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
It is possible to create an `Array2D` column from a list of arrays of dimension 1. Similarly, I think it might be nice to be able to create an `Array3D` column from a list of lists of arrays of dimension 1.
To illustrate my proposal, let's take the following toy dataset:
```python
import numpy as np
from datasets import Dataset, features
data_map = {
1: np.array([[0.2, 0,4],[0.19, 0,3]]),
2: np.array([[0.1, 0,4],[0.19, 0,3]]),
}
def create_toy_ds():
my_dict = {"id":[1, 2]}
return Dataset.from_dict(my_dict)
ds = create_toy_ds()
```
The following 2D processing works without any errors raised:
```python
def prepare_dataset_2D(batch):
batch["pixel_values"] = [data_map[index] for index in batch["id"]]
return batch
ds_2D = ds.map(
prepare_dataset_2D,
batched=True,
remove_columns=ds.column_names,
features=features.Features({"pixel_values": features.Array2D(shape=(2, 3), dtype="float32")})
)
```
The following 3D processing doesn't work:
```python
def prepare_dataset_3D(batch):
batch["pixel_values"] = [[data_map[index]] for index in batch["id"]]
return batch
ds_3D = ds.map(
prepare_dataset_3D,
batched=True,
remove_columns=ds.column_names,
    features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")})
)
```
The error raised is:
```
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
[<ipython-input-6-676547e4cd41>](https://localhost:8080/#) in <module>()
3 batched=True,
4 remove_columns=ds.column_names,
----> 5 features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")})
6 )
12 frames
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1971 new_fingerprint=new_fingerprint,
1972 disable_tqdm=disable_tqdm,
-> 1973 desc=desc,
1974 )
1975 else:
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
518 self: "Dataset" = kwargs.pop("self")
519 # apply actual function
--> 520 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
521 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
522 for dataset in datasets:
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
485 }
486 # apply actual function
--> 487 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
488 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
489 # re-apply format to the output
[/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
456 # Call actual function
457
--> 458 out = func(self, *args, **kwargs)
459
460 # Update fingerprint of in-place transforms + update in-place history of transforms
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2354 writer.write_table(batch)
2355 else:
-> 2356 writer.write_batch(batch)
2357 if update_data and writer is not None:
2358 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_batch(self, batch_examples, writer_batch_size)
505 col_try_type = try_features[col] if try_features is not None and col in try_features else None
506 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 507 arrays.append(pa.array(typed_sequence))
508 inferred_features[col] = typed_sequence.get_inferred_type()
509 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in __arrow_array__(self, type)
175 storage = list_of_np_array_to_pyarrow_listarray(data, type=pa_type.value_type)
176 else:
--> 177 storage = pa.array(data, pa_type.storage_dtype)
178 return pa.ExtensionArray.from_storage(pa_type, storage)
179
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Can only convert 1-dimensional array values
```
**Describe the solution you'd like**
No error in the second scenario and an identical result to the following snippets.
**Describe alternatives you've considered**
There are other alternatives that work such as:
```python
def prepare_dataset_3D_bis(batch):
batch["pixel_values"] = [[data_map[index].tolist()] for index in batch["id"]]
return batch
ds_3D_bis = ds.map(
prepare_dataset_3D_bis,
batched=True,
remove_columns=ds.column_names,
features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")})
)
```
or
```python
def prepare_dataset_3D_ter(batch):
batch["pixel_values"] = [data_map[index][np.newaxis, :, :] for index in batch["id"]]
return batch
ds_3D_ter = ds.map(
prepare_dataset_3D_ter,
batched=True,
remove_columns=ds.column_names,
features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")})
)
```
But both solutions require the user to be aware that `data_map[index]` is an `np.array` type.
cc @lhoestq as we discuss this offline :smile: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4191/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4191/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4190/comments | https://api.github.com/repos/huggingface/datasets/issues/4190/events | https://github.com/huggingface/datasets/pull/4190 | 1,209,901,677 | PR_kwDODunzps42gK3y | 4,190 | Deprecate `shard_size` in `push_to_hub` in favor of `max_shard_size` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-20T16:08:01 | 2022-04-22T13:58:25 | 2022-04-22T13:52:00 | CONTRIBUTOR | null | This PR adds a `max_shard_size` param to `push_to_hub` and deprecates `shard_size` in favor of this new param to have a more descriptive name (a shard has at most the `shard_size` bytes in `push_to_hub`) for the param and to align the API with [Transformers](https://github.com/huggingface/transformers/blob/ff06b177917384137af2d9585697d2d76c40cdfc/src/transformers/modeling_utils.py#L1350).
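A sketch of the renamed argument in use (the repository id and size here are only illustrative):
```python
# Push with the new, more descriptive argument name; shards are capped at ~500MB each.
ds.push_to_hub("username/my_dataset", max_shard_size="500MB")
```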
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4190/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4190",
"html_url": "https://github.com/huggingface/datasets/pull/4190",
"diff_url": "https://github.com/huggingface/datasets/pull/4190.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4190.patch",
"merged_at": "2022-04-22T13:52:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4189/comments | https://api.github.com/repos/huggingface/datasets/issues/4189/events | https://github.com/huggingface/datasets/pull/4189 | 1,209,881,351 | PR_kwDODunzps42gGv5 | 4,189 | Document how to use FAISS index for special operations | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-20T15:51:56 | 2022-05-06T08:43:10 | 2022-05-06T08:35:52 | MEMBER | null | Document how to use FAISS index for special operations, by accessing the index itself.
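A short sketch of the kind of access being documented, building an index and then reaching the underlying faiss object (column and index names are illustrative):
```python
# Build a FAISS index on an embeddings column, then grab the underlying faiss index
# object to run operations that `datasets` does not wrap itself.
ds.add_faiss_index(column="embeddings")
faiss_index = ds.get_index("embeddings").faiss_index  # raw faiss index
```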
Close #4029. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4189/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4189",
"html_url": "https://github.com/huggingface/datasets/pull/4189",
"diff_url": "https://github.com/huggingface/datasets/pull/4189.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4189.patch",
"merged_at": "2022-05-06T08:35:52"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4188/comments | https://api.github.com/repos/huggingface/datasets/issues/4188/events | https://github.com/huggingface/datasets/pull/4188 | 1,209,740,957 | PR_kwDODunzps42fpMv | 4,188 | Support streaming cnn_dailymail dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Did you run the `datasets-cli` command before merging to make sure you generate all the examples ?"
] | 2022-04-20T14:04:36 | 2022-05-11T13:39:06 | 2022-04-20T15:52:49 | MEMBER | null | Support streaming cnn_dailymail dataset.
Fix #3969.
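A quick way to exercise the new behaviour (using the dataset's standard config name):
```python
from datasets import load_dataset

# Streaming mode iterates over examples without downloading/extracting the full archives first.
ds = load_dataset("cnn_dailymail", "3.0.0", split="train", streaming=True)
print(next(iter(ds)))
```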
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4188/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4188",
"html_url": "https://github.com/huggingface/datasets/pull/4188",
"diff_url": "https://github.com/huggingface/datasets/pull/4188.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4188.patch",
"merged_at": "2022-04-20T15:52:49"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4187/comments | https://api.github.com/repos/huggingface/datasets/issues/4187/events | https://github.com/huggingface/datasets/pull/4187 | 1,209,721,532 | PR_kwDODunzps42flGp | 4,187 | Don't duplicate data when encoding audio or image | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm not familiar with the concept of streaming vs non-streaming in HF datasets. I just wonder that you have the distinction here. Why doesn't it work to always make use of `bytes`? \"using a local file - which is often required for audio\" - why would that be?\r\n\r\nThe `path` would always point to some location in the `cache_dir`? I think this can be problematic. I would have expected that after I did `dataset.save_to_disk(...)` that I can remove the cache dir. But maybe just because I'm not familiar with HF. Or maybe the docs can be improved to clarify this.\r\n",
"We could always load every data file into `bytes` and save it this way the audio as bytes in `arrow` format, but the problem then would be that it makes the `file` column useless, *i.e.* people cannot inspect the audio file locally anymore or else they would need to first save bytes as a file which is not evident. This either breaks backwards compatibility or forces the user to stored 2x the required size locally. There was a longer discussion here: https://github.com/huggingface/datasets/issues/3663\r\n\r\nIt's a good argument though that `dataset.save_to_disk(...)` should save everything that is needed to the disk and should be independent of other folders, but I do think the arguments of #3663 to not break backwards compatibility and to allow people to inspect the downloaded audio files locally are a bit more important here. \r\n\r\nBut maybe, we could add a flag, `save_files_as_bytes` or `make_independent`, `make_self_contained` or a better name to `save_to_disk(...)` and `push_to_hub(...)` that would allow to make the resulting folder completely independent. ",
"What do you think @mariosasko @lhoestq @polinaeterna @anton-l ?\r\n",
"For context: you can either store the path to local images or audio files, or the bytes of those files.\r\n\r\nIf your images and audio files are local files, then the arrow file from `save_to_disk` will store paths to these files.\r\nIf you want to include the bytes or your images or audio files instead, you must `read()` those files first.\r\nThis can be done by storing the \"bytes\" instead of the \"path\" of the images or audio files.\r\n\r\nOn the other hand, the resulting Parquet files from `push_to_hub` are self-contained, so that anyone can reload the dataset from the Hub. If your dataset contains image or audio data, the Parquet files will store the bytes of your images or audio files.\r\n\r\nFor now I just updated the documentation: https://github.com/huggingface/datasets/pull/4193. Maybe we can also embed the image and audio bytes in `save_to_disk` when we implement sharding, so that is can be done as efficiently as `push_to_hub`.\r\n\r\nAnyway, merging this one :)"
] | 2022-04-20T13:50:37 | 2022-04-21T09:17:00 | 2022-04-21T09:10:47 | MEMBER | null | Right now if you pass both the `bytes` and a local `path` for audio or image data, then the `bytes` are unnecessarily written in the Arrow file, while we could just keep the local `path`.
This PR discards the `bytes` when the audio or image file exists locally.
In particular, it's common for audio dataset builders to provide both the bytes and the local path in order to work in both streaming mode (using the bytes) and non-streaming mode (using a local file, which is often required for audio).
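An illustrative example of the situation this PR targets (the file name is made up): an encoded example can carry both a local path and the raw bytes, and after this change only the path is kept in the Arrow file when the local file exists:
```python
# One encoded audio example providing both fields; with this PR, the redundant
# "bytes" are dropped at write time because the local file is available.
example = {
    "audio": {
        "path": "clips/sample_0001.mp3",
        "bytes": open("clips/sample_0001.mp3", "rb").read(),
    }
}
```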
cc @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4187/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4187",
"html_url": "https://github.com/huggingface/datasets/pull/4187",
"diff_url": "https://github.com/huggingface/datasets/pull/4187.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4187.patch",
"merged_at": "2022-04-21T09:10:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4186/comments | https://api.github.com/repos/huggingface/datasets/issues/4186/events | https://github.com/huggingface/datasets/pull/4186 | 1,209,463,599 | PR_kwDODunzps42evF5 | 4,186 | Fix outdated docstring about default dataset config | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-20T10:04:51 | 2022-04-22T12:54:44 | 2022-04-22T12:48:31 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4186/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4186",
"html_url": "https://github.com/huggingface/datasets/pull/4186",
"diff_url": "https://github.com/huggingface/datasets/pull/4186.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4186.patch",
"merged_at": "2022-04-22T12:48:31"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4185/comments | https://api.github.com/repos/huggingface/datasets/issues/4185/events | https://github.com/huggingface/datasets/issues/4185 | 1,209,429,743 | I_kwDODunzps5IFm7v | 4,185 | Librispeech documentation, clarification on format | {
"login": "albertz",
"id": 59132,
"node_id": "MDQ6VXNlcjU5MTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/59132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertz",
"html_url": "https://github.com/albertz",
"followers_url": "https://api.github.com/users/albertz/followers",
"following_url": "https://api.github.com/users/albertz/following{/other_user}",
"gists_url": "https://api.github.com/users/albertz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertz/subscriptions",
"organizations_url": "https://api.github.com/users/albertz/orgs",
"repos_url": "https://api.github.com/users/albertz/repos",
"events_url": "https://api.github.com/users/albertz/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"(@patrickvonplaten )",
"Also cc @lhoestq here",
"The documentation in the code is definitely outdated - thanks for letting me know, I'll remove it in https://github.com/huggingface/datasets/pull/4184 .\r\n\r\nYou're exactly right `audio` `array` already decodes the audio file to the correct waveform. This is done on the fly, which is also why one should **not** do `ds[\"audio\"][\"array\"][0]` as this will decode all dataset samples, but instead `ds[0][\"audio\"][\"array\"]` see: https://huggingface.co/docs/datasets/audio_process#audio-datasets\r\n\r\n",
"So, again to clarify: On disk, only the raw flac file content is stored? Is this also the case after `save_to_disk`?\r\n\r\nAnd is it simple to also store it re-encoded as ogg or mp3 instead?\r\n",
"Hey, \r\n\r\nSorry yeah I was just about to look into this! We actually had an outdated version of Librispeech ASR that didn't save any files, but instead converted the audio files to a byte string, then was then decoded on-the-fly. This however is not very user-friendly so we recently decided to instead show the full path of the audio files with the `path` parameter.\r\n\r\nI'm currently changing this for Librispeech here: https://github.com/huggingface/datasets/pull/4184 .\r\nYou should be able to see the audio file in the original `flac` format under `path` then. I don't think it's a good idea to convert to MP3 out-of-the-box, but we could maybe think about some kind of convert function for audio datasets cc @lhoestq ? ",
"> I don't think it's a good idea to convert to MP3 out-of-the-box, but we could maybe think about some kind of convert function for audio datasets cc @lhoestq ?\r\n\r\nSure, I would expect that `load_dataset(\"librispeech_asr\")` would give you the original (not re-encoded) data (flac or already decoded). So such re-encoding logic would be some separate generic function. So I could do sth like `dataset.reencode_as_ogg(**ogg_encode_opts).save_to_disk(...)` or so.\r\n",
"A follow-up question: I wonder whether a Parquet dataset is maybe more what we actually want to have? (Following also my comment here: https://github.com/huggingface/datasets/pull/4184#issuecomment-1105045491.) Because I think we actually would prefer to embed the data content in the dataset.\r\n\r\nSo, instead of `save_to_disk`/`load_from_disk`, we would use `to_parquet`,`from_parquet`? Is there any downside? Are arrow files more efficient?\r\n\r\nRelated is also the doc update in #4193.\r\n",
"`save_to_disk` saves the dataset as an Arrow file, which is the format we use to load a dataset using memory mapping. This way the dataset does not fill your RAM, but is read from your disk instead.\r\n\r\nTherefore you can directly reload a dataset saved with `save_to_disk` using `load_from_disk`.\r\n\r\nParquet files are used for cold storage: to use memory mapping on a Parquet dataset, you first have to convert it to Arrow. We use Parquet to reduce the I/O when pushing/downloading data from the Hugging face Hub. When you load a Parquet file from the Hub, it is converted to Arrow on the fly during the download."
] | 2022-04-20T09:35:55 | 2022-04-21T11:00:53 | null | NONE | null | https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is not converted to a float32 array. To convert the audio
> file to a float32 array, please make use of the `.map()` function as follows:
>
> ```python
> import soundfile as sf
> def map_to_array(batch):
> speech_array, _ = sf.read(batch["file"])
> batch["speech"] = speech_array
> return batch
> dataset = dataset.map(map_to_array, remove_columns=["file"])
> ```
Is this still true?
In my case, `ds["train.100"]` returns:
```
Dataset({
features: ['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'],
num_rows: 28539
})
```
and taking the first instance yields:
```
{'file': '374-180298-0000.flac',
'audio': {'path': '374-180298-0000.flac',
'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ...,
-2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
'sampling_rate': 16000},
'text': 'CHAPTER SIXTEEN I MIGHT HAVE TOLD YOU OF THE BEGINNING OF THIS LIAISON IN A FEW LINES BUT I WANTED YOU TO SEE EVERY STEP BY WHICH WE CAME I TO AGREE TO WHATEVER MARGUERITE WISHED',
'speaker_id': 374,
'chapter_id': 180298,
'id': '374-180298-0000'}
```
The `audio` `array` seems to be decoded already. So the convert/decode code mentioned in the doc is outdated?
But I wonder: is it actually stored as flac on disk and decoded on the fly, or was it already decoded during preparation and stored as raw samples on disk?
Note that I also used `datasets.load_dataset("librispeech_asr", "clean").save_to_disk(...)` and then `datasets.load_from_disk(...)` in this example. Does this change anything about how it is stored on disk?
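Either way, the recommended access pattern is row-first indexing, which decodes a single example on the fly. A minimal sketch, assuming the `clean` config and a current `datasets` version:
```python
from datasets import load_dataset

ds = load_dataset("librispeech_asr", "clean", split="validation")

# Row-first indexing decodes only this one example.
sample = ds[0]
print(sample["audio"]["path"])           # path of the original .flac file
print(sample["audio"]["array"].shape)    # decoded waveform as a NumPy array
print(sample["audio"]["sampling_rate"])  # 16000 for Librispeech

# Column-first access (ds["audio"]) would decode every example at once, so it is best avoided.
```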
A small related question: Actually I would prefer to even store it as mp3 or ogg on disk. Is this easy to convert? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4185/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4184/comments | https://api.github.com/repos/huggingface/datasets/issues/4184/events | https://github.com/huggingface/datasets/pull/4184 | 1,208,592,669 | PR_kwDODunzps42cB2j | 4,184 | [Librispeech] Add 'all' config | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Fix https://github.com/huggingface/datasets/issues/4179",
"_The documentation is not available anymore as the PR was closed or merged._",
"Just that I understand: With this change, simply doing `load_dataset(\"librispeech_asr\")` is possible and returns the whole dataset?\r\n\r\nAnd to get the subsets, I do sth like:\r\n```python\r\nds = load_dataset(\"librispeech_asr\")\r\ntrain_ds = ds[\"train\"]\r\ndev_clean_ds = ds[\"dev-clean\"]\r\ndev_other_ds = ds[\"dev-other\"]\r\ntest_clean_ds = ds[\"test-clean\"]\r\ntest_other_ds = ds[\"test-other\"]\r\n```\r\n?\r\n",
"> Just that I understand: With this change, simply doing `load_dataset(\"librispeech_asr\")` is possible and returns the whole dataset?\r\n> \r\n> And to get the subsets, I do sth like:\r\n> \r\n> ```python\r\n> ds = load_dataset(\"librispeech_asr\")\r\n> train_ds = ds[\"train\"]\r\n> dev_clean_ds = ds[\"dev-clean\"]\r\n> dev_other_ds = ds[\"dev-other\"]\r\n> test_clean_ds = ds[\"test-clean\"]\r\n> test_other_ds = ds[\"test-other\"]\r\n> ```\r\n> \r\n> ?\r\n\r\nYou could do:\r\n\r\n\r\n```python\r\nds = load_dataset(\"librispeech_asr\", \"all\") # <- note that we have to pass a config\r\ntrain_ds = ds[\"train\"]\r\ndev_clean_ds = ds[\"dev-clean\"]\r\ndev_other_ds = ds[\"dev-other\"]\r\ntest_clean_ds = ds[\"test-clean\"]\r\ntest_other_ds = ds[\"test-other\"]\r\n```",
"So, `load_dataset(\"librispeech_asr\")` is not possible, it must be `load_dataset(\"librispeech_asr\", \"all\")`?\r\n\r\nWhy is that?\r\n\r\nThe docs say:\r\n```\r\nname: `str` name, optional configuration for the dataset that affects the data generated on disk. Different\r\n `builder_config`s will have their own subdirectories and versions.\r\n If not provided, uses the first configuration in self.BUILDER_CONFIGS\r\n```\r\nhttps://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/src/datasets/builder.py#L228\r\n\r\nOr maybe you could just define `DEFAULT_CONFIG_NAME`?\r\n",
"> If not provided, uses the first configuration in self.BUILDER_CONFIGS\r\n\r\nOh crap this is outdated documentation. No it doesn't take the first config by default.\r\n\r\nEDIT: opened a PR to fix this: https://github.com/huggingface/datasets/pull/4186",
"> No it doesn't take the first config by default.\r\n\r\nBut defining `DEFAULT_CONFIG_NAME` would work?\r\n\r\nSo should we define `DEFAULT_CONFIG_NAME = \"all\"` here as well? I think this is a reasonable default config.\r\n\r\nDon't most datasets have some default config?\r\n",
"> But defining DEFAULT_CONFIG_NAME would work?\r\n>\r\n> So should we define DEFAULT_CONFIG_NAME = \"all\" here as well? I think this is a reasonable default config.\r\n\r\nYes that would work, and I also find it reasonable to do it :)\r\n\r\n> Don't most datasets have some default config?\r\n\r\nMost datasets only have one configuration, so the single configuration is the default one. Then other datasets gave several configurations, and whether they have a default one is decided case-by-case.\r\n\r\ne.g. `glue` is a benchmark and doesn't have a default task, one must choose which task of `glue` they want to use explicitely.",
"Thanks a lot for the feedback! \r\n\r\nUsing `\"all\"` now as the default config. I changed the layout a bit so that there is not a single \"train\", but instead we have multiple \"train.clean.100\", \"train.clean.360\", \"train.other.500\". This way we don't even need to do filtering and it's also cleaner IMO.\r\n\r\n@albertz - you should now be able to do the following:\r\n\r\n```python\r\nload_dataset(\"librispeech_asr\") # <- run this once to download, prepare dataset and cache everything\r\n\r\n# The following operations will be very fast since all the downloading and processing is already cached\r\ntrain_1 = load_dataset(\"librispeech_asr\", split=\"train.clean.100\")\r\nprint(train_1)\r\ntrain_2 = load_dataset(\"librispeech_asr\", split=\"train.clean.100+train.clean.360\")\r\nprint(train_2)\r\ntrain_full = load_dataset(\"librispeech_asr\", split=\"train.clean.100+train.clean.360+train.other.500\")\r\nprint(train_full)\r\ndev_clean_ds = load_dataset(\"librispeech_asr\", split=\"validation.clean\")\r\nprint(dev_clean_ds)\r\ndev_other_ds = load_dataset(\"librispeech_asr\", split=\"validation.other\")\r\nprint(dev_other_ds)\r\ntest_clean_ds = load_dataset(\"librispeech_asr\", split=\"test.clean\")\r\nprint(test_clean_ds)\r\ntest_other_ds = load_dataset(\"librispeech_asr\", split=\"test.other\")\r\nprint(test_other_ds)\r\n```\r\n\r\n\r\n",
"Think this way we have the best of both worlds. Also @lhoestq, I think we could highlight better in the docs that it's possible to combine different splits. We do this actually quite a lot for speech. For Common Voice many people include \"validation\" in the training if the data is too small, e.g.: https://github.com/huggingface/transformers/blob/ff06b177917384137af2d9585697d2d76c40cdfc/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L147\r\n\r\nShould we maybe add a short section to the loading tutorial here: https://huggingface.co/docs/datasets/v2.1.0/en/loading#hugging-face-hub ? (Happy to do it)",
"Is there any advantage or difference in calling `load_dataset` multiple times for each split? Or why not just call `load_dataset` once and then access each split?\r\n\r\nNote in our case, we cannot really use the caching mechanism because we have a recipe pipeline used by multiple users (and I think a common cache dir for all users might end up in problems) and we basically would use `load_dataset(\"librispeech_asr\").save_to_disk(...)` and then later `load_from_disk(...)`. (See here: https://github.com/rwth-i6/i6_core/pull/253)\r\n\r\nSo with `load_from_disk`, we cannot really provide the split this way, so we anyway would do sth like:\r\n```python\r\nds = datasets.load_from_disk(...)\r\ntrain = ds[\"train\"]\r\n```\r\nOr with your latest proposal, it would look like:\r\n```python\r\nds = datasets.load_from_disk(...)\r\ntrain_ds = datasets.concatenate_datasets(\r\n [ds[\"train.clean.100\"], ds[\"train.clean.360\"], ds[\"train.other.500\"]])\r\n```\r\nright?\r\n",
"> Is there any advantage or difference in calling `load_dataset` multiple times for each split? Or why not just call `load_dataset` once and then access each split?\r\n> \r\n> Note in our case, we cannot really use the caching mechanism because we have a recipe pipeline used by multiple users (and I think a common cache dir for all users might end up in problems) and we basically would use `load_dataset(\"librispeech_asr\").save_to_disk(...)` and then later `load_from_disk(...)`. (See here: [rwth-i6/i6_core#253](https://github.com/rwth-i6/i6_core/pull/253))\r\n> \r\n> So with `load_from_disk`, we cannot really provide the split this way, so we anyway would do sth like:\r\n> \r\n> ```python\r\n> ds = datasets.load_from_disk(...)\r\n> train = ds[\"train\"]\r\n> ```\r\n> \r\n> Or with your latest proposal, it would look like:\r\n> \r\n> ```python\r\n> ds = datasets.load_from_disk(...)\r\n> train_ds = datasets.concatenate_datasets(\r\n> [ds[\"train.clean.100\"], ds[\"train.clean.360\"], ds[\"train.other.500\"]])\r\n> ```\r\n> \r\n> right?\r\n\r\nI see the use case! The only advantage by calling `datasets` multiple times is that one can easily \"merge\" splits with `\"+\"`, but yeah you can do the exact same with `concatenate`.\r\n\r\n@lhoestq what do you think is the best approach with `load_from_disk`? \r\n\r\n@albertz, you could also define the `cache_dir` when doing `load_dataset(...)` which will then put all the relevant `arrow` files int the cache dir that you defined, e.g.:\r\n\r\n```python\r\nload_dataset(\"librispeech_asr\", cache_dir=\"/easy/to/access/directory\")\r\n```",
"@albertz, I took a read through https://github.com/rwth-i6/i6_core/pull/253 . \r\n\r\nI think the best would be the following:\r\n\r\n1. Do `ds = load_dataset(..., cache_dir=\"/dir/that/is/easy/to/access\")` <- having merged this PR, this will save all the original `.flac` files in the `cache_dir`\r\n2. Do `ds.save_to_disk(\"local/path\")` this should then only save the `arrow.format` with a `path` string to the audio files which are located in `cache_dir` <- this won't require a lot of memory after https://github.com/huggingface/datasets/pull/4184#discussion_r854132740 is fixed and can be done for each person individually.\r\n3. `ds = datasets.load_from_disk(\"local/path\")` can the be used. An object of `ds` will then have a `path` variable that links to the original audio files in the `cache_dir`. You can change these audio files then easily to `.mp3. You could do this with the `.map(...)` function, e.g. define a function that maps through all audio files, load them and then save them on disk afterward.",
"@lhoestq - I think this one is good to go",
"> @albertz, I took a read through [rwth-i6/i6_core#253](https://github.com/rwth-i6/i6_core/pull/253) .\r\n> \r\n> I think the best would be the following:\r\n> \r\n> 1. Do `ds = load_dataset(..., cache_dir=\"/dir/that/is/easy/to/access\")` <- having merged this PR, this will save all the original `.flac` files in the `cache_dir`\r\n> 2. Do `ds.save_to_disk(\"local/path\")` this should then only save the `arrow.format` with a `path` string to the audio files which are located in `cache_dir` <- this won't require a lot of memory after [[Librispeech] Add 'all' configΒ #4184 (comment)](https://github.com/huggingface/datasets/pull/4184#discussion_r854132740) is fixed and can be done for each person individually.\r\n> 3. `ds = datasets.load_from_disk(\"local/path\")` can the be used. An object of `ds` will then have a `path` variable that links to the original audio files in the `cache_dir`. You can change these audio files then easily to `.mp3. You could do this with the `.map(...)` function, e.g. define a function that maps through all audio files, load them and then save them on disk afterward.\r\n\r\nOh, so you say that our current implementation in https://github.com/rwth-i6/i6_core/pull/253 is broken? Because our cache dir is just some temp directory which will be removed afterwards, and we just store what we get out of `save_to_disk`. I think it would be good to clarify that in the doc of `save_to_disk`, that this is not enough and can depend on files from the cache dir. (@dthulke)\r\n\r\nSo, you say we anyway need to share the cache dir among users? But we would want to make sure that after the initial download and preparation of the data, this is set to readonly, because we want to make sure that other people will not modify the data in any way. Right?\r\n\r\nBut then, we don't really need the `save_to_disk` and `load_from_disk` at all, right?\r\n",
"@albertz \r\n\r\n> Oh, so you say that our current implementation in https://github.com/rwth-i6/i6_core/pull/253 is broken? Because our cache dir is just some temp directory which will be removed afterwards, and we just store what we get out of save_to_disk. I think it would be good to clarify that in the doc of save_to_disk, that this is not enough and can depend on files from the cache dir. (@dthulke)\r\n\r\nOh, I wasn't aware that audio files are handled this way. Then we should have the cache directory as an additional job output, so that we keep the audio files. \r\n\r\n> So, you say we anyway need to share the cache dir among users?\r\n\r\nNo, the cache dir can still be a directory in the job output folder. Then the audio paths in the corresponding dataset column correspond to the flac files in that directory. This way the \"output\" of the job is contained into the job directory and we don't write files to a global cache directory that is independent of the sisyphus graph.\r\n\r\nIf we want to share the audio data between different users, we can just link to a central instance of the job (similar to how we do it with the `DownloadLibriSpeechCorpusJob`).",
"@dthulke - that's a good point actually! So you can do both things:\r\n\r\n1. Convert all audio files to bytes. Bytes can be saved by `arrow` so in this case you can do `save_to_disk(...)`, but then you cannot really inspect the audio files locally as they'll just be saved within a large arrow file (this actually used to be the default case but we're changing this now). The problem of this is summarized here a bit: https://github.com/huggingface/datasets/issues/3663 . You can still do this if you'd like, e.g. you could do:\r\n\r\n```python\r\nds = load_dataset(\"librispeech_asr\")\r\n\r\ndef read_file(batch):\r\n with open(batch[\"file\"], \"r\") as f:\r\n batch[\"bytes\"] = f.read() \r\n return batch\r\n\r\nds = ds.map(read_file)\r\nds.save_to_disk(\"/path\") <- the saved arrow object will now contain everything you need\r\n```\r\n\r\nhowever this is not recommend - it's should be much easier to just save the path to the downloaded audio files.\r\n\r\n2. Not convert audio files to bytes, but just leave them in their original file format. Then only the path to the original files will be save in arrow. This will be the default case. This means that when you do `load_dataset(...)` both the orginal audio data and the arrow file will be saved in the `cache_dir` (which can be saved locally for every user or in a shared cache - we actually use a shared cache quite a bit at Hugging Face). When do you do `save_to_disk(...)` now only the `path` will be saved in `arrow` format (after this PR is merged, you'll see that the `arrow files should be very light weight` meaning that `save_to_disk(...)` can be done for every user, but has a dependency on the `cache_dir` (because the audio files live there).\r\n\r\n=> Now what you could do as well would be to simply move all the audio files to the folder you want (the `save_to_disk(...)` folder) and then change the path of every sample to this folder (maybe with `map(...)`) and then this folder would be self contained. I do however think it's better to just specific a `cache_dir` and re-use `load_dataset(...)` every time instead of `load_from_disk` or `save_to_disk(...)`. Note that you can even pass the relevant cache files to `load_dataset(...)` here: https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/loading_methods#datasets.load_dataset.data_files in which case you can be 100% sure that nothing is redownloaded. \r\n\r\nWe discussed storing audio files quite a bit, e.g. see: https://github.com/huggingface/datasets/issues/3663 and had (too many) changes around this topic recently, but we've come to the conclusion that the best is to leave the audio format in the format it was originally (`.flac` for Librispeech) so that the user can easily inspect it / understand the data. Arrow cannot save data is `.flac` so we'll just save a path to the original data. Curious to hear your guys opinion on this as well.",
"So what I would suggest here is to do the following:\r\n\r\n1. Do `load_dataset(..., cache_dir=/a/read-only/folder)`\r\n2. \r\n- Either just re-use `load_dataset(..., cache_dir=...)` which should always re-use the data in the `cache_dir` since the hash of the url matches - so there should never be any duplicated downloading \r\n\r\nor \r\n\r\n- If you want to store the files in MP3 locally, first convert the files to MP3 in the read-only folder, then take do `ds.save_to_disk(/some/path)` which will save the correct path to the read-only folder to MP3 and then you can easily re-use the small arrow dataset that is saved in `/some/path`",
"> So what I would suggest here is to do the following:\r\n> \r\n> 1. Do `load_dataset(..., cache_dir=/a/read-only/folder)`\r\n> \r\n> * Either just re-use `load_dataset(..., cache_dir=...)` which should always re-use the data in the `cache_dir` since the hash of the url matches - so there should never be any duplicated downloading\r\n> \r\n> or\r\n> \r\n> * If you want to store the files in MP3 locally, first convert the files to MP3 in the read-only folder, then take do `ds.save_to_disk(/some/path)` which will save the correct path to the read-only folder to MP3 and then you can easily re-use the small arrow dataset that is saved in `/some/path`\r\n\r\nAlso relevant here: https://github.com/huggingface/datasets/issues/3663",
"I also added some documentation about how `save_to_disk` handles audio files here: https://github.com/huggingface/datasets/pull/4193",
"> > So, you say we anyway need to share the cache dir among users?\r\n> \r\n> No, the cache dir can still be a directory in the job output folder.\r\n\r\n@dthulke But this is what I mean. When we share the job output folder, it means we share the cache dir among users.\r\n\r\nI wonder if `load_dataset(..., cache_dir=job_output_cache_dir)` is always save to do then, that it really would not modify the `job_output_cache_dir`.\r\n\r\nWe could enforce that by making the `job_output_cache_dir` read-only afterwards. We currently don't do this.\r\n\r\n@patrickvonplaten @dthulke But in any case, we actually prefer the data content to be inside the dataset (the arrow files). Lots of small files would be very problematic for our cache manager. We have one main copy of the data on NFS, but accessing the NFS directly by all computing nodes is not feasible, so the cache manager will have copies of the files on the nodes. So it means, whenever we access some file, we query the cache manager DB whether the file is already cached somewhere (some other computing node) and if so, it copies it from the other computing node and not from NFS. This works very well when there are not too many files (but the files can be big). So, we want to have only a few but big files. Even for NFS access this is much better.\r\n\r\nI also commented in #3663.\r\n",
"Hey @albertz @dthulke,\r\n\r\nThanks a lot for your input! \r\n\r\nWe've discussed quite a bit with @lhoestq and we think the best approach is the following:\r\n\r\n\r\na)\r\n`load_dataset(...)` will not store both bytes and the files because this would mean that 3x the size of the dataset would often be needed (1. the compressed `tar.gz` file, 2. the extracted file b, 3. the raw bytes in arrow format). \r\n\r\nFor canonical datasets like librispeech and common voice I think we want to keep the dataset filenames because of i) no breaking changes and ii) reasons explained in #3663\r\n\r\nHowever it's also trivial to write your own datasetset downloading script of librispeech and just not extract the folder e.g. this line: https://huggingface.co/datasets/common_voice/blob/main/common_voice.py#L671\r\n\r\nAnd then it'll be allowed to save the bytes and the dataset will be self-contained out-of-the-box when using `load_dataset(...)`\r\n\r\nb) Now, one major problem that you guys uncovered is that `save_to_disk(...)` is currently not necessarily saving a dataset to be self-contained. We will change that asap. This means that after we've corrected this when you do download the canonical librispeech dataset the following will work:\r\n\r\n```python\r\nds = load_dataset(\"....\") # <- here we have a dependency on the filepathes\r\nds[0][\"audio\"][\"bytes\"] # <- will not work\r\n\r\nds.save_to_disk(\"/local/path\") # <- now we want to have a self-contained dataset in arrow format, so we load the files into bytes and save it in arrow format\r\n\r\n# now you can delete everything besides \"/local/path\"\r\n\r\nds = load_from_disk(\"/local/path\") # <- this will work\r\n```\r\n\r\nSo either option a) where you define your own librispeech data downloading script (you guys could just sign up here: https://huggingface.co/join) and upload a dataset loading script in private mode so that no one can see it and you would always store the audio as bytes or b) where you first load then save to disk then delete cache would work. \r\n\r\nHope that fits in your vision :-)\r\n\r\ncc @lhoestq @mariosasko ",
"@patrickvonplaten sounds like a good approach to me. For b) this could even be configurable with a parameter like `embed_external_files` as you have for `push_to_hub` (if people prefer to keep separate audio files).\r\n",
"> However it's also trivial to write your own datasetset downloading script of librispeech and just not extract the folder\r\n\r\nI don't exactly understand. In all cases, we need to extract it to prepare the dataset, or not? No matter if we want to store the raw bytes inside the dataset or leaving them as local files. Just in the first case, we can safely delete the extracted files after the dataset preparation.\r\n\r\n> `save_to_disk(...)` is currently not necessarily saving a dataset to be self-contained. We will change that asap.\r\n\r\nFor us, this sounds exactly like what we want.\r\n\r\nBut regarding not introducing breaking changes, wouldn't this maybe also break some setups for users who don't expect this new behavior?\r\n",
"@albertz I would suggest to move the discussion on implementation details on our side to the following issue: rwth-i6/i6_core/issues/257",
"I like the idea of adding `embed_external_files` and set it to True by default to `save_to_disk`.\r\nIt's indeed a kind of breaking change since some users will get bigger Arrow files when updating the lib, but the advantages are nice:\r\n1. I like the idea of having it self contained, in case you want to delete your cache\r\n2. users also upload these Arrow files to cloud storage via the `fs` parameter, and in this case they would expect to upload a self-contained dataset\r\n3. consistency with `push_to_hub`\r\n\r\nIf it sounds good to you I'll open an issue to discuss this and track the advancements",
"Closed #4179."
] | 2022-04-19T16:27:56 | 2022-08-29T06:35:57 | 2022-04-22T09:45:17 | MEMBER | null | Add `"all"` config to Librispeech
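A usage sketch of the new config, with the split names as given in the discussion (not verified against the final script):
```python
from datasets import load_dataset

# The subsets become separate splits that can be combined with "+" in the split expression.
train_full = load_dataset(
    "librispeech_asr", "all",
    split="train.clean.100+train.clean.360+train.other.500",
)
dev_clean = load_dataset("librispeech_asr", "all", split="validation.clean")
test_other = load_dataset("librispeech_asr", "all", split="test.other")
```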
Closed #4179 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4184/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4184",
"html_url": "https://github.com/huggingface/datasets/pull/4184",
"diff_url": "https://github.com/huggingface/datasets/pull/4184.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4184.patch",
"merged_at": "2022-04-22T09:45:17"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4183/comments | https://api.github.com/repos/huggingface/datasets/issues/4183/events | https://github.com/huggingface/datasets/pull/4183 | 1,208,449,335 | PR_kwDODunzps42bjXn | 4,183 | Document librispeech configs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think the main purpose of #4179 was how to be able to load both configs into one, so should we maybe add this part of the code: https://github.com/huggingface/datasets/issues/4179#issuecomment-1102383717 \r\n\r\nto the doc? \r\n\r\nActually @lhoestq would this work given that they have different split names: https://huggingface.co/datasets/librispeech_asr#data-splits ? ",
"This doc extension does not explain why I can't simply load the whole dataset. Or what workaround I need to get the whole dataset, which is what people usually want for Librispeech.",
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq, I can add a `\"all\"` config to Librispeech have the datasets already cached somewhere ",
"I'm closing this PR then, feel free to continue the discussion in https://github.com/huggingface/datasets/issues/4179\r\n"
] | 2022-04-19T14:26:59 | 2022-04-19T15:21:36 | 2022-04-19T15:15:20 | MEMBER | null | Added an example of how to load one config or the other | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4183/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4183",
"html_url": "https://github.com/huggingface/datasets/pull/4183",
"diff_url": "https://github.com/huggingface/datasets/pull/4183.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4183.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4182/comments | https://api.github.com/repos/huggingface/datasets/issues/4182/events | https://github.com/huggingface/datasets/issues/4182 | 1,208,285,235 | I_kwDODunzps5IBPgz | 4,182 | Zenodo.org download is not responding | {
"login": "dkajtoch",
"id": 32985207,
"node_id": "MDQ6VXNlcjMyOTg1MjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dkajtoch",
"html_url": "https://github.com/dkajtoch",
"followers_url": "https://api.github.com/users/dkajtoch/followers",
"following_url": "https://api.github.com/users/dkajtoch/following{/other_user}",
"gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions",
"organizations_url": "https://api.github.com/users/dkajtoch/orgs",
"repos_url": "https://api.github.com/users/dkajtoch/repos",
"events_url": "https://api.github.com/users/dkajtoch/events{/privacy}",
"received_events_url": "https://api.github.com/users/dkajtoch/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"[Off topic but related: Is the uptime of S3 provably better than Zenodo's?]",
"Hi @dkajtoch, please note that at HuggingFace we are not hosting this dataset: we are just using a script to download their data file and create a dataset from it.\r\n\r\nIt was the dataset owners decision to host their data at Zenodo. You can see this on their website: https://marcobaroni.org/composes/sick.html\r\n\r\nAnd yes, you are right: Zenodo is currently having some incidents and people are reporting problems from it.\r\n\r\nOn the other hand, we could contact the data owners and propose them to host their data at our Hugging Face Hub.\r\n\r\n@julien-c I guess so.\r\n",
"Thanks @albertvillanova. I know that the problem lies in the source data. I just wanted to point out that these kind of problems are unavoidable without having one place where data sources are cached. Websites may go down or data sources may move. Having a copy in Hugging Face Hub would be a great solution. ",
"Definitely, @dkajtoch! But we have to ask permission to the data owners. And many dataset licenses directly forbid data redistribution: in those cases we are not allowed to host their data on our Hub.",
"Ahhh good point! License is the problem :("
] | 2022-04-19T12:26:57 | 2022-04-20T07:11:05 | 2022-04-20T07:11:05 | CONTRIBUTOR | null | ## Describe the bug
The source download URL at zenodo.org does not respond.
`_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"`
Other datasets also use zenodo.org to store data, and they cannot be downloaded either.
It would be better to use a more reliable way to store the original data, such as an S3 bucket.
## Steps to reproduce the bug
```python
load_dataset("sick")
```
## Expected results
Dataset should be downloaded.
## Actual results
ConnectionError: Couldn't reach https://zenodo.org/record/2787612/files/SICK.zip?download=1 (ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out. (read timeout=100)")))
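Until the host recovers, a client-side retry loop is about the only mitigation; a rough sketch, assuming the installed `datasets` version exposes `DownloadConfig(max_retries=...)`:
```python
import time

from datasets import DownloadConfig, load_dataset

# Retry a few times with a pause, in case zenodo.org is only intermittently unreachable.
for attempt in range(5):
    try:
        ds = load_dataset("sick", download_config=DownloadConfig(max_retries=3))
        break
    except ConnectionError:
        if attempt == 4:
            raise
        time.sleep(60)
```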
## Environment info
- `datasets` version: 2.1.0
- Platform: Darwin-21.4.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4182/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4182/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4181/comments | https://api.github.com/repos/huggingface/datasets/issues/4181/events | https://github.com/huggingface/datasets/issues/4181 | 1,208,194,805 | I_kwDODunzps5IA5b1 | 4,181 | Support streaming FLEURS dataset | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Yes, you just have to use `dl_manager.iter_archive` instead of `dl_manager.download_and_extract`.\r\n\r\nThat's because `download_and_extract` doesn't support TAR archives in streaming mode.",
"Tried to make it streamable, but I don't think it's really possible. @lhoestq @polinaeterna maybe you guys can check: \r\nhttps://huggingface.co/datasets/google/fleurs/commit/dcf80160cd77977490a8d32b370c027107f2407b \r\n\r\nreal quick. \r\n\r\nI think the problem is that we cannot ensure that the metadata file is found before the audio. Or is this possible somehow @lhoestq ? ",
"@patrickvonplaten I think the metadata file should be found first because the audio files are contained in a folder next to the metadata files (just as in common voice), so the metadata files should be \"on top of the list\" as they are closer to the root in the directories hierarchy ",
"@patrickvonplaten but apparently it doesn't... I don't really know why.",
"Yeah! Any ideas what could be the reason here? cc @lhoestq ?",
"The order of the files is determined when the TAR archive is created, depending on the commands the creator ran.\r\nIf the metadata file is not at the beginning of the file, that makes streaming completely inefficient. In this case the TAR archive needs to be recreated in an appropriate order.",
"Actually we could maybe just host the metadata file ourselves and then stream the audio data only. Don't think that this would be a problem for the FLEURS authors (I can ask them :-)) ",
"I made a PR to their repo to support streaming (by uploading the metadata file to the Hub). See:\r\n- https://huggingface.co/datasets/google/fleurs/discussions/4",
"I'm closing this issue as the PR above has been merged."
] | 2022-04-19T11:09:56 | 2022-07-25T11:44:02 | 2022-07-25T11:44:02 | MEMBER | null | ## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
```
Am I the one who added this dataset? Yes
Can I fix this somehow in the script? @lhoestq @severo
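For reference, a rough sketch of the `dl_manager.iter_archive` pattern the error message asks for; the URL is the one from the message, while the class and feature names are only illustrative and not the actual FLEURS script:
```python
import datasets

_AUDIO_URL = "https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz"


class FleursSketch(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"path": datasets.Value("string"), "audio": datasets.Audio(sampling_rate=16_000)}
            )
        )

    def _split_generators(self, dl_manager):
        # Download without extracting; iter_archive then streams members straight out of the TAR.
        archive = dl_manager.download(_AUDIO_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        for key, (path, f) in enumerate(files):
            # f is a file-like object; reading its bytes works in both streaming and non-streaming mode.
            yield key, {"path": path, "audio": {"path": path, "bytes": f.read()}}
```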
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4181/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4180/comments | https://api.github.com/repos/huggingface/datasets/issues/4180/events | https://github.com/huggingface/datasets/issues/4180 | 1,208,042,320 | I_kwDODunzps5IAUNQ | 4,180 | Add some iteration method on a dataset column (specific for inference) | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Thanks for the suggestion ! I agree it would be nice to have something directly in `datasets` to do something as simple as that\r\n\r\ncc @albertvillanova @mariosasko @polinaeterna What do you think if we have something similar to pandas `Series` that wouldn't bring everything in memory when doing `dataset[\"audio\"]` ? Currently it returns a list with all the decoded audio data in memory.\r\n\r\nIt would be a breaking change though, since `isinstance(dataset[\"audio\"], list)` wouldn't work anymore, but we could implement a `Sequence` so that `dataset[\"audio\"][0]` still works and only loads one item in memory.\r\n\r\nYour alternative suggestion with `iterate` is also sensible, though maybe less satisfactory in terms of experience IMO",
"I agree that current behavior (decoding all audio file sin the dataset when accessing `dataset[\"audio\"]`) is not useful, IMHO. Indeed in our docs, we are constantly warning our collaborators not to do that.\r\n\r\nTherefore I upvote for a \"useful\" behavior of `dataset[\"audio\"]`. I don't think the breaking change is important in this case, as I guess no many people use it with its current behavior. Therefore, for me it seems reasonable to return a generator (instead of an in-memeory list) for \"special\" features, like Audio/Image.\r\n\r\n@lhoestq on the other hand I don't understand your proposal about Pandas-like... ",
"I recall I had the same idea while working on the `Image` feature, so I agree implementing something similar to `pd.Series` that lazily brings elements in memory would be beneficial.",
"@lhoestq @mariosasko Could you please give a link to that new feature of `pandas.Series`? As far as I remember since I worked with pandas for more than 6 years, there was no lazy in-memory feature; it was everything in-memory; that was the reason why other frameworks were created, like Vaex or Dask, e.g. ",
"Yea pandas doesn't do lazy loading. I was referring to pandas.Series to say that they have a dedicated class to represent a column ;)"
] | 2022-04-19T09:15:45 | 2022-04-21T10:30:58 | null | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset.
Having an iterator (or sequence) type of object would make inference with `transformers`' `pipeline` easier to use and less memory-hungry.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
For a non-breaking change:
```python
for audio in dataset.iterate("audio"):
# {"array": np.array(...), "sampling_rate":...}
```
For a breaking change solution (not necessary), changing the type of `dataset["audio"]` to a sequence type so that
```python
pipe = pipeline(model="...")
for out in pipe(dataset["audio"]):
# {"text":....}
```
could work
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
```python
def iterate(dataset, key):
for item in dataset:
        yield item[key]
for out in pipe(iterate(dataset, "audio")):
# {"array": ...}
```
This works but requires the helper function which feels slightly clunky.
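Another workaround that avoids the hand-written helper is the `KeyDataset` wrapper shipped by recent `transformers` versions; a sketch, assuming an ASR pipeline that accepts `datasets`-style audio dictionaries:
```python
from datasets import load_dataset
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset

ds = load_dataset("librispeech_asr", "clean", split="validation")
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

# KeyDataset pulls one row at a time and hands only the "audio" value to the pipeline,
# so the column is never materialized in RAM all at once.
for out in pipe(KeyDataset(ds, "audio")):
    print(out["text"])
```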
**Additional context**
Add any other context about the feature request here.
The context is actually to showcase better integration between `pipeline` and `datasets` in the Quicktour demo: https://github.com/huggingface/transformers/pull/16723/files
@lhoestq
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4180/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4179/comments | https://api.github.com/repos/huggingface/datasets/issues/4179/events | https://github.com/huggingface/datasets/issues/4179 | 1,208,001,118 | I_kwDODunzps5IAKJe | 4,179 | Dataset librispeech_asr fails to load | {
"login": "albertz",
"id": 59132,
"node_id": "MDQ6VXNlcjU5MTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/59132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertz",
"html_url": "https://github.com/albertz",
"followers_url": "https://api.github.com/users/albertz/followers",
"following_url": "https://api.github.com/users/albertz/following{/other_user}",
"gists_url": "https://api.github.com/users/albertz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertz/subscriptions",
"organizations_url": "https://api.github.com/users/albertz/orgs",
"repos_url": "https://api.github.com/users/albertz/repos",
"events_url": "https://api.github.com/users/albertz/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"@patrickvonplaten Hi! I saw that you prepared this? :)",
"Another thing, but maybe this should be a separate issue: As I see from the code, it would try to use up to 16 simultaneous downloads? This is problematic for Librispeech or anything on OpenSLR. On [the homepage](https://www.openslr.org/), it says:\r\n\r\n> If you want to download things from this site, please download them one at a time, and please don't use any fancy software-- just download things from your browser or use 'wget'. We have a firewall rule to drop connections from hosts with more than 5 simultaneous connections, and certain types of download software may activate this rule.\r\n\r\nRelated: https://github.com/tensorflow/datasets/issues/3885",
"Hey @albertz,\r\n\r\nNice to see you here! It's been a while ;-) ",
"Sorry maybe the docs haven't been super clear here. By `split` we mean one of `train.500`, `train.360`, `train.100`, `validation`, `test`. For Librispeech, you'll have to specific a config (either `other` or `clean`) though:\r\n\r\n```py\r\ndatasets.load_dataset(\"librispeech_asr\", \"clean\")\r\n```\r\n\r\nshould work and give you all splits (being \"train\", \"test\", ...) for the clean config of the dataset.\r\n",
"If you need both `\"clean\"` and `\"other\"` I think you'll have to do concatenate them as follows: \r\n\r\n```py\r\nfrom datasets import concatenate_datasets, load_dataset\r\n\r\nother = load_dataset(\"librispeech_asr\", \"other\")\r\nclean = load_dataset(\"librispeech_asr\", \"clean\")\r\n\r\nlibrispeech = concatenate_datasets([other, clean])\r\n```\r\n\r\nSee https://huggingface.co/docs/datasets/v2.1.0/en/process#concatenate",
"Downloading one split would be:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\n\r\nother = load_dataset(\"librispeech_asr\", \"other\", split=\"train.500\")\r\n```\r\n\r\n\r\n",
"cc @lhoestq FYI maybe the docs can be improved here",
"Ah thanks. But wouldn't it be easier/nicer (and more canonical) to just make it in a way that simply `load_dataset(\"librispeech_asr\")` works?",
"Pinging @lhoestq here, think this could make sense! Not sure however how the dictionary would then look like",
"Would it make sense to have `clean` as the default config ?\r\n\r\nAlso I think `load_dataset(\"librispeech_asr\")` should have raised you an error that says that you need to specify a config\r\n\r\nI also opened a PR to improve the doc: https://github.com/huggingface/datasets/pull/4183",
"> Would it make sense to have `clean` as the default config ?\r\n\r\nI think a user would expect that the default would give you the full dataset.\r\n\r\n> Also I think `load_dataset(\"librispeech_asr\")` should have raised you an error that says that you need to specify a config\r\n\r\nIt does raise an error, but this error confused me because I did not understand why I needed a config, or why I could not simply download the whole dataset, which is what people usually do with Librispeech.\r\n",
"+1 for @albertz. Also think lots of people download the whole dataset (`\"clean\"` + `\"other\"`) for Librispeech.\r\n\r\nThink there are also some people though who:\r\n- a) Don't have the memory to store the whole dataset\r\n- b) Just want to evaluate on one of the two configs",
"Ok ! Adding the \"all\" configuration would do the job then, thanks ! In the \"all\" configuration we can merge all the train.xxx splits into one \"train\" split, or keep them separate depending on what's the most practical to use (probably put everything in \"train\" no ?)",
"I'm not too familiar with how to work with HuggingFace datasets, but people often do some curriculum learning scheme, where they start with train.100, later go over to train.100 + train.360, and then later use the whole train (960h). It would be good if this is easily possible.\r\n",
"Hey @albertz, \r\n\r\nopened a PR here. Think by adding the \"subdataset\" class to each split \"train\", \"dev\", \"other\" as shown here: https://github.com/huggingface/datasets/pull/4184/files#r853272727 it should be easily possible (e.g. with the filter function https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/main_classes#datasets.Dataset.filter )",
"But also since everything is cached one could also just do:\r\n\r\n```python\r\nload_dataset(\"librispeech\", \"clean\", \"train.100\")\r\nload_dataset(\"librispeech\", \"clean\", \"train.100+train.360\")\r\nload_dataset(\"librispeech\" \"all\", \"train\") \r\n```",
"Hi @patrickvonplaten ,\r\n\r\nload_dataset(\"librispeech_asr\", \"clean\", \"train.100\") actually downloads the whole dataset and not the 100 hr split, is this a bug?",
"Hmm, I don't really see how that's possible: https://github.com/huggingface/datasets/blob/d22e39a0693d4be7410cf9a5d41fd5aac22be3cc/datasets/librispeech_asr/librispeech_asr.py#L51\r\n\r\nNote that all datasets related to `\"clean\"` are downloaded, but only `\"train.100\"` should be used. \r\n\r\ncc @lhoestq @albertvillanova @mariosasko can we do anything against download dataset links that are not related to the \"split\" that one actually needs. E.g. why should the split `\"train.360\"` be downloaded if for the user executes the above command:\r\n\r\n```py\r\nload_dataset(\"librispeech_asr\", \"clean\", \"train.100\")\r\n```",
"@patrickvonplaten This problem is a bit harder than it may seem, and it has to do with how our scripts are structured - `_split_generators` downloads data for a split before its definition. There was an attempt to fix this in https://github.com/huggingface/datasets/pull/2249, but it wasn't flexible enough. Luckily, I have a plan of attack, and this issue is on our short-term roadmap, so I'll work on it soon.\r\n\r\nIn the meantime, one can use streaming or manually download a dataset script, remove unwanted splits and load a dataset via `load_dataset`.",
"> load_dataset(\"librispeech_asr\", \"clean\", \"train.100\") actually downloads the whole dataset and not the 100 hr split, is this a bug?\r\n\r\nSince this bug is still there and google led me here when I was searching for a solution, I am writing down how to quickly fix it (as suggested by @mariosasko) for whoever else is not familiar with how the HF Hub works.\r\n\r\nDownload the [librispeech_asr.py](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py) script and remove the unwanted splits both from the [`_DL_URLS` dictionary](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py#L47-L68) and from the [`_split_generators` function](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py#L121-L241).\r\n[Here ](https://huggingface.co/datasets/andreagasparini/librispeech_test_only) I made an example with only the test sets.\r\n\r\nThen either save the script locally and load the dataset via \r\n```python\r\nload_dataset(\"${local_path}/librispeech_asr.py\")\r\n```\r\n\r\nor [create a new dataset repo on the hub](https://huggingface.co/new-dataset) named \"librispeech_asr\" and upload the script there, then you can just run\r\n```python\r\nload_dataset(\"${hugging_face_username}/librispeech_asr\")\r\n```",
"Fixed by https://github.com/huggingface/datasets/pull/4184"
] | 2022-04-19T08:45:48 | 2022-07-27T16:10:00 | 2022-07-27T16:10:00 | NONE | null | ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/librispeech_asr), it says it has two configurations (clean and other).
However, the dataset doc says that not specifying `split` should just load the whole dataset, which is what I want.
Also, in the case of this specific dataset, this is the standard that the community uses: when you look at publications with results on Librispeech, they always use the whole train set for training.
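For the record, getting the full 960-hour training set by loading each config explicitly can be sketched as follows (split names as documented for the `clean` and `other` configs):
```python
from datasets import concatenate_datasets, load_dataset

clean_100 = load_dataset("librispeech_asr", "clean", split="train.100")
clean_360 = load_dataset("librispeech_asr", "clean", split="train.360")
other_500 = load_dataset("librispeech_asr", "other", split="train.500")

# Roughly 960 hours of training data in total.
train_full = concatenate_datasets([clean_100, clean_360, other_500])
print(train_full)
```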
## Actual results
```
...
File "/home/az/.cache/huggingface/modules/datasets_modules/datasets/librispeech_asr/1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c/librispeech_asr.py", line 119, in LibrispeechASR._split_generators
line: archive_path = dl_manager.download(_DL_URLS[self.config.name])
locals:
archive_path = <not found>
dl_manager = <local> <datasets.utils.download_manager.DownloadManager object at 0x7fc07b426160>
dl_manager.download = <local> <bound method DownloadManager.download of <datasets.utils.download_manager.DownloadManager object at 0x7fc07b426160>>
_DL_URLS = <global> {'clean': {'dev': 'http://www.openslr.org/resources/12/dev-clean.tar.gz', 'test': 'http://www.openslr.org/resources/12/test-clean.tar.gz', 'train.100': 'http://www.openslr.org/resources/12/train-clean-100.tar.gz', 'train.360': 'http://www.openslr.org/resources/12/train-clean-360.tar.gz'}, 'other'...
self = <local> <datasets_modules.datasets.librispeech_asr.1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c.librispeech_asr.LibrispeechASR object at 0x7fc12a633310>
self.config = <local> BuilderConfig(name='default', version=0.0.0, data_dir='/home/az/i6/setups/2022-03-20--sis/work/i6_core/datasets/huggingface/DownloadAndPrepareHuggingFaceDatasetJob.TV6Nwm6dFReF/output/data_dir', data_files=None, description=None)
self.config.name = <local> 'default', len = 7
KeyError: 'default'
```
## Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.31
- Python version: 3.9.9
- PyArrow version: 6.0.1
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4179/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4178/comments | https://api.github.com/repos/huggingface/datasets/issues/4178/events | https://github.com/huggingface/datasets/pull/4178 | 1,207,787,073 | PR_kwDODunzps42ZfFN | 4,178 | [feat] Add ImageNet dataset | {
"login": "apsdehal",
"id": 3616806,
"node_id": "MDQ6VXNlcjM2MTY4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apsdehal",
"html_url": "https://github.com/apsdehal",
"followers_url": "https://api.github.com/users/apsdehal/followers",
"following_url": "https://api.github.com/users/apsdehal/following{/other_user}",
"gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions",
"organizations_url": "https://api.github.com/users/apsdehal/orgs",
"repos_url": "https://api.github.com/users/apsdehal/repos",
"events_url": "https://api.github.com/users/apsdehal/events{/privacy}",
"received_events_url": "https://api.github.com/users/apsdehal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the comments. I believe I have addressed all of them and also decreased the size of the dummy data file, so it should be ready for a re-review. I also made a change to allow adding synset mapping and valprep script in config in case we add ImageNet 21k some time later. ",
"@lhoestq I have updated the PR to address all of the review comments."
] | 2022-04-19T06:01:35 | 2022-04-29T21:43:59 | 2022-04-29T21:37:08 | CONTRIBUTOR | null | To use the dataset download the tar file
[imagenet_object_localization_patched2019.tar.gz](https://www.kaggle.com/competitions/imagenet-object-localization-challenge/data?select=imagenet_object_localization_patched2019.tar.gz) from Kaggle and then point the datasets library to it by using:
```py
from datasets import load_dataset
dataset = load_dataset("imagenet",
data_dir="/path/to/imagenet_object_localization_patched2019.tar.gz")
```
Currently train and validation splits are supported. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4178/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4178",
"html_url": "https://github.com/huggingface/datasets/pull/4178",
"diff_url": "https://github.com/huggingface/datasets/pull/4178.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4178.patch",
"merged_at": "2022-04-29T21:37:08"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4177/comments | https://api.github.com/repos/huggingface/datasets/issues/4177/events | https://github.com/huggingface/datasets/pull/4177 | 1,207,535,920 | PR_kwDODunzps42Yxca | 4,177 | Adding missing subsets to the `SemEval-2018 Task 1` dataset | {
"login": "micahcarroll",
"id": 11460267,
"node_id": "MDQ6VXNlcjExNDYwMjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/11460267?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/micahcarroll",
"html_url": "https://github.com/micahcarroll",
"followers_url": "https://api.github.com/users/micahcarroll/followers",
"following_url": "https://api.github.com/users/micahcarroll/following{/other_user}",
"gists_url": "https://api.github.com/users/micahcarroll/gists{/gist_id}",
"starred_url": "https://api.github.com/users/micahcarroll/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/micahcarroll/subscriptions",
"organizations_url": "https://api.github.com/users/micahcarroll/orgs",
"repos_url": "https://api.github.com/users/micahcarroll/repos",
"events_url": "https://api.github.com/users/micahcarroll/events{/privacy}",
"received_events_url": "https://api.github.com/users/micahcarroll/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | open | false | null | [] | null | [
"Datasets are not tracked in this repository anymore. You should move this PR to the [discussions page of this dataset](https://huggingface.co/datasets/sem_eval_2018_task_1/discussions)"
] | 2022-04-18T22:59:30 | 2022-10-05T10:38:16 | null | NONE | null | This dataset for the [1st task of SemEval-2018](https://competitions.codalab.org/competitions/17751) competition was missing all subtasks except for subtask 5. I added another two subtasks (subtask 1 and 2), which are each comprised of 12 additional data subsets: for each language in En, Es, Ar, there are 4 datasets, broken down by emotions (anger, fear, joy, sadness).
## Remaining questions
I wasn't able to find any documentation about how one should make PRs to modify datasets. Because of that, I just did my best to integrate the new data into the code, and tested locally that this worked. I'm sorry if I'm not respecting your contributing guidelines; if they are documented somewhere, I'd appreciate it if you could send a pointer!
Not sure how `dataset_infos.json` and `dummy` should be updated. My understanding is that they were automatically generated at the time of the original dataset creation? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4177/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4177",
"html_url": "https://github.com/huggingface/datasets/pull/4177",
"diff_url": "https://github.com/huggingface/datasets/pull/4177.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4177.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4176/comments | https://api.github.com/repos/huggingface/datasets/issues/4176/events | https://github.com/huggingface/datasets/issues/4176 | 1,206,515,563 | I_kwDODunzps5H6fdr | 4,176 | Very slow between two operations | {
"login": "yananchen1989",
"id": 26405281,
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yananchen1989",
"html_url": "https://github.com/yananchen1989",
"followers_url": "https://api.github.com/users/yananchen1989/followers",
"following_url": "https://api.github.com/users/yananchen1989/following{/other_user}",
"gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions",
"organizations_url": "https://api.github.com/users/yananchen1989/orgs",
"repos_url": "https://api.github.com/users/yananchen1989/repos",
"events_url": "https://api.github.com/users/yananchen1989/events{/privacy}",
"received_events_url": "https://api.github.com/users/yananchen1989/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 2022-04-17T23:52:29 | 2022-04-18T00:03:00 | 2022-04-18T00:03:00 | NONE | null | Hello, in the processing stage, I use two operations. The first one, map + filter, is very fast and uses all the cores, while the second step is very slow and does not use all the cores.
Also, there is a significant lag between them. Am I missing something?
```
raw_datasets = raw_datasets.map(split_func,
batched=False,
num_proc=args.preprocessing_num_workers,
load_from_cache_file=not args.overwrite_cache,
desc = "running split para ==>")\
.filter(lambda example: example['text1']!='' and example['text2']!='',
num_proc=args.preprocessing_num_workers, desc="filtering ==>")
processed_datasets = raw_datasets.map(
preprocess_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not args.overwrite_cache,
desc="Running tokenizer on dataset===>",
)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4176/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4175/comments | https://api.github.com/repos/huggingface/datasets/issues/4175/events | https://github.com/huggingface/datasets/pull/4175 | 1,205,589,842 | PR_kwDODunzps42SqF- | 4,175 | Add WIT Dataset | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi! Coming in late with some context.\r\n\r\nThere are two versions of the WIT dataset:\r\n1. The original source dataset managed by Wikimedia. It has more information, raw image representations, and each row corresponds to an image linked to all of its captions wherever it happens in Wikipedia (in multiple languages)\r\n2. The Google version, corresponding to the data script in this PR, which duplicates image instances and requires the user to download the images themselves from the provided URL (note that a basic implementation will have them download the same picture several time. @thomasw21 using our download manager instead of `urllib` could help with that, but it wouldn't be required if people had access to the first version)\r\n\r\nThe Wikimedia folks were really interested in us hosting a ready-to-go streaming version of this dataset where users don't have to download the version themselves, which is why we have the pre-processed versions on an HF bucket, with the raw images and a pre-computed embedding (don't remember the model, we can keep it ). That's the data script currently in https://github.com/huggingface/datasets/pull/2981 . It's nearly ready to go, the one thing we should do is move the raw data from our HF google Cloud bucket to the Hub.\r\n\r\nHow do you want to move forward? IMO the best way would be to have a WIT dataset under the Wikimedia org with both configurations, but it depends on everyone's timelines",
"Okay after offline discussion. We'll improve this versions and push it to the hub under `google` namespace. \r\n\r\n> which duplicates image instances and requires the user to download the images themselves from the provided URL (note that a basic implementation will have them download the same picture several time. @thomasw21 using our download manager instead of urllib could help with that, but it wouldn't be required if people had access to the first version)\r\n\r\nAh interesting wasn't aware of this duplication issue, concretely it'll just mean that our dataset in bigger than expected ... I think this should be handled after this loading script (though I have to figure our how to spawn a dl_manager).\r\n\r\n> The Wikimedia folks were really interested in us hosting a ready-to-go streaming version of this dataset where users don't have to download the version themselves, which is why we have the pre-processed versions on an HF bucket, with the raw images and a pre-computed embedding (don't remember the model, we can keep it ). That's the data script currently in https://github.com/huggingface/datasets/pull/2981 . It's nearly ready to go, the one thing we should do is move the raw data from our HF google Cloud bucket to the Hub.\r\n\r\nSimilarly a script will be written and pushed to `wikimedia` organisation.",
"@mariosasko can you make one last review concerning the text description changes? Then I'll handle putting it under `google` namespace and close this PR.",
"Looks all good now. Great job! ",
"Closing as this has been migrated to the hub under `google` namespace: https://huggingface.co/datasets/google/wit"
] | 2022-04-15T13:42:32 | 2022-05-02T14:34:01 | 2022-05-02T14:26:41 | CONTRIBUTOR | null | closes #2981 #2810
@nateraw @hassiahk I've listed you guys as co-authors as you've contributed previously to this dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4175/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4175",
"html_url": "https://github.com/huggingface/datasets/pull/4175",
"diff_url": "https://github.com/huggingface/datasets/pull/4175.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4175.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4174 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4174/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4174/comments | https://api.github.com/repos/huggingface/datasets/issues/4174/events | https://github.com/huggingface/datasets/pull/4174 | 1,205,575,941 | PR_kwDODunzps42SnJS | 4,174 | Fix when map function modifies input in-place | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-15T13:23:15 | 2022-04-15T14:52:07 | 2022-04-15T14:45:58 | CONTRIBUTOR | null | When `function` modifies input in-place, the guarantee that columns in `remove_columns` are contained in `input` doesn't hold true anymore. Therefore we need to relax the way we pop elements by checking if that column exists. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4174/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4174",
"html_url": "https://github.com/huggingface/datasets/pull/4174",
"diff_url": "https://github.com/huggingface/datasets/pull/4174.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4174.patch",
"merged_at": "2022-04-15T14:45:58"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4173/comments | https://api.github.com/repos/huggingface/datasets/issues/4173/events | https://github.com/huggingface/datasets/pull/4173 | 1,204,657,114 | PR_kwDODunzps42Ppnd | 4,173 | Stream private zipped images | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"oops looks like some tests are failing sorry, will fix them tomorrow\r\n\r\nEDIT: not today but asap hopefully",
"cc @mariosasko this is ready for review, let me know what you think !"
] | 2022-04-14T15:15:07 | 2022-05-05T14:05:54 | 2022-05-05T13:58:35 | MEMBER | null | As mentioned in https://github.com/huggingface/datasets/issues/4139 it's currently not possible to stream private/gated zipped images from the Hub.
This is because `Image.decode_example` does not handle authentication. Indeed, decoding requires accessing and downloading the file from the private repository.
In this PR I added authentication to `Image.decode_example` via a `token_per_repo_id` optional argument. I first wanted to just pass `use_auth_token`, but a single `Image` instance can be responsible for decoding images from a combination of several datasets together (from `interleave_datasets` for example). Therefore I just used a dictionary `repo_id` -> `token` instead.
I'm getting the `repo_id` from the dataset builder (I replaced the `namespace` attribute with `repo_id`).
I did the same for `Audio.decode_example`
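For illustration only (not part of the original PR description), a minimal sketch of how the per-repo token mapping might be passed when decoding; the exact call signature, repository id and token are assumptions:
```python
from datasets import Image

image_feature = Image()

# One token per Hub repository id, since a single feature may have to decode
# files coming from several private datasets at once (e.g. after interleave_datasets).
token_per_repo_id = {"username/private-image-dataset": "hf_xxx"}  # hypothetical repo id and token

encoded = {
    "path": "https://huggingface.co/datasets/username/private-image-dataset/resolve/main/img_0.png",
    "bytes": None,
}
pil_image = image_feature.decode_example(encoded, token_per_repo_id=token_per_repo_id)
```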
cc @SBrandeis @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4173/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4173/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4173",
"html_url": "https://github.com/huggingface/datasets/pull/4173",
"diff_url": "https://github.com/huggingface/datasets/pull/4173.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4173.patch",
"merged_at": "2022-05-05T13:58:35"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4172/comments | https://api.github.com/repos/huggingface/datasets/issues/4172/events | https://github.com/huggingface/datasets/pull/4172 | 1,204,433,160 | PR_kwDODunzps42O7LW | 4,172 | Update assin2 dataset_infos.json | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-14T11:53:06 | 2022-04-15T14:47:42 | 2022-04-15T14:41:22 | MEMBER | null | Following comments in https://github.com/huggingface/datasets/issues/4003 we found that it was outdated and causing an error when loading the dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4172/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4172",
"html_url": "https://github.com/huggingface/datasets/pull/4172",
"diff_url": "https://github.com/huggingface/datasets/pull/4172.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4172.patch",
"merged_at": "2022-04-15T14:41:22"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4170/comments | https://api.github.com/repos/huggingface/datasets/issues/4170/events | https://github.com/huggingface/datasets/pull/4170 | 1,204,413,620 | PR_kwDODunzps42O2-L | 4,170 | to_tf_dataset rewrite | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"[Magic is now banned](https://www.youtube.com/watch?v=WIn58XoY728#t=36s) by decree of @sgugger. This is honestly much cleaner, and the functionality will make much more sense in `transformers` anyway!",
"@gante I renamed the default collator to `minimal_tf_collate_fn`!",
"@lhoestq @sgugger @gante \r\n\r\nI think this should now be ready, it looks good in testing! I'll try a few more notebooks today and tomorrow to be sure before I merge. Key changes are:\r\n\r\n- No column autodetection magic (will make a separate PR to add this as a `transformers` function)\r\n- Drops non-numerical features automatically (this is more of a 'DataLoader' method, we'll have a separate method to expose 'raw' datasets to `tf.data`)\r\n- Better autodetection of numerical features.\r\n- Shouldn't randomly crash mid-function :skull: \r\n\r\nWe definitely have some questions still to resolve about how to handle making a 'DataLoader' dataset versus a 'raw' dataset - see [the Notion doc](https://www.notion.so/huggingface2/Splitting-to_tf_dataset-c2e0773c4bec484384064b30ed634383) if you're interested. Still, since this PR is just fixes/improvements to an existing method which never supported non-numerical features anyway, we can merge it before we've resolved those issues, and then think about how to name and split things afterwards.",
"P.S. I'll take out the region comments at the end before I merge, I promise! They're just helpful while I'm editing it",
"+1 for the tests\r\n\r\n> Drops non-numerical features automatically\r\n\r\nCan you give more details on how this work and the rationale as well ? This is not explained in the docs\r\n\r\nAlso why are you adding `error_on_missing` and `auto_fix_label_names ` ? The rationale is not clear to me. In particular I think it is sensible enough to expect users to not ask columns that don't exist, and to rename a label column when required.",
"@lhoestq I rewrote those parts - they were causing some other issues too! `error_on_missing` and `auto_fix_label_names` have been removed. The new logic is to simply drop (before batch collation) all columns the user doesn't ask for, but not to raise errors if the user asked for columns not in the dataset, as they may be added by the collator. Hopefully this cleans it up and matches the documentation better!",
"@lhoestq New tests are now in!",
"Seeing some other random tests failing that don't look to be associated with this PR.",
"@lhoestq I can't figure out these test failures! They don't seem related to this PR at all, but I rebased to the latest version and they keep happening, even though they're not visible on master.",
"Thanks for the ping, will take a look tomorrow :)\r\n\r\nMaybe the rebase didn't go well for the code recently merged about label alignment from https://github.com/huggingface/datasets/pull/4277 ?",
"It's very strange! The rebase looks fine to me. I might try to move my changes to a new branch from `master` and see if I can figure out which change causes this problem to appear.",
"@lhoestq Got it! It was caused by a name collision - I was importing `typing.Sequence`, but the code also needed `features.Sequence`. The tests from that PR were expecting the latter but got the former, and then crashed.",
"@lhoestq Thanks! Also, when you're ready, don't merge it immediately! I'd like to do a quick round of manual testing with the very final build once you're happy to make sure it still works in our notebooks and examples.",
"@lhoestq Tests look good to me, merging now!"
] | 2022-04-14T11:30:58 | 2022-06-06T14:31:12 | 2022-06-06T14:22:09 | MEMBER | null | This PR rewrites almost all of `to_tf_dataset()`, which makes it kind of hard to list all the changes, but the most critical ones are:
- Much better stability and no more dropping unexpected column names (Sorry @NielsRogge)
- Doesn't clobber custom transforms on the data (Sorry @NielsRogge again)
- Much better handling of the situation when the `collate_fn` adds columns that aren't in the dataset.
- Better inference of shapes and data types
- Lots of hacky special-casing code removed
- Can return string columns (as `tf.String`)
- Most arguments have default values, so calling the method should be much simpler (see the sketch after this list)
- ~~Can accept a `model` argument and only return columns that are valid inputs to that model~~
- Drops the `dummy_labels` argument - this was a workaround for Keras issues that have been resolved by changes in `transformers`. Also remove it from tests and the Overview notebook.
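As a rough usage sketch (not taken from the PR; the argument names and the collator choice are assumptions based on the description above):
```python
from datasets import load_dataset
from transformers import AutoTokenizer, DefaultDataCollator

ds = load_dataset("rotten_tomatoes", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = ds.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

# With most arguments defaulted, a call might look like this;
# the collate_fn pads each batch and returns TensorFlow tensors.
tf_ds = ds.to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=8,
    shuffle=True,
    collate_fn=DefaultDataCollator(return_tensors="tf"),
)
```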
I still have a couple of TODOs remaining and some testing to do, so don't merge yet, but it should be mostly ready for review at this point! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4170/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4170",
"html_url": "https://github.com/huggingface/datasets/pull/4170",
"diff_url": "https://github.com/huggingface/datasets/pull/4170.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4170.patch",
"merged_at": "2022-06-06T14:22:09"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4169/comments | https://api.github.com/repos/huggingface/datasets/issues/4169/events | https://github.com/huggingface/datasets/issues/4169 | 1,203,995,869 | I_kwDODunzps5Hw4Td | 4,169 | Timit_asr dataset cannot be previewed recently | {
"login": "YingLi001",
"id": 75192317,
"node_id": "MDQ6VXNlcjc1MTkyMzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/75192317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YingLi001",
"html_url": "https://github.com/YingLi001",
"followers_url": "https://api.github.com/users/YingLi001/followers",
"following_url": "https://api.github.com/users/YingLi001/following{/other_user}",
"gists_url": "https://api.github.com/users/YingLi001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YingLi001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YingLi001/subscriptions",
"organizations_url": "https://api.github.com/users/YingLi001/orgs",
"repos_url": "https://api.github.com/users/YingLi001/repos",
"events_url": "https://api.github.com/users/YingLi001/events{/privacy}",
"received_events_url": "https://api.github.com/users/YingLi001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting. The bug has already been detected, and we hope to fix it soon.",
"TIMIT is now a dataset that requires manual download, see #4145 \r\n\r\nTherefore it might take a bit more time to fix it",
"> TIMIT is now a dataset that requires manual download, see #4145\r\n> \r\n> Therefore it might take a bit more time to fix it\r\n\r\nThank you for your quickly response. Exactly, I also found the manual download issue in the morning. But when I used *list_datasets()* to check the available datasets, *'timit_asr'* is still in the list. So I am a little bit confused. If *'timit_asr'* need to be manually downloaded, does that mean we can **not** automatically download it **any more** in the future?",
"Yes exactly. If you try to load the dataset it will ask you to download it manually first, and to pass the downloaded and extracted data like `load_dataset(\"timir_asr\", data_dir=\"path/to/extracted/data\")`\r\n\r\nThe URL we were using was coming from a host that doesn't have the permission to redistribute the data, and the dataset owners (LDC) notified us about it.",
"I downloaded the timit_asr data and unzipped. But I can't run my code. Could you resolve this problem for me? Thanks\r\n\r\n import soundfile as sf\r\n import torch\r\n from datasets import load_dataset\r\n dataset = load_dataset(\"timit_asr\", data_dir=\"/Users/nguyenvannham/Documents/test_case/data\")\r\n \r\n \r\n Generating train split: 0 examples [00:00, ? examples/s]\r\n\r\nGenerating train split: 0 examples [00:00, ? examples/s]Traceback (most recent call last):\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 1571, in _prepare_split_single\r\n for key, record in generator:\r\n\r\n File \"/Users/nguyenvannham/.cache/huggingface/modules/datasets_modules/datasets/timit_asr/43f9448dd5db58e95ee48a277f466481b151f112ea53e27f8173784da9254fb2/timit_asr.py\", line 138, in _generate_examples\r\n with txt_path.open(encoding=\"utf-8\") as op:\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/pathlib.py\", line 1252, in open\r\n return io.open(self, mode, buffering, encoding, errors, newline,\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/pathlib.py\", line 1120, in _opener\r\n return self._accessor.open(self, flags, mode)\r\n\r\nFileNotFoundError: [Errno 2] No such file or directory: '/Users/nguyenvannham/Documents/test_case/data/train/DR1/FCJF0/SA1.WAV.TXT'\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"/var/folders/t9/l8d3rwpn1k33_gjtqs732lzc0000gn/T/ipykernel_3891/1203313828.py\", line 1, in <module>\r\n dataset = load_dataset(\"timit_asr\", data_dir=\"/Users/nguyenvannham/Documents/test_case/data\")\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/load.py\", line 1758, in load_dataset\r\n builder_instance.download_and_prepare(\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 860, in download_and_prepare\r\n self._download_and_prepare(\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 1612, in _download_and_prepare\r\n super()._download_and_prepare(\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 953, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 1450, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n\r\n File \"/opt/anaconda3/envs/audio/lib/python3.9/site-packages/datasets/builder.py\", line 1607, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n\r\nDatasetGenerationError: An error occurred while generating the dataset"
] | 2022-04-14T03:28:31 | 2023-02-03T04:54:57 | 2022-05-06T16:06:51 | NONE | null | ## Dataset viewer issue for '*timit_asr*'
**Link:** *https://huggingface.co/datasets/timit_asr*
Issue: The timit_asr dataset preview has recently stopped working.
Am I the one who added this dataset? Yes/No
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4169/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4168/comments | https://api.github.com/repos/huggingface/datasets/issues/4168/events | https://github.com/huggingface/datasets/pull/4168 | 1,203,867,540 | PR_kwDODunzps42NL6F | 4,168 | Add code examples to API docs | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Do you think it is clearer to make every code example fully reproducible so when users copy the code they can actually run it and get an output? This seems quite repetitive - maybe even unnecessary - but it is definitely clearer.\r\n\r\nI think it's ok to be repetitive to get more clarity. Many users come from `transformers` and may have little experience with some processing methods (especially torch users).\r\n\r\n> Should we showcase a function with more than one parameter to highlight different use-cases (it's pretty basic right now, but I'd be happy to add more)?\r\n\r\nMaybe let's do it case by case, depending on whether there are parameters that are likely to be used often ?\r\n\r\n> For the class_encode_column function, let me know if there is a simpler dataset with fewer columns (currently using winograd_wsc) so it is easier for users to see what changed.\r\n\r\nYou can try with `boolq`, it has a boolean column that can be converted to labels\r\n\r\n> Where possible, I try to show the input before and the output after using a function like flatten for example. Do you think this is too much and just showing the usage (ie, >>> ds.flatten()) will be sufficient?\r\n\r\nNo I don't think it's too much, it's nice this way thanks :)",
"Updated each code example so they are fully reproducible (where applicable)! The next step will be to identify some functions where we can show off some parameters that are useful or commonly used. Some useful parameters can be:\r\n\r\n- use `map(batched=True)` to process batches of examples.\r\n- set a seed in `shuffle`.\r\n- set `shuffle` and `seed` in `train_test_split`.\r\n\r\nLet me know if you think of anything else related to the functions in `arrow_dataset.py`!",
"Cool thanks ! I think you can also do `num_proc` for `map`"
] | 2022-04-13T23:03:38 | 2022-04-27T18:53:37 | 2022-04-27T18:48:34 | MEMBER | null | This PR adds code examples for functions related to the base Datasets class to highlight usage. Most of the examples use the `rotten_tomatoes` dataset since it is nice and small. Several things I would appreciate feedback on:
- Do you think it is clearer to make every code example fully reproducible so when users copy the code they can actually run it and get an output? This seems quite repetitive - maybe even unnecessary - but it is definitely clearer. Personally, I think we might be able to get away with not including this since users probably want to try the function on their own dataset. For example:
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> code example goes here
```
- Should we showcase a function with more than one parameter to highlight different use-cases (it's pretty basic right now, but I'd be happy to add more)?
- For the `class_encode_column` function, let me know if there is a simpler dataset with fewer columns (currently using `winograd_wsc`) so it is easier for users to see what changed.
- Where possible, I try to show the input before and the output after using a function like `flatten`, for example. Do you think this is too much, and just showing the usage (i.e., `>>> ds.flatten()`) will be sufficient? An illustrative before/after sketch follows below.
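To illustrate the before/after style mentioned in the last point (this snippet is not from the PR itself; it uses `squad`, whose nested `answers` field makes `flatten` visible):
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("squad", split="train")
>>> ds.column_names
['id', 'title', 'context', 'question', 'answers']
>>> ds.flatten().column_names
['id', 'title', 'context', 'question', 'answers.text', 'answers.answer_start']
```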
Thanks :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4168/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4168/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4168",
"html_url": "https://github.com/huggingface/datasets/pull/4168",
"diff_url": "https://github.com/huggingface/datasets/pull/4168.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4168.patch",
"merged_at": "2022-04-27T18:48:34"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4167/comments | https://api.github.com/repos/huggingface/datasets/issues/4167/events | https://github.com/huggingface/datasets/pull/4167 | 1,203,761,614 | PR_kwDODunzps42M1O5 | 4,167 | Avoid rate limit in update hub repositories | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I also set GIT_LFS_SKIP_SMUDGE=1 to speed up git clones",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-13T20:32:17 | 2022-04-13T20:56:41 | 2022-04-13T20:50:32 | MEMBER | null | use http.extraHeader to avoid rate limit | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4167/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4167",
"html_url": "https://github.com/huggingface/datasets/pull/4167",
"diff_url": "https://github.com/huggingface/datasets/pull/4167.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4167.patch",
"merged_at": "2022-04-13T20:50:32"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4166/comments | https://api.github.com/repos/huggingface/datasets/issues/4166/events | https://github.com/huggingface/datasets/pull/4166 | 1,203,758,004 | PR_kwDODunzps42M0dS | 4,166 | Fix exact match | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-13T20:28:06 | 2022-05-03T12:23:31 | 2022-05-03T12:16:27 | CONTRIBUTOR | null | Clarify docs and add clarifying example to the exact_match metric | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4166/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4166",
"html_url": "https://github.com/huggingface/datasets/pull/4166",
"diff_url": "https://github.com/huggingface/datasets/pull/4166.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4166.patch",
"merged_at": "2022-05-03T12:16:27"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4165/comments | https://api.github.com/repos/huggingface/datasets/issues/4165/events | https://github.com/huggingface/datasets/pull/4165 | 1,203,730,187 | PR_kwDODunzps42MubF | 4,165 | Fix google bleu typos, examples | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-13T19:59:54 | 2022-05-03T12:23:52 | 2022-05-03T12:16:44 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4165/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4165",
"html_url": "https://github.com/huggingface/datasets/pull/4165",
"diff_url": "https://github.com/huggingface/datasets/pull/4165.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4165.patch",
"merged_at": "2022-05-03T12:16:44"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4164/comments | https://api.github.com/repos/huggingface/datasets/issues/4164/events | https://github.com/huggingface/datasets/pull/4164 | 1,203,661,346 | PR_kwDODunzps42MfxX | 4,164 | Fix duplicate key in multi_news | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-13T18:48:24 | 2022-04-13T21:04:16 | 2022-04-13T20:58:02 | MEMBER | null | To merge after this job succeeded: https://github.com/huggingface/datasets/runs/6012207928 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4164/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4164",
"html_url": "https://github.com/huggingface/datasets/pull/4164",
"diff_url": "https://github.com/huggingface/datasets/pull/4164.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4164.patch",
"merged_at": "2022-04-13T20:58:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4163/comments | https://api.github.com/repos/huggingface/datasets/issues/4163/events | https://github.com/huggingface/datasets/issues/4163 | 1,203,539,268 | I_kwDODunzps5HvI1E | 4,163 | Optional Content Warning for Datasets | {
"login": "TristanThrush",
"id": 20826878,
"node_id": "MDQ6VXNlcjIwODI2ODc4",
"avatar_url": "https://avatars.githubusercontent.com/u/20826878?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TristanThrush",
"html_url": "https://github.com/TristanThrush",
"followers_url": "https://api.github.com/users/TristanThrush/followers",
"following_url": "https://api.github.com/users/TristanThrush/following{/other_user}",
"gists_url": "https://api.github.com/users/TristanThrush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TristanThrush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TristanThrush/subscriptions",
"organizations_url": "https://api.github.com/users/TristanThrush/orgs",
"repos_url": "https://api.github.com/users/TristanThrush/repos",
"events_url": "https://api.github.com/users/TristanThrush/events{/privacy}",
"received_events_url": "https://api.github.com/users/TristanThrush/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! You can use the `extra_gated_prompt` YAML field in a dataset card for displaying custom messages/warnings that the user must accept before gaining access to the actual dataset. This option also keeps the viewer hidden until the user agrees to terms. ",
"Hi @mariosasko, thanks for explaining how to add this feature. \r\n\r\nIf the current dataset yaml is:\r\n```\r\n---\r\nannotations_creators:\r\n- expert\r\nlanguage_creators:\r\n- expert-generated\r\nlanguages:\r\n- en\r\nlicense:\r\n- cc-by-4.0\r\nmultilinguality:\r\n- monolingual\r\npretty_name: HatemojiBuild\r\nsize_categories:\r\n- 1K<n<10K\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- text-classification\r\ntask_ids:\r\n- hate-speech-detection\r\n---\r\n```\r\n\r\nCan you provide a minimal working example of how to added the gated prompt?\r\n\r\nThanks!",
"```\r\n---\r\nannotations_creators:\r\n- expert\r\nlanguage_creators:\r\n- expert-generated\r\nlanguages:\r\n- en\r\nlicense:\r\n- cc-by-4.0\r\nmultilinguality:\r\n- monolingual\r\npretty_name: HatemojiBuild\r\nsize_categories:\r\n- 1K<n<10K\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- text-classification\r\ntask_ids:\r\n- hate-speech-detection\r\nextra_gated_prompt: \"This repository contains harmful content.\"\r\n---\r\n```\r\n\\+ enable `User Access requests` under the Settings pane.\r\n\r\nThere's a brief guide here https://discuss.huggingface.co/t/how-to-customize-the-user-access-requests-message/13953 , and you can see the field in action here, https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/blob/main/README.md (you need to agree the terms in the Dataset Card pane to be able to access the files pane, so this comes up 403 at first).\r\n\r\nAnd a working example here! https://huggingface.co/datasets/DDSC/dkhate :) Great to be able to mitigate harms in text.",
"-- is there a way to gate content anonymously, i.e. without registering which users access it?",
"+1 to @leondz's question. One scenario is if you don't want the dataset to be indexed by search engines or viewed in browser b/c of upstream conditions on data, but don't want to collect emails. Some ability to turn off the dataset viewer or add a gating mechanism without emails would be fantastic."
] | 2022-04-13T16:38:01 | 2022-06-09T20:39:02 | null | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
We now have hate speech datasets on the hub, like this one: https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild
I'm wondering if there is an option to select a content warning message that appears before the dataset preview? Otherwise, people immediately see hate speech when clicking on this dataset.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
Implementation of a content warning message that separates users from the dataset preview until they click out of the warning.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
Possibly just a way to remove the dataset preview completely? I think I like the content warning option better, though.
**Additional context**
Add any other context about the feature request here.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4163/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4162/comments | https://api.github.com/repos/huggingface/datasets/issues/4162/events | https://github.com/huggingface/datasets/pull/4162 | 1,203,421,909 | PR_kwDODunzps42LtGO | 4,162 | Add Conceptual 12M | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks like your dummy_data.zip file is not in the right location ;)\r\ndatasets/datasets/conceptual_12m/dummy/default/0.0.0/dummy_data.zip\r\n->\r\ndatasets/conceptual_12m/dummy/default/0.0.0/dummy_data.zip"
] | 2022-04-13T14:57:23 | 2022-04-15T08:13:01 | 2022-04-15T08:06:25 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4162/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4162",
"html_url": "https://github.com/huggingface/datasets/pull/4162",
"diff_url": "https://github.com/huggingface/datasets/pull/4162.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4162.patch",
"merged_at": "2022-04-15T08:06:25"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4161/comments | https://api.github.com/repos/huggingface/datasets/issues/4161/events | https://github.com/huggingface/datasets/pull/4161 | 1,203,230,485 | PR_kwDODunzps42LEhi | 4,161 | Add Visual Genome | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hum there seems to be some issues with tasks in test:\r\n - some tasks don't fit anything in `tasks.json`. Do I remove them in `task_categories`?\r\n - some tasks should exist, typically `visual-question-answering` (https://github.com/huggingface/datasets/blame/9f2ff14673cac1f1ad56d80221a793f5938b68c7/src/datasets/utils/resources/tasks.json#L195) yet the exception is failing on me. I'm guessing it's because my `master` is not up-to-date. However this means that the testing only tests my branch instead of the one merged with master?\r\n \r\n cc @mariosasko @lhoestq ",
"> some tasks don't fit anything in tasks.json. Do I remove them in task_categories?\r\n\r\nYou can keep them, but add `other-` as a prefix to those tasks to make the CI ignore it\r\n\r\n> some tasks should exist, typically visual-question-answering (https://github.com/huggingface/datasets/blame/9f2ff14673cac1f1ad56d80221a793f5938b68c7/src/datasets/utils/resources/tasks.json#L195) yet the exception is failing on me. I'm guessing it's because my master is not up-to-date. However this means that the testing only tests my branch instead of the one merged with master?\r\n\r\nFeel free to merge upstream/master into your branch ;)\r\n\r\nEDIT: actually I just noticed you've already done this, thanks !",
"After offline discussions: will keep that image essentially it's necessary as I have a mapping that creates a mapping between url and local path (images are downloaded via a zip file) and dummy data needs to store that dummy image. The issue is when I read an annotation, I get a url, compute the local path, and basically I assume the local path exists since I've extracted all the images ... This isn't true if dummy data doesn't have all the images, so instead I've added a script that \"fixes\" the dummy data after using the CLI, it essentially adds the dummy image in the zip corresponding to the url."
] | 2022-04-13T12:25:24 | 2022-04-21T15:42:49 | 2022-04-21T13:08:52 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4161/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4161",
"html_url": "https://github.com/huggingface/datasets/pull/4161",
"diff_url": "https://github.com/huggingface/datasets/pull/4161.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4161.patch",
"merged_at": "2022-04-21T13:08:52"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4160/comments | https://api.github.com/repos/huggingface/datasets/issues/4160/events | https://github.com/huggingface/datasets/issues/4160 | 1,202,845,874 | I_kwDODunzps5Hsfiy | 4,160 | RGBA images not showing | {
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
},
{
"id": 4030246674,
"node_id": "LA_kwDODunzps7wOK8S",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer-rgba-images",
"name": "dataset-viewer-rgba-images",
"color": "6C5FC0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting. It's a known issue, and we hope to fix it soon.",
"Fixed, thanks!"
] | 2022-04-13T06:59:23 | 2022-06-21T16:43:11 | 2022-06-21T16:43:11 | CONTRIBUTOR | null | ## Dataset viewer issue for ceyda/smithsonian_butterflies_transparent
**Link:** https://huggingface.co/datasets/ceyda/smithsonian_butterflies_transparent
![image](https://user-images.githubusercontent.com/15624271/163117683-e91edb28-41bf-43d9-b371-5c62e14f40c9.png)
Am I the one who added this dataset? Yes
More of a general issue of 'RGBA' PNG images not being supported
(the dataset itself is just for the huggan sprint and not that important, consider it just an example) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4160/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4159/comments | https://api.github.com/repos/huggingface/datasets/issues/4159/events | https://github.com/huggingface/datasets/pull/4159 | 1,202,522,153 | PR_kwDODunzps42Izmd | 4,159 | Add `TruthfulQA` dataset | {
"login": "jon-tow",
"id": 41410219,
"node_id": "MDQ6VXNlcjQxNDEwMjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41410219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jon-tow",
"html_url": "https://github.com/jon-tow",
"followers_url": "https://api.github.com/users/jon-tow/followers",
"following_url": "https://api.github.com/users/jon-tow/following{/other_user}",
"gists_url": "https://api.github.com/users/jon-tow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jon-tow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jon-tow/subscriptions",
"organizations_url": "https://api.github.com/users/jon-tow/orgs",
"repos_url": "https://api.github.com/users/jon-tow/repos",
"events_url": "https://api.github.com/users/jon-tow/events{/privacy}",
"received_events_url": "https://api.github.com/users/jon-tow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Bump. (I'm not sure which reviewer to `@` but, previously, @lhoestq has been very helpful π€ )"
] | 2022-04-12T23:19:04 | 2022-06-08T15:51:33 | 2022-06-08T14:43:34 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4159/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4159",
"html_url": "https://github.com/huggingface/datasets/pull/4159",
"diff_url": "https://github.com/huggingface/datasets/pull/4159.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4159.patch",
"merged_at": "2022-06-08T14:43:34"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4158/comments | https://api.github.com/repos/huggingface/datasets/issues/4158/events | https://github.com/huggingface/datasets/pull/4158 | 1,202,376,843 | PR_kwDODunzps42ITg3 | 4,158 | Add AUC ROC Metric | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-12T20:53:28 | 2022-04-26T19:41:50 | 2022-04-26T19:35:22 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4158/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4158",
"html_url": "https://github.com/huggingface/datasets/pull/4158",
"diff_url": "https://github.com/huggingface/datasets/pull/4158.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4158.patch",
"merged_at": "2022-04-26T19:35:22"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4157/comments | https://api.github.com/repos/huggingface/datasets/issues/4157/events | https://github.com/huggingface/datasets/pull/4157 | 1,202,239,622 | PR_kwDODunzps42H2Wf | 4,157 | Fix formatting in BLEU metric card | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-12T18:29:51 | 2022-04-13T14:30:25 | 2022-04-13T14:16:34 | CONTRIBUTOR | null | Fix #4148 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4157/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4157",
"html_url": "https://github.com/huggingface/datasets/pull/4157",
"diff_url": "https://github.com/huggingface/datasets/pull/4157.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4157.patch",
"merged_at": "2022-04-13T14:16:34"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4156/comments | https://api.github.com/repos/huggingface/datasets/issues/4156/events | https://github.com/huggingface/datasets/pull/4156 | 1,202,220,531 | PR_kwDODunzps42HySw | 4,156 | Adding STSb-TR dataset | {
"login": "figenfikri",
"id": 12762065,
"node_id": "MDQ6VXNlcjEyNzYyMDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/12762065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/figenfikri",
"html_url": "https://github.com/figenfikri",
"followers_url": "https://api.github.com/users/figenfikri/followers",
"following_url": "https://api.github.com/users/figenfikri/following{/other_user}",
"gists_url": "https://api.github.com/users/figenfikri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/figenfikri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/figenfikri/subscriptions",
"organizations_url": "https://api.github.com/users/figenfikri/orgs",
"repos_url": "https://api.github.com/users/figenfikri/repos",
"events_url": "https://api.github.com/users/figenfikri/events{/privacy}",
"received_events_url": "https://api.github.com/users/figenfikri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"Thanks for your contribution, @figenfikri.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] | 2022-04-12T18:10:05 | 2022-10-03T09:36:25 | 2022-10-03T09:36:25 | NONE | null | Semantic Textual Similarity benchmark Turkish (STSb-TR) dataset introduced in our paper [Semantic Similarity Based Evaluation for Abstractive News Summarization](https://aclanthology.org/2021.gem-1.3.pdf) added. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4156/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4156",
"html_url": "https://github.com/huggingface/datasets/pull/4156",
"diff_url": "https://github.com/huggingface/datasets/pull/4156.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4156.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4155/comments | https://api.github.com/repos/huggingface/datasets/issues/4155/events | https://github.com/huggingface/datasets/pull/4155 | 1,202,183,608 | PR_kwDODunzps42Hqam | 4,155 | Make HANS dataset streamable | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-12T17:34:13 | 2022-04-13T12:03:46 | 2022-04-13T11:57:35 | CONTRIBUTOR | null | Fix #4133 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4155/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4155",
"html_url": "https://github.com/huggingface/datasets/pull/4155",
"diff_url": "https://github.com/huggingface/datasets/pull/4155.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4155.patch",
"merged_at": "2022-04-13T11:57:34"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4154/comments | https://api.github.com/repos/huggingface/datasets/issues/4154/events | https://github.com/huggingface/datasets/pull/4154 | 1,202,145,721 | PR_kwDODunzps42Hh14 | 4,154 | Generate tasks.json taxonomy from `huggingface_hub` | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Ok recomputed the json file, this should be ready to review now! @lhoestq ",
"Note: the generated JSON from `hf/hub-docs` can be found in the output of a GitHub Action run on that repo, for instance in https://github.com/huggingface/hub-docs/runs/6006686983?check_suite_focus=true\r\n\r\n(click on \"Run export-tasks script\")",
"Should we not add the tasks with hideInDatasets?",
"yes, probably true β i'll change that in a PR in `hub-docs`",
"Yes that's good :) feel free to merge",
"thanks to the both of you!"
] | 2022-04-12T17:12:46 | 2022-04-14T10:32:32 | 2022-04-14T10:26:13 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4154/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4154",
"html_url": "https://github.com/huggingface/datasets/pull/4154",
"diff_url": "https://github.com/huggingface/datasets/pull/4154.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4154.patch",
"merged_at": "2022-04-14T10:26:13"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4153/comments | https://api.github.com/repos/huggingface/datasets/issues/4153/events | https://github.com/huggingface/datasets/pull/4153 | 1,202,040,506 | PR_kwDODunzps42HLA8 | 4,153 | Adding Text-based NP Enrichment (TNE) dataset | {
"login": "yanaiela",
"id": 8031035,
"node_id": "MDQ6VXNlcjgwMzEwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8031035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanaiela",
"html_url": "https://github.com/yanaiela",
"followers_url": "https://api.github.com/users/yanaiela/followers",
"following_url": "https://api.github.com/users/yanaiela/following{/other_user}",
"gists_url": "https://api.github.com/users/yanaiela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanaiela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanaiela/subscriptions",
"organizations_url": "https://api.github.com/users/yanaiela/orgs",
"repos_url": "https://api.github.com/users/yanaiela/repos",
"events_url": "https://api.github.com/users/yanaiela/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanaiela/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hey @lhoestq, can you please have a look? π",
"Great, thanks again @lhoestq! I think we're good to go now",
"Done"
] | 2022-04-12T15:47:03 | 2022-05-03T14:05:48 | 2022-05-03T14:05:48 | CONTRIBUTOR | null | Added the [TNE](https://github.com/yanaiela/TNE) dataset to the library | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4153/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4153",
"html_url": "https://github.com/huggingface/datasets/pull/4153",
"diff_url": "https://github.com/huggingface/datasets/pull/4153.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4153.patch",
"merged_at": "2022-05-03T14:05:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4152/comments | https://api.github.com/repos/huggingface/datasets/issues/4152/events | https://github.com/huggingface/datasets/issues/4152 | 1,202,034,115 | I_kwDODunzps5HpZXD | 4,152 | ArrayND error in pyarrow 5 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Where do we bump the required pyarrow version? Any inputs on how I fix this issue? ",
"We need to bump it in `setup.py` as well as update some CI job to use pyarrow 6 instead of 5 in `.circleci/config.yaml` and `.github/workflows/benchmarks.yaml`"
] | 2022-04-12T15:41:40 | 2022-05-04T09:29:46 | 2022-05-04T09:29:46 | MEMBER | null | As found in https://github.com/huggingface/datasets/pull/3903, The ArrayND features fail on pyarrow 5:
```python
import pyarrow as pa
from datasets import Array2D
from datasets.table import cast_array_to_feature
arr = pa.array([[[0]]])
feature_type = Array2D(shape=(1, 1), dtype="int64")
cast_array_to_feature(arr, feature_type)
```
raises
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-8-04610f9fa78c> in <module>
----> 1 cast_array_to_feature(pa.array([[[0]]]), Array2D(shape=(1, 1), dtype="int32"))
~/Desktop/hf/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs)
1672 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1673 else:
-> 1674 return func(array, *args, **kwargs)
1675
1676 return wrapper
~/Desktop/hf/datasets/src/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1806 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)
1807 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 1808 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
1809 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
1810
~/Desktop/hf/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs)
1672 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1673 else:
-> 1674 return func(array, *args, **kwargs)
1675
1676 return wrapper
~/Desktop/hf/datasets/src/datasets/table.py in array_cast(array, pa_type, allow_number_to_str)
1705 array = array.storage
1706 if isinstance(pa_type, pa.ExtensionType):
-> 1707 return pa_type.wrap_array(array)
1708 elif pa.types.is_struct(array.type):
1709 if pa.types.is_struct(pa_type) and (
AttributeError: 'Array2DExtensionType' object has no attribute 'wrap_array'
```
The thing is that `cast_array_to_feature` is called when writing an Arrow file, so creating an Arrow dataset using any ArrayND type currently fails.
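For reference, a pyarrow-5-compatible fallback for the failing call might look like the sketch below (this is only an illustration, and it assumes the storage array already matches the extension type's storage type, which `wrap_array` requires as well):
```python
import pyarrow as pa

def wrap_array_compat(pa_type: pa.ExtensionType, array: pa.Array) -> pa.Array:
    # pyarrow>=6 exposes ExtensionType.wrap_array directly
    if hasattr(pa_type, "wrap_array"):
        return pa_type.wrap_array(array)
    # on pyarrow 5 the extension array can be rebuilt from its storage
    return pa.ExtensionArray.from_storage(pa_type, array)
```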
`wrap_array` has been added in pyarrow 6, so we can either bump the required pyarrow version or fix this for pyarrow 5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4152/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4151/comments | https://api.github.com/repos/huggingface/datasets/issues/4151/events | https://github.com/huggingface/datasets/pull/4151 | 1,201,837,999 | PR_kwDODunzps42GgLu | 4,151 | Add missing label for emotion description | {
"login": "lijiazheng99",
"id": 44396506,
"node_id": "MDQ6VXNlcjQ0Mzk2NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/44396506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lijiazheng99",
"html_url": "https://github.com/lijiazheng99",
"followers_url": "https://api.github.com/users/lijiazheng99/followers",
"following_url": "https://api.github.com/users/lijiazheng99/following{/other_user}",
"gists_url": "https://api.github.com/users/lijiazheng99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lijiazheng99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lijiazheng99/subscriptions",
"organizations_url": "https://api.github.com/users/lijiazheng99/orgs",
"repos_url": "https://api.github.com/users/lijiazheng99/repos",
"events_url": "https://api.github.com/users/lijiazheng99/events{/privacy}",
"received_events_url": "https://api.github.com/users/lijiazheng99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-04-12T13:17:37 | 2022-04-12T13:58:50 | 2022-04-12T13:58:50 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4151/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4151",
"html_url": "https://github.com/huggingface/datasets/pull/4151",
"diff_url": "https://github.com/huggingface/datasets/pull/4151.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4151.patch",
"merged_at": "2022-04-12T13:58:50"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4150/comments | https://api.github.com/repos/huggingface/datasets/issues/4150/events | https://github.com/huggingface/datasets/issues/4150 | 1,201,689,730 | I_kwDODunzps5HoFSC | 4,150 | Inconsistent splits generation for datasets without loading script (packaged dataset puts everything into a single split) | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 2022-04-12T11:15:55 | 2022-04-28T21:02:44 | 2022-04-28T21:02:44 | CONTRIBUTOR | null | ## Describe the bug
Splits for dataset loaders without scripts are prepared inconsistently. I think it might be confusing for users.
## Steps to reproduce the bug
* If you load a packaged dataset from the Hub, it infers splits from the directory structure / filenames (check out the data [here](https://huggingface.co/datasets/nateraw/test-imagefolder-dataset)):
```python
ds = load_dataset("nateraw/test-imagefolder-dataset")
print(ds)
### Output:
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 6
})
test: Dataset({
features: ['image', 'label'],
num_rows: 4
})
})
```
* If you do the same from locally stored data, specifying only the directory path, you'll get the same:
```python
ds = load_dataset("/path/to/local/data/test-imagefolder-dataset")
print(ds)
### Output:
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 6
})
test: Dataset({
features: ['image', 'label'],
num_rows: 4
})
})
```
* However, if you explicitly specify the package name (like `imagefolder`, `csv`, `json`), all the data is put into a single split (a possible workaround is sketched right after this example):
```python
ds = load_dataset("imagefolder", data_dir="/path/to/local/data/test-imagefolder-dataset")
print(ds)
### Output:
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 10
})
})
```
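A possible interim workaround (not the fix requested here, just a sketch): pass explicit per-split `data_files` so the packaged builder keeps both splits. The glob patterns below are assumptions about the local folder layout.
```python
from datasets import load_dataset

# name the splits explicitly instead of relying on automatic inference
ds = load_dataset(
    "imagefolder",
    data_files={
        "train": "/path/to/local/data/test-imagefolder-dataset/train/**",
        "test": "/path/to/local/data/test-imagefolder-dataset/test/**",
    },
)
print(ds)
```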
## Expected results
For `load_dataset("imagefolder", data_dir="/path/to/local/data/test-imagefolder-dataset")` I expect the same output as for the first two options. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4150/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4149/comments | https://api.github.com/repos/huggingface/datasets/issues/4149/events | https://github.com/huggingface/datasets/issues/4149 | 1,201,389,221 | I_kwDODunzps5Hm76l | 4,149 | load_dataset for winoground returning decoding error | {
"login": "odellus",
"id": 4686956,
"node_id": "MDQ6VXNlcjQ2ODY5NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/odellus",
"html_url": "https://github.com/odellus",
"followers_url": "https://api.github.com/users/odellus/followers",
"following_url": "https://api.github.com/users/odellus/following{/other_user}",
"gists_url": "https://api.github.com/users/odellus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odellus/subscriptions",
"organizations_url": "https://api.github.com/users/odellus/orgs",
"repos_url": "https://api.github.com/users/odellus/repos",
"events_url": "https://api.github.com/users/odellus/events{/privacy}",
"received_events_url": "https://api.github.com/users/odellus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I thought I had fixed it with this after some helpful hints from @severo\r\n```python\r\nimport datasets \r\ntoken = 'hf_XXXXX'\r\ndataset = datasets.load_dataset(\r\n 'facebook/winoground', \r\n name='facebook--winoground', \r\n split='train', \r\n streaming=True,\r\n use_auth_token=token,\r\n)\r\n```\r\nbut I found out that wasn't the case\r\n```python\r\n[x for x in dataset]\r\n...\r\nClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl')\r\n```",
"Hi ! This dataset structure (image + labels in a JSON file) is not supported yet, though we're adding support for this in in #4069 \r\n\r\nThe following structure will be supported soon:\r\n```\r\nmetadata.json\r\nimages/\r\n image0.png\r\n image1.png\r\n ...\r\n```\r\nWhere `metadata.json` is a JSON Lines file with labels or other metadata, and each line must have a \"file_name\" field with the name of the image file.\r\n\r\nFor the moment are only supported:\r\n- JSON files only\r\n- image files only\r\n\r\nSince this dataset is a mix of the two, at the moment it fails trying to read the images as JSON.\r\n\r\nTherefore to be able to load this dataset we need to wait for the new structure to be supported (very soon ^^), or add a dataset script in the repository that reads both the JSON and the images cc @TristanThrush \r\n",
"We'll also investigate the issue with the streaming download manager in https://github.com/huggingface/datasets/issues/4139 ;) thanks for reporting",
"Are there any updates on this?",
"In the meantime, anyone can always download the images.zip and examples.jsonl files directly from huggingface.co - let me know if anyone has issues with that.",
"I mirrored the files at https://huggingface.co/datasets/facebook/winoground in a folder on my local machine `winground`\r\nand when I tried\r\n```python\r\nimport datasets\r\nds = datasets.load_from_disk('./winoground')\r\n```\r\nI get the following error\r\n```python\r\n--------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\nInput In [2], in <cell line: 1>()\r\n----> 1 ds = datasets.load_from_disk('./winoground')\r\n\r\nFile ~/.local/lib/python3.8/site-packages/datasets/load.py:1759, in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1757 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1758 else:\r\n-> 1759 raise FileNotFoundError(\r\n 1760 f\"Directory {dataset_path} is neither a dataset directory nor a dataset dict directory.\"\r\n 1761 )\r\n\r\nFileNotFoundError: Directory ./winoground is neither a dataset directory nor a dataset dict directory.\r\n```\r\nso still some work to be done on the backend imo.",
"Note that `load_from_disk` is the function that reloads an Arrow dataset saved with `my_dataset.save_to_disk`.\r\n\r\nOnce we do support images with metadata you'll be able to use `load_dataset(\"facebook/winoground\")` directly (or `load_dataset(\"./winoground\")` of you've cloned the winoground repository locally).",
"Apologies for the delay. I added a custom dataset loading script for winoground. It should work now, with an auth token:\r\n\r\n`examples = load_dataset('facebook/winoground', use_auth_token=<your auth token>)`\r\n\r\nLet me know if there are any issues",
"Adding the dataset loading script definitely didn't take as long as I thought it would π
",
"killer"
] | 2022-04-12T08:16:16 | 2022-05-04T23:40:38 | 2022-05-04T23:40:38 | CONTRIBUTOR | null | ## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/winoground', use_auth_token=token)
```
## Expected results
I downloaded images.zip and examples.jsonl manually. I was expecting to have some trouble decoding the JSON, so I didn't use jsonlines but instead was able to get a complete set of 400 examples by doing:
```python
import json
with open('examples.jsonl', 'r') as f:
examples = f.read().split('\n')
# Thinking this would error if the JSON is not utf-8 encoded
json_data = [json.loads(x) for x in examples]
print(json_data[-1])
```
and I see
```python
{'caption_0': 'someone is overdoing it',
'caption_1': 'someone is doing it over',
'collapsed_tag': 'Relation',
'id': 399,
'image_0': 'ex_399_img_0',
'image_1': 'ex_399_img_1',
'num_main_preds': 1,
'secondary_tag': 'Morpheme-Level',
'tag': 'Scope, Preposition'}
```
so I'm not sure what's going on here honestly. The file `examples.jsonl` doesn't have non-UTF-8 encoded text.
## Actual results
During the split operation after downloading, datasets encounters an error in the JSON ([trace](https://gist.github.com/odellus/e55d390ca203386bf551f38e0c63a46b) abbreviated for brevity).
```
datasets/packaged_modules/json/json.py:144 in Json._generate_tables(self, files)
...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
```
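As a side note, `0xff` as the very first byte is the JPEG magic number rather than text, which is consistent with the loader trying to parse one of the image files as JSON. A quick check under an assumption (that `images.zip` was extracted to a local `images/` folder):
```python
from pathlib import Path

# print the first two bytes of a few files; "ffd8" indicates a JPEG
for path in sorted(Path("images").iterdir())[:3]:
    with open(path, "rb") as f:
        print(path.name, f.read(2).hex())
```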
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4149/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4148/comments | https://api.github.com/repos/huggingface/datasets/issues/4148/events | https://github.com/huggingface/datasets/issues/4148 | 1,201,169,242 | I_kwDODunzps5HmGNa | 4,148 | fix confusing bleu metric example | {
"login": "aizawa-naoki",
"id": 6253193,
"node_id": "MDQ6VXNlcjYyNTMxOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6253193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aizawa-naoki",
"html_url": "https://github.com/aizawa-naoki",
"followers_url": "https://api.github.com/users/aizawa-naoki/followers",
"following_url": "https://api.github.com/users/aizawa-naoki/following{/other_user}",
"gists_url": "https://api.github.com/users/aizawa-naoki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aizawa-naoki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aizawa-naoki/subscriptions",
"organizations_url": "https://api.github.com/users/aizawa-naoki/orgs",
"repos_url": "https://api.github.com/users/aizawa-naoki/repos",
"events_url": "https://api.github.com/users/aizawa-naoki/events{/privacy}",
"received_events_url": "https://api.github.com/users/aizawa-naoki/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [] | 2022-04-12T06:18:26 | 2022-04-13T14:16:34 | 2022-04-13T14:16:34 | NONE | null | **Is your feature request related to a problem? Please describe.**
I would like to see the example in "Metric Card for BLEU" changed.
The 0th element in the predictions list is missing its closing square bracket, and the 1st list is missing a comma.
The BLEU score is calculated correctly, but it is difficult to understand, so it would be helpful if you could correct this.
```
>> predictions = [
... ["hello", "there", "general", "kenobi", # <- no closing square bracket.
... ["foo", "bar" "foobar"] # <- no comma between "bar" and "foobar"
... ]
>>> references = [
... [["hello", "there", "general", "kenobi"]],
... [["foo", "bar", "foobar"]]
... ]
>>> bleu = datasets.load_metric("bleu")
>>> results = bleu.compute(predictions=predictions, references=references)
>>> print(results)
{'bleu': 0.6370964381207871, ...
```
**Describe the solution you'd like**
```
>>> predictions = [
... ["hello", "there", "general", "kenobi"], # <- closing square bracket added
... ["foo", "bar", "foobar"] # <- comma added between "bar" and "foobar"
... ]
# and
>>> print(results)
{'bleu': 1.0, ...
```
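For completeness, a small snippet to sanity-check the corrected example (identical predictions and references should yield a BLEU of 1.0):
```python
import datasets

bleu = datasets.load_metric("bleu")
predictions = [
    ["hello", "there", "general", "kenobi"],
    ["foo", "bar", "foobar"],
]
references = [
    [["hello", "there", "general", "kenobi"]],
    [["foo", "bar", "foobar"]],
]
results = bleu.compute(predictions=predictions, references=references)
print(results["bleu"])  # expected: 1.0
```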
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4148/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4147/comments | https://api.github.com/repos/huggingface/datasets/issues/4147/events | https://github.com/huggingface/datasets/pull/4147 | 1,200,756,008 | PR_kwDODunzps42CtPl | 4,147 | Adjust path to datasets tutorial in How-To | {
"login": "NimaBoscarino",
"id": 6765188,
"node_id": "MDQ6VXNlcjY3NjUxODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6765188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NimaBoscarino",
"html_url": "https://github.com/NimaBoscarino",
"followers_url": "https://api.github.com/users/NimaBoscarino/followers",
"following_url": "https://api.github.com/users/NimaBoscarino/following{/other_user}",
"gists_url": "https://api.github.com/users/NimaBoscarino/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NimaBoscarino/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NimaBoscarino/subscriptions",
"organizations_url": "https://api.github.com/users/NimaBoscarino/orgs",
"repos_url": "https://api.github.com/users/NimaBoscarino/repos",
"events_url": "https://api.github.com/users/NimaBoscarino/events{/privacy}",
"received_events_url": "https://api.github.com/users/NimaBoscarino/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-12T01:20:34 | 2022-04-12T08:32:24 | 2022-04-12T08:26:02 | CONTRIBUTOR | null | The link in the How-To overview page to the Datasets tutorials is currently broken. This is just a small adjustment to make it match the format used in https://github.com/huggingface/datasets/blob/master/docs/source/tutorial.md.
(Edit to add: The link in the PR deployment (https://moon-ci-docs.huggingface.co/docs/datasets/pr_4147/en/how_to) is also broken since it's actually hardcoded to `master` and not dynamic to the branch name, but other links seem to behave similarly.) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4147/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4147",
"html_url": "https://github.com/huggingface/datasets/pull/4147",
"diff_url": "https://github.com/huggingface/datasets/pull/4147.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4147.patch",
"merged_at": "2022-04-12T08:26:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4146/comments | https://api.github.com/repos/huggingface/datasets/issues/4146/events | https://github.com/huggingface/datasets/issues/4146 | 1,200,215,789 | I_kwDODunzps5Hidbt | 4,146 | SAMSum dataset viewer not working | {
"login": "aakashnegi10",
"id": 39906333,
"node_id": "MDQ6VXNlcjM5OTA2MzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/39906333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aakashnegi10",
"html_url": "https://github.com/aakashnegi10",
"followers_url": "https://api.github.com/users/aakashnegi10/followers",
"following_url": "https://api.github.com/users/aakashnegi10/following{/other_user}",
"gists_url": "https://api.github.com/users/aakashnegi10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aakashnegi10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aakashnegi10/subscriptions",
"organizations_url": "https://api.github.com/users/aakashnegi10/orgs",
"repos_url": "https://api.github.com/users/aakashnegi10/repos",
"events_url": "https://api.github.com/users/aakashnegi10/events{/privacy}",
"received_events_url": "https://api.github.com/users/aakashnegi10/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"https://huggingface.co/datasets/samsum\r\n\r\n```\r\nStatus code: 400\r\nException: ValueError\r\nMessage: Cannot seek streaming HTTP file\r\n```",
"Currently, only the datasets that can be streamed support the dataset viewer. Maybe @lhoestq @albertvillanova or @mariosasko could give more details about why the dataset cannot be streamed.",
"It looks like the host (https://arxiv.org) doesn't allow HTTP Range requests, which is what we use to stream data.\r\n\r\nThis can be fix if we host the data ourselves, which is ok since the dataset is under CC BY-NC-ND 4.0"
] | 2022-04-11T16:22:57 | 2022-04-29T16:26:09 | 2022-04-29T16:26:09 | NONE | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4146/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4145/comments | https://api.github.com/repos/huggingface/datasets/issues/4145/events | https://github.com/huggingface/datasets/pull/4145 | 1,200,209,781 | PR_kwDODunzps42A6Rt | 4,145 | Redirect TIMIT download from LDC | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"CI is failing because some tags are outdated, but they're fixed in #4067 ",
"_The documentation is not available anymore as the PR was closed or merged._",
"We may do a release pretty soon (today ?), let me know if it's fine to include it in the new release",
"Fine to include this change!"
] | 2022-04-11T16:17:55 | 2022-04-13T15:39:31 | 2022-04-13T15:33:04 | MEMBER | null | LDC data is protected under US copyright laws and under various legal agreements between the Linguistic Data Consortium/the University of Pennsylvania and data providers which prohibit redistribution of that data by anyone other than LDC. Similarly, LDC's membership agreements, non-member user agreement and various corpus-specific license agreements specifically state that users cannot publish, retransmit, disclose, copy, reproduce or redistribute LDC databases to others outside their organizations.
LDC explicitly asked us to remove the download script for the TIMIT dataset. In this PR I remove all means to download the dataset, and redirect users to download the data from https://catalog.ldc.upenn.edu/LDC93S1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4145/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4145",
"html_url": "https://github.com/huggingface/datasets/pull/4145",
"diff_url": "https://github.com/huggingface/datasets/pull/4145.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4145.patch",
"merged_at": "2022-04-13T15:33:03"
} | true |
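A minimal sketch of how the dataset would be loaded after this change, assuming the `timit_asr` loader accepts a `data_dir` argument pointing at a manually obtained copy (the path below is a placeholder):

```python
from datasets import load_dataset

# TIMIT is no longer downloaded automatically; obtain it from
# https://catalog.ldc.upenn.edu/LDC93S1, extract it locally, and point the
# loader at that directory. "path/to/TIMIT" is a placeholder path.
timit = load_dataset("timit_asr", data_dir="path/to/TIMIT")
print(timit)
```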
https://api.github.com/repos/huggingface/datasets/issues/4144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4144/comments | https://api.github.com/repos/huggingface/datasets/issues/4144/events | https://github.com/huggingface/datasets/pull/4144 | 1,200,016,983 | PR_kwDODunzps42ARmu | 4,144 | Fix splits in local packaged modules, local datasets without script and hub datasets without script | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks !\r\nI'm in favor of this change, even though it's a breaking change:\r\n\r\nif you had a dataset\r\n```\r\ndata/\r\n train.csv\r\n test.csv\r\n```\r\n\r\nthen running this code would now return both train and test splits:\r\n```python\r\nload_dataset(\"csv\", data_dir=\"data/\")\r\n```\r\nwhereas right now it returns only a train split with the data from both CSV files.\r\n\r\nIn my opinion it's ok do do this breaking change because:\r\n- it makes this behavior consistent with `load_dataset(\"path/to/data\")` that also returns both splits: data_files resolution must be the same\r\n- I don't expect too many affected users (unless people really wanted to group train and test images in the train split on purpose ?) compared to the many new users to come (especially with #4069 )\r\n- this usage will become more and more common as we add packaged builder and imagefolder/audiofolder usage grows, so it may be better to do this change early\r\n\r\nLet me know if you think this is acceptable @mariosasko @albertvillanova or not, and if you think we need to first have a warning for some time before switching to this new behavior",
"Also, if people really want to put train and test, say, images in a single train split they could do \r\n`load_dataset(\"imagefolder\", data_files={\"train\": \"/path/to/data/**})`. Probably (arguably :)), if this is a more counterintuitive case, then it should require manual files specification, not a default one (in which we expect that users do want to infer splits from filenames / dir structure but currently they have to pass smth like `{\"train\": \"/path/to/data/train*\", \"test\": \"/path/to/data/test*\"}` explicitly as `data_files`) ",
"I also like this change, and I don't think we even need a warning during the transition period, considering I've been asked several times since the release of `imagefolder` why splits are not correctly inferred if the directory structure is as follows:\r\n```\r\ndata_dir\r\n train\r\n label_a\r\n 0.jpg\r\n ...\r\n label_b \r\n 0.jpg\r\n ...\r\n test\r\n label_a\r\n 0.jpg\r\n ...\r\n label_b \r\n 0.jpg\r\n ...\r\n```",
"Cool ! Feel free to add a test (maybe something similar to `test_PackagedDatasetModuleFactory_with_data_dir` but with a data_dir that contains several splits) and mark this PR as ready for review then @polinaeterna :)",
"@lhoestq @mariosasko do you think it's a good idea to do the same with `HubDatasetModuleFactoryWithoutScript` and `LocalDatasetModuleFactoryWithoutScript` (see the latest change). If we agree on the current change, doing \r\n```python\r\nds = load_dataset(\"polinaeterna/jsonl_test\", data_dir=\"data/\")\r\n```\r\non dataset with the following structure:\r\n```\r\ntrain.jsonl\r\ntest.jsonl\r\ndata/\r\n train.jsonl\r\n test.jsonl\r\n```\r\nwill result in having two splits from files under `data/` dir in specified repo, while master version returns a single train split. \r\nThe same would be for local dataset without script if doing smth like:\r\n```python\r\nds = load_dataset(\"/home/polina/workspace/repos/jsonl_test\", data_dir=\"/home/polina/workspace/repos/jsonl_test/data\")\r\n```\r\n(though I'm not sure I understand this use case :D)\r\nLet me know if you think we should preserve the same logic for all factories or if I should roll back this change.",
"@lhoestq to test passing subdirectory (`base_path`) to data_files functions and methods, I extended the temporary test directory with data so that it contains subdirectory. Because of that the number of files in this directory increased, so I had to change some numbers and patterns to account for this change - [907ddf0](https://github.com/huggingface/datasets/pull/4144/commits/907ddf09d3afece5afbae18675c859d6e453f2bf)\r\n\r\nDo you think it's ok? Another option is to create another tmp dir and do all the checks inside it. "
] | 2022-04-11T13:57:33 | 2022-04-29T09:12:14 | 2022-04-28T21:02:45 | CONTRIBUTOR | null | fixes #4150
I suggest to infer splits structure from files when `data_dir` is passed with `get_patterns_locally`, analogous to what's done in `LocalDatasetModuleFactoryWithoutScript` with `self.path`, instead of generating files with `data_dir/**` patterns and putting them all into a single default (train) split.
I would also suggest to align `HubDatasetModuleFactoryWithoutScript` and `LocalDatasetModuleFactoryWithoutScript` with this logic (remove `data_files = os.path.join(data_dir, "**")`). It's not reflected in the current code now as I'd like to discuss it cause I might be unaware of some use cases. @lhoestq @mariosasko @albertvillanova WDYT? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4144/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4144/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4144",
"html_url": "https://github.com/huggingface/datasets/pull/4144",
"diff_url": "https://github.com/huggingface/datasets/pull/4144.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4144.patch",
"merged_at": "2022-04-28T21:02:44"
} | true |
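A minimal sketch of the behaviour discussed in this thread, assuming a local `data/` directory containing `train.csv` and `test.csv`; with the change, both splits are inferred from the file names instead of being collapsed into a single train split:

```python
from datasets import load_dataset

# data/
#   train.csv
#   test.csv
# With split inference, both splits are detected from the file names.
ds = load_dataset("csv", data_dir="data/")
print(ds)  # DatasetDict with "train" and "test" splits

# Putting every file into one split remains possible with an explicit data_files mapping.
ds_single = load_dataset("csv", data_files={"train": "data/*.csv"})
print(ds_single)  # a single "train" split built from both files
```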
https://api.github.com/repos/huggingface/datasets/issues/4143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4143/comments | https://api.github.com/repos/huggingface/datasets/issues/4143/events | https://github.com/huggingface/datasets/issues/4143 | 1,199,937,961 | I_kwDODunzps5HhZmp | 4,143 | Unable to download `Wikipedia` 20220301.en version | {
"login": "beyondguo",
"id": 37113676,
"node_id": "MDQ6VXNlcjM3MTEzNjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/beyondguo",
"html_url": "https://github.com/beyondguo",
"followers_url": "https://api.github.com/users/beyondguo/followers",
"following_url": "https://api.github.com/users/beyondguo/following{/other_user}",
"gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions",
"organizations_url": "https://api.github.com/users/beyondguo/orgs",
"repos_url": "https://api.github.com/users/beyondguo/repos",
"events_url": "https://api.github.com/users/beyondguo/events{/privacy}",
"received_events_url": "https://api.github.com/users/beyondguo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! We've recently updated the Wikipedia script, so these changes are only available on master and can be fetched as follows:\r\n```python\r\ndataset_wikipedia = load_dataset(\"wikipedia\", \"20220301.en\", revision=\"master\")\r\n```",
"Hi, how can I load the previous \"20200501.en\" version of wikipedia which had been downloaded to the default path? Thanks!",
"@JiaQiSJTU just reinstall the previous verision of the package, e.g. `!pip install -q datasets==1.0.0`"
] | 2022-04-11T13:00:14 | 2022-08-17T00:37:55 | 2022-04-21T17:04:14 | NONE | null | ## Describe the bug
Unable to download the `Wikipedia` dataset, 20220301.en version
## Steps to reproduce the bug
```python
!pip install apache_beam mwparserfromhell
from datasets import load_dataset

dataset_wikipedia = load_dataset("wikipedia", "20220301.en")
```
## Actual results
```
ValueError: BuilderConfig 20220301.en not found.
Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', 
'20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu']
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Ubuntu
- Python version: 3.6
- PyArrow version: 6.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4143/timeline | null | completed | null | null | false |
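As a follow-up to the fix suggested in the comments above, the set of configurations exposed by the installed script version can be listed before loading; a minimal sketch, assuming a `datasets` version that provides `get_dataset_config_names`:

```python
from datasets import get_dataset_config_names, load_dataset

# Listing the configs helps diagnose "BuilderConfig ... not found" errors:
# older script versions only know the 20200501.* dumps.
print(get_dataset_config_names("wikipedia"))

# The 20220301.* configs were only available on the master branch of the script
# at the time, hence the revision pin suggested in the comments.
wiki = load_dataset("wikipedia", "20220301.en", revision="master")
```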
https://api.github.com/repos/huggingface/datasets/issues/4142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4142/comments | https://api.github.com/repos/huggingface/datasets/issues/4142/events | https://github.com/huggingface/datasets/issues/4142 | 1,199,794,750 | I_kwDODunzps5Hg2o- | 4,142 | Add ObjectFolder 2.0 dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Datasets are not tracked in this repository anymore."
] | 2022-04-11T10:57:51 | 2022-10-05T10:30:49 | null | MEMBER | null | ## Adding a Dataset
- **Name:** ObjectFolder 2.0
- **Description:** ObjectFolder 2.0 is a dataset of 1,000 objects in the form of implicit representations. It contains 1,000 Object Files each containing the complete multisensory profile for an object instance.
- **Paper:** https://arxiv.org/abs/2204.02389
- **Data:** https://github.com/rhgao/ObjectFolder
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4142/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4141/comments | https://api.github.com/repos/huggingface/datasets/issues/4141/events | https://github.com/huggingface/datasets/issues/4141 | 1,199,610,885 | I_kwDODunzps5HgJwF | 4,141 | Why is the dataset not visible under the dataset preview section? | {
"login": "Nid989",
"id": 75028682,
"node_id": "MDQ6VXNlcjc1MDI4Njgy",
"avatar_url": "https://avatars.githubusercontent.com/u/75028682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nid989",
"html_url": "https://github.com/Nid989",
"followers_url": "https://api.github.com/users/Nid989/followers",
"following_url": "https://api.github.com/users/Nid989/following{/other_user}",
"gists_url": "https://api.github.com/users/Nid989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nid989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nid989/subscriptions",
"organizations_url": "https://api.github.com/users/Nid989/orgs",
"repos_url": "https://api.github.com/users/Nid989/repos",
"events_url": "https://api.github.com/users/Nid989/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nid989/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [] | 2022-04-11T08:36:42 | 2022-04-11T18:55:32 | 2022-04-11T17:09:49 | NONE | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4141/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4140/comments | https://api.github.com/repos/huggingface/datasets/issues/4140/events | https://github.com/huggingface/datasets/issues/4140 | 1,199,492,356 | I_kwDODunzps5Hfs0E | 4,140 | Error loading arxiv data set | {
"login": "yjqiu",
"id": 5383918,
"node_id": "MDQ6VXNlcjUzODM5MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5383918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjqiu",
"html_url": "https://github.com/yjqiu",
"followers_url": "https://api.github.com/users/yjqiu/followers",
"following_url": "https://api.github.com/users/yjqiu/following{/other_user}",
"gists_url": "https://api.github.com/users/yjqiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjqiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjqiu/subscriptions",
"organizations_url": "https://api.github.com/users/yjqiu/orgs",
"repos_url": "https://api.github.com/users/yjqiu/repos",
"events_url": "https://api.github.com/users/yjqiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjqiu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! I think this error may be related to using an older version of the library. I was able to load the dataset without any issues using the latest version of `datasets`. Can you upgrade to the latest version of `datasets` and try again? :)",
"Hi! As @stevhliu suggested, to fix the issue, update the lib to the newest version with:\r\n```\r\npip install -U datasets\r\n```\r\nand download the dataset as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset('scientific_papers', 'arxiv', download_mode=\"force_redownload\")\r\n```",
"Thanks for the quick response! It works now. The problem is that I used nlp. load_dataset instead of datasets. load_dataset."
] | 2022-04-11T07:06:34 | 2022-04-12T16:24:08 | 2022-04-12T16:24:08 | NONE | null | ## Describe the bug
I encountered the error below when loading the arxiv dataset via `nlp.load_dataset('scientific_papers', 'arxiv',)`.
```
Traceback (most recent call last):
File "scripts/summarization.py", line 354, in <module>
main(args)
File "scripts/summarization.py", line 306, in main
model.hf_datasets = nlp.load_dataset('scientific_papers', 'arxiv')
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/load.py", line 549, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 463, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 522, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?id=1b3rmCSIoh6VhD4HKWjI4HOW-cSwcwbeC&export=download', 'https://drive.google.com/uc?id=1lvsqvsFi3W-pE1SqNZI0s8NR9rC1tsja&export=download']
```
I then tried to ignore verification steps by `ignore_verifications=True` and there is another error.
```
Traceback (most recent call last):
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 537, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 810, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/datasets/scientific_papers/9e4f2cfe3d8494e9f34a84ce49c3214605b4b52a3d8eb199104430d04c52cc12/scientific_papers.py", line 108, in _generate_examples
with open(path, encoding="utf-8") as f:
NotADirectoryError: [Errno 20] Not a directory: '/home/username/.cache/huggingface/datasets/downloads/c0deae7af7d9c87f25dfadf621f7126f708d7dcac6d353c7564883084a000076/arxiv-dataset/train.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "scripts/summarization.py", line 354, in <module>
main(args)
File "scripts/summarization.py", line 306, in main
model.hf_datasets = nlp.load_dataset('scientific_papers', 'arxiv', ignore_verifications=True)
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/load.py", line 549, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 463, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/opt/conda/envs/longformer/lib/python3.7/site-packages/nlp/builder.py", line 539, in _download_and_prepare
raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
OSError: Cannot find data file.
```
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import nlp

dataset_arxiv = nlp.load_dataset('scientific_papers', 'arxiv')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4140/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4139/comments | https://api.github.com/repos/huggingface/datasets/issues/4139/events | https://github.com/huggingface/datasets/issues/4139 | 1,199,443,822 | I_kwDODunzps5Hfg9u | 4,139 | Dataset viewer issue for Winoground | {
"login": "alcinos",
"id": 7438704,
"node_id": "MDQ6VXNlcjc0Mzg3MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7438704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alcinos",
"html_url": "https://github.com/alcinos",
"followers_url": "https://api.github.com/users/alcinos/followers",
"following_url": "https://api.github.com/users/alcinos/following{/other_user}",
"gists_url": "https://api.github.com/users/alcinos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alcinos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alcinos/subscriptions",
"organizations_url": "https://api.github.com/users/alcinos/orgs",
"repos_url": "https://api.github.com/users/alcinos/repos",
"events_url": "https://api.github.com/users/alcinos/events{/privacy}",
"received_events_url": "https://api.github.com/users/alcinos/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
},
{
"id": 4030248571,
"node_id": "LA_kwDODunzps7wOLZ7",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer-gated",
"name": "dataset-viewer-gated",
"color": "51F745",
"default": false,
"description": ""
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
},
{
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"related (same dataset): https://github.com/huggingface/datasets/issues/4149. But the issue is different. Looking at it",
"I thought this issue was related to the error I was seeing, but upon consideration I'd think the dataset viewer would return a 500 (unable to create the split like me) or a 404 (unable to load split b/c it was never created) error if it was having the issue I was seeing in #4149. 401 message makes it look like dataset viewer isn't passing through the identity of the user who has signed the licensing agreement when making the request to GET [examples.jsonl](https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl).",
"Pinging @SBrandeis, as it seems related to gated datasets and access tokens.",
"To replicate:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset= datasets.load_dataset('facebook/winoground', name='facebook--winoground', split='train', use_auth_token=\"hf_app_...\", streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 439, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py\", line 85, in _generate_tables\r\n for file_idx, file in enumerate(files):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 679, in __iter__\r\n yield from self.generator(*self.args, **self.kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 731, in _iter_from_urlpaths\r\n for dirpath, _, filenames in xwalk(urlpath, use_auth_token=use_auth_token):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 623, in xwalk\r\n for dirpath, dirnames, filenames in fs.walk(main_hop):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 372, in walk\r\n listing = self.ls(path, detail=True, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 85, in wrapper\r\n return sync(self.loop, func, *args, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 65, in sync\r\n raise return_result\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 25, in _runner\r\n result[0] = await coro\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 196, in _ls\r\n out = await self._ls_real(url, detail=detail, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 150, in _ls_real\r\n self._raise_not_found_for_status(r, url)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 208, in _raise_not_found_for_status\r\n response.raise_for_status()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py\", line 1004, in raise_for_status\r\n raise ClientResponseError(\r\naiohttp.client_exceptions.ClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl')\r\n```\r\n\r\n*edited 
to fix `use_token` -> `use_auth_token`, thx @odellus*",
"~~Using your command to replicate and changing `use_token` to `use_auth_token` fixes the problem I was seeing in #4149.~~\r\nNevermind it gave me an iterator to a method returning the same 401s. Changing `use_token` to `use_auth_token` does not fix the issue.",
"After investigation with @severo , we found a potential culprit: https://github.com/huggingface/datasets/blob/3cd0a009a43f9f174056d70bfa2ca32216181926/src/datasets/utils/streaming_download_manager.py#L610-L624\r\n\r\nThe streaming manager does not seem to pass `use_auth_token` to `fsspec` when streaming and not iterating content of a zip archive\r\n\r\ncc @albertvillanova @lhoestq ",
"I was able to reproduce it on a private dataset, let me work on a fix",
"Hey @lhoestq, Thanks for working on a fix! Any plans to merge #4173 into master? ",
"Thanks for the heads up, I still need to fix some tests that are failing in the CI before merging ;)",
"The fix has been merged, we'll do a new release soon, and update the dataset viewer",
"Fixed, thanks!\r\n<img width=\"1119\" alt=\"Capture dβeΜcran 2022-06-21 aΜ 18 41 09\" src=\"https://user-images.githubusercontent.com/1676121/174853571-afb0749c-4178-4c89-ab40-bb162a449788.png\">\r\n"
] | 2022-04-11T06:11:41 | 2022-06-21T16:43:58 | 2022-06-21T16:43:58 | NONE | null | ## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files from the interface, so I assume I'm granted to access it. I'd assume the permission somehow doesn't propagate to the dataset viewer tool.
Am I the one who added this dataset ? No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4139/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4139/timeline | null | completed | null | null | false |
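For completeness, a minimal sketch of streaming the gated dataset once the `use_auth_token` fix was released, assuming the account behind the token has accepted the license on the Hub (`hf_...` is a placeholder token):

```python
from datasets import load_dataset

# facebook/winoground is gated: access must be requested on the Hub first, and a
# token of an account with access must be passed. "hf_..." is a placeholder.
winoground = load_dataset(
    "facebook/winoground",
    split="train",
    streaming=True,
    use_auth_token="hf_...",
)
print(next(iter(winoground)))
```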