url
stringlengths 61
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 75
75
| comments_url
stringlengths 70
70
| events_url
stringlengths 68
68
| html_url
stringlengths 49
51
| id
int64 1.14B
1.87B
| node_id
stringlengths 18
19
| number
int64 3.74k
6.19k
| title
stringlengths 1
290
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
list | milestone
dict | comments
sequence | created_at
timestamp[s] | updated_at
timestamp[s] | closed_at
timestamp[s] | author_association
stringclasses 3
values | active_lock_reason
null | body
stringlengths 2
33.9k
⌀ | reactions
dict | timeline_url
stringlengths 70
70
| performed_via_github_app
null | state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict | is_pull_request
bool 2
classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5057/comments | https://api.github.com/repos/huggingface/datasets/issues/5057/events | https://github.com/huggingface/datasets/pull/5057 | 1,394,827,216 | PR_kwDODunzps5AD4c6 | 5,057 | Support `converters` in `CsvBuilder` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-03T14:23:21 | 2022-10-04T11:19:28 | 2022-10-04T11:17:32 | CONTRIBUTOR | null | Add the `converters` param to `CsvBuilder`, to help in situations like [this one](https://discuss.huggingface.co/t/typeerror-in-load-dataset-related-to-a-sequence-of-strings/23545).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5057/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5057",
"html_url": "https://github.com/huggingface/datasets/pull/5057",
"diff_url": "https://github.com/huggingface/datasets/pull/5057.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5057.patch",
"merged_at": "2022-10-04T11:17:32"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5056/comments | https://api.github.com/repos/huggingface/datasets/issues/5056/events | https://github.com/huggingface/datasets/pull/5056 | 1,394,713,173 | PR_kwDODunzps5ADfxN | 5,056 | Fix broken URL's (GEM) | {
"login": "manandey",
"id": 6687858,
"node_id": "MDQ6VXNlcjY2ODc4NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manandey",
"html_url": "https://github.com/manandey",
"followers_url": "https://api.github.com/users/manandey/followers",
"following_url": "https://api.github.com/users/manandey/following{/other_user}",
"gists_url": "https://api.github.com/users/manandey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manandey/subscriptions",
"organizations_url": "https://api.github.com/users/manandey/orgs",
"repos_url": "https://api.github.com/users/manandey/repos",
"events_url": "https://api.github.com/users/manandey/events{/privacy}",
"received_events_url": "https://api.github.com/users/manandey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5056). All of your documentation changes will be reflected on that endpoint.",
"Thanks, @manandey. We have removed all dataset scripts from this repo. Subsequent PRs should be opened directly on the Hugging Face Hub."
] | 2022-10-03T13:13:22 | 2022-10-04T13:49:00 | 2022-10-04T13:48:59 | CONTRIBUTOR | null | This PR fixes the broken URL's in GEM. cc. @lhoestq, @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5056/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5056",
"html_url": "https://github.com/huggingface/datasets/pull/5056",
"diff_url": "https://github.com/huggingface/datasets/pull/5056.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5056.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5055/comments | https://api.github.com/repos/huggingface/datasets/issues/5055/events | https://github.com/huggingface/datasets/pull/5055 | 1,394,503,844 | PR_kwDODunzps5ACyVU | 5,055 | Fix backward compatibility for dataset_infos.json | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-03T10:30:14 | 2022-10-03T13:43:55 | 2022-10-03T13:41:32 | MEMBER | null | While working on https://github.com/huggingface/datasets/pull/5018 I noticed a small bug introduced in #4926 regarding backward compatibility for dataset_infos.json
Indeed, when a dataset repo had both dataset_infos.json and README.md, the JSON file was ignored. This is unexpected: in practice it should be ignored only if the README.md has a dataset_info field, which has precedence over the data in the JSON file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5055/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5055",
"html_url": "https://github.com/huggingface/datasets/pull/5055",
"diff_url": "https://github.com/huggingface/datasets/pull/5055.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5055.patch",
"merged_at": "2022-10-03T13:41:32"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5054/comments | https://api.github.com/repos/huggingface/datasets/issues/5054/events | https://github.com/huggingface/datasets/pull/5054 | 1,394,152,728 | PR_kwDODunzps5ABnd3 | 5,054 | Fix license/citation information of squadshifts dataset card | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-03T05:19:13 | 2022-10-03T09:26:49 | 2022-10-03T09:24:30 | MEMBER | null | This PR fixes the license/citation information of squadshifts dataset card, once the dataset owners have responded to our request for information:
- https://github.com/modestyachts/squadshifts-website/issues/1
Additionally, we have updated the mention in their website to our `datasets` library (they were referring old name `nlp`):
- https://github.com/modestyachts/squadshifts-website/pull/2#event-7500953009 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5054/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5054",
"html_url": "https://github.com/huggingface/datasets/pull/5054",
"diff_url": "https://github.com/huggingface/datasets/pull/5054.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5054.patch",
"merged_at": "2022-10-03T09:24:30"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5053/comments | https://api.github.com/repos/huggingface/datasets/issues/5053/events | https://github.com/huggingface/datasets/issues/5053 | 1,393,739,882 | I_kwDODunzps5TEshq | 5,053 | Intermittent JSON parse error when streaming the Pile | {
"login": "neelnanda-io",
"id": 77788841,
"node_id": "MDQ6VXNlcjc3Nzg4ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/77788841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neelnanda-io",
"html_url": "https://github.com/neelnanda-io",
"followers_url": "https://api.github.com/users/neelnanda-io/followers",
"following_url": "https://api.github.com/users/neelnanda-io/following{/other_user}",
"gists_url": "https://api.github.com/users/neelnanda-io/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neelnanda-io/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neelnanda-io/subscriptions",
"organizations_url": "https://api.github.com/users/neelnanda-io/orgs",
"repos_url": "https://api.github.com/users/neelnanda-io/repos",
"events_url": "https://api.github.com/users/neelnanda-io/events{/privacy}",
"received_events_url": "https://api.github.com/users/neelnanda-io/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Maybe #2838 can help. In this PR we allow to skip bad chunks of JSON data to not crash the training\r\n\r\nDid you have warning messages before the error ?\r\n\r\nsomething like this maybe ?\r\n```\r\n03/24/2022 02:19:46 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [1/20]\r\n03/24/2022 02:20:01 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [2/20]\r\n03/24/2022 02:20:09 - ERROR - datasets.packaged_modules.json.json - Failed to read file 'gzip://file-000000000007.json::https://huggingface.co/datasets/lvwerra/codeparrot-clean-train/resolve/1d740acb9d09cf7a3307553323e2c677a6535407/file-000000000007.json.gz' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0\r\n```",
"Ah, thanks! I did get errors like that. Sad that PR wasn't merged in! \r\n\r\nI'm currently just downloading 200GB of the Pile locally to avoid streaming (I have space and it's faster anyway), but that's really useful! I can probably apply the dumb patch of just commenting out the bits that raise the JSON Parse Error lol, based on your code - if I continue the loop should it be fine?",
"Yup you can get some inspiration from this PR. It simply ignores the bad chunks (a chunk is ~a few MBs of data).\r\nWe'll try to merge this PR soon"
] | 2022-10-02T11:56:46 | 2022-10-04T17:59:03 | null | NONE | null | ## Describe the bug
I have an intermittent error when streaming the Pile, where I get a JSON parse error which causes my program to crash.
This is intermittent - when I rerun the program with the same random seed it does not crash in the same way. The exact point this happens also varied - it happened to me 11B tokens and 4 days into a training run, and now just happened 2 minutes into one, but I can't reliably reproduce it.
I'm using a remote machine with 8 A6000 GPUs via runpod.io
## Expected results
I have a DataLoader which can iterate through the whole Pile
## Actual results
Stack trace:
```
Failed to read file 'zstd://12.jsonl::https://the-eye.eu/public/AI/pile/train/12.jsonl.zst' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0
```
I'm currently using HuggingFace accelerate, which also gave me the following stack trace, but I've also experienced this problem intermittently when using DataParallel, so I don't think it's to do with parallelisation
```
Traceback (most recent call last):
File "ddp_script.py", line 1258, in <module>
main()
File "ddp_script.py", line 1143, in main
for c, batch in tqdm.tqdm(enumerate(data_iter)):
File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 503, in __iter__
next_batch, next_batch_info, next_skip = self._fetch_batches(main_iterator)
File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 454, in _fetch_batches
broadcast_object_list(batch_info)
File "/opt/conda/lib/python3.7/site-packages/accelerate/utils/operations.py", line 333, in broadcast_object_list
torch.distributed.broadcast_object_list(object_list, src=from_process)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1900, in broadcast_object_list
object_list[i] = _tensor_to_object(obj_view, obj_size)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1571, in _tensor_to_object
return _unpickler(io.BytesIO(buf)).load()
_pickle.UnpicklingError: invalid load key, '@'.
```
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset(
cfg["dataset_name"], streaming=True, split="train")
dataset = dataset.remove_columns("meta")
dataset = dataset.map(tokenize_and_concatenate, batched=True)
dataset = dataset.with_format(type="torch")
train_data_loader = DataLoader(
dataset, batch_size=cfg["batch_size"], num_workers=3)
for batch in train_data_loader:
continue
```
`tokenize_and_concatenate` is a custom tokenization function I defined on the GPT-NeoX tokenizer to tokenize the text, separated by endoftext tokens, and reshape to have length batch_size, I don't think this is related to tokenization:
```
import numpy as np
import einops
import torch
def tokenize_and_concatenate(examples):
texts = examples["text"]
full_text = tokenizer.eos_token.join(texts)
div = 20
length = len(full_text) // div
text_list = [full_text[i * length: (i + 1) * length]
for i in range(div)]
tokens = tokenizer(text_list, return_tensors="np", padding=True)[
"input_ids"
].flatten()
tokens = tokens[tokens != tokenizer.pad_token_id]
n = len(tokens)
curr_batch_size = n // (seq_len - 1)
tokens = tokens[: (seq_len - 1) * curr_batch_size]
tokens = einops.rearrange(
tokens,
"(batch_size seq) -> batch_size seq",
batch_size=curr_batch_size,
seq=seq_len - 1,
)
prefix = np.ones((curr_batch_size, 1), dtype=np.int64) * \
tokenizer.bos_token_id
return {
"text": np.concatenate([prefix, tokens], axis=1)
}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-105-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
ZStandard data:
Version: 0.18.0
Summary: Zstandard bindings for Python
Home-page: https://github.com/indygreg/python-zstandard
Author: Gregory Szorc
Author-email: [email protected]
License: BSD
Location: /opt/conda/lib/python3.7/site-packages
Requires:
Required-by: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5053/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5052/comments | https://api.github.com/repos/huggingface/datasets/issues/5052/events | https://github.com/huggingface/datasets/pull/5052 | 1,393,076,765 | PR_kwDODunzps4_-PZw | 5,052 | added from_generator method to IterableDataset class. | {
"login": "hamid-vakilzadeh",
"id": 56002455,
"node_id": "MDQ6VXNlcjU2MDAyNDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamid-vakilzadeh",
"html_url": "https://github.com/hamid-vakilzadeh",
"followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers",
"following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions",
"organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs",
"repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos",
"events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I added a test and moved the `streaming` param from `read` to `__init_`. Then, I also decided to update the `read` method of the rest of the packaged modules to account for this param. \r\n\r\n@hamid-vakilzadeh Are you OK with these changes? ",
"@mariosasko these all look great! Thanks for the updates."
] | 2022-09-30T22:14:05 | 2022-10-05T12:51:48 | 2022-10-05T12:10:48 | CONTRIBUTOR | null | Hello,
This resolves issues #4988.
I added a method `from_generator` to class `IterableDataset`.
I modified the `read` method of input stream generator to also return Iterable_dataset.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5052/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5052",
"html_url": "https://github.com/huggingface/datasets/pull/5052",
"diff_url": "https://github.com/huggingface/datasets/pull/5052.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5052.patch",
"merged_at": "2022-10-05T12:10:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5051/comments | https://api.github.com/repos/huggingface/datasets/issues/5051/events | https://github.com/huggingface/datasets/pull/5051 | 1,392,559,503 | PR_kwDODunzps4_8drw | 5,051 | Revert task removal in folder-based builders | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-30T14:50:03 | 2022-10-03T12:23:35 | 2022-10-03T12:21:31 | CONTRIBUTOR | null | Reverts the removal of `task_templates` in the folder-based builders. I also added the `AudioClassifaction` task for consistency.
This is needed to fix https://github.com/huggingface/transformers/issues/19177.
I think we should soon deprecate and remove the current task API (and investigate if it's possible to integrate the `train eval index` API), but we need to update the Transformers examples before that so we don't break them.
cc @NielsRogge | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5051/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5051",
"html_url": "https://github.com/huggingface/datasets/pull/5051",
"diff_url": "https://github.com/huggingface/datasets/pull/5051.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5051.patch",
"merged_at": "2022-10-03T12:21:31"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5050/comments | https://api.github.com/repos/huggingface/datasets/issues/5050/events | https://github.com/huggingface/datasets/issues/5050 | 1,392,381,882 | I_kwDODunzps5S_g-6 | 5,050 | Restore saved format state in `load_from_disk` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | {
"login": "asofiaoliveira",
"id": 74454835,
"node_id": "MDQ6VXNlcjc0NDU0ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asofiaoliveira",
"html_url": "https://github.com/asofiaoliveira",
"followers_url": "https://api.github.com/users/asofiaoliveira/followers",
"following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}",
"gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions",
"organizations_url": "https://api.github.com/users/asofiaoliveira/orgs",
"repos_url": "https://api.github.com/users/asofiaoliveira/repos",
"events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}",
"received_events_url": "https://api.github.com/users/asofiaoliveira/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "asofiaoliveira",
"id": 74454835,
"node_id": "MDQ6VXNlcjc0NDU0ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asofiaoliveira",
"html_url": "https://github.com/asofiaoliveira",
"followers_url": "https://api.github.com/users/asofiaoliveira/followers",
"following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}",
"gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions",
"organizations_url": "https://api.github.com/users/asofiaoliveira/orgs",
"repos_url": "https://api.github.com/users/asofiaoliveira/repos",
"events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}",
"received_events_url": "https://api.github.com/users/asofiaoliveira/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, can I work on this?",
"Hi, sure! Let us know if you need some pointers/help."
] | 2022-09-30T12:40:07 | 2022-10-11T16:49:24 | 2022-10-11T16:49:24 | CONTRIBUTOR | null | Even though we save the `format` state in `save_to_disk`, we don't restore it in `load_from_disk`. We should fix that.
Reported here: https://discuss.huggingface.co/t/save-to-disk-loses-formatting-information/23815 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5050/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5049/comments | https://api.github.com/repos/huggingface/datasets/issues/5049/events | https://github.com/huggingface/datasets/pull/5049 | 1,392,361,381 | PR_kwDODunzps4_7zOY | 5,049 | Add `kwargs` to `Dataset.from_generator` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-30T12:24:27 | 2022-10-03T11:00:11 | 2022-10-03T10:58:15 | CONTRIBUTOR | null | Add the `kwargs` param to `from_generator` to align it with the rest of the `from_` methods (this param allows passing custom `writer_batch_size` for instance). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5049/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5049",
"html_url": "https://github.com/huggingface/datasets/pull/5049",
"diff_url": "https://github.com/huggingface/datasets/pull/5049.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5049.patch",
"merged_at": "2022-10-03T10:58:15"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5048/comments | https://api.github.com/repos/huggingface/datasets/issues/5048/events | https://github.com/huggingface/datasets/pull/5048 | 1,392,170,680 | PR_kwDODunzps4_7KI2 | 5,048 | Fix bug with labels of eurlex config of lex_glue dataset | {
"login": "iliaschalkidis",
"id": 1626984,
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliaschalkidis",
"html_url": "https://github.com/iliaschalkidis",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@JamesLYC88 here is the fix! Thanks again!",
"Thanks, @albertvillanova. When do you expect that this change will take effect when someone downloads the dataset?",
"The change is immediately available now, since this change we made to our library:\r\n- #4059"
] | 2022-09-30T09:47:12 | 2022-09-30T16:30:25 | 2022-09-30T16:21:41 | CONTRIBUTOR | null | Fix for a critical bug in the EURLEX dataset label list to make LexGLUE EURLEX results replicable.
In LexGLUE (Chalkidis et al., 2022), the following is mentioned w.r.t. EUR-LEX: _"It supports four different label granularities, comprising 21, 127, 567, 7390 EuroVoc concepts, respectively. We use the 100 most frequent concepts from level 2 [...]”._ The current label list has all 127 labels, which leads to different (lower) results, as communicated by users.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5048/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5048",
"html_url": "https://github.com/huggingface/datasets/pull/5048",
"diff_url": "https://github.com/huggingface/datasets/pull/5048.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5048.patch",
"merged_at": "2022-09-30T16:21:41"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5047/comments | https://api.github.com/repos/huggingface/datasets/issues/5047/events | https://github.com/huggingface/datasets/pull/5047 | 1,392,088,398 | PR_kwDODunzps4_64bS | 5,047 | Fix cats_vs_dogs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-30T08:47:29 | 2022-09-30T10:23:22 | 2022-09-30T09:34:28 | MEMBER | null | Reported in https://github.com/huggingface/datasets/pull/3878
I updated the number of examples | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5047/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5047",
"html_url": "https://github.com/huggingface/datasets/pull/5047",
"diff_url": "https://github.com/huggingface/datasets/pull/5047.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5047.patch",
"merged_at": "2022-09-30T09:34:28"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5046/comments | https://api.github.com/repos/huggingface/datasets/issues/5046/events | https://github.com/huggingface/datasets/issues/5046 | 1,391,372,519 | I_kwDODunzps5S7qjn | 5,046 | Audiofolder creates empty Dataset if files same level as metadata | {
"login": "msis",
"id": 577139,
"node_id": "MDQ6VXNlcjU3NzEzOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/577139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/msis",
"html_url": "https://github.com/msis",
"followers_url": "https://api.github.com/users/msis/followers",
"following_url": "https://api.github.com/users/msis/following{/other_user}",
"gists_url": "https://api.github.com/users/msis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/msis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/msis/subscriptions",
"organizations_url": "https://api.github.com/users/msis/orgs",
"repos_url": "https://api.github.com/users/msis/repos",
"events_url": "https://api.github.com/users/msis/events{/privacy}",
"received_events_url": "https://api.github.com/users/msis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 4614514401,
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest",
"name": "hacktoberfest",
"color": "DF8D62",
"default": false,
"description": ""
}
] | closed | false | {
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! Unfortunately, I can't reproduce this behavior. Instead, I get `ValueError: audio at 2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav doesn't have metadata in /audio-data/metadata.csv`, which can be fixed by removing the `./` from the file name.\r\n\r\n(Link to a Colab that tries to reproduce this behavior: https://colab.research.google.com/drive/1IhQzULYi0Van1xLrN_SddBX1JF7mLZZK?usp=sharing)",
"I think we can make the file name matching part more robust by replacing `file_name` with `os.path.normpath(file_name)`, to ignore \"./\" among other things, in these two places:\r\n* https://github.com/huggingface/datasets/blob/85cd129bde605cd9acacdff0d065fc02e39e09b1/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L319\r\n* https://github.com/huggingface/datasets/blob/85cd129bde605cd9acacdff0d065fc02e39e09b1/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L388",
"@mariosasko Some tests failed (see my PR). Any thoughts on that?",
"Yes, I mentioned the solution in my review.",
"I realized what I was doing wrong.\r\n\r\nThe documentation puts the files in a subfolder.\r\nOnce I have done that, it worked.\r\n\r\nBut l agree that this should be handled better if possible."
] | 2022-09-29T19:17:23 | 2022-10-28T13:05:07 | 2022-10-28T13:05:07 | NONE | null | ## Describe the bug
When audio files are at the same level as the metadata (`metadata.csv` or `metadata.jsonl` ), the `load_dataset` returns a `DatasetDict` with no rows but the correct columns.
https://github.com/huggingface/datasets/blob/1ea4d091b7a4b83a85b2eeb8df65115d39af3766/docs/source/audio_dataset.mdx?plain=1#L88
## Steps to reproduce the bug
`metadata.csv`:
```csv
file_name,duration,transcription
./2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav,10.768,hello
```
```python
>>> audio_dataset = load_dataset("audiofolder", data_dir="/audio-data/")
>>> audio_dataset
DatasetDict({
train: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
validation: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
})
```
I've tried, with no success,:
- setting `split` to something else so I don't get a `DatasetDict`,
- removing the `./`,
- using `.jsonl`.
## Expected results
```
Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 1
})
```
## Actual results
```
DatasetDict({
train: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
validation: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
})
```
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.13.0-1025-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5046/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5045/comments | https://api.github.com/repos/huggingface/datasets/issues/5045/events | https://github.com/huggingface/datasets/issues/5045 | 1,391,287,609 | I_kwDODunzps5S7V05 | 5,045 | Automatically revert to last successful commit to hub when a push_to_hub is interrupted | {
"login": "jorahn",
"id": 13120204,
"node_id": "MDQ6VXNlcjEzMTIwMjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/13120204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jorahn",
"html_url": "https://github.com/jorahn",
"followers_url": "https://api.github.com/users/jorahn/followers",
"following_url": "https://api.github.com/users/jorahn/following{/other_user}",
"gists_url": "https://api.github.com/users/jorahn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jorahn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jorahn/subscriptions",
"organizations_url": "https://api.github.com/users/jorahn/orgs",
"repos_url": "https://api.github.com/users/jorahn/repos",
"events_url": "https://api.github.com/users/jorahn/events{/privacy}",
"received_events_url": "https://api.github.com/users/jorahn/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Could you share the error you got please ? Maybe the full stack trace if you have it ?\r\n\r\nMaybe `push_to_hub` be implemented as a single commit @Wauplin ? This way if it fails, the repo is still at the previous (valid) state instead of ending-up in an invalid/incimplete state.",
"> Maybe push_to_hub be implemented as a single commit ? \r\n\r\nI think that would definitely be the way to go. Do you know the reasons why not implementing it like this in the first place ? I guess it is because of not been able to upload all at once with `huggingface_hub` but if there was another reason, please let me know.\r\nAbout pushing all at once, it seems to be a more and more requested feature. I have created this issue https://github.com/huggingface/huggingface_hub/issues/1085 recently but other discussions already happened in the past. The `moon-landing` team is working on it (cc @coyotte508). The `huggingface_hub` integration will come afterwards.\r\n\r\nFor now, maybe it's best to wait for a proper implementation instead of creating a temporary workaround :)\r\n",
"> I think that would definitely be the way to go. Do you know the reasons why not implementing it like this in the first place ? I guess it is because of not been able to upload all at once with huggingface_hub but if there was another reason, please let me know.\r\n\r\nIdeally we would want to upload the files iteratively - and then once everything is uploaded we proceed to commit. When we implemented `push_to_hub`, using `upload_file` for each shard was the only option.\r\n\r\nFor more context: for each shard to upload we do:\r\n1. load the arrow shard in memory\r\n2. convert to parquet\r\n3. upload\r\n\r\nSo to avoid OOM we need to upload the files iteratively.\r\n\r\n> For now, maybe it's best to wait for a proper implementation instead of creating a temporary workaround :)\r\n\r\nLet us know if we can help !",
"> Ideally we would want to upload the files iteratively - and then once everything is uploaded we proceed to commit. \r\n\r\nOh I see. So maybe this has to be done in an implementation specific to `datasets/` as it is not a very common case (upload a bunch of files on the fly).\r\n\r\nYou can maybe have a look at how `huggingface_hub` is implemented for LFS files (arrow shards are LFS anyway, right?).\r\nIn [`upload_lfs_files`](https://github.com/huggingface/huggingface_hub/blob/e28646c977fc9304a4c3576ce61ff07f9778950b/src/huggingface_hub/_commit_api.py#L164) LFS files are uploaded 1 by 1 (multithreaded) and then [the commit is pushed](https://github.com/huggingface/huggingface_hub/blob/e28646c977fc9304a4c3576ce61ff07f9778950b/src/huggingface_hub/hf_api.py#L1926) to the Hub once all files have been uploaded. This is pretty much what you need, right ?\r\n\r\nI can help you if you have questions how to do it in `datasets`. If that makes sense we could then move the implementation from `datasets` to `huggingface_hub` once it's mature. Next week I'm on holidays but feel free to start without my input.\r\n\r\n(also cc @coyotte508 and @SBrandeis who implemented LFS upload in `hfh`)",
"> Could you share the error you got please ? Maybe the full stack trace if you have it ?\r\n\r\nHere’s part of the stack trace, that I can reproduce at the moment from a photo I took (potential typos from OCR):\r\n```\r\nValueError\r\nTraceback (most recent call last)\r\n<ipython-input-4-274613b7d3f5> in <module>\r\nfrom datasets import load dataset\r\nds = load_dataset('jrahn/chessv6', use_auth_token-True)\r\n\r\n/us/local/1ib/python3.7/dist-packages/datasets/table.py in cast_table _to_schema (table, schema)\r\nLine 2005 raise ValueError()\r\n\r\nValueError: Couldn't cast \r\nfen: string \r\nmove: string \r\nres: string \r\neco: string \r\nmove_id: int64\r\nres_num: int64 to\r\n{ 'fen': Value(dtype='string', id=None), \r\n'move': Value(dtype=' string', id=None),\r\n'res': Value(dtype='string', id=None),\r\n'eco': Value(dtype='string', id=None), \r\n'hc': Value(dtype='string', id=None), \r\n'move_ id': Value(dtype='int64', id=None),\r\n'res_num': Value(dtype= 'int64' , id=None) }\r\nbecause column names don't match \r\n```\r\n\r\nThe column 'hc' was removed before the interrupted push_to_hub(). It appears in the column list in curly brackets but not in the column list above.\r\n\r\nLet me know, if I can be of any help."
] | 2022-09-29T18:08:12 | 2022-09-30T16:49:21 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
I pushed a modification of a large dataset (remove a column) to the hub. The push was interrupted after some files were committed to the repo. This left the dataset to raise an error on load_dataset() (ValueError couldn’t cast … because column names don’t match). Only by specifying the previous (complete) commit as revision=commit_hash in load_data(), I was able to repair this and after a successful, complete push, the dataset loads without error again.
**Describe the solution you'd like**
Would it make sense to detect an incomplete push_to_hub() and automatically revert to the previous commit/revision?
**Describe alternatives you've considered**
Leave everything as is, the revision parameter in load_dataset() allows to manually fix this problem.
**Additional context**
Provide useful defaults
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5045/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5045/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5044 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5044/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5044/comments | https://api.github.com/repos/huggingface/datasets/issues/5044/events | https://github.com/huggingface/datasets/issues/5044 | 1,391,242,908 | I_kwDODunzps5S7K6c | 5,044 | integrate `load_from_disk` into `load_dataset` | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"I agree the situation is not ideal and it would be awesome to use `load_dataset` to reload a dataset saved locally !\r\n\r\nFor context:\r\n\r\n- `load_dataset` works in three steps: download the dataset, then prepare it as an arrow dataset, and finally return a memory mapped arrow dataset. In particular it creates a cache directory to store the arrow data and the subsequent cache files for `map`.\r\n\r\n- `load_from_disk` directly returns a memory mapped dataset from the arrow file (similar to `Dataset.from_file`). It doesn't create a cache diretory, instead all the subsequent `map` calls write in the same directory as the original data. \r\n\r\nIf we want to keep the download_and_prepare step for consistency, it would unnecessarily copy the arrow data into the datasets cache. On the other hand if we don't do this step, the cache directory doesn't exist which is inconsistent.\r\n\r\nI'm curious, what would you expect to happen in this situation ?",
"Thank you for the detailed breakdown, @lhoestq \r\n\r\n> I'm curious, what would you expect to happen in this situation ?\r\n\r\n1. the simplest solution is to add a flag to the dataset saved by `save_to_disk` and have `load_dataset` check that flag - if it's set simply switch control to `load_from_disk` behind the scenes. So `load_dataset` detects it's a local filesystem, looks inside to see whether it's something it can cache or whether it should use it directly as is and continues accordingly with one of the 2 dataset-type specific APIs.\r\n\r\n2. the more evolved solution is to look at a dataset produced by `save_to_disk` as a remote resource like hub. So the first time `load_dataset` sees it, it'll take a fingerprint and create a normal cached dataset. On subsequent uses it'll again discover it as a remote resource, validate that it has it cached via the fingerprint and serve as a normal dataset. \r\n\r\nAs you said the cons of approach 2 is that if the dataset is huge it'll make 2 copies on the same machine. So it's possible that both approaches can be integrated. Say if `save_to_disc(do_not_cache=True)` is passed it'll use solution 1, otherwise solution 2. or could even symlink the huge arrow files to the cache instead? or perhaps it's more intuitive to use `load_dataset(do_not_cache=True)` instead. So that one can choose whether to make a cached copy or not for the locally saved dataset. i.e. a simple at use point user control.\r\n\r\nSurely there are other ways to handle it, this is just one possibility.\r\n",
"I think the simplest is to always memory map the local file without copy, but still have a cached directory in the cache at `~/.cache/huggingface` instead of saving `map` results next to the original data.\r\n\r\nIn practice we can even use symlinks if it makes the implementation simpler",
"Yes, so that you always have the cached entry for any dataset, but the \"payload\" doesn't have to be physically in the cache if it's already on the local filesystem. As you said a symlink will do. ",
"Any updates?",
"We haven't had the bandwidth to implement this so far. Let me know if you'd be interested in contributing this feature :)",
"@lhoestq I can jump into that. What I don't like is having functions with many parameters input. Even though they are optional, it's always harder to reason about and test such cases.\r\nIf there are more features worth to work on, feel free to ping me. It's a lot of fun to help :smile: ",
"Thanks a lot for your help @mariusz-jachimowicz-83 :)\r\n\r\nI think as a first step we could implement an Arrow dataset builder to be able to load and stream Arrow datasets locally or from Hugging Face. Maybe something similar to the Parquet builder at [src/datasets/packaged_modules/parquet/parquet.py](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/parquet/parquet.py) ?\r\n\r\nAnd we can deal with the disk space optimization as a second step. What do you think ?\r\n\r\n(this issue is also related to https://github.com/huggingface/datasets/issues/3035)",
"@lhoestq I made a PR based on suggestion https://github.com/huggingface/datasets/pull/5944. Could you please review it?",
"@lhoestq Let me know if you have further recommendations or anything that you would like to add but you don't have bandwith for. "
] | 2022-09-29T17:37:12 | 2023-06-13T18:34:02 | null | MEMBER | null | **Is your feature request related to a problem? Please describe.**
Is it possible to make `load_dataset` more universal similar to `from_pretrained` in `transformers` so that it can handle the hub, and the local path datasets of all supported types?
Currently one has to choose a different loader depending on how the dataset has been created.
e.g. this won't work:
```
$ git clone https://huggingface.co/datasets/severo/test-parquet
$ python -c 'from datasets import load_dataset; ds=load_dataset("test-parquet"); \
ds.save_to_disk("my_dataset"); load_dataset("my_dataset")'
[...]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 1277, in _prepare_split
writer.write_table(table)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/arrow_writer.py", line 524, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 2005, in table_cast
return cast_table_to_schema(table, schema)
File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 1968, in cast_table_to_schema
raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
```
both times the dataset is being loaded from disk. Why does it fail the second time?
Why can't `save_to_disk` generate a dataset that can be immediately loaded by `load_dataset`?
e.g. the simplest hack would be to have `save_to_disk` add some flag to the saved dataset, that tells `load_dataset` to internally call `load_from_disk`. like having `save_to_disk` create a `load_me_with_load_from_disk.txt` file ;) and `load_dataset` will support that feature from saved datasets from new `datasets` versions. The old ones will still need to use `load_from_disk` explicitly. Unless the flag is not needed and one can immediately tell by looking at the saved dataset that it was saved via `save_to_disk` and thus use `load_from_disk` internally.
The use-case is defining a simple API where the user only ever needs to pass a `dataset_name_or_path` and it will always just work. Currently one needs to manually add additional switches telling the system whether to use one loading method or the other which works but it's not smooth.
Thank you! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5044/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5044/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5043/comments | https://api.github.com/repos/huggingface/datasets/issues/5043/events | https://github.com/huggingface/datasets/pull/5043 | 1,391,141,773 | PR_kwDODunzps4_3uzy | 5,043 | Fix `flatten_indices` with empty indices mapping | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-29T16:17:28 | 2022-09-30T15:46:39 | 2022-09-30T15:44:25 | CONTRIBUTOR | null | Fix #5038 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5043/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5043",
"html_url": "https://github.com/huggingface/datasets/pull/5043",
"diff_url": "https://github.com/huggingface/datasets/pull/5043.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5043.patch",
"merged_at": "2022-09-30T15:44:25"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5042 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5042/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5042/comments | https://api.github.com/repos/huggingface/datasets/issues/5042/events | https://github.com/huggingface/datasets/pull/5042 | 1,390,762,877 | PR_kwDODunzps4_2eqa | 5,042 | Update swiss judgment prediction | {
"login": "JoelNiklaus",
"id": 3775944,
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoelNiklaus",
"html_url": "https://github.com/JoelNiklaus",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-29T12:10:02 | 2022-09-30T07:14:00 | 2022-09-29T14:32:02 | CONTRIBUTOR | null | I forgot to add the new citation. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5042/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5042",
"html_url": "https://github.com/huggingface/datasets/pull/5042",
"diff_url": "https://github.com/huggingface/datasets/pull/5042.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5042.patch",
"merged_at": "2022-09-29T14:32:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5041/comments | https://api.github.com/repos/huggingface/datasets/issues/5041/events | https://github.com/huggingface/datasets/pull/5041 | 1,390,722,230 | PR_kwDODunzps4_2WES | 5,041 | Support streaming hendrycks_test dataset. | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-29T11:37:58 | 2022-09-30T07:13:38 | 2022-09-29T12:07:29 | MEMBER | null | This PR:
- supports streaming
- fixes the description section of the dataset card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5041/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5041",
"html_url": "https://github.com/huggingface/datasets/pull/5041",
"diff_url": "https://github.com/huggingface/datasets/pull/5041.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5041.patch",
"merged_at": "2022-09-29T12:07:29"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5040 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5040/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5040/comments | https://api.github.com/repos/huggingface/datasets/issues/5040/events | https://github.com/huggingface/datasets/pull/5040 | 1,390,566,428 | PR_kwDODunzps4_11O2 | 5,040 | Fix NonMatchingChecksumError in hendrycks_test dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-29T09:37:43 | 2022-09-29T10:06:22 | 2022-09-29T10:04:19 | MEMBER | null | Update metadata JSON.
Fix #5039. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5040/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5040",
"html_url": "https://github.com/huggingface/datasets/pull/5040",
"diff_url": "https://github.com/huggingface/datasets/pull/5040.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5040.patch",
"merged_at": "2022-09-29T10:04:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5039/comments | https://api.github.com/repos/huggingface/datasets/issues/5039/events | https://github.com/huggingface/datasets/issues/5039 | 1,390,353,315 | I_kwDODunzps5S3xuj | 5,039 | Hendrycks Checksum | {
"login": "DanielHesslow",
"id": 9974388,
"node_id": "MDQ6VXNlcjk5NzQzODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9974388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DanielHesslow",
"html_url": "https://github.com/DanielHesslow",
"followers_url": "https://api.github.com/users/DanielHesslow/followers",
"following_url": "https://api.github.com/users/DanielHesslow/following{/other_user}",
"gists_url": "https://api.github.com/users/DanielHesslow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DanielHesslow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanielHesslow/subscriptions",
"organizations_url": "https://api.github.com/users/DanielHesslow/orgs",
"repos_url": "https://api.github.com/users/DanielHesslow/repos",
"events_url": "https://api.github.com/users/DanielHesslow/events{/privacy}",
"received_events_url": "https://api.github.com/users/DanielHesslow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @DanielHesslow. We are fixing it. ",
"@albertvillanova thanks for taking care of this so quickly!",
"The dataset metadata is fixed. You can download it normally."
] | 2022-09-29T06:56:20 | 2022-09-29T10:23:30 | 2022-09-29T10:04:20 | NONE | null | Hi,
The checksum for [hendrycks_test](https://huggingface.co/datasets/hendrycks_test) does not compare correctly, I guess it has been updated on the remote.
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://people.eecs.berkeley.edu/~hendrycks/data.tar']
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5039/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5038/comments | https://api.github.com/repos/huggingface/datasets/issues/5038/events | https://github.com/huggingface/datasets/issues/5038 | 1,389,631,122 | I_kwDODunzps5S1BaS | 5,038 | `Dataset.unique` showing wrong output after filtering | {
"login": "mxschmdt",
"id": 4904985,
"node_id": "MDQ6VXNlcjQ5MDQ5ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4904985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxschmdt",
"html_url": "https://github.com/mxschmdt",
"followers_url": "https://api.github.com/users/mxschmdt/followers",
"following_url": "https://api.github.com/users/mxschmdt/following{/other_user}",
"gists_url": "https://api.github.com/users/mxschmdt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxschmdt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxschmdt/subscriptions",
"organizations_url": "https://api.github.com/users/mxschmdt/orgs",
"repos_url": "https://api.github.com/users/mxschmdt/repos",
"events_url": "https://api.github.com/users/mxschmdt/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxschmdt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! It seems like `flatten_indices` (called in `unique`) doesn't know how to handle empty indices mappings. I'm working on the fix.",
"Thanks, that was fast!"
] | 2022-09-28T16:20:35 | 2022-09-30T15:44:25 | 2022-09-30T15:44:25 | CONTRIBUTOR | null | ## Describe the bug
After filtering a dataset, and if no samples remain, `Dataset.unique` will return the unique values of the unfiltered dataset.
## Steps to reproduce the bug
```python
from datasets import Dataset
dataset = Dataset.from_dict({'id': [0]})
dataset = dataset.filter(lambda _: False)
print(dataset.unique('id'))
```
## Expected results
The above code should return an empty list since the dataset is empty.
## Actual results
```bash
[0]
```
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.18.19-100.fc35.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.14
- PyArrow version: 7.0.0
- Pandas version: 1.3.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5038/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5037/comments | https://api.github.com/repos/huggingface/datasets/issues/5037/events | https://github.com/huggingface/datasets/pull/5037 | 1,389,244,722 | PR_kwDODunzps4_xcp0 | 5,037 | Improve CI performance speed of PackagedDatasetTest | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"There was a CI error which seemed unrelated: https://github.com/huggingface/datasets/actions/runs/3143581330/jobs/5111807056\r\n```\r\nFAILED tests/test_load.py::test_load_dataset_private_zipped_images[True] - FileNotFoundError: https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/repo_zipped_img_data-16643808721979/resolve/75c3fc424a3b898a828b2b3fd84d96da4703228a/data.zip\r\n```\r\nIt disappeared after merging the main branch."
] | 2022-09-28T12:08:16 | 2022-09-30T16:05:42 | 2022-09-30T16:03:24 | MEMBER | null | This PR improves PackagedDatasetTest CI performance speed. For Ubuntu (latest):
- Duration (without parallelism) before: 334.78s (5.58m)
- Duration (without parallelism) afterwards: 0.48s
The approach is passing a dummy `data_files` argument to load the builder, so that it avoids the slow inferring of it over the entire root directory of the repo.
## Total duration of PackagedDatasetTest
| | Before | Afterwards | Improvement
|---|---:|---:|---:|
| Linux | 334.78s | 0.48s | x700
| Windows | 513.02s | 1.09s | x500
## Durations by each individual sub-test
More accurate durations, running them on GitHub, for Linux (latest).
Before this PR, the total test time (without parallelism) for `tests/test_dataset_common.py::PackagedDatasetTest` is 334.78s (5.58m)
```
39.07s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_imagefolder
38.94s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_audiofolder
34.18s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_parquet
34.12s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_csv
34.00s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_pandas
34.00s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_text
33.86s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_json
10.39s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_audiofolder
6.50s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_configs_audiofolder
6.46s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_configs_imagefolder
6.40s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_imagefolder
5.77s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_csv
5.77s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_text
5.74s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_configs_parquet
5.69s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_json
5.68s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_configs_pandas
5.67s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_parquet
5.67s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_pandas
5.66s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_configs_json
5.66s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_configs_csv
5.55s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_configs_text
(42 durations < 0.005s hidden.)
```
With this PR: 0.48s
```
0.09s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_audiofolder
0.08s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_csv
0.08s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_imagefolder
0.06s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_json
0.05s call tests/test_dataset_common.py::PackagedDatasetTest::test_builder_class_audiofolder
0.05s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_parquet
0.04s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_pandas
0.03s call tests/test_dataset_common.py::PackagedDatasetTest::test_load_dataset_offline_text
(55 durations < 0.005s hidden.)
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5037/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/5037/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5037",
"html_url": "https://github.com/huggingface/datasets/pull/5037",
"diff_url": "https://github.com/huggingface/datasets/pull/5037.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5037.patch",
"merged_at": "2022-09-30T16:03:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5036/comments | https://api.github.com/repos/huggingface/datasets/issues/5036/events | https://github.com/huggingface/datasets/pull/5036 | 1,389,094,075 | PR_kwDODunzps4_w8Bs | 5,036 | Add oversampling strategy iterable datasets interleave | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-28T10:10:23 | 2022-09-30T12:30:48 | 2022-09-30T12:28:23 | CONTRIBUTOR | null | Hello everyone,
Following the issue #4893 and the PR #4831, I propose here an oversampling strategy for a `IterableDataset` list.
The `all_exhausted` strategy stops building the new dataset as soon as all samples in each dataset have been added at least once.
It follows roughly the same logic behind #4831, namely:
- if ``probabilities`` is `None` and the strategy is `all_exhausted`, it simply performs a round robin interleaving that stops when the longest dataset is out of samples. Here the new dataset length will be $maxLengthDataset*nbDataset$.
- if ``probabilities`` is not `None` and the strategy is `all_exhausted`, it keeps trace of the datasets which were out of samples but continues to add them to the new dataset, and stops as soons as every dataset runs out of samples at least once.
In order to be consistent and also to align with the `Dataset` behavior, please note that the behavior of the default strategy (`first_exhausted`) has been changed. Namely, it really stops when a dataset is out of samples whereas it used to stop when receiving the `StopIteration` error.
To give an example of the last note, with the following snippet:
```
>>> from tests.test_iterable_dataset import *
>>> d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [0, 1, 2]])), {}))
>>> d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [10, 11, 12, 13]])), {}))
>>> d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [20, 21, 22, 23, 24]])), {}))
>>> dataset = interleave_datasets([d1, d2, d3])
>>> [x["a"] for x in dataset]
```
The result here will then be `[10, 0, 11, 1, 2]` instead of `[10, 0, 11, 1, 2, 20, 12, 13]`.
I modified the behavior because I found it to be consistent with the under/oversampling approach and because it unified the undersampling and oversampling code, but I stay open to any suggestions.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5036/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5036",
"html_url": "https://github.com/huggingface/datasets/pull/5036",
"diff_url": "https://github.com/huggingface/datasets/pull/5036.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5036.patch",
"merged_at": "2022-09-30T12:28:23"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5035/comments | https://api.github.com/repos/huggingface/datasets/issues/5035/events | https://github.com/huggingface/datasets/pull/5035 | 1,388,914,476 | PR_kwDODunzps4_wVie | 5,035 | Fix typos in load docstrings and comments | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-28T08:05:07 | 2022-09-28T17:28:40 | 2022-09-28T17:26:15 | MEMBER | null | Minor fix of typos in load docstrings and comments | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5035/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5035",
"html_url": "https://github.com/huggingface/datasets/pull/5035",
"diff_url": "https://github.com/huggingface/datasets/pull/5035.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5035.patch",
"merged_at": "2022-09-28T17:26:14"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5034/comments | https://api.github.com/repos/huggingface/datasets/issues/5034/events | https://github.com/huggingface/datasets/pull/5034 | 1,388,855,136 | PR_kwDODunzps4_wJCu | 5,034 | Update README.md of yahoo_answers_topics dataset | {
"login": "borgr",
"id": 6416600,
"node_id": "MDQ6VXNlcjY0MTY2MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6416600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borgr",
"html_url": "https://github.com/borgr",
"followers_url": "https://api.github.com/users/borgr/followers",
"following_url": "https://api.github.com/users/borgr/following{/other_user}",
"gists_url": "https://api.github.com/users/borgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borgr/subscriptions",
"organizations_url": "https://api.github.com/users/borgr/orgs",
"repos_url": "https://api.github.com/users/borgr/repos",
"events_url": "https://api.github.com/users/borgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/borgr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5034). All of your documentation changes will be reflected on that endpoint.",
"Thanks, @borgr. We have removed all dataset scripts from this repo. Subsequent PRs should be opened directly on the Hugging Face Hub.",
"Do you mean to edit through \"edit dataset card\" button? because it just leads to a broken page...\r\nhttps://huggingface.co/datasets/yahoo_answers_topics\r\n![image](https://user-images.githubusercontent.com/6416600/193852796-009ba537-1e8f-4c8b-898a-8c4f817b86ee.png)\r\nhttps://github.com/huggingface/datasets/tree/main/datasets/yahoo_answers_topics",
"Hi @borgr, good catch! I'm going to report the button leading to a broken link.\r\n\r\nIn the meantime, you can propose a PR to the `README.md` file using this link: https://huggingface.co/datasets/yahoo_answers_topics/blob/main/README.md"
] | 2022-09-28T07:17:33 | 2022-10-06T15:56:05 | 2022-10-04T13:49:25 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5034/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5034",
"html_url": "https://github.com/huggingface/datasets/pull/5034",
"diff_url": "https://github.com/huggingface/datasets/pull/5034.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5034.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5033/comments | https://api.github.com/repos/huggingface/datasets/issues/5033/events | https://github.com/huggingface/datasets/pull/5033 | 1,388,842,236 | PR_kwDODunzps4_wGSE | 5,033 | Remove redundant code from some dataset module factories | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-28T07:06:26 | 2022-09-28T16:57:51 | 2022-09-28T16:55:12 | MEMBER | null | This PR removes some redundant code introduced by mistake after a refactoring in:
- #4576 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5033/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5033",
"html_url": "https://github.com/huggingface/datasets/pull/5033",
"diff_url": "https://github.com/huggingface/datasets/pull/5033.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5033.patch",
"merged_at": "2022-09-28T16:55:12"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5032/comments | https://api.github.com/repos/huggingface/datasets/issues/5032/events | https://github.com/huggingface/datasets/issues/5032 | 1,388,270,935 | I_kwDODunzps5Sv1VX | 5,032 | new dataset type: single-label and multi-label video classification | {
"login": "fcakyon",
"id": 34196005,
"node_id": "MDQ6VXNlcjM0MTk2MDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fcakyon",
"html_url": "https://github.com/fcakyon",
"followers_url": "https://api.github.com/users/fcakyon/followers",
"following_url": "https://api.github.com/users/fcakyon/following{/other_user}",
"gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions",
"organizations_url": "https://api.github.com/users/fcakyon/orgs",
"repos_url": "https://api.github.com/users/fcakyon/repos",
"events_url": "https://api.github.com/users/fcakyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/fcakyon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! You can in the `features` folder how we implemented the audio and image feature types.\r\n\r\nWe can have something similar to videos. What we need to decide:\r\n- the video loading library to use\r\n- the output format when a user accesses a video type object\r\n- what parameters a `Video()` feature type needs\r\n\r\nalso cc @nateraw who also took a look at what we can do for video",
"@lhoestq @nateraw is there any progress on adding video classification datasets? ",
"Hi ! I think we just missing which lib we're going to use to decode the videos + which parameters must go in the `Video` type",
"Hmm. `decord` could be nice but it's no longer maintained [it seems](https://github.com/dmlc/decord/issues/214). ",
"pytorchvideo uses [pyav](https://github.com/PyAV-Org/PyAV) as the default decoder: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/labeled_video_dataset.py#L37\r\n\r\nAlso it would be great if `optionally` audio can also be decoded from the video as in pytorchvideo: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/labeled_video_dataset.py#L35\r\n\r\nHere are the other decoders supported in pytorchvideo: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/encoded_video.py#L17\r\n",
"@sayakpaul I did do quite a bit of work on [this PR](https://github.com/huggingface/datasets/pull/4532) a while back to add a video feature. It's outdated, but uses my `encoded_video` [package](https://github.com/nateraw/encoded-video) under the hood, which is basically a wrapper around PyAV stolen from [pytorchvideo](https://github.com/facebookresearch/pytorchvideo/) that gets rid of the `torch` dependency. \r\n\r\nwould be really great to get something like this in...it's just a really tricky and time consuming feature to add. "
] | 2022-09-27T19:40:11 | 2022-11-02T19:10:13 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
In my research, I am dealing with multi-modal (audio+text+frame sequence) video classification. It would be great if the datasets library supported generating multi-modal batches from a video dataset.
**Describe the solution you'd like**
Assume I have video files having single/multiple labels. I want to train a single/multi-label video classification model. I want datasets to support generating multi-modal batches (audio+frame sequence) from video files. Audio waveform and frame sequence can be extracted from each video clip then I can use any audio, image and video model from transformers library to extract features which will be fed into my model.
**Describe alternatives you've considered**
Currently, I am using https://github.com/facebookresearch/pytorchvideo dataloaders. There seems to be not much alternative.
**Additional context**
I am wiling to open a PR but don't know where to start.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5032/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/5032/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5031/comments | https://api.github.com/repos/huggingface/datasets/issues/5031/events | https://github.com/huggingface/datasets/pull/5031 | 1,388,201,146 | PR_kwDODunzps4_t82_ | 5,031 | Support hfh 0.10 implicit auth | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq it is now released so you can move forward with it :) ",
"I took your comments into account @Wauplin :)\r\nI also bumped the requirement to 0.2.0 because we're using `set_access_token`\r\n\r\ncc @albertvillanova WDYT ? I edited the CI job to also check for our minimum supported version of hfh at the same time as the minimum pyarrow version",
"@lhoestq great, thanks ! :)"
] | 2022-09-27T18:37:49 | 2022-09-30T09:18:24 | 2022-09-30T09:15:59 | MEMBER | null | In huggingface-hub 0.10 the `token` parameter is deprecated for dataset_info and list_repo_files in favor of use_auth_token.
Moreover if use_auth_token=None then the user's token is used implicitly.
I took those two changes into account
Close https://github.com/huggingface/datasets/issues/4990
TODO:
- [x] fix tests
We should wait hfh 0.10 to be relased first to make sure it works correctly before merging | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5031/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5031",
"html_url": "https://github.com/huggingface/datasets/pull/5031",
"diff_url": "https://github.com/huggingface/datasets/pull/5031.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5031.patch",
"merged_at": "2022-09-30T09:15:59"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5030 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5030/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5030/comments | https://api.github.com/repos/huggingface/datasets/issues/5030/events | https://github.com/huggingface/datasets/pull/5030 | 1,388,061,340 | PR_kwDODunzps4_tfBO | 5,030 | Fast dataset iter | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I ran some benchmarks (focused on the data fetching part of `__iter__`) and it seems like the combination `table.to_reader(batch_size)` + `RecordBatch.slice` performs the best ([script](https://gist.github.com/mariosasko/0248288a2e3a7556873969717c1fe52b) with the results). I think we can choose (implicit) `batch_size=10` in the final implementation to avoid having problems with fetching large examples."
] | 2022-09-27T16:44:51 | 2022-09-29T15:50:44 | 2022-09-29T15:48:17 | CONTRIBUTOR | null | Use `pa.Table.to_reader` to make iteration over examples/batches faster in `Dataset.{__iter__, map}`
TODO:
* [x] benchmarking (the only benchmark for now - iterating over (single) examples of `bookcorpus` (75 mil examples) in Colab is approx. 2.3x faster)
* [x] check if iterating over bigger chunks + slicing to fetch individual examples in `_iter` yields better performance
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5030/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5030",
"html_url": "https://github.com/huggingface/datasets/pull/5030",
"diff_url": "https://github.com/huggingface/datasets/pull/5030.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5030.patch",
"merged_at": "2022-09-29T15:48:17"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5029/comments | https://api.github.com/repos/huggingface/datasets/issues/5029/events | https://github.com/huggingface/datasets/pull/5029 | 1,387,600,960 | PR_kwDODunzps4_r8-j | 5,029 | Fix import in `ClassLabel` docstring example | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-27T11:35:29 | 2022-09-27T14:03:24 | 2022-09-27T12:27:50 | CONTRIBUTOR | null | This PR addresses a super-simple fix: adding a missing `import` to the `ClassLabel` docstring example, as it was formatted as `from datasets Features`, so it's been fixed to `from datasets import Features`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5029/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5029",
"html_url": "https://github.com/huggingface/datasets/pull/5029",
"diff_url": "https://github.com/huggingface/datasets/pull/5029.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5029.patch",
"merged_at": "2022-09-27T12:27:50"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5028 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5028/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5028/comments | https://api.github.com/repos/huggingface/datasets/issues/5028/events | https://github.com/huggingface/datasets/issues/5028 | 1,386,272,533 | I_kwDODunzps5SoNcV | 5,028 | passing parameters to the method passed to Dataset.from_generator() | {
"login": "Basir-mahmood",
"id": 64276129,
"node_id": "MDQ6VXNlcjY0Mjc2MTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/64276129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Basir-mahmood",
"html_url": "https://github.com/Basir-mahmood",
"followers_url": "https://api.github.com/users/Basir-mahmood/followers",
"following_url": "https://api.github.com/users/Basir-mahmood/following{/other_user}",
"gists_url": "https://api.github.com/users/Basir-mahmood/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Basir-mahmood/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Basir-mahmood/subscriptions",
"organizations_url": "https://api.github.com/users/Basir-mahmood/orgs",
"repos_url": "https://api.github.com/users/Basir-mahmood/repos",
"events_url": "https://api.github.com/users/Basir-mahmood/events{/privacy}",
"received_events_url": "https://api.github.com/users/Basir-mahmood/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! Yes, you can either use the `gen_kwargs` param in `Dataset.from_generator` (`ds = Dataset.from_generator(gen, gen_kwargs={\"param1\": val})`) or wrap the generator function with `functools.partial`\r\n(`ds = Dataset.from_generator(functools.partial(gen, param1=\"val\"))`) to pass custom parameters to it.\r\n"
] | 2022-09-26T15:20:06 | 2022-10-03T13:00:00 | 2022-10-03T13:00:00 | NONE | null | Big thanks for providing dataset creation via a generator.
I want to ask whether there is any way that parameters can be passed to the method Dataset.from_generator() method, like as follows.
```
from datasets import Dataset
def gen(param1):
for idx in len(custom_dataset):
yield custom_dataset[idx] + param1
ds = Dataset.from_generator(gen(param1))
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5028/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5027 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5027/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5027/comments | https://api.github.com/repos/huggingface/datasets/issues/5027/events | https://github.com/huggingface/datasets/pull/5027 | 1,386,153,072 | PR_kwDODunzps4_nFUE | 5,027 | Fix typo in error message | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-26T14:10:09 | 2022-09-27T12:28:03 | 2022-09-27T12:26:02 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5027/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5027",
"html_url": "https://github.com/huggingface/datasets/pull/5027",
"diff_url": "https://github.com/huggingface/datasets/pull/5027.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5027.patch",
"merged_at": "2022-09-27T12:26:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5026/comments | https://api.github.com/repos/huggingface/datasets/issues/5026/events | https://github.com/huggingface/datasets/pull/5026 | 1,386,071,154 | PR_kwDODunzps4_mz1w | 5,026 | patch CI_HUB_TOKEN_PATH with Path instead of str | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-26T13:19:01 | 2022-09-26T14:30:55 | 2022-09-26T14:28:45 | CONTRIBUTOR | null | Should fix the tests for `huggingface_hub==0.10.0rc0` prerelease (see [failed CI](https://github.com/huggingface/datasets/actions/runs/3127805250/jobs/5074879144)).
Related to [this thread](https://huggingface.slack.com/archives/C02V5EA0A95/p1664195165294559) (internal link).
Note: this should be a backward compatible fix (e.g. works also with previous versions of `huggingface_hub`)
I am not sure where to put the changes so feel free to cherry-pick the commit and close this one without merging.
cc @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5026/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5026",
"html_url": "https://github.com/huggingface/datasets/pull/5026",
"diff_url": "https://github.com/huggingface/datasets/pull/5026.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5026.patch",
"merged_at": "2022-09-26T14:28:45"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5025 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5025/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5025/comments | https://api.github.com/repos/huggingface/datasets/issues/5025/events | https://github.com/huggingface/datasets/issues/5025 | 1,386,011,239 | I_kwDODunzps5SnNpn | 5,025 | Custom Json Dataset Throwing Error when batch is False | {
"login": "jmandivarapu1",
"id": 21245519,
"node_id": "MDQ6VXNlcjIxMjQ1NTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/21245519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmandivarapu1",
"html_url": "https://github.com/jmandivarapu1",
"followers_url": "https://api.github.com/users/jmandivarapu1/followers",
"following_url": "https://api.github.com/users/jmandivarapu1/following{/other_user}",
"gists_url": "https://api.github.com/users/jmandivarapu1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmandivarapu1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmandivarapu1/subscriptions",
"organizations_url": "https://api.github.com/users/jmandivarapu1/orgs",
"repos_url": "https://api.github.com/users/jmandivarapu1/repos",
"events_url": "https://api.github.com/users/jmandivarapu1/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmandivarapu1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! Our processors are meant to be used in `batched` mode, so if `batched` is `False`, you need to drop the batch dimension (the error message warns you that the array has an extra dimension meaning it's 4D instead of 3D) to avoid the error:\r\n```python\r\ndef prepare_examples(examples):\r\n #Some preporcessing for each image and text as all my data saved in cloud\r\n #For this reason I couldn't set the batch to True. \r\n encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,\r\n truncation=True, padding=\"max_length\", return_tensors=\"np\")\r\n # drop extra dim\r\n for k in encoding.items():\r\n encoding[k]=encoding[k][0]\r\n return encoding\r\n```",
"> Hi! Our processors are meant to be used in `batched` mode, so if `batched` is `False`, you need to drop the batch dimension (the error message warns you that the array has an extra dimension meaning it's 4D instead of 3D) to avoid the error:\r\n> \r\n> ```python\r\n> def prepare_examples(examples):\r\n> #Some preporcessing for each image and text as all my data saved in cloud\r\n> #For this reason I couldn't set the batch to True. \r\n> encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,\r\n> truncation=True, padding=\"max_length\", return_tensors=\"np\")\r\n> # drop extra dim\r\n> for k in encoding.items():\r\n> encoding[k]=encoding[k][0]\r\n> return encoding\r\n> ```\r\n\r\nThank you it did work\r\n\r\n```\r\nfor k,v in encoding.items():\r\n encoding[k]=encoding[k][0]\r\n```"
] | 2022-09-26T12:38:39 | 2022-09-27T19:50:00 | 2022-09-27T19:50:00 | NONE | null | ## Describe the bug
A clear and concise description of what the bug is.
I tried to create my custom dataset using below code
```
from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D
from torchvision import transforms
from transformers import AutoProcessor
# we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes,
# based on the checkpoint we provide from the hub
from datasets import load_dataset
def prepare_examples(examples):
#Some preporcessing for each image and text as all my data saved in cloud
#For this reason I couldn't set the batch to True.
encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,
truncation=True, padding="max_length")
# encoding['pixel_values']=np.array(encoding['pixel_values'])
return encoding
dataset = load_dataset("json", data_files='issues.jsonl')
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
features = dataset["train"].features
column_names = dataset["train"].column_names
# we need to define custom features for `set_format` (used later on) to work properly
features = Features({
'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'labels': Sequence(feature=Value(dtype='int64')),
})
train_dataset = dataset["train"].map(
prepare_examples,
batched=False,
remove_columns=column_names,
features=features
)
```
It throws below error.
```
/opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
172 storage = to_pyarrow_listarray(data, pa_type)
--> 173 return pa.ExtensionArray.from_storage(pa_type, storage)
174
/opt/conda/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.ExtensionArray.from_storage()
TypeError: Incompatible storage type list<item: list<item: list<item: list<item: float>>>> for extension type extension<arrow.py_extension_type<Array3DExtensionType>>
```
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
rom datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D
from torchvision import transforms
from transformers import AutoProcessor
# we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes,
# based on the checkpoint we provide from the hub
from datasets import load_dataset
def prepare_examples(examples):
#Some preporcessing for each image and text as all my data saved in cloud
encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,
truncation=True, padding="max_length")
# encoding['pixel_values']=np.array(encoding['pixel_values'])
return encoding
dataset = load_dataset("json", data_files='issues.jsonl')
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
features = dataset["train"].features
column_names = dataset["train"].column_names
# we need to define custom features for `set_format` (used later on) to work properly
features = Features({
'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'labels': Sequence(feature=Value(dtype='int64')),
})
train_dataset = dataset["train"].map(
prepare_examples,
batched=False,
remove_columns=column_names,
features=features
)
## Expected results
A clear and concise description of the expected results.
Expected would be similar to all the otherdatasets with no error.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Unix
- Python version: 3.9
- PyArrow version: 9.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5025/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5024/comments | https://api.github.com/repos/huggingface/datasets/issues/5024/events | https://github.com/huggingface/datasets/pull/5024 | 1,385,947,624 | PR_kwDODunzps4_mZ3J | 5,024 | Fix string features of xcsr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-26T11:55:36 | 2022-09-28T07:56:18 | 2022-09-28T07:54:19 | MEMBER | null | This PR fixes string features of `xcsr` dataset to avoid character splitting.
Fix #5023.
CC: @yangxqiao, @yuchenlin | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5024/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5024",
"html_url": "https://github.com/huggingface/datasets/pull/5024",
"diff_url": "https://github.com/huggingface/datasets/pull/5024.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5024.patch",
"merged_at": "2022-09-28T07:54:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5023/comments | https://api.github.com/repos/huggingface/datasets/issues/5023/events | https://github.com/huggingface/datasets/issues/5023 | 1,385,881,112 | I_kwDODunzps5Smt4Y | 5,023 | Text strings are split into lists of characters in xcsr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-09-26T11:11:50 | 2022-09-28T07:54:20 | 2022-09-28T07:54:20 | MEMBER | null | ## Describe the bug
Text strings are split into lists of characters.
Example for "X-CSQA-en":
```
{'id': 'd3845adc08414fda',
'lang': 'en',
'question': {'stem': ['T',
'h',
'e',
' ',
'd',
'e',
'n',
't',
'a',
'l',
' ',
'o',
'f',
'f',
'i',
'c',
'e',
' ',
'h',
'a',
'n',
'd',
'l',
'e',
'd',
' ',
'a',
' ',
'l',
'o',
't',
' ',
'o',
'f',
' ',
'p',
'a',
't',
'i',
'e',
'n',
't',
's',
' ',
'w',
'h',
'o',
' ',
'e',
'x',
'p',
'e',
'r',
'i',
'e',
'n',
'c',
'e',
'd',
' ',
't',
'r',
'a',
'u',
'm',
'a',
't',
'i',
'c',
' ',
'm',
'o',
'u',
't',
'h',
' ',
'i',
'n',
'j',
'u',
'r',
'y',
',',
' ',
'w',
'h',
'e',
'r',
'e',
' ',
'w',
'e',
'r',
'e',
' ',
't',
'h',
'e',
's',
'e',
' ',
'p',
'a',
't',
'i',
'e',
'n',
't',
's',
' ',
'c',
'o',
'm',
'i',
'n',
'g',
' ',
'f',
'r',
'o',
'm',
'?'],
'choices': [{'label': ['A'], 'text': ['t', 'o', 'w', 'n']},
{'label': ['B'], 'text': ['m', 'i', 'c', 'h', 'i', 'g', 'a', 'n']},
{'label': ['C'], 'text': ['h', 'o', 's', 'p', 'i', 't', 'a', 'l']},
{'label': ['D'], 'text': ['s', 'c', 'h', 'o', 'o', 'l', 's']},
{'label': ['E'],
'text': ['o',
'f',
'f',
'i',
'c',
'e',
' ',
'b',
'u',
'i',
'l',
'd',
'i',
'n',
'g']}]},
'answerKey': 'C'}
## Steps to reproduce the bug
```python
ds = load_dataset("datasets/xcsr", "X-CSQA-en", split="validation", streaming=True)
item = next(iter(ds))
item
```
## Expected results
```
{'id': 'd3845adc08414fda',
'lang': 'en',
'question': {'stem': 'The dental office handled a lot of patients who experienced traumatic mouth injury, where were these patients coming from?',
'choices': {'label': ['A', 'B', 'C', 'D', 'E'],
'text': ['town', 'michigan', 'hospital', 'schools', 'office building']}},
'answerKey': 'C'}
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5023/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5022/comments | https://api.github.com/repos/huggingface/datasets/issues/5022/events | https://github.com/huggingface/datasets/pull/5022 | 1,385,432,859 | PR_kwDODunzps4_kxYe | 5,022 | Fix languages of X-CSQA configs in xcsr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @lhoestq, I had missed that... ",
"thx for the super fast work @albertvillanova ! any estimate for when the relevant release will happen?\r\n\r\nThanks again ",
"@thesofakillers after a recent change in our library (see #4059), now fixes in all datasets are immediately accessible. You can try it:\r\n```python\r\nfrench = datasets.load_dataset(\"xcsr\", \"X-CSQA-fr\")\r\n```\r\n\r\nPlease note there is an additional fix to that dataset in progress (to be merged today):\r\n- #5024"
] | 2022-09-26T05:13:39 | 2022-09-26T12:27:20 | 2022-09-26T10:57:30 | MEMBER | null | Fix #5017.
CC: @yangxqiao, @yuchenlin | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5022/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5022",
"html_url": "https://github.com/huggingface/datasets/pull/5022",
"diff_url": "https://github.com/huggingface/datasets/pull/5022.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5022.patch",
"merged_at": "2022-09-26T10:57:30"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5021/comments | https://api.github.com/repos/huggingface/datasets/issues/5021/events | https://github.com/huggingface/datasets/issues/5021 | 1,385,351,250 | I_kwDODunzps5SkshS | 5,021 | Split is inferred from filename and overrides metadata.jsonl | {
"login": "float-trip",
"id": 102226344,
"node_id": "U_kgDOBhfZqA",
"avatar_url": "https://avatars.githubusercontent.com/u/102226344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/float-trip",
"html_url": "https://github.com/float-trip",
"followers_url": "https://api.github.com/users/float-trip/followers",
"following_url": "https://api.github.com/users/float-trip/following{/other_user}",
"gists_url": "https://api.github.com/users/float-trip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/float-trip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/float-trip/subscriptions",
"organizations_url": "https://api.github.com/users/float-trip/orgs",
"repos_url": "https://api.github.com/users/float-trip/repos",
"events_url": "https://api.github.com/users/float-trip/events{/privacy}",
"received_events_url": "https://api.github.com/users/float-trip/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | null | [] | null | [
"Hi! What's the structure of your image folder? `datasets` by default tries to infer to what split each file belongs based on directory/file names. If it's OK to load all the images inside the `dataset` folder in the `train` split, you can do the following:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_files=\"dataset/**\")\r\n```",
"Thanks! Specifying `data_files` worked for that case.\r\n\r\nI'm new to the library, so let me try rephrasing the issue. If there's no actual bug here, sorry for the trouble.\r\n\r\nI've uploaded an example [here](https://files.catbox.moe/nfj2pd.zip) with the following files: \r\n\r\n```\r\n.\r\n├── bug.py\r\n└── imagefolder\r\n ├── test\r\n │ ├── metadata.jsonl\r\n │ ├── dog.jpg\r\n │ └── personal trainer.jpg\r\n └── train\r\n ├── metadata.jsonl\r\n ├── cat.jpg\r\n └── testing center.jpg\r\n```\r\n\r\n`bug.py`\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imagefolder\")\r\n\r\nprint(dataset)\r\n# DatasetDict({\r\n# test: Dataset({\r\n# features: ['image', 'text'],\r\n# num_rows: 1\r\n# })\r\n# })\r\n\r\nfor split in dataset:\r\n print(\"Split:\", split)\r\n for n in dataset[split]:\r\n print(n['text'])\r\n\r\n\r\n# Split: test\r\n# testing center\r\n```\r\n\r\nAs far as I can tell, this conforms with the example given here: https://huggingface.co/docs/datasets/image_dataset#imagefolder. It appears to me that, even though `metadata.jsonl` is present, the inferred labels from the path are taking precedent. Does this sound like a bug/undocumented behavior?",
"This looks like a duplicate of https://github.com/huggingface/datasets/issues/4895 (the problem is explained in this comment: https://github.com/huggingface/datasets/issues/4895#issuecomment-1248269550).\r\n\r\nIn the meantime, you can do the following to fetch all the splits:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_files={\"train\": \"imagefolder/train/**\", \"test\": \"imagefolder/test/**\"})\r\n```\r\n"
] | 2022-09-26T03:22:14 | 2022-09-29T08:07:50 | 2022-09-29T08:07:50 | NONE | null | ## Describe the bug
Including the strings "test" or "train" anywhere in a filename causes `datasets` to infer the split and silently ignore all other files.
This behavior is documented for directory names but not filenames: https://huggingface.co/docs/datasets/image_dataset#imagefolder
## Steps to reproduce the bug
`metadata.jsonl`
```json
{"file_name": "photo of a cat.jpg", "text": "a photo of a cat"}
{"file_name": "photo of a dog.jpg", "text": "a photo of a dog"}
{"file_name": "photo of a train.jpg", "text": "a photo of a train"}
{"file_name": "photo of test tubes.jpg", "text": "a photo of test tubes"}
```
`bug.py`
```python
from datasets import load_dataset
dataset = load_dataset("dataset")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['image', 'text'],
# num_rows: 1
# })
# test: Dataset({
# features: ['image', 'text'],
# num_rows: 1
# })
# })
for split in dataset:
for n in dataset[split]:
print(n['text'])
# a photo of a train
# a photo of test tubes
```
## Expected results
One single dataset with all four images / a warning for unused files / documentation of this behavior
## Actual results
Only the images with "test" or "train" in the name are loaded
## Environment info
- `datasets` version: 2.5.1
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.10.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5021/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5020/comments | https://api.github.com/repos/huggingface/datasets/issues/5020/events | https://github.com/huggingface/datasets/pull/5020 | 1,384,684,078 | PR_kwDODunzps4_istJ | 5,020 | Fix URLs of sbu_captions dataset | {
"login": "donglixp",
"id": 1070872,
"node_id": "MDQ6VXNlcjEwNzA4NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1070872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donglixp",
"html_url": "https://github.com/donglixp",
"followers_url": "https://api.github.com/users/donglixp/followers",
"following_url": "https://api.github.com/users/donglixp/following{/other_user}",
"gists_url": "https://api.github.com/users/donglixp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donglixp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donglixp/subscriptions",
"organizations_url": "https://api.github.com/users/donglixp/orgs",
"repos_url": "https://api.github.com/users/donglixp/repos",
"events_url": "https://api.github.com/users/donglixp/events{/privacy}",
"received_events_url": "https://api.github.com/users/donglixp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-24T14:00:33 | 2022-09-28T07:20:20 | 2022-09-28T07:18:23 | CONTRIBUTOR | null | Forbidden
You don't have permission to access /~vicente/sbucaptions/sbu-captions-all.tar.gz on this server.
Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.
Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.2k-fips PHP/5.4.16 mod_fcgid/2.3.9 mod_wsgi/3.4 Python/2.7.5 mod_perl/2.0.11 Perl/v5.16.3 Server at [www.cs.virginia.edu](mailto:[email protected]) Port 443 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5020/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5020",
"html_url": "https://github.com/huggingface/datasets/pull/5020",
"diff_url": "https://github.com/huggingface/datasets/pull/5020.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5020.patch",
"merged_at": "2022-09-28T07:18:23"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5019 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5019/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5019/comments | https://api.github.com/repos/huggingface/datasets/issues/5019/events | https://github.com/huggingface/datasets/pull/5019 | 1,384,673,718 | PR_kwDODunzps4_iq9b | 5,019 | Update swiss judgment prediction | {
"login": "JoelNiklaus",
"id": 3775944,
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoelNiklaus",
"html_url": "https://github.com/JoelNiklaus",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"Thank you very much for the detailed review @albertvillanova!\r\n\r\nI updated the PR with the requested changes. ",
"At the end, I had to manually fix the conflict, so that CI tests are launched.\r\n\r\nPLEASE NOTE: you should first pull to incorporate the previous commit\r\n```shell\r\ngit pull\r\n```",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you very much for the detailed feedback and your time @albertvillanova! \r\nYes, thanks. My other datasets are already on the hub: https://huggingface.co/joelito\r\n"
] | 2022-09-24T13:28:57 | 2022-09-28T07:13:39 | 2022-09-28T05:48:50 | CONTRIBUTOR | null | Hi,
I updated the dataset to include additional data made available recently. When I test it locally, it seems to work. However, I get the following error with the dummy data creation:
`Dummy data generation done but dummy data test failed since splits ['train', 'validation', 'test'] have 0 examples for config 'fr'`. Do you know why this could be the case?
Cheers,
Joel | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5019/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5019",
"html_url": "https://github.com/huggingface/datasets/pull/5019",
"diff_url": "https://github.com/huggingface/datasets/pull/5019.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5019.patch",
"merged_at": "2022-09-28T05:48:50"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5018/comments | https://api.github.com/repos/huggingface/datasets/issues/5018/events | https://github.com/huggingface/datasets/pull/5018 | 1,384,146,585 | PR_kwDODunzps4_hA0V | 5,018 | Create all YAML dataset_info | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5018). All of your documentation changes will be reflected on that endpoint.",
"Closing since https://github.com/huggingface/datasets/pull/4974 removed all the datasets scripts.\r\n\r\nIndividual PRs must be opened on the Hugging face Hub to add the YAML metadata"
] | 2022-09-23T18:08:15 | 2022-10-03T17:08:05 | 2022-10-03T17:08:05 | MEMBER | null | Following https://github.com/huggingface/datasets/pull/4926
Creates all the `dataset_info` YAML fields in the dataset cards
The JSON are also updated using the simplified backward compatible format added in https://github.com/huggingface/datasets/pull/4926
Needs https://github.com/huggingface/datasets/pull/4926 to be merged first | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5018/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5018",
"html_url": "https://github.com/huggingface/datasets/pull/5018",
"diff_url": "https://github.com/huggingface/datasets/pull/5018.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5018.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5017/comments | https://api.github.com/repos/huggingface/datasets/issues/5017/events | https://github.com/huggingface/datasets/issues/5017 | 1,384,022,463 | I_kwDODunzps5SfoG_ | 5,017 | xcsr: X-CSQA simply uses english for all alleged non-english data | {
"login": "thesofakillers",
"id": 26286291,
"node_id": "MDQ6VXNlcjI2Mjg2Mjkx",
"avatar_url": "https://avatars.githubusercontent.com/u/26286291?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thesofakillers",
"html_url": "https://github.com/thesofakillers",
"followers_url": "https://api.github.com/users/thesofakillers/followers",
"following_url": "https://api.github.com/users/thesofakillers/following{/other_user}",
"gists_url": "https://api.github.com/users/thesofakillers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thesofakillers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thesofakillers/subscriptions",
"organizations_url": "https://api.github.com/users/thesofakillers/orgs",
"repos_url": "https://api.github.com/users/thesofakillers/repos",
"events_url": "https://api.github.com/users/thesofakillers/events{/privacy}",
"received_events_url": "https://api.github.com/users/thesofakillers/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @thesofakillers. Good catch. We are fixing this. "
] | 2022-09-23T16:11:54 | 2022-09-26T10:57:31 | 2022-09-26T10:57:31 | NONE | null | ## Describe the bug
All the alleged non-english subcollections for the X-CSQA task in the [xcsr benchmark dataset ](https://huggingface.co/datasets/xcsr) seem to be copies of the english subcollection, rather than translations. This is in contrast to the data description:
> we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR
## Steps to reproduce the bug
```python
# let's say you want to load the french X-CSQA subcollection
french = datasets.load_dataset("xcsr", "X-CSQA-fr")
# for good measure, let's load english too
english = datasets.load_dataset("xcsr", "X-CSQA-en")
# let's inspect
"".join(english['test'][0]['question']['stem'])
# output: 'The people wanted to stop the parade, so what did they set up to thwart it?'
"".join(french['test'][0]['question']['stem'])
# output: 'The people wanted to stop the parade, so what did they set up to thwart it?'
# what? Why are they both in english?
# I've checked this for validation and train splits too, across many datapoints. It's all the same english dataset
# maybe i need to look better?
french['test'].unique('lang')
# output: ['en']
# no, it's all english
```
## Expected results
Accessing a subcollection in language X should return a subcollection containg samples in language X
## Actual results
Accessing a subcollection in language X returns a subcollection containing samples in English.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.1
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5017/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5016/comments | https://api.github.com/repos/huggingface/datasets/issues/5016/events | https://github.com/huggingface/datasets/pull/5016 | 1,383,883,058 | PR_kwDODunzps4_gKny | 5,016 | Fix tar extraction vuln | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-23T14:22:21 | 2022-09-29T12:42:26 | 2022-09-29T12:40:28 | MEMBER | null | Fix for CVE-2007-4559
Description:
Directory traversal vulnerability in the (1) extract and (2) extractall functions in the tarfile
module in Python allows user-assisted remote attackers to overwrite arbitrary files via a .. (dot dot)
sequence in filenames in a TAR archive, a related issue to CVE-2001-1267.
I fixed it by using the solution proposed in https://stackoverflow.com/questions/10060069/safely-extract-zip-or-tar-using-python
It blocks extraction of files with an absolute path or double dots and symlinks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5016/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5016",
"html_url": "https://github.com/huggingface/datasets/pull/5016",
"diff_url": "https://github.com/huggingface/datasets/pull/5016.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5016.patch",
"merged_at": "2022-09-29T12:40:28"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5015/comments | https://api.github.com/repos/huggingface/datasets/issues/5015/events | https://github.com/huggingface/datasets/issues/5015 | 1,383,485,558 | I_kwDODunzps5SdlB2 | 5,015 | Transfer dataset scripts to Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Sounds good ! Can I help with anything ?"
] | 2022-09-23T08:48:10 | 2022-10-05T07:15:57 | 2022-10-05T07:15:57 | MEMBER | null | Before merging:
- #4974
TODO:
- [x] Create label: ["dataset contribution"](https://github.com/huggingface/datasets/pulls?q=label%3A%22dataset+contribution%22)
- [x] Create project: [Datasets: Transfer datasets to Hub](https://github.com/orgs/huggingface/projects/22/)
- [x] PRs:
- [x] Add dataset: we should recommend transfer all additions of datasets to the Hub, under the appropriate namespace; no more additions of datasets on GitHub
- [x] Update dataset: in general, we should merge bug fixes; enhancements should be considered on a case-by-case basis, depending on whether there is a more suitable namespace on the Hub
- [ ] Issues
Finally:
- [x] #4974
Let me know what you think! :hugs: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5015/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5015/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5014/comments | https://api.github.com/repos/huggingface/datasets/issues/5014/events | https://github.com/huggingface/datasets/issues/5014 | 1,383,422,639 | I_kwDODunzps5SdVqv | 5,014 | I need to read the custom dataset in conll format | {
"login": "shell-nlp",
"id": 39985245,
"node_id": "MDQ6VXNlcjM5OTg1MjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/39985245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shell-nlp",
"html_url": "https://github.com/shell-nlp",
"followers_url": "https://api.github.com/users/shell-nlp/followers",
"following_url": "https://api.github.com/users/shell-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/shell-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shell-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shell-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/shell-nlp/orgs",
"repos_url": "https://api.github.com/users/shell-nlp/repos",
"events_url": "https://api.github.com/users/shell-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/shell-nlp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! We don't currently have a builder for parsing custom `conll` datasets, but I guess we could add one as a packaged module (similarly to what [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/dataset_builders/conll/conll_dataset_builder.py) did). @lhoestq @albertvillanova WDYT?\r\n\r\nIn the meantime, you can use `Dataset.from_generator` to create a dataset as follows:\r\n```python\r\nfrom datasets import Dataset\r\n\r\n# 2009 version\r\nINPUT_COLUMNS = \"ID FORM LEMMA PLEMMA POS PPOS FEAT PFEAT HEAD PHEAD DEPREL PDEPREL\".split()\r\n\r\ndef read_conll(file):\r\n example = {col: [] for col in INPUT_COLUMNS}\r\n idx = 0\r\n with open(file) as f:\r\n for line in f:\r\n if line.startswith(\"-DOCSTART-\") or line == \"\\n\" or not line:\r\n if example[next(iter(example))]:\r\n yield idx, example\r\n idx += 1\r\n example = {col: [] for col in INPUT_COLUMNS}\r\n else:\r\n row_cols = line.split()\r\n for i, col in enumerate(example):\r\n example[col] = row_cols[i].rstrip()\r\n\r\n# (optional) pass custom features with `features=Features(...)`\r\ndset = Dataset.from_generator(read_conll, gen_kwargs={\"file\": \"path/to/conll/file\"}) \r\n``` ",
"I think we could add a dedicated builder if you think this format is general enough.",
"\r\n\r\n\r\n> I think we could add a dedicated builder if you think this format is general enough.\r\n\r\nI think its functions are incomplete. It should have to_ Conll and from_ There are two methods of conll."
] | 2022-09-23T07:49:42 | 2022-11-02T11:57:15 | null | NONE | null | I need to read the custom dataset in conll format
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5014/timeline | null | reopened | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5013/comments | https://api.github.com/repos/huggingface/datasets/issues/5013/events | https://github.com/huggingface/datasets/issues/5013 | 1,383,415,971 | I_kwDODunzps5SdUCj | 5,013 | would huggingface like publish cpp binding for datasets package ? | {
"login": "mullerhai",
"id": 6143404,
"node_id": "MDQ6VXNlcjYxNDM0MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6143404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mullerhai",
"html_url": "https://github.com/mullerhai",
"followers_url": "https://api.github.com/users/mullerhai/followers",
"following_url": "https://api.github.com/users/mullerhai/following{/other_user}",
"gists_url": "https://api.github.com/users/mullerhai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mullerhai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mullerhai/subscriptions",
"organizations_url": "https://api.github.com/users/mullerhai/orgs",
"repos_url": "https://api.github.com/users/mullerhai/repos",
"events_url": "https://api.github.com/users/mullerhai/events{/privacy}",
"received_events_url": "https://api.github.com/users/mullerhai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] | closed | false | null | [] | null | [
"Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?",
"> Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?\r\n\r\nfor example ,the huggingface load_model() and load_dataset() can execute in cpp env",
"If it's a viable option for you, you can check [tch-rs](https://github.com/LaurentMazare/tch-rs) to load models in Rust. Regarding datasets, you can first download them in python and then use Arrow C++ or Rust to load them",
"If you are more adventurous, another option is to embed python calls inside c++ e.g. with `pybind11`.",
"> pybind11\r\n\r\nI think it is not the best solution"
] | 2022-09-23T07:42:49 | 2023-02-24T16:20:57 | 2023-02-24T16:20:57 | NONE | null | HI:
I use cpp env libtorch, I like use hugggingface ,but huggingface not cpp binding, would you like publish cpp binding for it.
thanks | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5013/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5012 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5012/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5012/comments | https://api.github.com/repos/huggingface/datasets/issues/5012/events | https://github.com/huggingface/datasets/issues/5012 | 1,382,851,096 | I_kwDODunzps5SbKIY | 5,012 | Force JSON format regardless of file naming on S3 | {
"login": "junwang-wish",
"id": 112650299,
"node_id": "U_kgDOBrboOw",
"avatar_url": "https://avatars.githubusercontent.com/u/112650299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/junwang-wish",
"html_url": "https://github.com/junwang-wish",
"followers_url": "https://api.github.com/users/junwang-wish/followers",
"following_url": "https://api.github.com/users/junwang-wish/following{/other_user}",
"gists_url": "https://api.github.com/users/junwang-wish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/junwang-wish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junwang-wish/subscriptions",
"organizations_url": "https://api.github.com/users/junwang-wish/orgs",
"repos_url": "https://api.github.com/users/junwang-wish/repos",
"events_url": "https://api.github.com/users/junwang-wish/events{/privacy}",
"received_events_url": "https://api.github.com/users/junwang-wish/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi ! Support for URIs like `s3://...` is not implemented yet in `data_files=`. You can use the HTTP URL instead if your data is public in the meantime",
"Hi,\r\nI want to make sure I understand this response. I have a set of files on S3 that are private for security reasons. Because they are not public files I cannot read those files (many are parquet) into my hf notebooks in Kaggle? That can't be correct, can it? ",
"Hi ! There is a discussion at https://github.com/huggingface/datasets/issues/5281\r\n\r\nUsing the latest `datasets` 2.11 you can try passing fsspec URLs to private buckets to `data_files` in `load_dataset()`. Though this is still experimental and undocumented, so feedback is welcome. You may not have the best experience though, since anything related to performance and caching hasn't been tested properly yet.",
"closing this one since data_files supports fsspec (still experimental/untested/undocumented for s3 though)"
] | 2022-09-22T18:28:15 | 2023-08-16T09:58:36 | 2023-08-16T09:58:36 | NONE | null | I have a file on S3 created by Data Version Control, it looks like `s3://dvc/ac/badff5b134382a0f25248f1b45d7b2` but contains a json file. If I run
```python
dataset = load_dataset(
"json",
data_files='s3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
)
```
It gives me
```
InvalidSchema: No connection adapters were found for 's3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
```
However, I cannot go ahead and change the names of the s3 file. Is there a way to "force" load a S3 url with certain decoder (JSON, CSV, etc.) regardless of s3 URL naming? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5012/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5011 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5011/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5011/comments | https://api.github.com/repos/huggingface/datasets/issues/5011/events | https://github.com/huggingface/datasets/issues/5011 | 1,382,609,587 | I_kwDODunzps5SaPKz | 5,011 | Audio: `encode_example` fails with IndexError | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Sorry bug on my part 😅 Closing "
] | 2022-09-22T15:07:27 | 2022-09-23T09:05:18 | 2022-09-23T09:05:18 | CONTRIBUTOR | null | ## Describe the bug
Loading the dataset [earnings-22](https://huggingface.co/datasets/sanchit-gandhi/earnings22_split) from the Hub yields an Index Error. I created this dataset locally and then pushed to hub at the specified URL. Thus, I expect the dataset should work out-of-the-box! Indeed, the dataset viewer functions correctly, and there were no issues when I had the dataset locally.
Don't think it's a sound file bug as the version matches what worked previously.
Update: the bug appeared for me on a GPU, mysteriously on a TPU I can't repro and it downloads correctly...
## Steps to reproduce the bug
```python
from datasets import load_dataset
earnings22 = load_dataset("sanchit-gandhi/earnings22_split")
```
## Expected results
```
>>> earnings22
DatasetDict({
validation: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 2650
})
train: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 52006
})
test: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 2735
})
})
```
## Actual results
```
Traceback (most recent call last):
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2764, in _map_single
writer.write(example)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 451, in write
self.write_examples_on_file()
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 409, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 508, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 231, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 197, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1683, in wrapper
return func(array, *args, **kwargs)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1795, in cast_array_to_feature
return feature.cast_storage(array)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in cast_storage
storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in <listcomp>
storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 92, in encode_example
sf.write(buffer, value["array"], value["sampling_rate"], format="wav")
File "/opt/conda/envs/hf/lib/python3.8/site-packages/soundfile.py", line 313, in write
channels = data.shape[1]
IndexError: tuple index out of range
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
Plus:
- SoundFile version: 0.10.3.post1
cc @lhoestq @polinaeterna | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5011/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5010/comments | https://api.github.com/repos/huggingface/datasets/issues/5010/events | https://github.com/huggingface/datasets/pull/5010 | 1,382,308,799 | PR_kwDODunzps4_bB3q | 5,010 | Add deprecation warning to multilingual_librispeech dataset card | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-22T11:41:59 | 2022-09-23T12:04:37 | 2022-09-23T12:02:45 | MEMBER | null | Besides the current deprecation warning in the script of `multilingual_librispeech`, this PR adds a deprecation warning to its dataset card as well.
The format of the deprecation warning is aligned with the one in the library documentation when docstrings contain the `<Deprecated/>` tag.
Related to:
- #4060 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5010/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5010/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5010",
"html_url": "https://github.com/huggingface/datasets/pull/5010",
"diff_url": "https://github.com/huggingface/datasets/pull/5010.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5010.patch",
"merged_at": "2022-09-23T12:02:45"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5009 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5009/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5009/comments | https://api.github.com/repos/huggingface/datasets/issues/5009/events | https://github.com/huggingface/datasets/issues/5009 | 1,381,194,067 | I_kwDODunzps5SU1lT | 5,009 | Error loading StonyBrookNLP/tellmewhy dataset from hub even though local copy loads correctly | {
"login": "ykl7",
"id": 4996184,
"node_id": "MDQ6VXNlcjQ5OTYxODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4996184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ykl7",
"html_url": "https://github.com/ykl7",
"followers_url": "https://api.github.com/users/ykl7/followers",
"following_url": "https://api.github.com/users/ykl7/following{/other_user}",
"gists_url": "https://api.github.com/users/ykl7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ykl7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ykl7/subscriptions",
"organizations_url": "https://api.github.com/users/ykl7/orgs",
"repos_url": "https://api.github.com/users/ykl7/repos",
"events_url": "https://api.github.com/users/ykl7/events{/privacy}",
"received_events_url": "https://api.github.com/users/ykl7/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I think this is because some columns are mostly empty lists. In particular the train and validation splits only have empty lists for `val_ann`. Therefore the type inference doesn't know which type is inside (or it would have to scan the other splits first before knowing).\r\n\r\nYou can fix that by specifying the features types explicitly.\r\nThen you can save the feature types inside the dataset repository, so that you won't need to specify the features in subsequent calls:\r\n```python\r\nfrom datasets import load_dataset, Features, Sequence, Value\r\nfrom datasets.info import DatasetInfosDict\r\n\r\nfeatures = Features({\r\n 'narrative': Value('string'),\r\n 'question': Value('string'),\r\n 'original_sentence_for_question': Value('string'),\r\n 'narrative_lexical_overlap': Value('float64'),\r\n 'is_ques_answerable': Value('string'),\r\n 'answer': Value('string'),\r\n 'is_ques_answerable_annotator': Value('string'),\r\n 'original_narrative_form': Sequence(Value('string')),\r\n 'question_meta': Value('string'),\r\n 'helpful_sentences': Sequence(Value('int64')),\r\n 'human_eval': Value('bool'),\r\n 'val_ann': Sequence(Value('int64')),\r\n 'gram_ann': Sequence(Value('int64'))\r\n})\r\nds = load_dataset('StonyBrookNLP/tellmewhy', features=features)\r\nDatasetInfosDict({\"default\": ds[\"train\"].info}).write_to_directory(\"path/to/local/tellmewhy\")\r\n```\r\nand then after pushing the change to the dataset repository on the Hub, `load_dataset(\"StonyBrookNLP/tellmewhy\")` will work directly`",
"(Note that specifying explicit types will be made easier with https://github.com/huggingface/datasets/pull/4926)",
"`gram_ann` and `val_ann` are annotations that only exist for part of the test set. I wanted to keep all the columns consistent across all files, so I added them to train and validation as well. I'll check if removing them from those files is still compliant with this repo. Otherwise, I will do as you suggested. Thanks @lhoestq !",
"@lhoestq I followed the exact steps you described but it seems like I'm getting the same error unfortunately. Any other ideas? Thanks in advance",
"Hi ! If you move `dataset_infos.json` from `data/` to the root of your dataset repository if should work :)",
"I tried that and pushed to the [hub](https://huggingface.co/datasets/StonyBrookNLP/tellmewhy/tree/main). Now, there is a new error.\r\n```\r\n File \"/home/yklal95/tellmewhy/src/prepare_data.py\", line 67, in main\r\n dataset = load_dataset('StonyBrookNLP/tellmewhy')\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/load.py\", line 1746, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py\", line 704, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py\", line 775, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/utils/info_utils.py\", line 33, in verify_checksums\r\n raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\ndatasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'/home/yklal95/tellmewhy/data/test.json', '/home/yklal95/tellmewhy/data/validation.json', '/home/yklal95/tellmewhy/data/train.json'}\r\n```\r\nNo changes were made to any of the other files and they are still on the hub. Let me know if you have any ideas @lhoestq Thanks!",
"Oh I see - the code I gave you returns local paths instead of URLs to store metadata about files to download.\r\nI opened a PR in your repo here to remove this: https://huggingface.co/datasets/StonyBrookNLP/tellmewhy/discussions/1\r\nsorry for the inconvenience !",
"It works now! Thanks a lot @lhoestq "
] | 2022-09-21T16:23:06 | 2022-09-29T13:07:29 | 2022-09-29T13:07:29 | NONE | null | ## Describe the bug
I have added a new dataset with the identifier `StonyBrookNLP/tellmewhy` to the hub. When I load the individual files using my local copy using `dataset = datasets.load_dataset("json", data_files="data/train.jsonl")`, it loads the dataset correctly. However, when I try to load it from the hub, I get an error (pasted below). Additionally, `dataset = datasets.load_dataset("json", data_dir="data/")` throws the same error.
## Steps to reproduce the bug
```python
dataset = datasets.load_dataset('StonyBrookNLP/tellmewhy')
```
## Expected results
Successfully load the `StonyBrookNLP/tellmewhy` dataset.
## Actual results
```
Using custom data configuration StonyBrookNLP--tellmewhy-82712924092694ff
Downloading and preparing dataset json/StonyBrookNLP--tellmewhy to /home/yklal95/.cache/huggingface/datasets/StonyBrookNLP___json/StonyBrookNLP--tellmewhy-82712924092694ff/0.0.0/a3e658c4731e59120d44081ac10bf85dc7e1388126b92338344ce9661907f253...
Downloading data files: 100%|██████████████████████████████| 3/3 [00:00<00:00, 957.46it/s]
Extracting data files: 100%|███████████████████████████████| 3/3 [00:00<00:00, 299.14it/s]
Traceback (most recent call last):
File "/home/yklal95/tmw-generalization/src/load_datasets.py", line 17, in <module>
main(args)
File "/home/yklal95/tmw-generalization/src/load_datasets.py", line 11, in main
dataset = datasets.load_dataset(args.dataset_name)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 1277, in _prepare_split
writer.write_table(table)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/arrow_writer.py", line 524, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 2005, in table_cast
return cast_table_to_schema(table, schema)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1969, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1969, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1681, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1681, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1822, in cast_array_to_feature
casted_values = _c(array.values, feature.feature)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1683, in wrapper
return func(array, *args, **kwargs)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1853, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1683, in wrapper
return func(array, *args, **kwargs)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1761, in array_cast
raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
TypeError: Couldn't cast array of type int64 to null
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.15.0-121-generic-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5009/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5008/comments | https://api.github.com/repos/huggingface/datasets/issues/5008/events | https://github.com/huggingface/datasets/pull/5008 | 1,381,090,903 | PR_kwDODunzps4_XAc5 | 5,008 | Re-apply input columns change | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-21T15:09:01 | 2022-09-22T13:57:36 | 2022-09-22T13:55:23 | CONTRIBUTOR | null | Fixes the `filter` + `input_columns` combination, which is used in the `transformers` examples for instance.
Revert #5006 (which in turn reverts #4971)
Fix https://github.com/huggingface/datasets/issues/4858 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5008/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5008",
"html_url": "https://github.com/huggingface/datasets/pull/5008",
"diff_url": "https://github.com/huggingface/datasets/pull/5008.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5008.patch",
"merged_at": "2022-09-22T13:55:23"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5007 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5007/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5007/comments | https://api.github.com/repos/huggingface/datasets/issues/5007/events | https://github.com/huggingface/datasets/pull/5007 | 1,381,007,607 | PR_kwDODunzps4_WvFQ | 5,007 | Add some note about running the transformers ci before a release | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-21T14:14:25 | 2022-09-22T10:16:14 | 2022-09-22T10:14:06 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5007/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5007",
"html_url": "https://github.com/huggingface/datasets/pull/5007",
"diff_url": "https://github.com/huggingface/datasets/pull/5007.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5007.patch",
"merged_at": "2022-09-22T10:14:06"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5006/comments | https://api.github.com/repos/huggingface/datasets/issues/5006/events | https://github.com/huggingface/datasets/pull/5006 | 1,380,968,395 | PR_kwDODunzps4_Wm8z | 5,006 | Revert input_columns change | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging this one and I'll check if it fixes the `transformers` CI before doing a patch release"
] | 2022-09-21T13:49:20 | 2022-09-21T14:14:33 | 2022-09-21T14:11:57 | MEMBER | null | Revert https://github.com/huggingface/datasets/pull/4971
Fix https://github.com/huggingface/datasets/issues/5005 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5006/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5006",
"html_url": "https://github.com/huggingface/datasets/pull/5006",
"diff_url": "https://github.com/huggingface/datasets/pull/5006.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5006.patch",
"merged_at": "2022-09-21T14:11:57"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5005/comments | https://api.github.com/repos/huggingface/datasets/issues/5005/events | https://github.com/huggingface/datasets/issues/5005 | 1,380,952,960 | I_kwDODunzps5ST6uA | 5,005 | Release 2.5.0 breaks transformers CI | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Shall we revert https://github.com/huggingface/datasets/pull/4971 @mariosasko ?\r\n\r\nAnd for consistency we can update IterableDataset.map later"
] | 2022-09-21T13:39:19 | 2022-09-21T14:11:57 | 2022-09-21T14:11:57 | MEMBER | null | ## Describe the bug
As reported by @lhoestq:
> see https://app.circleci.com/pipelines/github/huggingface/transformers/47634/workflows/b491886b-e66e-4edb-af96-8b459e72aa25/jobs/564563
this is used here: [https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55[…]torch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py](https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55250e7da/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L482-L488)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5005/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5004/comments | https://api.github.com/repos/huggingface/datasets/issues/5004/events | https://github.com/huggingface/datasets/pull/5004 | 1,380,860,606 | PR_kwDODunzps4_WQck | 5,004 | Remove license tag file and validation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-21T12:35:14 | 2022-09-22T11:47:41 | 2022-09-22T11:45:46 | MEMBER | null | As requested, we are removing the validation of the licenses from `datasets` because this is done on the Hub.
Fix #4994.
Related to:
- #4926, which is removing all the validation from `datasets` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5004/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5004",
"html_url": "https://github.com/huggingface/datasets/pull/5004",
"diff_url": "https://github.com/huggingface/datasets/pull/5004.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5004.patch",
"merged_at": "2022-09-22T11:45:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5003/comments | https://api.github.com/repos/huggingface/datasets/issues/5003/events | https://github.com/huggingface/datasets/pull/5003 | 1,380,617,353 | PR_kwDODunzps4_Vdko | 5,003 | Fix missing use_auth_token in streaming docstrings | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-21T09:27:03 | 2022-09-21T16:24:01 | 2022-09-21T16:20:59 | MEMBER | null | This PRs fixes docstrings:
- adds the missing `use_auth_token` param
- updates syntax of param types
- adds params to docstrings without them
- fixes return/yield types
- fixes syntax | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5003/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5003",
"html_url": "https://github.com/huggingface/datasets/pull/5003",
"diff_url": "https://github.com/huggingface/datasets/pull/5003.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5003.patch",
"merged_at": "2022-09-21T16:20:59"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5002/comments | https://api.github.com/repos/huggingface/datasets/issues/5002/events | https://github.com/huggingface/datasets/issues/5002 | 1,380,589,402 | I_kwDODunzps5SSh9a | 5,002 | Dataset Viewer issue for loubnabnl/humaneval-x | {
"login": "loubnabnl",
"id": 44069155,
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loubnabnl",
"html_url": "https://github.com/loubnabnl",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"It's a bug! Thanks for reporting, I'm looking at it",
"Fixed."
] | 2022-09-21T09:06:17 | 2022-09-21T11:49:49 | 2022-09-21T11:49:49 | NONE | null | ### Link
https://huggingface.co/datasets/loubnabnl/humaneval-x/viewer/
### Description
The dataset has subsets but the viewer gets stuck in the default subset even when I select another one (the data loading of the subsets works fine)
### Owner
Yes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5002/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/5002/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5001/comments | https://api.github.com/repos/huggingface/datasets/issues/5001/events | https://github.com/huggingface/datasets/pull/5001 | 1,379,844,820 | PR_kwDODunzps4_TBWa | 5,001 | Support loading XML datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5001). All of your documentation changes will be reflected on that endpoint.",
"> CC: @davanstrien\r\n\r\nI should have some time to look at this on Friday :) ",
"@albertvillanova I've tried this with a few different XML datasets. One issue I've run into is getting a `KeyError` when the attributes of a field differ from the first parsed row. Unfortunately, this can come up in the ALTO XML format, for example, if you want to parse the 'string' field, which contains the text in the ALTO XML files. \r\n\r\nWhen parsing a file, this instance has no 'STYLE' attribute: \r\n\r\n```xml\r\n<TextLine HEIGHT=\"39\" WIDTH=\"295\" VPOS=\"926\" HPOS=\"247\"><String WC=\"0.4600000083\" CONTENT=\"jufqu’en\" HEIGHT=\"39\" WIDTH=\"117\" VPOS=\"926\" HPOS=\"247\"/><SP WIDTH=\"14\" VPOS=\"928\" HPOS=\"365\"/><String WC=\"0.6075000167\" CONTENT=\"l’an\" HEIGHT=\"26\" WIDTH=\"50\" VPOS=\"928\" HPOS=\"380\"/><SP WIDTH=\"24\" VPOS=\"936\" HPOS=\"431\"/><String WC=\"0.4300000072\" CONTENT=\"1\" HEIGHT=\"16\" WIDTH=\"9\" VPOS=\"936\" HPOS=\"456\"/><String STYLE=\"italics\" WC=\"0.5774999857\" CONTENT=\"361.\" HEIGHT=\"25\" WIDTH=\"68\" VPOS=\"933\" HPOS=\"474\"/></TextLine>\r\n```\r\n\r\nWhereas this one which appears later in the file, does have this field: \r\n\r\n```xml\r\n<TextLine HEIGHT=\"39\" WIDTH=\"712\" VPOS=\"966\" HPOS=\"297\"><String STYLE=\"italics\" WC=\"0.6999999881\" CONTENT=\"I\" HEIGHT=\"17\" WIDTH=\"9\" VPOS=\"977\" HPOS=\"297\"/><String WC=\"0.5\" CONTENT=\"I.\" HEIGHT=\"18\" WIDTH=\"25\" VPOS=\"976\" HPOS=\"318\"/><SP WIDTH=\"24\" VPOS=\"971\" HPOS=\"344\"/><String STYLE=\"italics\" WC=\"0.3359999955\" CONTENT=\"Crade\" HEIGHT=\"26\" WIDTH=\"91\" VPOS=\"967\" HPOS=\"369\"/><SP WIDTH=\"31\" VPOS=\"971\" HPOS=\"461\"/><String STYLE=\"italics\" WC=\"0.6060000062\" CONTENT=\"Pétri\" HEIGHT=\"26\" WIDTH=\"71\" VPOS=\"968\" HPOS=\"493\"/><SP WIDTH=\"23\" VPOS=\"968\" HPOS=\"565\"/><String STYLE=\"italics\" WC=\"0.612857163\" CONTENT=\"Candidi\" HEIGHT=\"27\" WIDTH=\"111\" VPOS=\"967\" HPOS=\"589\"/><SP WIDTH=\"19\" VPOS=\"967\" HPOS=\"701\"/><String STYLE=\"italics\" WC=\"0.4088888764\" CONTENT=\"Decembrii\" HEIGHT=\"28\" WIDTH=\"144\" VPOS=\"966\" HPOS=\"721\"/><SP WIDTH=\"10\" VPOS=\"968\" HPOS=\"866\"/><String STYLE=\"italics\" WC=\"0.4600000083\" CONTENT=\"in\" HEIGHT=\"25\" WIDTH=\"27\" VPOS=\"968\" HPOS=\"877\"/><SP WIDTH=\"9\" VPOS=\"967\" HPOS=\"905\"/><String STYLE=\"italics\" WC=\"0.5099999905\" CONTENT=\"funere\" HEIGHT=\"38\" WIDTH=\"94\" VPOS=\"967\" HPOS=\"915\"/></TextLine>\r\n```\r\n\r\nSince the first-seen fields define what is passed to `arrow_writer`, this causes a KeyError when the version with the extra attributes is encountered because it doesn't expect this column. \r\n\r\nSince it's important to support streaming, I'm not sure there is a nice way to detect attributes for the whole file easily in an automatic way. The two potential ways I can see of doing it.\r\n\r\n- Do an initial pass on a batch of data to have a higher chance of encountering variations in attributes before doing the arrow write. \r\n- Do a full pass on one file (and assume that this won't change across files) \r\n\r\nI think the other way of doing this would be to allow users to define expected/wanted attributes as another loading argument. This could then be used to extract the described attributes (and make them None if not found). This requires a bit more work from the user but could be helpful. For example, in the XML above, likely, most users will only want the `WC` and `CONTENT` attributes. So they could specify this upfront and avoid loading extra data they don't need or want. I suspect this option would make more sense than making this operation automatic for the case where attributes might change. WDYT? \r\n\r\n\r\n\r\n\r\n\r\n\r\n"
] | 2022-09-20T18:42:58 | 2022-11-01T12:44:42 | null | MEMBER | null | CC: @davanstrien | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5001/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/datasets/issues/5001/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5001",
"html_url": "https://github.com/huggingface/datasets/pull/5001",
"diff_url": "https://github.com/huggingface/datasets/pull/5001.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5001.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5000/comments | https://api.github.com/repos/huggingface/datasets/issues/5000/events | https://github.com/huggingface/datasets/issues/5000 | 1,379,709,398 | I_kwDODunzps5SPLHW | 5,000 | Dataset Viewer issue for asapp/slue | {
"login": "fwu-asapp",
"id": 56092571,
"node_id": "MDQ6VXNlcjU2MDkyNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/56092571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fwu-asapp",
"html_url": "https://github.com/fwu-asapp",
"followers_url": "https://api.github.com/users/fwu-asapp/followers",
"following_url": "https://api.github.com/users/fwu-asapp/following{/other_user}",
"gists_url": "https://api.github.com/users/fwu-asapp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fwu-asapp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fwu-asapp/subscriptions",
"organizations_url": "https://api.github.com/users/fwu-asapp/orgs",
"repos_url": "https://api.github.com/users/fwu-asapp/repos",
"events_url": "https://api.github.com/users/fwu-asapp/events{/privacy}",
"received_events_url": "https://api.github.com/users/fwu-asapp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<img width=\"519\" alt=\"Capture d’écran 2022-09-20 à 22 33 47\" src=\"https://user-images.githubusercontent.com/1676121/191358952-1220cb7d-745a-4203-a66b-3c707b25038f.png\">\r\n\r\n```\r\nNot found.\r\n\r\nError code: SplitsResponseNotFound\r\n```\r\n\r\nhttps://datasets-server.huggingface.co/splits?dataset=asapp/slue\r\n\r\n```json\r\n{\"error\":\"Not found.\"}\r\n```",
"I just launched a refresh. It's weird, I don't see any entry for this dataset in the cache, it's a bug on our side. In order to try to understand what happened, did you change the visibility status from private to public, by any chance?",
"The dataset is being refreshed, please retry later.\r\n\r\n<img width=\"802\" alt=\"Capture d’écran 2022-09-20 à 22 39 46\" src=\"https://user-images.githubusercontent.com/1676121/191360072-7cc86486-4e84-4b47-8f9a-4a69fe84a5ac.png\">\r\n",
"OK. We now have an issue because the dataset cannot be streamed, and the dataset viewer relies on it.\r\n\r\nMaybe @huggingface/datasets can help:\r\n\r\n```\r\nError code: StreamingRowsError\r\nException: NotImplementedError\r\nMessage: Extraction protocol for TAR archives like 'https://public-dataset-model-store.awsdev.asapp.com/users/sshon/public/slue/slue-voxpopuli_v0.2_blind.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/responses/first_rows.py\", line 337, in get_first_rows_response\r\n rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)\r\n File \"/src/services/worker/src/worker/utils.py\", line 123, in decorator\r\n return func(*args, **kwargs)\r\n File \"/src/services/worker/src/worker/responses/first_rows.py\", line 65, in get_rows\r\n ds = load_dataset(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1739, in load_dataset\r\n return builder_instance.as_streaming_dataset(split=split)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1025, in as_streaming_dataset\r\n splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n File \"/tmp/modules-cache/datasets_modules/datasets/asapp--slue/adaa0c78233e1a1df9c2f054e690ec5fc3eaf453bd76b80fe5cbe5728e55d9b1/slue.py\", line 189, in _split_generators\r\n dl_dir = dl_manager.download_and_extract(_DL_URLS[config_name])\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 944, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 907, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 385, in map_nested\r\n return function(data_struct)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 912, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 390, in _get_extraction_protocol\r\n raise NotImplementedError(\r\n NotImplementedError: Extraction protocol for TAR archives like 'https://public-dataset-model-store.awsdev.asapp.com/users/sshon/public/slue/slue-voxpopuli_v0.2_blind.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\n```",
"Thanks @severo, \r\n\r\nDo I have to modify the python script to support streaming so that it can be previewed?\r\nIs there a document somewhere that I can follow?\r\n",
"Hi @fwu-asapp thanks for reporting, and thanks @severo for the investigation.\r\n\r\nAs explained by @severo, the preview requires that your dataset loading script supports streaming.\r\n\r\nThere are several options here:\r\n- the easiest would be to replace the source files, archived using ZIP instead TAR: the TAR format does not allow random access while streaming, but only sequential access; the ZIP files support streaming out of the box.\r\n- alternatively, to stream TAR archives you can use `dl_manager.iter_archive`: the only prerequisite is that your \"index\" files (.tsv) should have been archived before their corresponding audio files, so while iterating the content of the TAR archive, the metadata files appear first. I think this is the case for voxpopuli tar but not for voxceleb.\r\n- if your .tsv files were not archived before their corresponding audio files (I think this is the case for voxceleb), then you should extract the .tsv files and host them separately (you can host them on the same Hugging Face Hub).\r\n - you can take as example, e.g.: https://huggingface.co/datasets/vivos/blob/main/vivos.py\r\n\r\nAs an advanced approach, you can handle both streaming and non-streaming cases separately.\r\n- as for example: https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py or https://huggingface.co/datasets/google/fleurs/blob/main/fleurs.py\r\n\r\nSee related discussion:\r\n- https://github.com/huggingface/datasets/issues/4697#issuecomment-1191502492",
"Thanks @albertvillanova for your clarification. I'll talk to my collaborators to see if we can replace those files. Let me just close this issue for now.",
"FYI, after replacing the source files with the ZIP ones, the dataset viewer works well. Thanks again to @severo and @albertvillanova for your help!",
"Great! And thank you for sharing that interesting dataset!"
] | 2022-09-20T16:45:45 | 2022-09-27T07:04:03 | 2022-09-21T07:24:07 | NONE | null | ### Link
https://huggingface.co/datasets/asapp/slue/viewer/
### Description
Hi,
I wonder how to get the dataset viewer of our slue dataset to work.
Best,
Felix
### Owner
Yes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5000/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4999/comments | https://api.github.com/repos/huggingface/datasets/issues/4999/events | https://github.com/huggingface/datasets/pull/4999 | 1,379,610,030 | PR_kwDODunzps4_SQxL | 4,999 | Add EmptyDatasetError | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-20T15:28:05 | 2022-09-21T12:23:43 | 2022-09-21T12:21:24 | MEMBER | null | examples:
from the hub:
```python
Traceback (most recent call last):
File "playground/ttest.py", line 3, in <module>
print(load_dataset("lhoestq/empty"))
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1686, in load_dataset
**config_kwargs,
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1458, in load_dataset_builder
data_files=data_files,
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1171, in dataset_module_factory
raise e1 from None
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1162, in dataset_module_factory
download_mode=download_mode,
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 760, in get_module
else get_data_patterns_in_dataset_repository(hfh_dataset_info, self.data_dir)
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/data_files.py", line 678, in get_data_patterns_in_dataset_repository
) from None
datasets.data_files.EmptyDatasetError: The dataset repository at 'lhoestq/empty' doesn't contain any data file.
```
from local directory:
```python
Traceback (most recent call last):
File "playground/ttest.py", line 3, in <module>
print(load_dataset("playground/empty"))
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1686, in load_dataset
**config_kwargs,
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1458, in load_dataset_builder
data_files=data_files,
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1107, in dataset_module_factory
path, data_dir=data_dir, data_files=data_files, download_mode=download_mode
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 625, in get_module
else get_data_patterns_locally(base_path)
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/data_files.py", line 460, in get_data_patterns_locally
raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data file") from None
datasets.data_files.EmptyDatasetError: The directory at playground/empty doesn't contain any data file
```
Close https://github.com/huggingface/datasets/issues/4995 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4999/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4999",
"html_url": "https://github.com/huggingface/datasets/pull/4999",
"diff_url": "https://github.com/huggingface/datasets/pull/4999.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4999.patch",
"merged_at": "2022-09-21T12:21:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4998 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4998/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4998/comments | https://api.github.com/repos/huggingface/datasets/issues/4998/events | https://github.com/huggingface/datasets/pull/4998 | 1,379,466,717 | PR_kwDODunzps4_Ryp3 | 4,998 | Don't add a tag on the Hub on release | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-20T13:54:57 | 2022-09-20T14:11:46 | 2022-09-20T14:08:54 | MEMBER | null | Datasets with no namespace on the Hub have tags to redirect to the version of datasets where they come from.
I’m about to remove them all because I think it looks bad/unexpected in the UI and it’s not actually useful
Therefore I'm also disabling tagging.
Note that the CI job will be completely removed in https://github.com/huggingface/datasets/pull/4974 anyway | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4998/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4998",
"html_url": "https://github.com/huggingface/datasets/pull/4998",
"diff_url": "https://github.com/huggingface/datasets/pull/4998.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4998.patch",
"merged_at": "2022-09-20T14:08:54"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4997/comments | https://api.github.com/repos/huggingface/datasets/issues/4997/events | https://github.com/huggingface/datasets/pull/4997 | 1,379,430,711 | PR_kwDODunzps4_RrBU | 4,997 | Add support for parsing JSON files in array form | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-20T13:31:26 | 2022-09-20T15:42:40 | 2022-09-20T15:40:06 | CONTRIBUTOR | null | Support parsing JSON files in the array form (top-level object is an array). For simplicity, `json.load` is used for decoding. This means the entire file is loaded into memory. If requested, we can optimize this by introducing a param similar to `lines` in [`pandas.read_json`](https://pandas.pydata.org/docs/reference/api/pandas.read_json.html), which, if set to `True`, would allow us to read in chunks.
Fixes https://github.com/huggingface/datasets/issues/4963
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4997/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4997",
"html_url": "https://github.com/huggingface/datasets/pull/4997",
"diff_url": "https://github.com/huggingface/datasets/pull/4997.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4997.patch",
"merged_at": "2022-09-20T15:40:05"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4996/comments | https://api.github.com/repos/huggingface/datasets/issues/4996/events | https://github.com/huggingface/datasets/issues/4996 | 1,379,345,161 | I_kwDODunzps5SNyMJ | 4,996 | Dataset Viewer issue for Jean-Baptiste/wikiner_fr | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The script uses `Dataset.load_from_disk`, which as you can expect, doesn't work in streaming mode.\r\n\r\nIt would probably be more practical to load the dataset locally using `Dataset.load_from_disk` first and then `push_to_hub` to upload it in Parquet on the Hub",
"I've transferred this issue to the Hub repo: https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/discussions/3\r\n\r\nI'm closing this."
] | 2022-09-20T12:32:07 | 2022-09-27T12:35:44 | 2022-09-27T12:35:44 | CONTRIBUTOR | null | ### Link
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr
### Description
```
Error code: StreamingRowsError
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/responses/first_rows.py", line 337, in get_first_rows_response
rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)
File "/src/services/worker/src/worker/utils.py", line 123, in decorator
return func(*args, **kwargs)
File "/src/services/worker/src/worker/responses/first_rows.py", line 77, in get_rows
rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 718, in __iter__
for key, example in self._iter():
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 708, in _iter
yield from ex_iterable
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 112, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/tmp/modules-cache/datasets_modules/datasets/Jean-Baptiste--wikiner_fr/683a580ba6ec769d508f7dfc603a651667b0ed3817b1ae5bfd45f97cc024923f/wikiner_fr.py", line 165, in _generate_examples
dataset = Dataset.load_from_disk(filepath)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1210, in load_from_disk
with open(Path(dataset_path, config.DATASET_STATE_JSON_FILENAME).as_posix(), encoding="utf-8") as state_file:
FileNotFoundError: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
```
Is it an error with the dataset script, or the data itself, @huggingface/datasets?
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/tree/main
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4996/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4995/comments | https://api.github.com/repos/huggingface/datasets/issues/4995/events | https://github.com/huggingface/datasets/issues/4995 | 1,379,108,482 | I_kwDODunzps5SM4aC | 4,995 | Get a specific Exception when the dataset has no data | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-09-20T09:31:59 | 2022-09-21T12:21:25 | 2022-09-21T12:21:25 | CONTRIBUTOR | null | In the dataset viewer on the Hub (https://huggingface.co/datasets/glue/viewer), we would like (https://github.com/huggingface/moon-landing/issues/3882) to show a specific message when the repository lacks any data files.
In that case, instead of showing a complex traceback, we want to show a call to action to help the user upload data.
To do that, it would be very helpful to know for sure that the repository is missing any (supported) data files.
It could be done by raising a custom exception, for example, `NoDataError`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4995/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4994/comments | https://api.github.com/repos/huggingface/datasets/issues/4994/events | https://github.com/huggingface/datasets/issues/4994 | 1,379,084,015 | I_kwDODunzps5SMybv | 4,994 | delete the hardcoded license list in `datasets` | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-09-20T09:14:41 | 2022-09-22T11:45:47 | 2022-09-22T11:45:47 | MEMBER | null | > Feel free to delete the license list in `datasets` [...]
>
> Also FYI in #4926 I also removed all the validation steps anyway (language, license, types etc.)
_Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238401662_
> [...], in my opinion we can just delete this file from `datasets`, the validation is happening hub-side anyways now?
_Originally posted by @julien-c in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238390659_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4994/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4994/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4993/comments | https://api.github.com/repos/huggingface/datasets/issues/4993/events | https://github.com/huggingface/datasets/pull/4993 | 1,379,044,435 | PR_kwDODunzps4_QYas | 4,993 | fix: avoid casting tuples after Dataset.map | {
"login": "szmoro",
"id": 5697926,
"node_id": "MDQ6VXNlcjU2OTc5MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5697926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/szmoro",
"html_url": "https://github.com/szmoro",
"followers_url": "https://api.github.com/users/szmoro/followers",
"following_url": "https://api.github.com/users/szmoro/following{/other_user}",
"gists_url": "https://api.github.com/users/szmoro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/szmoro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/szmoro/subscriptions",
"organizations_url": "https://api.github.com/users/szmoro/orgs",
"repos_url": "https://api.github.com/users/szmoro/repos",
"events_url": "https://api.github.com/users/szmoro/events{/privacy}",
"received_events_url": "https://api.github.com/users/szmoro/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-20T08:45:16 | 2022-09-20T16:11:27 | 2022-09-20T13:08:29 | CONTRIBUTOR | null | This PR updates features.py to avoid casting tuples to lists when reading the results of Dataset.map as suggested by @lhoestq [here](https://github.com/huggingface/datasets/issues/4676#issuecomment-1187371367) in https://github.com/huggingface/datasets/issues/4676.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4993/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4993/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4993",
"html_url": "https://github.com/huggingface/datasets/pull/4993",
"diff_url": "https://github.com/huggingface/datasets/pull/4993.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4993.patch",
"merged_at": "2022-09-20T13:08:29"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4992/comments | https://api.github.com/repos/huggingface/datasets/issues/4992/events | https://github.com/huggingface/datasets/pull/4992 | 1,379,031,842 | PR_kwDODunzps4_QVw4 | 4,992 | Support streaming iwslt2017 dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-20T08:35:41 | 2022-09-20T09:27:55 | 2022-09-20T09:15:24 | MEMBER | null | Support streaming iwslt2017 dataset.
Once this PR is merged:
- [x] Remove old ".tgz" data files from the Hub. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4992/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4992",
"html_url": "https://github.com/huggingface/datasets/pull/4992",
"diff_url": "https://github.com/huggingface/datasets/pull/4992.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4992.patch",
"merged_at": "2022-09-20T09:15:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4991 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4991/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4991/comments | https://api.github.com/repos/huggingface/datasets/issues/4991/events | https://github.com/huggingface/datasets/pull/4991 | 1,378,898,752 | PR_kwDODunzps4_P5hI | 4,991 | Fix missing tags in dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-20T06:42:07 | 2022-09-22T12:25:32 | 2022-09-20T07:37:30 | MEMBER | null | Fix missing tags in dataset cards:
- aeslc
- empathetic_dialogues
- event2Mind
- gap
- iwslt2017
- newsgroup
- qa4mre
- scicite
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
- #4908
- #4921
- #4931
- #4979 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4991/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4991",
"html_url": "https://github.com/huggingface/datasets/pull/4991",
"diff_url": "https://github.com/huggingface/datasets/pull/4991.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4991.patch",
"merged_at": "2022-09-20T07:37:30"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4990 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4990/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4990/comments | https://api.github.com/repos/huggingface/datasets/issues/4990/events | https://github.com/huggingface/datasets/issues/4990 | 1,378,120,806 | I_kwDODunzps5SJHRm | 4,990 | "no-token" is passed to `huggingface_hub` when token is `None` | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @Wauplin, thanks for raising this potential issue.\r\n\r\nThe choice of passing `\"no-token\"` instead of `None` was made in this PR:\r\n- #4536 \r\n\r\nAccording to the PR description, the reason why it is passed is to avoid that `HfApi.dataset_info` uses the local token when no token should be used.",
"Hi @albertvillanova , thanks for finding the original issue :+1: \r\n\r\nAs of next release of `huggingface_hub`, the `token` argument will be deprecated in favor of the `use_auth_token` argument in `dataset_info` method. This change as been done by @SBrandeis in https://github.com/huggingface/huggingface_hub/pull/928. `use_auth_token` is a bit different and allow the case \"don't sent the cached token by default\".\r\n\r\nIf you want to strictly avoid sending the cached token from `datasets`, you can use:\r\n```py\r\n# token=token if token else \"no-token\", <- will fail because token is not valid\r\n\r\nuse_auth_token=token if token else False, # using the new `use_auth_token` parameter\r\n```\r\n\r\nAnd as a note, I am currently updating the \"don't send the cached token by default\"-rule to \"don't send the cached token on public repos by default but use it in private ones\" in https://github.com/huggingface/huggingface_hub/pull/1064. This will not change the fact that `use_auth_token=False` doesn't send the token at all.\r\n",
"What is current strategy in term of updating `huggingface_hub` version in `datasets` ? I don't want to break stuff in the next release so let's find a proper solution :) ",
"As soon as `token` is deprecated and hfh has a new release, we'll update `datasets` to use the new argument instead. Does it sound good to you ?",
"Perfect :ok_hand: ",
"Hi @Wauplin, thanks for the warning about the deprecation of `token` in favor of `use_auth_token`.\r\n\r\nIndeed, in datasets we use internally `use_auth_token`, which in this case was transformed to `token` to call `HfApi.dataset_info`:\r\nhttps://github.com/huggingface/datasets/blob/1a9385d7cc8a3241b44015145ef56a230fdadc51/src/datasets/load.py#L747\r\n\r\nTherefore, for the new hfh release, the fix will be trivial: we will pass directly `use_auth_token`.\r\n\r\nAs discussed during our meeting yesterday, due to the fact that at datasets we support multiple hfh versions, I think we should handle passing `token` or `use_auth_token` depending on the hfh version."
] | 2022-09-19T15:14:40 | 2022-09-30T09:16:00 | 2022-09-30T09:16:00 | CONTRIBUTOR | null | ## Describe the bug
In the 2 lines listed below, a token is passed to `huggingface_hub` to get information from a dataset. If no token is provided, a "no-token" string is passed. What is the purpose of it ? If no real, I would prefer if the `None` value could be sent directly to be handle by `huggingface_hub`. I feel that here it is working because we assume the token will never be validated.
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L753
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L1121
## Expected results
Pass `token=None` to `huggingface_hub`.
## Actual results
`token="no-token"` is passed.
## Environment info
`huggingface_hub v0.10.0dev` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4990/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4989/comments | https://api.github.com/repos/huggingface/datasets/issues/4989/events | https://github.com/huggingface/datasets/issues/4989 | 1,376,832,233 | I_kwDODunzps5SEMrp | 4,989 | Running add_column() seems to corrupt existing sequence-type column info | {
"login": "derek-rocheleau",
"id": 93728165,
"node_id": "U_kgDOBZYtpQ",
"avatar_url": "https://avatars.githubusercontent.com/u/93728165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/derek-rocheleau",
"html_url": "https://github.com/derek-rocheleau",
"followers_url": "https://api.github.com/users/derek-rocheleau/followers",
"following_url": "https://api.github.com/users/derek-rocheleau/following{/other_user}",
"gists_url": "https://api.github.com/users/derek-rocheleau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/derek-rocheleau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/derek-rocheleau/subscriptions",
"organizations_url": "https://api.github.com/users/derek-rocheleau/orgs",
"repos_url": "https://api.github.com/users/derek-rocheleau/repos",
"events_url": "https://api.github.com/users/derek-rocheleau/events{/privacy}",
"received_events_url": "https://api.github.com/users/derek-rocheleau/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Nevermind, I was incorrect."
] | 2022-09-17T17:42:05 | 2022-09-19T12:54:54 | 2022-09-19T12:54:54 | NONE | null | I have a dataset that contains a column ("foo") that is a sequence type of length 4. So when I run .to_pandas() on it, the resulting dataframe correctly contains 4 columns - foo_0, foo_1, foo_2, foo_3. So the 1st row of the dataframe might look like:
ds = load_dataset(...)
df = ds.to_pandas()
df:
foo_0 | foo_1 | foo_2 | foo_3
0.0 | 1.0 | 2.0 | 3.0
If I run .add_column("new_col", data) on the dataset, and then .to_pandas() on the resulting new dataset, the resulting dataframe contains only 2 columns - foo, new_col. The values in column foo are lists of length 4, the 4 elements that should have been split into separate columns. Dataframe 1st row would be:
ds = load_dataset(...)
new_ds = ds.add_column("new_col", data)
df = new_ds.to_pandas()
df:
foo | new_col
[0.0, 1.0, 2.0, 3.0] | new_val
I've explored the 2 datasets in a debugger and haven't noticed any changes to any attributes related to the foo column, but I can't determine why the dataframes are so different. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4989/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4988/comments | https://api.github.com/repos/huggingface/datasets/issues/4988/events | https://github.com/huggingface/datasets/issues/4988 | 1,376,096,584 | I_kwDODunzps5SBZFI | 4,988 | Add `IterableDataset.from_generator` to the API | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | {
"login": "hamid-vakilzadeh",
"id": 56002455,
"node_id": "MDQ6VXNlcjU2MDAyNDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamid-vakilzadeh",
"html_url": "https://github.com/hamid-vakilzadeh",
"followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers",
"following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions",
"organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs",
"repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos",
"events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "hamid-vakilzadeh",
"id": 56002455,
"node_id": "MDQ6VXNlcjU2MDAyNDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamid-vakilzadeh",
"html_url": "https://github.com/hamid-vakilzadeh",
"followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers",
"following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions",
"organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs",
"repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos",
"events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"#take",
"Thanks @hamid-vakilzadeh ! Let us know if you have some questions or if we can help",
"Thank you! I certainly will reach out if I need any help."
] | 2022-09-16T15:19:41 | 2022-10-05T12:10:49 | 2022-10-05T12:10:49 | CONTRIBUTOR | null | We've just added `Dataset.from_generator` to the API. It would also be cool to add `IterableDataset.from_generator` to support creating an iterable dataset from a generator.
cc @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4988/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4987/comments | https://api.github.com/repos/huggingface/datasets/issues/4987/events | https://github.com/huggingface/datasets/pull/4987 | 1,376,006,477 | PR_kwDODunzps4_GlIu | 4,987 | Embed image/audio data in dl_and_prepare parquet | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-16T14:09:27 | 2022-09-16T16:24:47 | 2022-09-16T16:22:35 | MEMBER | null | Embed the bytes of the image or audio files in the Parquet files directly, instead of having a "path" that points to a local file.
Indeed Parquet files are often used to share data or to be used by workers that may not have access to the local files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4987/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4987",
"html_url": "https://github.com/huggingface/datasets/pull/4987",
"diff_url": "https://github.com/huggingface/datasets/pull/4987.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4987.patch",
"merged_at": "2022-09-16T16:22:35"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4986/comments | https://api.github.com/repos/huggingface/datasets/issues/4986/events | https://github.com/huggingface/datasets/pull/4986 | 1,375,895,035 | PR_kwDODunzps4_GNSd | 4,986 | [doc] Fix broken snippet that had too many quotes | {
"login": "tomaarsen",
"id": 37621491,
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaarsen",
"html_url": "https://github.com/tomaarsen",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Spent the day familiarising myself with the huggingface line of products, and happened to run into some small issues here and there. Magically, I've found exactly one small issue in `transformers`, one in `accelerate` and now one in `datasets`, hah!\r\n\r\nAs for this PR, the issue seems solved according to the [new PR documentation](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4986/en/process#map):\r\n![image](https://user-images.githubusercontent.com/37621491/190646405-6afa06fa-9eac-48f6-ab30-2677944fb7b6.png)\r\n"
] | 2022-09-16T12:41:07 | 2022-09-16T22:12:21 | 2022-09-16T17:32:14 | CONTRIBUTOR | null | Hello!
### Pull request overview
* Fix broken snippet in https://huggingface.co/docs/datasets/main/en/process that has too many quotes
### Details
The snippet in question can be found here: https://huggingface.co/docs/datasets/main/en/process#map
This screenshot shows the issue, there is a quote too many, causing the snippet to be colored incorrectly:
![image](https://user-images.githubusercontent.com/37621491/190640627-f7587362-0e44-4464-a5d1-a0b98df6986f.png)
The change speaks for itself.
Thank you for the detailed documentation, by the way.
- Tom Aarsen
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4986/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4986",
"html_url": "https://github.com/huggingface/datasets/pull/4986",
"diff_url": "https://github.com/huggingface/datasets/pull/4986.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4986.patch",
"merged_at": "2022-09-16T17:32:14"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4985/comments | https://api.github.com/repos/huggingface/datasets/issues/4985/events | https://github.com/huggingface/datasets/pull/4985 | 1,375,807,768 | PR_kwDODunzps4_F6kU | 4,985 | Prefer split patterns from directories over split patterns from filenames | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Can we merge this one since the issue this PR fixes was reported for the second time? I also think we don't need a test for this simple change.",
"@mariosasko sure! could you please approve it? ",
"Hi there @polinaeterna @mariosasko! I have installed 5.2.3.dev0, which should have this fix. Unfortunately, I am still getting the error:\r\n`ValueError: Unknown split \"validation\". Should be one of ['train'].` When I call `load_dataset(\"csv\", data_files=files, split=split)`\r\n\r\nAny help would be greatly appreciated!"
] | 2022-09-16T11:20:40 | 2022-11-02T11:54:28 | 2022-09-29T08:07:49 | CONTRIBUTOR | null | related to https://github.com/huggingface/datasets/issues/4895
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4985/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4985/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4985",
"html_url": "https://github.com/huggingface/datasets/pull/4985",
"diff_url": "https://github.com/huggingface/datasets/pull/4985.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4985.patch",
"merged_at": "2022-09-29T08:07:49"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4984/comments | https://api.github.com/repos/huggingface/datasets/issues/4984/events | https://github.com/huggingface/datasets/pull/4984 | 1,375,690,330 | PR_kwDODunzps4_FhTm | 4,984 | docs: ✏️ add links to the Datasets API | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"OK, thanks @lhoestq. I'll close this PR, and come back to it with @stevhliu once we work on https://github.com/huggingface/datasets-server/issues/568"
] | 2022-09-16T09:34:12 | 2022-09-16T13:10:14 | 2022-09-16T13:07:33 | CONTRIBUTOR | null | I added some links to the Datasets API in the docs. See https://github.com/huggingface/datasets-server/pull/566 for a companion PR in the datasets-server. The idea is to improve the discovery of the API through the docs.
I'm a bit shy about pasting a lot of links to the API in the docs, so it's minimal for now. I'm interested in ideas to integrate the API better in these docs without being too much. cc @lhoestq @julien-c @albertvillanova @stevhliu. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4984/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4984",
"html_url": "https://github.com/huggingface/datasets/pull/4984",
"diff_url": "https://github.com/huggingface/datasets/pull/4984.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4984.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4983/comments | https://api.github.com/repos/huggingface/datasets/issues/4983/events | https://github.com/huggingface/datasets/issues/4983 | 1,375,667,654 | I_kwDODunzps5R_wXG | 4,983 | How to convert torch.utils.data.Dataset to huggingface dataset? | {
"login": "DEROOCE",
"id": 77595952,
"node_id": "MDQ6VXNlcjc3NTk1OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/77595952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DEROOCE",
"html_url": "https://github.com/DEROOCE",
"followers_url": "https://api.github.com/users/DEROOCE/followers",
"following_url": "https://api.github.com/users/DEROOCE/following{/other_user}",
"gists_url": "https://api.github.com/users/DEROOCE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DEROOCE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DEROOCE/subscriptions",
"organizations_url": "https://api.github.com/users/DEROOCE/orgs",
"repos_url": "https://api.github.com/users/DEROOCE/repos",
"events_url": "https://api.github.com/users/DEROOCE/events{/privacy}",
"received_events_url": "https://api.github.com/users/DEROOCE/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! I think you can use the newly-added `from_generator` method for that:\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen():\r\n for idx in len(torch_dataset):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n ## or if it's an IterableDataset\r\n # for ex in torch_dataset:\r\n # yield ex\r\n\r\ndset = Dataset.from_generator(gen)\r\n```",
"Maybe `Dataset.from_list` can work as well no ?\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndset = Dataset.from_list(torch_dataset)\r\n```",
"> ```python\r\n> from datasets import Dataset\r\n> \r\n> def gen():\r\n> for idx in len(torch_dataset):\r\n> yield torch_dataset[idx] # this has to be a dictionary\r\n> ## or if it's an IterableDataset\r\n> # for ex in torch_dataset:\r\n> # yield ex\r\n> \r\n> dset = Dataset.from_generator(gen)\r\n> ```\r\n\r\nI try to use `Dataset.from_generator()` method, and it returns an error:\r\n```bash\r\nAttributeError: type object 'Dataset' has no attribute 'from_generator'\r\n```\r\nAnd I think it maybe the version of my datasets package is out-of-date, so I update it\r\n```bash\r\npip install --upgrade datasets\r\n```\r\nBut after that, the code still return the above Error. ",
"> ```python\r\n> dset = Dataset.from_list(torch_dataset)\r\n> ```\r\n\r\nIt seems that Dataset also has no `from_list` method 😂\r\n```bash\r\nAttributeError: type object 'Dataset' has no attribute 'from_list'\r\n```",
"> I look through the huggingface dataset docs, and it seems that there is no offical support function to convert `torch.utils.data.Dataset` to huggingface dataset. However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below:\r\n> \r\n> ```python\r\n> from datasets import Dataset\r\n> data = [[1, 2],[3, 4]]\r\n> ds = Dataset.from_dict({\"data\": data})\r\n> ds = ds.with_format(\"torch\")\r\n> ds[0]\r\n> ds[:2]\r\n> ```\r\n> \r\n> So is there something I miss, or there IS no function to convert `torch.utils.data.Dataset` to huggingface dataset. If so, is there any way to do this convert? Thanks.\r\n\r\nMy dummy code is like:\r\n```python\r\nimport os\r\nimport json\r\nfrom torch.utils import data\r\nimport datasets\r\n\r\ndef gen(torch_dataset):\r\n for idx in len(torch_dataset):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n\r\nclass MyDataset(data.Dataset):\r\n def __init__(self, path):\r\n self.dict = []\r\n for line in open(path, 'r', encoding='utf-8'):\r\n j_dict = json.loads(line)\r\n self.dict.append(j_dict['context'])\r\n \r\n def __getitem__(self, idx):\r\n return self.dict[idx]\r\n\r\n def __len__(self):\r\n return len(self.dict)\r\n\r\nroot_path = os.path.dirname(os.path.abspath(__file__))\r\npath = os.path.join(root_path, 'dataset', 'train.json')\r\ntorch_dataset = MyDataset(path)\r\n\r\ndit = []\r\nfor line in open(path, 'r', encoding='utf-8'):\r\n j_dict = json.loads(line)\r\n dit.append(j_dict['context'])\r\ndset1 = datasets.Dataset.from_list(dit)\r\nprint(dset1)\r\ndset2 = datasets.Dataset.from_generator(gen)\r\nprint(dset2)\r\n```",
"We're releasing `from_generator` and `from_list` today :)\r\nIn the meantime you can play with them by installing `datasets` from source",
"> We're releasing `from_generator` and `from_list` today :) In the meantime you can play with them by installing `datasets` from source\r\n\r\nThanks a lot for your work!",
"> > I look through the huggingface dataset docs, and it seems that there is no offical support function to convert `torch.utils.data.Dataset` to huggingface dataset. However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below:\r\n> > ```python\r\n> > from datasets import Dataset\r\n> > data = [[1, 2],[3, 4]]\r\n> > ds = Dataset.from_dict({\"data\": data})\r\n> > ds = ds.with_format(\"torch\")\r\n> > ds[0]\r\n> > ds[:2]\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > So is there something I miss, or there IS no function to convert `torch.utils.data.Dataset` to huggingface dataset. If so, is there any way to do this convert? Thanks.\r\n> \r\n> My dummy code is like:\r\n> \r\n> ```python\r\n> import os\r\n> import json\r\n> from torch.utils import data\r\n> import datasets\r\n> \r\n> def gen(torch_dataset):\r\n> for idx in len(torch_dataset):\r\n> yield torch_dataset[idx] # this has to be a dictionary\r\n> \r\n> class MyDataset(data.Dataset):\r\n> def __init__(self, path):\r\n> self.dict = []\r\n> for line in open(path, 'r', encoding='utf-8'):\r\n> j_dict = json.loads(line)\r\n> self.dict.append(j_dict['context'])\r\n> \r\n> def __getitem__(self, idx):\r\n> return self.dict[idx]\r\n> \r\n> def __len__(self):\r\n> return len(self.dict)\r\n> \r\n> root_path = os.path.dirname(os.path.abspath(__file__))\r\n> path = os.path.join(root_path, 'dataset', 'train.json')\r\n> torch_dataset = MyDataset(path)\r\n> \r\n> dit = []\r\n> for line in open(path, 'r', encoding='utf-8'):\r\n> j_dict = json.loads(line)\r\n> dit.append(j_dict['context'])\r\n> dset1 = datasets.Dataset.from_list(dit)\r\n> print(dset1)\r\n> dset2 = datasets.Dataset.from_generator(gen)\r\n> print(dset2)\r\n> ```\r\nHi, when I am using this code to build my own dataset, ` datasets.Dataset.from_generator(gen)` report `TypeError: cannot pickle generator object` whre MyDataset returns a dict like {'image': bytes, 'text': string}. How can I resolve this? Thanks a lot!",
"Hi ! Right now generator functions are expected to be picklable, so that `datasets` can hash it and use the hash to cache the resulting Dataset on disk. Maybe this can be improved.\r\n\r\nIn the meantime, can you check that you're not using unpickable objects. In your case it looks like you're using a generator object that is unpickable. It might come from an opened file, e.g. this doesn't work:\r\n```python\r\nwith open(...) as f:\r\n\r\n def gen():\r\n for x in f:\r\n yield json.loads(x)\r\n\r\n ds = Dataset.from_generator(gen)\r\n```\r\nbut this does work:\r\n```python\r\ndef gen():\r\n with open(...) as f:\r\n for x in f:\r\n yield json.loads(x)\r\n\r\nds = Dataset.from_generator(gen)\r\n```",
"> Hi ! Right now generator functions are expected to be picklable, so that `datasets` can hash it and use the hash to cache the resulting Dataset on disk. Maybe this can be improved.\r\n> \r\n> In the meantime, can you check that you're not using unpickable objects. In your case it looks like you're using a generator object that is unpickable. It might come from an opened file, e.g. this doesn't work:\r\n> \r\n> ```python\r\n> with open(...) as f:\r\n> \r\n> def gen():\r\n> for x in f:\r\n> yield json.loads(x)\r\n> \r\n> ds = Dataset.from_generator(gen)\r\n> ```\r\n> \r\n> but this does work:\r\n> \r\n> ```python\r\n> def gen():\r\n> with open(...) as f:\r\n> for x in f:\r\n> yield json.loads(x)\r\n> \r\n> ds = Dataset.from_generator(gen)\r\n> ```\r\n\r\nThanks a lot! That's the reason why I have encountered this issue. Sorry for bothering you again with another problem, since my dataset is large and I use IterableDataset.from_generator which has no attribute with_transform, how can I equip it with some customed preprocessings like Dataset.from_generator? Should I move the preprocessing to the my torch Dataset?",
"Iterable datasets are lazy: exactly like `with_transform` they apply processing on the fly when accessing the examples.\r\n\r\nTherefore you can use `my_iterable_dataset.map()` instead :)",
"@lhoestq thanks a lot and I have successfully made it work~",
"@lhoestq I am having a similar issue. Can you help me understand which kinds of generators are picklable? I previously thought that no generators are picklable so I'm intrigued to hear this.",
"Generator functions are generally picklable. E.g.\r\n```python\r\nimport dill as pickle\r\n\r\ndef generator_fn():\r\n for i in range(10):\r\n yield i\r\n\r\npickle.dumps(generator_fn)\r\n```\r\n\r\nhowever generators are not picklable\r\n```python\r\ngenerator = generator_fn()\r\npickle.dumps(generator)\r\n# TypeError: cannot pickle 'generator' object\r\n```\r\n\r\nThough it can happen that some generator functions are not recursively picklable if they use global objects that are not picklable:\r\n```python\r\ndef generator_fn_not_picklable():\r\n for i in generator:\r\n yield i\r\n\r\npickle.dumps(generator_fn_not_picklable, recurse=True)\r\n# TypeError: cannot pickle 'generator' object\r\n````"
] | 2022-09-16T09:15:10 | 2023-05-05T14:20:07 | 2022-09-20T11:23:43 | NONE | null | I look through the huggingface dataset docs, and it seems that there is no offical support function to convert `torch.utils.data.Dataset` to huggingface dataset. However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below:
```python
from datasets import Dataset
data = [[1, 2],[3, 4]]
ds = Dataset.from_dict({"data": data})
ds = ds.with_format("torch")
ds[0]
ds[:2]
```
So is there something I miss, or there IS no function to convert `torch.utils.data.Dataset` to huggingface dataset. If so, is there any way to do this convert?
Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4983/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4983/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4982/comments | https://api.github.com/repos/huggingface/datasets/issues/4982/events | https://github.com/huggingface/datasets/issues/4982 | 1,375,604,693 | I_kwDODunzps5R_g_V | 4,982 | Create dataset_infos.json with VALIDATION and TEST splits | {
"login": "skalinin",
"id": 26695348,
"node_id": "MDQ6VXNlcjI2Njk1MzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/26695348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skalinin",
"html_url": "https://github.com/skalinin",
"followers_url": "https://api.github.com/users/skalinin/followers",
"following_url": "https://api.github.com/users/skalinin/following{/other_user}",
"gists_url": "https://api.github.com/users/skalinin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skalinin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skalinin/subscriptions",
"organizations_url": "https://api.github.com/users/skalinin/orgs",
"repos_url": "https://api.github.com/users/skalinin/repos",
"events_url": "https://api.github.com/users/skalinin/events{/privacy}",
"received_events_url": "https://api.github.com/users/skalinin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"@mariosasko could you help me with this issue? we've started the discussion from [here](https://github.com/huggingface/datasets/issues/4895#issuecomment-1248227130)",
"Hi again! Can you please pass the directory name containing the dataset script instead of the script name to `datasets-cli test`?",
"Yes, it worked! thanks a lot"
] | 2022-09-16T08:21:19 | 2022-09-28T07:59:39 | 2022-09-28T07:59:39 | NONE | null | The problem is described in that [issue](https://github.com/huggingface/datasets/issues/4895#issuecomment-1247975569).
> When I try to create data_infos.json using datasets-cli test Peter.py --save_infos --all_configs I get an error:
> ValueError: Unknown split "test". Should be one of ['train'].
>
> The data_infos.json is created perfectly fine when I use only one split - datasets.Split.TRAIN
>
> You can find the code here: https://huggingface.co/datasets/sberbank-ai/Peter/tree/add_splits (add_splits branch)
I tried to clear the cache folder, than I got an another error. I run:
```
git clone https://huggingface.co/datasets/sberbank-ai/Peter
cd Peter
git checkout add_splits # switch to a add_splits branch
rm dataset_infos.json # remove local dataset_infos.json
rm -r ~/.cache/huggingface # remove cached dataset_infos.json
datasets-cli test Peter.py --save_infos --all_configs # trying to create new dataset_infos.json
```
The error message:
```
Using custom data configuration default
Testing builder 'default' (1/1)
Downloading and preparing dataset peter/default to /Users/kalinin/.cache/huggingface/datasets/peter/default/0.0.0/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d...
Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 5160.63it/s]
Extracting data files: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last):
File "/usr/local/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/usr/local/lib/python3.9/site-packages/datasets/commands/test.py", line 137, in run
builder.download_and_prepare(
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 771, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/kalinin/.cache/huggingface/modules/datasets_modules/datasets/Peter/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d/Peter.py", line 23, in _split_generators
data_files = dl_manager.download_and_extract(_URLS)
File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 431, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 403, in extract
extracted_paths = map_nested(
File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 393, in map_nested
mapped = [
File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 394, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 330, in _single_map_nested
return function(data_struct)
File "/usr/local/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 213, in cached_path
output_path = ExtractManager(cache_dir=download_config.cache_dir).extract(
File "/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 46, in extract
self.extractor.extract(input_path, output_path, extractor_format)
File "/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 263, in extract
with FileLock(lock_path):
File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 399, in __init__
max_filename_length = os.statvfs(os.path.dirname(lock_file)).f_namemax
FileNotFoundError: [Errno 2] No such file or directory: ''
Exception ignored in: <function BaseFileLock.__del__ at 0x11caeec10>
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 328, in __del__
self.release(force=True)
File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 303, in release
with self._thread_lock:
AttributeError: 'UnixFileLock' object has no attribute '_thread_lock'
Extracting data files: 0%| | 0/4 [00:00<?, ?it/s]
```
Can you help me please?
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.9.5
- PyArrow version: 9.0.0
- Pandas version: 1.2.4
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4982/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4982/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4981/comments | https://api.github.com/repos/huggingface/datasets/issues/4981/events | https://github.com/huggingface/datasets/issues/4981 | 1,375,086,773 | I_kwDODunzps5R9ii1 | 4,981 | Can't create a dataset with `float16` features | {
"login": "dconathan",
"id": 15098095,
"node_id": "MDQ6VXNlcjE1MDk4MDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/15098095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dconathan",
"html_url": "https://github.com/dconathan",
"followers_url": "https://api.github.com/users/dconathan/followers",
"following_url": "https://api.github.com/users/dconathan/following{/other_user}",
"gists_url": "https://api.github.com/users/dconathan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dconathan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dconathan/subscriptions",
"organizations_url": "https://api.github.com/users/dconathan/orgs",
"repos_url": "https://api.github.com/users/dconathan/repos",
"events_url": "https://api.github.com/users/dconathan/events{/privacy}",
"received_events_url": "https://api.github.com/users/dconathan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi @dconathan, thanks for reporting.\r\n\r\nWe rely on Arrow as a backend, and as far as I know currently support for `float16` in Arrow is not fully implemented in Python (C++), hence the `ArrowNotImplementedError` you get.\r\n\r\nSee, e.g.: https://arrow.apache.org/docs/status.html?highlight=float16#data-types",
"Thanks for the link…. didn’t realize arrow didn’t support it yet. Should it be removed from https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_classes#datasets.Value until Arrow supports it?",
"Yes, you are right: maybe we should either remove it from our docs or add a comment explaining the issue.\r\n\r\nThe thing is that in Arrow it is partially supported: you can create `float16` values, but you can't cast them from/to other types. And current implementation of `Value` always tries to perform a cast from `float64` to `float16`.",
"Maybe we can just add a note in the `Value` documentation ?",
"Would you accept a PR to fix this? @lhoestq Do you have an idea of how hard it would be to fix?",
"I think the issue comes mostly from pyarrow not supporting `float16` completely.\r\n\r\nFor example you stil can't cast from/to `float16`\r\n```python\r\nimport numpy as np\r\nimport pyarrow as pa\r\n\r\npa.array(range(5)).cast(pa.float16())\r\n# ArrowNotImplementedError: Unsupported cast from int64 to halffloat using function cast_half_float\r\npa.array(range(5), pa.float32()).cast(pa.float16())\r\n# ArrowNotImplementedError: Unsupported cast from float to halffloat using function cast_half_float\r\npa.array(range(5), pa.float16())\r\n# ArrowTypeError: Expected np.float16 instance\r\npa.array(np.arange(5, dtype=np.float16())).cast(pa.float32())\r\n# ArrowNotImplementedError: Unsupported cast from halffloat to float using function cast_float\r\n```",
"Hmm it seems like we can either:\r\n1. try to fix pyarrow upstream\r\n2. half-support float16 with some workaround to make sure we don't ever do casting internally\r\n"
] | 2022-09-15T21:03:24 | 2023-03-22T21:40:09 | null | CONTRIBUTOR | null | ## Describe the bug
I can't create a dataset with `float16` features.
I understand from the traceback that this is a `pyarrow` error, but I don't see anywhere in the `datasets` documentation about how to successfully do this. Is it actually supported? I've tried older versions of `pyarrow` as well with the same exact error.
The bug seems to arise from `datasets` casting the values to `double` and then `pyarrow` doesn't know how to convert those back to `float16`... does that sound right? Is there a way to bypass this since it's not necessary in the `numpy` and `torch` cases?
Thanks!
## Steps to reproduce the bug
All of the following raise the following error with the same exact (as far as I can tell) traceback:
```python
ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float
```
```python
from datasets import Dataset, Features, Value
Dataset.from_dict({"x": [0.0, 1.0, 2.0]}, features=Features(x=Value("float16")))
import numpy as np
Dataset.from_dict({"x": np.arange(3, dtype=np.float16)}, features=Features(x=Value("float16")))
import torch
Dataset.from_dict({"x": torch.arange(3).to(torch.float16)}, features=Features(x=Value("float16")))
```
## Expected results
A dataset with `float16` features is successfully created.
## Actual results
```python
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
Cell In [14], line 1
----> 1 Dataset.from_dict({"x": [1.0, 2.0, 3.0]}, features=Features(x=Value("float16")))
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py:870, in Dataset.from_dict(cls, mapping, features, info, split)
865 mapping = features.encode_batch(mapping)
866 mapping = {
867 col: OptimizedTypedSequence(data, type=features[col] if features is not None else None, col=col)
868 for col, data in mapping.items()
869 }
--> 870 pa_table = InMemoryTable.from_pydict(mapping=mapping)
871 if info.features is None:
872 info.features = Features({col: ts.get_inferred_type() for col, ts in mapping.items()})
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:750, in InMemoryTable.from_pydict(cls, *args, **kwargs)
734 @classmethod
735 def from_pydict(cls, *args, **kwargs):
736 """
737 Construct a Table from Arrow arrays or columns
738
(...)
748 :class:`datasets.table.Table`:
749 """
--> 750 return cls(pa.Table.from_pydict(*args, **kwargs))
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:3648, in pyarrow.lib.Table.from_pydict()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:5174, in pyarrow.lib._from_pydict()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:343, in pyarrow.lib.asarray()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:231, in pyarrow.lib.array()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py:197, in TypedSequence.__arrow_array__(self, type)
192 # otherwise we can finally use the user's type
193 elif type is not None:
194 # We use cast_array_to_feature to support casting to custom types like Audio and Image
195 # Also, when trying type "string", we don't want to convert integers or floats to "string".
196 # We only do it if trying_type is False - since this is what the user asks for.
--> 197 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
198 return out
199 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs)
1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1682 else:
-> 1683 return func(array, *args, **kwargs)
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1853, in cast_array_to_feature(array, feature, allow_number_to_str)
1851 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)
1852 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 1853 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
1854 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs)
1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1682 else:
-> 1683 return func(array, *args, **kwargs)
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1762, in array_cast(array, pa_type, allow_number_to_str)
1760 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):
1761 raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
-> 1762 return array.cast(pa_type)
1763 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}")
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:919, in pyarrow.lib.Array.cast()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/compute.py:389, in cast(arr, target_type, safe, options)
387 else:
388 options = CastOptions.safe(target_type)
--> 389 return call_function("cast", [arr], options)
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:560, in pyarrow._compute.call_function()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:355, in pyarrow._compute.Function.call()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float
```
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4981/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4981/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4980/comments | https://api.github.com/repos/huggingface/datasets/issues/4980/events | https://github.com/huggingface/datasets/issues/4980 | 1,374,868,083 | I_kwDODunzps5R8tJz | 4,980 | Make `pyarrow` optional | {
"login": "KOLANICH",
"id": 240344,
"node_id": "MDQ6VXNlcjI0MDM0NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/240344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KOLANICH",
"html_url": "https://github.com/KOLANICH",
"followers_url": "https://api.github.com/users/KOLANICH/followers",
"following_url": "https://api.github.com/users/KOLANICH/following{/other_user}",
"gists_url": "https://api.github.com/users/KOLANICH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KOLANICH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KOLANICH/subscriptions",
"organizations_url": "https://api.github.com/users/KOLANICH/orgs",
"repos_url": "https://api.github.com/users/KOLANICH/repos",
"events_url": "https://api.github.com/users/KOLANICH/events{/privacy}",
"received_events_url": "https://api.github.com/users/KOLANICH/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"The whole datasets library is pretty much a wrapper to pyarrow (just take a look at some of the source for a Dataset) https://github.com/huggingface/datasets/blob/51aef08ad7053c0bfe8f9a961207b26df15850d3/src/datasets/arrow_dataset.py#L639 \r\n\r\nI think removing the pyarrow dependency would involve a complete rewrite / a different library with minimal functionality (datasets-lite ?)",
"Thanks for the proposal, @KOLANICH. And also thanks for your answer, @dconathan.\r\n\r\nIndeed, we are using `pyarrow` as the backend for our datasets, in order to cache them and also allow memory-mapping (using datasets larger than your RAM memory).\r\n\r\nOne way to avoid using `pyarrow` could be loading the datasets in streaming mode, by passing `streaming=True` to `load_dataset`. This way you basically get a generator for the dataset; nothing is downloaded, nor cached. ",
"Thanks for the info. Could `datasets` then be made optional for `transformers` instead? I used `transformers` only to deal with pretrained models to deploy them (convert to ONNX, and then I use TVM), so I don't really need `pyarrow` and `datasets` by now.\r\n"
] | 2022-09-15T17:38:03 | 2022-09-16T17:23:47 | 2022-09-16T17:23:47 | NONE | null | **Is your feature request related to a problem? Please describe.**
Is `pyarrow` really needed for every dataset?
**Describe the solution you'd like**
It is made optional.
**Describe alternatives you've considered**
Likely, no.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4980/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4979/comments | https://api.github.com/repos/huggingface/datasets/issues/4979/events | https://github.com/huggingface/datasets/pull/4979 | 1,374,820,758 | PR_kwDODunzps4_CouM | 4,979 | Fix missing tags in dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-15T16:51:03 | 2022-09-22T12:37:55 | 2022-09-15T17:12:09 | MEMBER | null | Fix missing tags in dataset cards:
- amazon_us_reviews
- art
- discofuse
- indic_glue
- ubuntu_dialogs_corpus
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
- #4908
- #4921
- #4931 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4979/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4979",
"html_url": "https://github.com/huggingface/datasets/pull/4979",
"diff_url": "https://github.com/huggingface/datasets/pull/4979.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4979.patch",
"merged_at": "2022-09-15T17:12:09"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4978/comments | https://api.github.com/repos/huggingface/datasets/issues/4978/events | https://github.com/huggingface/datasets/pull/4978 | 1,374,271,504 | PR_kwDODunzps4_Axnh | 4,978 | Update IndicGLUE download links | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/users/sumanthd17/followers",
"following_url": "https://api.github.com/users/sumanthd17/following{/other_user}",
"gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions",
"organizations_url": "https://api.github.com/users/sumanthd17/orgs",
"repos_url": "https://api.github.com/users/sumanthd17/repos",
"events_url": "https://api.github.com/users/sumanthd17/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumanthd17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-15T10:05:57 | 2022-09-15T22:00:20 | 2022-09-15T21:57:34 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4978/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4978",
"html_url": "https://github.com/huggingface/datasets/pull/4978",
"diff_url": "https://github.com/huggingface/datasets/pull/4978.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4978.patch",
"merged_at": "2022-09-15T21:57:34"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4977 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4977/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4977/comments | https://api.github.com/repos/huggingface/datasets/issues/4977/events | https://github.com/huggingface/datasets/issues/4977 | 1,372,962,157 | I_kwDODunzps5R1b1t | 4,977 | Providing dataset size | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi @sashavor, thanks for your suggestion.\r\n\r\nUntil now we have the CLI command \r\n```\r\ndatasets-cli test datasets/<your-dataset-folder> --save_infos --all_configs\r\n```\r\nthat generates the `dataset_infos.json` with the size of the downloaded dataset, among other information.\r\n\r\nWe are currently in the middle of removing those JSON files and putting their information directly in the header of the `README.md` (as YAML tags). Normally, the CLI command should continue working but saving its output to the dataset card instead. See:\r\n- #4926",
"Additionally, the download size can be inferred by doing HEAD requests to the files to be downloaded. And for files hosted on the hub you can even get the file sizes using the Hub API",
"Amazing @albertvillanova ! I think just having that information visible in the dataset info (without having to do any requests/additional coding) would be really useful :hugs: "
] | 2022-09-14T13:09:27 | 2022-09-15T16:03:58 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
Especially for big datasets like [LAION](https://huggingface.co/datasets/laion/laion2B-en/), it's hard to know exactly the downloaded size (because there are many files and you don't have their exact size when downloaded).
**Describe the solution you'd like**
Auto-populating the downloaded dataset size on the dataset page would be really useful, including that of each split (when there are some).
**Describe alternatives you've considered**
People should be adding this to dataset cards, but I don't think that is systematically the case :slightly_smiling_face:
**Additional context**
Mentioned to @lhoestq
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4977/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4977/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4976/comments | https://api.github.com/repos/huggingface/datasets/issues/4976/events | https://github.com/huggingface/datasets/issues/4976 | 1,372,322,382 | I_kwDODunzps5Ry_pO | 4,976 | Hope to adapt Python3.9 as soon as possible | {
"login": "RedHeartSecretMan",
"id": 74012141,
"node_id": "MDQ6VXNlcjc0MDEyMTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/74012141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RedHeartSecretMan",
"html_url": "https://github.com/RedHeartSecretMan",
"followers_url": "https://api.github.com/users/RedHeartSecretMan/followers",
"following_url": "https://api.github.com/users/RedHeartSecretMan/following{/other_user}",
"gists_url": "https://api.github.com/users/RedHeartSecretMan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RedHeartSecretMan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RedHeartSecretMan/subscriptions",
"organizations_url": "https://api.github.com/users/RedHeartSecretMan/orgs",
"repos_url": "https://api.github.com/users/RedHeartSecretMan/repos",
"events_url": "https://api.github.com/users/RedHeartSecretMan/events{/privacy}",
"received_events_url": "https://api.github.com/users/RedHeartSecretMan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! `datasets` should work in Python 3.9. What kind of issue have you encountered?",
"There is this related issue already: https://github.com/huggingface/datasets/issues/4113\r\nAnd I guess we need a CI job for 3.9 ^^",
"Perhaps we should report this issue in the `filelock` repo?"
] | 2022-09-14T04:42:22 | 2022-09-26T16:32:35 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context about the feature request here.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4976/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4975/comments | https://api.github.com/repos/huggingface/datasets/issues/4975/events | https://github.com/huggingface/datasets/pull/4975 | 1,371,703,691 | PR_kwDODunzps4-4NXX | 4,975 | Add `fn_kwargs` param to `IterableDataset.map` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for adding this fix! \r\n\r\nWould it be possible to get `fn_kwargs` added to `IterableDatasetDict.map` as well? It looks like a very similar problem, and hopefully shouldn't be a huge change. \r\n",
"Hi @brianhill11! https://github.com/huggingface/datasets/pull/5810 adds this (opened a couple of days ago). It should be merged soon.",
"That's fantastic news, thanks @mariosasko ! I'll give it a shot once the changes are merged in. "
] | 2022-09-13T16:19:05 | 2023-05-05T16:53:43 | 2022-09-13T16:45:34 | CONTRIBUTOR | null | Add the `fn_kwargs` parameter to `IterableDataset.map`.
("Resolves" https://discuss.huggingface.co/t/how-to-use-large-image-text-datasets-in-hugging-face-hub-without-downloading-for-free/22780/3) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4975/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4975",
"html_url": "https://github.com/huggingface/datasets/pull/4975",
"diff_url": "https://github.com/huggingface/datasets/pull/4975.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4975.patch",
"merged_at": "2022-09-13T16:45:34"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4974/comments | https://api.github.com/repos/huggingface/datasets/issues/4974/events | https://github.com/huggingface/datasets/pull/4974 | 1,371,682,020 | PR_kwDODunzps4-4Iri | 4,974 | [GH->HF] Part 2: Remove all dataset scripts from github | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"So this means metrics will be deleted from this repo in favor of the \"evaluate\" library? Maybe you guys could just redirect metrics to that library.",
"We are deprecating the metrics in `datasets` indeed and suggest users to switch to `evaluate` (via a warning message)\r\n\r\nWe'll keep the current metrics as they are for now, but they'll be completely removed at one point",
"I guess this is ready to merge ?\r\n\r\nIt should break nothing except one rare case:\r\n\r\nIf someone is using an old version of `datasets` to try to load a recent dataset. Indeed in that case it fetches the `main` branch on github to see if it exists. But since we're removing all the datasets, forward fetching won't work anymore.\r\n\r\ne.g. if someone uses \"imagenet-1k\" with a version of `datasets` that didn't have it at that time. I checked on kibana and one single user would be affected with 4k downloads/months. It should still work for them though thanks to the `datasets` cache\r\n\r\nBut if they delete their cache, the workaround is... 🥁 update `datasets` 😅",
"Let's merge this on monday if we can, to make sure contributors who wanted to merge their dataset PRs here could do it",
"Alright, merging !"
] | 2022-09-13T16:01:12 | 2022-10-03T17:09:39 | 2022-10-03T17:07:32 | MEMBER | null | Now that all the datasets live on the Hub we can remove the /datasets directory that contains all the dataset scripts of this repository
- [x] Needs https://github.com/huggingface/datasets/pull/4973 to be merged first
- [x] and PR to be enabled on the Hub for non-namespaced datasets | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4974/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4974",
"html_url": "https://github.com/huggingface/datasets/pull/4974",
"diff_url": "https://github.com/huggingface/datasets/pull/4974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4974.patch",
"merged_at": "2022-10-03T17:07:32"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4973/comments | https://api.github.com/repos/huggingface/datasets/issues/4973/events | https://github.com/huggingface/datasets/pull/4973 | 1,371,600,074 | PR_kwDODunzps4-33JW | 4,973 | [GH->HF] Load datasets from the Hub | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Duplicate of:\r\n- #4059"
] | 2022-09-13T15:01:41 | 2022-09-15T15:26:51 | 2022-09-15T15:24:26 | MEMBER | null | Currently datasets with no namespace (e.g. squad, glue) are loaded from github.
In this PR I changed this logic to use the Hugging Face Hub instead.
This is the first step in removing all the dataset scripts in this repository
related to discussions in https://github.com/huggingface/datasets/pull/4059 (I should have continued from this PR actually) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4973/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4973/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4973",
"html_url": "https://github.com/huggingface/datasets/pull/4973",
"diff_url": "https://github.com/huggingface/datasets/pull/4973.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4973.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4972 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4972/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4972/comments | https://api.github.com/repos/huggingface/datasets/issues/4972/events | https://github.com/huggingface/datasets/pull/4972 | 1,371,443,306 | PR_kwDODunzps4-3VVF | 4,972 | Fix map batched with torch output | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-13T13:16:34 | 2022-09-20T09:42:02 | 2022-09-20T09:39:33 | MEMBER | null | Reported in https://discuss.huggingface.co/t/typeerror-when-applying-map-after-set-format-type-torch/23067/2
Currently it fails if one uses batched `map` and the map function returns a torch tensor.
I fixed it for torch, tf, jax and pandas series. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4972/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4972",
"html_url": "https://github.com/huggingface/datasets/pull/4972",
"diff_url": "https://github.com/huggingface/datasets/pull/4972.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4972.patch",
"merged_at": "2022-09-20T09:39:33"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4971 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4971/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4971/comments | https://api.github.com/repos/huggingface/datasets/issues/4971/events | https://github.com/huggingface/datasets/pull/4971 | 1,370,319,516 | PR_kwDODunzps4-zk3g | 4,971 | Preserve non-`input_colums` in `Dataset.map` if `input_columns` are specified | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-12T18:08:24 | 2022-09-13T13:51:08 | 2022-09-13T13:48:45 | CONTRIBUTOR | null | Currently, if the `input_columns` list in `Dataset.map` is specified, the columns not in that list are dropped after the `map` transform.
This makes the behavior inconsistent with `IterableDataset.map`.
(It seems this issue was introduced by mistake in https://github.com/huggingface/datasets/pull/2246)
Fix https://github.com/huggingface/datasets/issues/4858 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4971/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4971",
"html_url": "https://github.com/huggingface/datasets/pull/4971",
"diff_url": "https://github.com/huggingface/datasets/pull/4971.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4971.patch",
"merged_at": "2022-09-13T13:48:44"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4970/comments | https://api.github.com/repos/huggingface/datasets/issues/4970/events | https://github.com/huggingface/datasets/pull/4970 | 1,369,433,074 | PR_kwDODunzps4-wkY2 | 4,970 | Support streaming nli_tr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-12T07:48:45 | 2022-09-12T08:45:04 | 2022-09-12T08:43:08 | MEMBER | null | Support streaming nli_tr dataset.
This PR removes legacy `codecs.open` and replaces it with `open` that supports passing encoding.
Fix #3186. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4970/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4970",
"html_url": "https://github.com/huggingface/datasets/pull/4970",
"diff_url": "https://github.com/huggingface/datasets/pull/4970.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4970.patch",
"merged_at": "2022-09-12T08:43:08"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4969 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4969/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4969/comments | https://api.github.com/repos/huggingface/datasets/issues/4969/events | https://github.com/huggingface/datasets/pull/4969 | 1,369,334,740 | PR_kwDODunzps4-wPOk | 4,969 | Fix data URL and metadata of vivos dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-12T06:12:34 | 2022-09-12T07:16:15 | 2022-09-12T07:14:19 | MEMBER | null | After contacting the authors of the VIVOS dataset to report that their data server is down, we have received a reply from Hieu-Thi Luong that their data is now hosted on Zenodo: https://doi.org/10.5281/zenodo.7068130
This PR updates their data URL and some metadata (homepage, citation and license).
Fix #4936. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4969/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4969",
"html_url": "https://github.com/huggingface/datasets/pull/4969",
"diff_url": "https://github.com/huggingface/datasets/pull/4969.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4969.patch",
"merged_at": "2022-09-12T07:14:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4968 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4968/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4968/comments | https://api.github.com/repos/huggingface/datasets/issues/4968/events | https://github.com/huggingface/datasets/pull/4968 | 1,369,312,877 | PR_kwDODunzps4-wKkw | 4,968 | Support streaming compguesswhat dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-12T05:42:24 | 2022-09-12T08:00:06 | 2022-09-12T07:58:06 | MEMBER | null | Support streaming `compguesswhat` dataset.
Fix #3191. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4968/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4968",
"html_url": "https://github.com/huggingface/datasets/pull/4968",
"diff_url": "https://github.com/huggingface/datasets/pull/4968.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4968.patch",
"merged_at": "2022-09-12T07:58:06"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4967/comments | https://api.github.com/repos/huggingface/datasets/issues/4967/events | https://github.com/huggingface/datasets/pull/4967 | 1,369,092,452 | PR_kwDODunzps4-vbS- | 4,967 | Strip "/" in local dataset path to avoid empty dataset name error | {
"login": "apohllo",
"id": 40543,
"node_id": "MDQ6VXNlcjQwNTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/40543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apohllo",
"html_url": "https://github.com/apohllo",
"followers_url": "https://api.github.com/users/apohllo/followers",
"following_url": "https://api.github.com/users/apohllo/following{/other_user}",
"gists_url": "https://api.github.com/users/apohllo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apohllo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apohllo/subscriptions",
"organizations_url": "https://api.github.com/users/apohllo/orgs",
"repos_url": "https://api.github.com/users/apohllo/repos",
"events_url": "https://api.github.com/users/apohllo/events{/privacy}",
"received_events_url": "https://api.github.com/users/apohllo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool :-)"
] | 2022-09-11T23:09:16 | 2022-09-29T10:46:21 | 2022-09-12T15:30:38 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4967/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4967",
"html_url": "https://github.com/huggingface/datasets/pull/4967",
"diff_url": "https://github.com/huggingface/datasets/pull/4967.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4967.patch",
"merged_at": "2022-09-12T15:30:38"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4965/comments | https://api.github.com/repos/huggingface/datasets/issues/4965/events | https://github.com/huggingface/datasets/issues/4965 | 1,368,661,002 | I_kwDODunzps5RlBwK | 4,965 | [Apple M1] MemoryError: Cannot allocate write+execute memory for ffi.callback() | {
"login": "hoangtnm",
"id": 35718590,
"node_id": "MDQ6VXNlcjM1NzE4NTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/35718590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoangtnm",
"html_url": "https://github.com/hoangtnm",
"followers_url": "https://api.github.com/users/hoangtnm/followers",
"following_url": "https://api.github.com/users/hoangtnm/following{/other_user}",
"gists_url": "https://api.github.com/users/hoangtnm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoangtnm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoangtnm/subscriptions",
"organizations_url": "https://api.github.com/users/hoangtnm/orgs",
"repos_url": "https://api.github.com/users/hoangtnm/repos",
"events_url": "https://api.github.com/users/hoangtnm/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoangtnm/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! This seems like a bug in `soundfile`. Could you please open an issue in their repo? `soundfile` works without any issues on my M1, so I'm not sure we can help.",
"Hi @mariosasko, can you share how you installed `soundfile` on your mac M1?",
"Hi @hoangtnm - I upgraded to python 3.10 and it fixed the problem for me. I was also running 3.8 on an M1 mac."
] | 2022-09-10T15:55:49 | 2023-07-21T14:45:50 | 2023-07-21T14:45:50 | NONE | null | ## Describe the bug
I'm trying to run `cast_column("audio", Audio())` on Apple M1 Pro, but it seems that it doesn't work.
## Steps to reproduce the bug
```python
import datasets
dataset = load_dataset("csv", data_files="./train.csv")["train"]
dataset = dataset.map(lambda x: {"audio": str(DATA_DIR / "audio" / x["audio"])})
dataset = dataset.cast_column("audio", Audio())
dataset[0]
```
## Expected results
```
{'audio': {'bytes': None,
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav'},
'english_transcription': 'I would like to set up a joint account with my partner',
'intent_class': 11,
'lang_id': 4,
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'transcription': 'I would like to set up a joint account with my partner'}
```
## Actual results
````---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 dataset[0]
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2165, in Dataset.__getitem__(self, key)
2163 def __getitem__(self, key): # noqa: F811
2164 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2165 return self._getitem(
2166 key,
2167 )
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2150, in Dataset._getitem(self, key, decoded, **kwargs)
2148 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
2149 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2150 formatted_output = format_table(
2151 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2152 )
2153 return formatted_output
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:312, in PythonFormatter.format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:221, in PythonFeaturesDecoder.decode_row(self, row)
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1647, in Features.decode_example(self, example, token_per_repo_id)
1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1635 """Decode example with custom feature decoding.
1636
1637 Args:
(...)
1644 :obj:`dict[str, Any]`
1645 """
-> 1647 return {
1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1649 if self._column_requires_decoding[column_name]
1650 else value
1651 for column_name, (feature, value) in zip_dict(
1652 {key: value for key, value in self.items() if key in example}, example
1653 )
1654 }
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1648, in <dictcomp>(.0)
1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1635 """Decode example with custom feature decoding.
1636
1637 Args:
(...)
1644 :obj:`dict[str, Any]`
1645 """
1647 return {
-> 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1649 if self._column_requires_decoding[column_name]
1650 else value
1651 for column_name, (feature, value) in zip_dict(
1652 {key: value for key, value in self.items() if key in example}, example
1653 )
1654 }
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1260, in decode_nested_example(schema, obj, token_per_repo_id)
1257 # Object with special decoding:
1258 elif isinstance(schema, (Audio, Image)):
1259 # we pass the token to read and decode files from private repositories in streaming mode
-> 1260 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
1261 return obj
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:156, in Audio.decode_example(self, value, token_per_repo_id)
154 array, sampling_rate = self._decode_non_mp3_file_like(file)
155 else:
--> 156 array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id)
157 return {"path": path, "array": array, "sampling_rate": sampling_rate}
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:257, in Audio._decode_non_mp3_path_like(self, path, format, token_per_repo_id)
254 use_auth_token = None
256 with xopen(path, "rb", use_auth_token=use_auth_token) as f:
--> 257 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
258 return array, sampling_rate
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/util/decorators.py:88, in deprecate_positional_args.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs)
86 extra_args = len(args) - len(all_args)
87 if extra_args <= 0:
---> 88 return f(*args, **kwargs)
90 # extra_args > 0
91 args_msg = [
92 "{}={}".format(name, arg)
93 for name, arg in zip(kwonly_args[:extra_args], args[-extra_args:])
94 ]
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:164, in load(path, sr, mono, offset, duration, dtype, res_type)
161 else:
162 # Otherwise try soundfile first, and then fall back if necessary
163 try:
--> 164 y, sr_native = __soundfile_load(path, offset, duration, dtype)
166 except RuntimeError as exc:
167 # If soundfile failed, try audioread instead
168 if isinstance(path, (str, pathlib.PurePath)):
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:195, in __soundfile_load(path, offset, duration, dtype)
192 context = path
193 else:
194 # Otherwise, create the soundfile object
--> 195 context = sf.SoundFile(path)
197 with context as sf_desc:
198 sr_native = sf_desc.samplerate
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)
626 self._mode = mode
627 self._info = _create_info_struct(file, mode, samplerate, channels,
628 format, subtype, endian)
--> 629 self._file = self._open(file, mode_int, closefd)
630 if set(mode).issuperset('r+') and self.seekable():
631 # Move write position to 0 (like in Python file objects)
632 self.seek(0)
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1179, in SoundFile._open(self, file, mode_int, closefd)
1177 file_ptr = _snd.sf_open_fd(file, mode_int, self._info, closefd)
1178 elif _has_virtual_io_attrs(file, mode_int):
-> 1179 file_ptr = _snd.sf_open_virtual(self._init_virtual_io(file),
1180 mode_int, self._info, _ffi.NULL)
1181 else:
1182 raise TypeError("Invalid file: {0!r}".format(self.name))
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1197, in SoundFile._init_virtual_io(self, file)
1194 def _init_virtual_io(self, file):
1195 """Initialize callback functions for sf_open_virtual()."""
1196 @_ffi.callback("sf_vio_get_filelen")
-> 1197 def vio_get_filelen(user_data):
1198 curr = file.tell()
1199 file.seek(0, SEEK_END)
MemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this. For more information, see https://cffi.readthedocs.io/en/latest/using.html#callbacks
```
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4965/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4964/comments | https://api.github.com/repos/huggingface/datasets/issues/4964/events | https://github.com/huggingface/datasets/issues/4964 | 1,368,617,322 | I_kwDODunzps5Rk3Fq | 4,964 | Column of arrays (2D+) are using unreasonably high memory | {
"login": "vigsterkr",
"id": 30353,
"node_id": "MDQ6VXNlcjMwMzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/30353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vigsterkr",
"html_url": "https://github.com/vigsterkr",
"followers_url": "https://api.github.com/users/vigsterkr/followers",
"following_url": "https://api.github.com/users/vigsterkr/following{/other_user}",
"gists_url": "https://api.github.com/users/vigsterkr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vigsterkr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vigsterkr/subscriptions",
"organizations_url": "https://api.github.com/users/vigsterkr/orgs",
"repos_url": "https://api.github.com/users/vigsterkr/repos",
"events_url": "https://api.github.com/users/vigsterkr/events{/privacy}",
"received_events_url": "https://api.github.com/users/vigsterkr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"note i have tried the same code with `datasets` version 2.4.0, the outcome is the very same as described above.",
"Seems related to issues #4623 and #4802 so it would appear this issue has been around for a few months.",
"Hi ! `Dataset.from_dict` keeps the data in memory. You can write on disk and reload them with\r\n```python\r\ndataset.save_to_disk(\"path/to/local\")\r\ndataset = load_from_disk(\"path/to/local\")\r\n```\r\nthis way you'll end up with a dataset loaded from your disk using memory mapping, and it won't fill up your RAM :)\r\n\r\nrelated to https://github.com/huggingface/datasets/issues/4861",
"@lhoestq thnx for getting back to me! i've tested the suggested method, but unfortunately the memory consumption is the very same:\r\n\r\n```\r\nfrom datasets import Dataset, Features, Array2D, Array3D, load_from_disk\r\nimport numpy as np\r\n\r\ncolumn_name = \"a\"\r\narray_shape = (64, 64, 3)\r\n\r\ndata = np.random.random((10000,) + array_shape)\r\ndataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype=\"float64\")}))\r\ndataset.save_to_disk(\"foo\")\r\n\r\nfoo_db = load_from_disk(\"foo\")\r\ncolum_value = foo_db[column_name]\r\n```\r\n\r\nthe very same happens when you create the dataset, but dont specify the feature type.\r\n\r\ni've tried running this on different envs (macOS, linux) and it's behaving the very same way.",
"When you call `colum_value = foo_db[column_name]`, you load the full column in memory.\r\n\r\nIf you want to avoid filling up your memory, you can access chunks of data instead\r\n```python\r\nembeddings = dataset[i:i + chunk_size][\"embeddings\"]\r\n```",
"@lhoestq yeah that's intentional, i.e. i really want to load the whole column into the memory. but as said above there's an unreasonable amount of overhead for the memory. the np array itself is using about 1G of memory:\r\n```\r\n>>> getsizeof(data)/1024/1024\r\n937.5001525878906\r\n```\r\nthat accessing of column above is using 10x memory compared to the original numpy array.",
"The dataset must be twice as big because we use regular arrow ListArray under the hood and not FixedSizeListArray. Basically we store unnecessary offsets.\r\n\r\nAnd this should affect performance as well. When we developed this, FixedSizeListArray still had some issues but they should be resolved on the PyArrow side now",
"A doubling would be fine. My very basic understanding of PyArrow is that using ListArray is probably related to the issue though. Using a multi-dimensional array in datasets is storing everything as strange nested 1d object arrays, which I imagine is creating the massive overhead.\r\n\r\nI think it should be a PyArrow Tensor, no?",
"PyArrow tensors are not part of the Arrow format AFAIK:\r\n\r\n> There is no direct support in the arrow columnar format to store Tensors as column values.\r\n\r\nsource: https://github.com/apache/arrow/issues/4802#issuecomment-508494694",
"That's... unfortunate. I didn't realize that."
] | 2022-09-10T13:07:22 | 2022-09-22T18:29:22 | null | CONTRIBUTOR | null | ## Describe the bug
When trying to store `Array2D, Array3D, etc` as column values in a dataset, accessing that column (or creating depending on how you create it, see code below) will cause more than 10 fold of memory usage.
## Steps to reproduce the bug
```python
from datasets import Dataset, Features, Array2D, Array3D
import numpy as np
column_name = "a"
array_shape = (64, 64, 3)
data = np.random.random((10000,) + array_shape)
dataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype="float64")}))
```
the code above will use about 10Gb of RAM while constructing the `dataset` object.
The code below will use roughly the same amount of memory (and time) when trying to actually access the data itself of that column.
```python
from datasets import Dataset
import numpy as np
column_name = "a"
array_shape = (64, 64, 3)
data = np.random.random((10000,) + array_shape)
dataset = Dataset.from_dict({column_name: data})
dataset[column_name]
```
## Expected results
Some memory overhead, but not like as it is now and certainly not an overhead of such runtime that is currently happening.
## Actual results
Enormous memory- and runtime overhead.
## Environment info
- `datasets` version: 2.3.2
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4964/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4964/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4963/comments | https://api.github.com/repos/huggingface/datasets/issues/4963/events | https://github.com/huggingface/datasets/issues/4963 | 1,368,201,188 | I_kwDODunzps5RjRfk | 4,963 | Dataset without script does not support regular JSON data file | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @julien-c,\r\n\r\nOut of the box, we only support JSON lines (NDJSON) data files, but your data file is a regular JSON file. The reason is we use `pyarrow.json.read_json` and this only supports line-delimited JSON. "
] | 2022-09-09T18:45:33 | 2022-09-20T15:40:07 | 2022-09-20T15:40:07 | MEMBER | null | ### Link
https://huggingface.co/datasets/julien-c/label-studio-my-dogs
### Description
<img width="1115" alt="image" src="https://user-images.githubusercontent.com/326577/189422048-7e9c390f-bea7-4521-a232-43f049ccbd1f.png">
### Owner
Yes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4963/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4962/comments | https://api.github.com/repos/huggingface/datasets/issues/4962/events | https://github.com/huggingface/datasets/pull/4962 | 1,368,155,365 | PR_kwDODunzps4-sh-o | 4,962 | Update setup.py | {
"login": "DCNemesis",
"id": 3616964,
"node_id": "MDQ6VXNlcjM2MTY5NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DCNemesis",
"html_url": "https://github.com/DCNemesis",
"followers_url": "https://api.github.com/users/DCNemesis/followers",
"following_url": "https://api.github.com/users/DCNemesis/following{/other_user}",
"gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions",
"organizations_url": "https://api.github.com/users/DCNemesis/orgs",
"repos_url": "https://api.github.com/users/DCNemesis/repos",
"events_url": "https://api.github.com/users/DCNemesis/events{/privacy}",
"received_events_url": "https://api.github.com/users/DCNemesis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Before addressing this PR, we should be sure about the issue. See my comment in:\r\n- https://github.com/huggingface/datasets/issues/4961#issuecomment-1243376247",
"Once we know 2022.8.2 works, I'm closing this PR, as the corresponding issue."
] | 2022-09-09T17:57:56 | 2022-09-12T14:33:04 | 2022-09-12T14:33:04 | NONE | null | exclude broken version of fsspec. See the [related issue](https://github.com/huggingface/datasets/issues/4961) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4962/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4962",
"html_url": "https://github.com/huggingface/datasets/pull/4962",
"diff_url": "https://github.com/huggingface/datasets/pull/4962.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4962.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4961/comments | https://api.github.com/repos/huggingface/datasets/issues/4961/events | https://github.com/huggingface/datasets/issues/4961 | 1,368,124,033 | I_kwDODunzps5Ri-qB | 4,961 | fsspec 2022.8.2 breaks xopen in streaming mode | {
"login": "DCNemesis",
"id": 3616964,
"node_id": "MDQ6VXNlcjM2MTY5NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DCNemesis",
"html_url": "https://github.com/DCNemesis",
"followers_url": "https://api.github.com/users/DCNemesis/followers",
"following_url": "https://api.github.com/users/DCNemesis/following{/other_user}",
"gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions",
"organizations_url": "https://api.github.com/users/DCNemesis/orgs",
"repos_url": "https://api.github.com/users/DCNemesis/repos",
"events_url": "https://api.github.com/users/DCNemesis/events{/privacy}",
"received_events_url": "https://api.github.com/users/DCNemesis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"loading `fsspec==2022.7.1` fixes this issue, setup.py would need to be changed to prevent users from using the latest version of fsspec.",
"Opened [PR](https://github.com/huggingface/datasets/pull/4962) to address this.",
"Hi @DCNemesis, thanks for reporting.\r\n\r\nThat was a temporary issue in `fsspec` releases 2022.8.0 and 2022.8.1. But they fixed it in their patch release 2022.8.2 (and yanked both previous versions). See:\r\n- https://github.com/huggingface/transformers/pull/18846\r\n\r\nAre you sure you have version 2022.8.2 installed?\r\n```shell\r\npip install -U fsspec\r\n```\r\n",
"@albertvillanova I was using a temporary Google Colab instance, but checking it again today it seems it was loading 2022.8.1 rather than 2022.8.2. It's surprising that colab is using the version that was replaced the same day it was released. Testing with 2022.8.2 did work. It appears Colab [will be fixing it](https://github.com/googlecolab/colabtools/issues/3055) on their end too. ",
"Thanks for the additional information.\r\n\r\nOnce we know 2022.8.2 works, I'm closing this issue. Feel free to reopen it if necessary.",
"Colab just upgraded their default `fsspec` version to 2022.8.2:\r\n- https://github.com/googlecolab/colabtools/issues/3055#issuecomment-1244019010"
] | 2022-09-09T17:26:55 | 2022-09-12T17:45:50 | 2022-09-12T14:32:05 | NONE | null | ## Describe the bug
When fsspec 2022.8.2 is installed in your environment, xopen will prematurely close files, making streaming mode inoperable.
## Steps to reproduce the bug
```python
import datasets
data = datasets.load_dataset('MLCommons/ml_spoken_words', 'id_wav', split='train', streaming=True)
```
## Expected results
Dataset should load as iterator.
## Actual results
```
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1737 # Return iterable dataset in case of streaming
1738 if streaming:
-> 1739 return builder_instance.as_streaming_dataset(split=split)
1740
1741 # Some datasets are already processed on the HF google storage
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path)
1023 )
1024 self._check_manual_download(dl_manager)
-> 1025 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
1026 # By default, return all splits
1027 if split is None:
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _split_generators(self, dl_manager)
182 name=datasets.Split.TRAIN,
183 gen_kwargs={
--> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages],
185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in
186 self.config.languages] if not dl_manager.is_streaming else None,
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in <listcomp>(.0)
182 name=datasets.Split.TRAIN,
183 gen_kwargs={
--> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages],
185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in
186 self.config.languages] if not dl_manager.is_streaming else None,
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives(dl_manager, lang, format, split)
267 # for streaming case
268 def _download_audio_archives(dl_manager, lang, format, split):
--> 269 archives_paths = _download_audio_archives_paths(dl_manager, lang, format, split)
270 return [dl_manager.iter_archive(archive_path) for archive_path in archives_paths]
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives_paths(dl_manager, lang, format, split)
251 n_files_path = dl_manager.download(n_files_url)
252
--> 253 with open(n_files_path, "r", encoding="utf-8") as file:
254 n_files = int(file.read().strip()) # the file contains a number of archives
255
ValueError: I/O operation on closed file.
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4961/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4960/comments | https://api.github.com/repos/huggingface/datasets/issues/4960/events | https://github.com/huggingface/datasets/issues/4960 | 1,368,035,159 | I_kwDODunzps5Rio9X | 4,960 | BioASQ AttributeError: 'BuilderConfig' object has no attribute 'schema' | {
"login": "DSLituiev",
"id": 8426290,
"node_id": "MDQ6VXNlcjg0MjYyOTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8426290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DSLituiev",
"html_url": "https://github.com/DSLituiev",
"followers_url": "https://api.github.com/users/DSLituiev/followers",
"following_url": "https://api.github.com/users/DSLituiev/following{/other_user}",
"gists_url": "https://api.github.com/users/DSLituiev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DSLituiev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DSLituiev/subscriptions",
"organizations_url": "https://api.github.com/users/DSLituiev/orgs",
"repos_url": "https://api.github.com/users/DSLituiev/repos",
"events_url": "https://api.github.com/users/DSLituiev/events{/privacy}",
"received_events_url": "https://api.github.com/users/DSLituiev/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"Following worked:\r\n\r\n```\r\ndata_dir = \"/Users/dlituiev/repos/datasets/bioasq/\"\r\nbioasq_task_b = load_dataset(\"aps/bioasq_task_b\", data_dir=data_dir, name=\"bioasq_9b_source\")\r\n```\r\n\r\nWould maintainers be open to one of the following:\r\n- automating this with a latest default config (e.g. `bioasq_9b_source`); how can this be generalized to other datasets?\r\n- providing an actionable error message that lists available `name` values? I only got available `name` values once I've provided something there (`name=\"aps/bioasq_task_b\"`), before it would not even mention that it requires `name` argument",
"Hi ! In general the list of available configurations is prompted. I think this is an issue with this specific dataset.\r\n\r\nFeel free to open a new discussions at https://huggingface.co/datasets/aps/bioasq_task_b/discussions\r\n\r\ncc @apsdehal\r\n\r\nIn particular it sounds like the `BUILDER_CONFIG_CLASS= BigBioConfig ` class attribute is missing and the _info should account for schema being None and raise an error"
] | 2022-09-09T16:06:43 | 2022-09-13T08:51:03 | null | NONE | null | ## Describe the bug
I am trying to load a dataset from drive and running into an error.
## Steps to reproduce the bug
```python
data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b"
bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir)
```
## Actual results
`AttributeError: 'BuilderConfig' object has no attribute 'schema'`
<details>
```
Using custom data configuration default-a1ca3e05be5abf2f
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [8], in <cell line: 2>()
1 data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b"
----> 2 bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir)
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1723, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1720 ignore_verifications = ignore_verifications or save_infos
1722 # Create a dataset builder
-> 1723 builder_instance = load_dataset_builder(
1724 path=path,
1725 name=name,
1726 data_dir=data_dir,
1727 data_files=data_files,
1728 cache_dir=cache_dir,
1729 features=features,
1730 download_config=download_config,
1731 download_mode=download_mode,
1732 revision=revision,
1733 use_auth_token=use_auth_token,
1734 **config_kwargs,
1735 )
1737 # Return iterable dataset in case of streaming
1738 if streaming:
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1526, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1523 raise ValueError(error_msg)
1525 # Instantiate the dataset builder
-> 1526 builder_instance: DatasetBuilder = builder_cls(
1527 cache_dir=cache_dir,
1528 config_name=config_name,
1529 data_dir=data_dir,
1530 data_files=data_files,
1531 hash=hash,
1532 features=features,
1533 use_auth_token=use_auth_token,
1534 **builder_kwargs,
1535 **config_kwargs,
1536 )
1538 return builder_instance
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:1154, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs)
1153 def __init__(self, *args, writer_batch_size=None, **kwargs):
-> 1154 super().__init__(*args, **kwargs)
1155 # Batch size used by the ArrowWriter
1156 # It defines the number of samples that are kept in memory before writing them
1157 # and also the length of the arrow chunks
1158 # None means that the ArrowWriter will use its default value
1159 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:307, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs)
305 if info is None:
306 info = self.get_exported_dataset_info()
--> 307 info.update(self._info())
308 info.builder_name = self.name
309 info.config_name = self.config.name
File ~/.cache/huggingface/modules/datasets_modules/datasets/aps--bioasq_task_b/3d54b1213f7e8001eef755af92877f9efa44161ee83c2a70d5d649defa95759e/bioasq_task_b.py:477, in BioasqTaskBDataset._info(self)
474 def _info(self):
475
476 # BioASQ Task B source schema
--> 477 if self.config.schema == "source":
478 features = datasets.Features(
479 {
480 "id": datasets.Value("string"),
(...)
504 }
505 )
506 # simplified schema for QA tasks
AttributeError: 'BuilderConfig' object has no attribute 'schema'
```
</details>
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.4
- PyArrow version: 9.0.0
- Pandas version: 1.4.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4960/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4959/comments | https://api.github.com/repos/huggingface/datasets/issues/4959/events | https://github.com/huggingface/datasets/pull/4959 | 1,367,924,429 | PR_kwDODunzps4-rx6l | 4,959 | Fix data URLs of compguesswhat dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-09T14:36:10 | 2022-09-09T16:01:34 | 2022-09-09T15:59:04 | MEMBER | null | After we informed the `compguesswhat` dataset authors about an error with their data URLs, they have updated them:
- https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1
This PR updates their data URLs in our loading script.
Related to:
- #3191 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4959/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4959",
"html_url": "https://github.com/huggingface/datasets/pull/4959",
"diff_url": "https://github.com/huggingface/datasets/pull/4959.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4959.patch",
"merged_at": "2022-09-09T15:59:04"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4958/comments | https://api.github.com/repos/huggingface/datasets/issues/4958/events | https://github.com/huggingface/datasets/issues/4958 | 1,367,695,376 | I_kwDODunzps5RhWAQ | 4,958 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.4.0/datasets/jsonl/jsonl.py | {
"login": "hasakikiki",
"id": 66322047,
"node_id": "MDQ6VXNlcjY2MzIyMDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/66322047?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hasakikiki",
"html_url": "https://github.com/hasakikiki",
"followers_url": "https://api.github.com/users/hasakikiki/followers",
"following_url": "https://api.github.com/users/hasakikiki/following{/other_user}",
"gists_url": "https://api.github.com/users/hasakikiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hasakikiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasakikiki/subscriptions",
"organizations_url": "https://api.github.com/users/hasakikiki/orgs",
"repos_url": "https://api.github.com/users/hasakikiki/repos",
"events_url": "https://api.github.com/users/hasakikiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/hasakikiki/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I have solved this problem... The extension of the file should be `.json` not `.jsonl`"
] | 2022-09-09T11:29:55 | 2022-09-09T11:38:44 | 2022-09-09T11:38:44 | NONE | null | Hi,
When I use load_dataset from local jsonl files, below error happens, and I type the link into the browser prompting me `404: Not Found`. I download the other `.py` files using the same method and it works. It seems that the server is missing the appropriate file, or it is a problem with the code version.
```
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x2b08342004c0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4958/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4957 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4957/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4957/comments | https://api.github.com/repos/huggingface/datasets/issues/4957/events | https://github.com/huggingface/datasets/pull/4957 | 1,366,532,849 | PR_kwDODunzps4-nGIk | 4,957 | Add `Dataset.from_generator` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I restarted the builder PR job just in case",
"_The documentation is not available anymore as the PR was closed or merged._",
"CI is now green. https://github.com/huggingface/doc-builder/pull/296 explains why it failed."
] | 2022-09-08T15:08:25 | 2022-09-16T14:46:35 | 2022-09-16T14:44:18 | CONTRIBUTOR | null | Add `Dataset.from_generator` to the API to allow creating datasets from data larger than RAM. The implementation relies on a packaged module not exposed in `load_dataset` to tie this method with `datasets`' caching mechanism.
Closes https://github.com/huggingface/datasets/issues/4417 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4957/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4957/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4957",
"html_url": "https://github.com/huggingface/datasets/pull/4957",
"diff_url": "https://github.com/huggingface/datasets/pull/4957.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4957.patch",
"merged_at": "2022-09-16T14:44:18"
} | true |