url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4970/comments | https://api.github.com/repos/huggingface/datasets/issues/4970/events | https://github.com/huggingface/datasets/pull/4970 | 1,369,433,074 | PR_kwDODunzps4-wkY2 | 4,970 | Support streaming nli_tr dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-09-12T07:48:45Z | 2022-09-12T08:45:04Z | 2022-09-12T08:43:08Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4970.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4970",
"merged_at": "2022-09-12T08:43:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4970.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4970"
} | Support streaming nli_tr dataset.
This PR removes legacy `codecs.open` and replaces it with `open` that supports passing encoding.
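A minimal sketch of the kind of change described (illustrative only; the file name is an assumption, not the PR's actual diff):
```python
# Before (legacy; removed by this PR):
import codecs

with codecs.open("snli_tr_1.0_train.jsonl", encoding="utf-8") as f:  # hypothetical file name
    pass

# After: the builtin `open`, which also accepts an encoding and which `datasets`
# can patch for streaming:
with open("snli_tr_1.0_train.jsonl", encoding="utf-8") as f:
    pass
```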
Fix #3186. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4970/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4970/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4969 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4969/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4969/comments | https://api.github.com/repos/huggingface/datasets/issues/4969/events | https://github.com/huggingface/datasets/pull/4969 | 1,369,334,740 | PR_kwDODunzps4-wPOk | 4,969 | Fix data URL and metadata of vivos dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-09-12T06:12:34Z | 2022-09-12T07:16:15Z | 2022-09-12T07:14:19Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4969.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4969",
"merged_at": "2022-09-12T07:14:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4969.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4969"
} | After contacting the authors of the VIVOS dataset to report that their data server is down, we have received a reply from Hieu-Thi Luong that their data is now hosted on Zenodo: https://doi.org/10.5281/zenodo.7068130
This PR updates their data URL and some metadata (homepage, citation and license).
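For illustration, the kind of edit this implies in the loading script; the archive file name below is an assumption (only the Zenodo record is given above), and the exact values live in the PR diff:
```python
# Hypothetical loading-script constants:
_DATA_URL = "https://zenodo.org/record/7068130/files/vivos.tar.gz"
_HOMEPAGE = "https://doi.org/10.5281/zenodo.7068130"
```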
Fix #4936. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4969/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4969/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4968 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4968/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4968/comments | https://api.github.com/repos/huggingface/datasets/issues/4968/events | https://github.com/huggingface/datasets/pull/4968 | 1,369,312,877 | PR_kwDODunzps4-wKkw | 4,968 | Support streaming compguesswhat dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-09-12T05:42:24Z | 2022-09-12T08:00:06Z | 2022-09-12T07:58:06Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4968.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4968",
"merged_at": "2022-09-12T07:58:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4968.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4968"
} | Support streaming `compguesswhat` dataset.
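A hedged usage sketch once streaming works (the config name is an assumption; check the dataset card for valid names):
```python
from datasets import load_dataset

ds = load_dataset("compguesswhat", "compguesswhat-original", split="train", streaming=True)
print(next(iter(ds)))
```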
Fix #3191. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4968/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4968/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4967/comments | https://api.github.com/repos/huggingface/datasets/issues/4967/events | https://github.com/huggingface/datasets/pull/4967 | 1,369,092,452 | PR_kwDODunzps4-vbS- | 4,967 | Strip "/" in local dataset path to avoid empty dataset name error | {
"avatar_url": "https://avatars.githubusercontent.com/u/40543?v=4",
"events_url": "https://api.github.com/users/apohllo/events{/privacy}",
"followers_url": "https://api.github.com/users/apohllo/followers",
"following_url": "https://api.github.com/users/apohllo/following{/other_user}",
"gists_url": "https://api.github.com/users/apohllo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/apohllo",
"id": 40543,
"login": "apohllo",
"node_id": "MDQ6VXNlcjQwNTQz",
"organizations_url": "https://api.github.com/users/apohllo/orgs",
"received_events_url": "https://api.github.com/users/apohllo/received_events",
"repos_url": "https://api.github.com/users/apohllo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/apohllo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apohllo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/apohllo"
} | [] | closed | false | null | [] | null | [] | 2022-09-11T23:09:16Z | 2022-09-29T10:46:21Z | 2022-09-12T15:30:38Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4967.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4967",
"merged_at": "2022-09-12T15:30:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4967.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4967"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4967/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4967/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4965/comments | https://api.github.com/repos/huggingface/datasets/issues/4965/events | https://github.com/huggingface/datasets/issues/4965 | 1,368,661,002 | I_kwDODunzps5RlBwK | 4,965 | [Apple M1] MemoryError: Cannot allocate write+execute memory for ffi.callback() | {
"avatar_url": "https://avatars.githubusercontent.com/u/35718590?v=4",
"events_url": "https://api.github.com/users/hoangtnm/events{/privacy}",
"followers_url": "https://api.github.com/users/hoangtnm/followers",
"following_url": "https://api.github.com/users/hoangtnm/following{/other_user}",
"gists_url": "https://api.github.com/users/hoangtnm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hoangtnm",
"id": 35718590,
"login": "hoangtnm",
"node_id": "MDQ6VXNlcjM1NzE4NTkw",
"organizations_url": "https://api.github.com/users/hoangtnm/orgs",
"received_events_url": "https://api.github.com/users/hoangtnm/received_events",
"repos_url": "https://api.github.com/users/hoangtnm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hoangtnm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoangtnm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hoangtnm"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [] | 2022-09-10T15:55:49Z | 2022-11-18T23:45:02Z | null | NONE | null | null | null | ## Describe the bug
I'm trying to run `cast_column("audio", Audio())` on an Apple M1 Pro, but it fails with a `MemoryError` raised from `ffi.callback()`.
## Steps to reproduce the bug
```python
from pathlib import Path
from datasets import load_dataset, Audio

DATA_DIR = Path(".")  # dataset root; the actual path is elided in this report
dataset = load_dataset("csv", data_files="./train.csv")["train"]
dataset = dataset.map(lambda x: {"audio": str(DATA_DIR / "audio" / x["audio"])})
dataset = dataset.cast_column("audio", Audio())
dataset[0]
```
## Expected results
```
{'audio': {'bytes': None,
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav'},
'english_transcription': 'I would like to set up a joint account with my partner',
'intent_class': 11,
'lang_id': 4,
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'transcription': 'I would like to set up a joint account with my partner'}
```
## Actual results
```
---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 dataset[0]
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2165, in Dataset.__getitem__(self, key)
2163 def __getitem__(self, key): # noqa: F811
2164 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2165 return self._getitem(
2166 key,
2167 )
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2150, in Dataset._getitem(self, key, decoded, **kwargs)
2148 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
2149 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2150 formatted_output = format_table(
2151 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2152 )
2153 return formatted_output
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:312, in PythonFormatter.format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:221, in PythonFeaturesDecoder.decode_row(self, row)
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1647, in Features.decode_example(self, example, token_per_repo_id)
1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1635 """Decode example with custom feature decoding.
1636
1637 Args:
(...)
1644 :obj:`dict[str, Any]`
1645 """
-> 1647 return {
1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1649 if self._column_requires_decoding[column_name]
1650 else value
1651 for column_name, (feature, value) in zip_dict(
1652 {key: value for key, value in self.items() if key in example}, example
1653 )
1654 }
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1648, in <dictcomp>(.0)
1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1635 """Decode example with custom feature decoding.
1636
1637 Args:
(...)
1644 :obj:`dict[str, Any]`
1645 """
1647 return {
-> 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1649 if self._column_requires_decoding[column_name]
1650 else value
1651 for column_name, (feature, value) in zip_dict(
1652 {key: value for key, value in self.items() if key in example}, example
1653 )
1654 }
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1260, in decode_nested_example(schema, obj, token_per_repo_id)
1257 # Object with special decoding:
1258 elif isinstance(schema, (Audio, Image)):
1259 # we pass the token to read and decode files from private repositories in streaming mode
-> 1260 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
1261 return obj
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:156, in Audio.decode_example(self, value, token_per_repo_id)
154 array, sampling_rate = self._decode_non_mp3_file_like(file)
155 else:
--> 156 array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id)
157 return {"path": path, "array": array, "sampling_rate": sampling_rate}
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:257, in Audio._decode_non_mp3_path_like(self, path, format, token_per_repo_id)
254 use_auth_token = None
256 with xopen(path, "rb", use_auth_token=use_auth_token) as f:
--> 257 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
258 return array, sampling_rate
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/util/decorators.py:88, in deprecate_positional_args.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs)
86 extra_args = len(args) - len(all_args)
87 if extra_args <= 0:
---> 88 return f(*args, **kwargs)
90 # extra_args > 0
91 args_msg = [
92 "{}={}".format(name, arg)
93 for name, arg in zip(kwonly_args[:extra_args], args[-extra_args:])
94 ]
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:164, in load(path, sr, mono, offset, duration, dtype, res_type)
161 else:
162 # Otherwise try soundfile first, and then fall back if necessary
163 try:
--> 164 y, sr_native = __soundfile_load(path, offset, duration, dtype)
166 except RuntimeError as exc:
167 # If soundfile failed, try audioread instead
168 if isinstance(path, (str, pathlib.PurePath)):
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:195, in __soundfile_load(path, offset, duration, dtype)
192 context = path
193 else:
194 # Otherwise, create the soundfile object
--> 195 context = sf.SoundFile(path)
197 with context as sf_desc:
198 sr_native = sf_desc.samplerate
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)
626 self._mode = mode
627 self._info = _create_info_struct(file, mode, samplerate, channels,
628 format, subtype, endian)
--> 629 self._file = self._open(file, mode_int, closefd)
630 if set(mode).issuperset('r+') and self.seekable():
631 # Move write position to 0 (like in Python file objects)
632 self.seek(0)
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1179, in SoundFile._open(self, file, mode_int, closefd)
1177 file_ptr = _snd.sf_open_fd(file, mode_int, self._info, closefd)
1178 elif _has_virtual_io_attrs(file, mode_int):
-> 1179 file_ptr = _snd.sf_open_virtual(self._init_virtual_io(file),
1180 mode_int, self._info, _ffi.NULL)
1181 else:
1182 raise TypeError("Invalid file: {0!r}".format(self.name))
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1197, in SoundFile._init_virtual_io(self, file)
1194 def _init_virtual_io(self, file):
1195 """Initialize callback functions for sf_open_virtual()."""
1196 @_ffi.callback("sf_vio_get_filelen")
-> 1197 def vio_get_filelen(user_data):
1198 curr = file.tell()
1199 file.seek(0, SEEK_END)
MemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this. For more information, see https://cffi.readthedocs.io/en/latest/using.html#callbacks
```
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4965/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4965/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4964/comments | https://api.github.com/repos/huggingface/datasets/issues/4964/events | https://github.com/huggingface/datasets/issues/4964 | 1,368,617,322 | I_kwDODunzps5Rk3Fq | 4,964 | Column of arrays (2D+) are using unreasonably high memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/30353?v=4",
"events_url": "https://api.github.com/users/vigsterkr/events{/privacy}",
"followers_url": "https://api.github.com/users/vigsterkr/followers",
"following_url": "https://api.github.com/users/vigsterkr/following{/other_user}",
"gists_url": "https://api.github.com/users/vigsterkr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vigsterkr",
"id": 30353,
"login": "vigsterkr",
"node_id": "MDQ6VXNlcjMwMzUz",
"organizations_url": "https://api.github.com/users/vigsterkr/orgs",
"received_events_url": "https://api.github.com/users/vigsterkr/received_events",
"repos_url": "https://api.github.com/users/vigsterkr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vigsterkr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vigsterkr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vigsterkr"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [] | 2022-09-10T13:07:22Z | 2022-09-22T18:29:22Z | null | CONTRIBUTOR | null | null | null | ## Describe the bug
When trying to store `Array2D`, `Array3D`, etc. as column values in a dataset, creating the dataset or accessing that column (depending on how you create it; see the code below) uses more than ten times the expected memory.
## Steps to reproduce the bug
```python
from datasets import Dataset, Features, Array2D, Array3D
import numpy as np
column_name = "a"
array_shape = (64, 64, 3)
data = np.random.random((10000,) + array_shape)
dataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype="float64")}))
```
The code above uses about 10 GB of RAM while constructing the `dataset` object.
The code below uses roughly the same amount of memory (and time) when actually accessing the column's data.
```python
from datasets import Dataset
import numpy as np
column_name = "a"
array_shape = (64, 64, 3)
data = np.random.random((10000,) + array_shape)
dataset = Dataset.from_dict({column_name: data})
dataset[column_name]
```
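A minimal sketch for quantifying the overhead (assumes `psutil` is installed; numbers vary by machine):
```python
import os

import numpy as np
import psutil
from datasets import Array3D, Dataset, Features

def rss_gb():
    # Resident set size of the current process, in GB.
    return psutil.Process(os.getpid()).memory_info().rss / 1e9

array_shape = (64, 64, 3)
data = np.random.random((1000,) + array_shape)  # ~0.1 GB of float64
print(f"RSS before: {rss_gb():.2f} GB")
dataset = Dataset.from_dict(
    {"a": data}, features=Features({"a": Array3D(shape=array_shape, dtype="float64")})
)
print(f"RSS after:  {rss_gb():.2f} GB")
```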
## Expected results
Some memory overhead is expected, but not of this magnitude, and certainly not the runtime overhead currently observed.
## Actual results
Enormous memory and runtime overhead.
## Environment info
- `datasets` version: 2.3.2
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4964/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4964/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4963/comments | https://api.github.com/repos/huggingface/datasets/issues/4963/events | https://github.com/huggingface/datasets/issues/4963 | 1,368,201,188 | I_kwDODunzps5RjRfk | 4,963 | Dataset without script does not support regular JSON data file | {
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c"
} | [] | closed | false | null | [] | null | [] | 2022-09-09T18:45:33Z | 2022-09-20T15:40:07Z | 2022-09-20T15:40:07Z | MEMBER | null | null | null | ### Link
https://huggingface.co/datasets/julien-c/label-studio-my-dogs
### Description
<img width="1115" alt="image" src="https://user-images.githubusercontent.com/326577/189422048-7e9c390f-bea7-4521-a232-43f049ccbd1f.png">
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4963/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4963/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4962/comments | https://api.github.com/repos/huggingface/datasets/issues/4962/events | https://github.com/huggingface/datasets/pull/4962 | 1,368,155,365 | PR_kwDODunzps4-sh-o | 4,962 | Update setup.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4",
"events_url": "https://api.github.com/users/DCNemesis/events{/privacy}",
"followers_url": "https://api.github.com/users/DCNemesis/followers",
"following_url": "https://api.github.com/users/DCNemesis/following{/other_user}",
"gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DCNemesis",
"id": 3616964,
"login": "DCNemesis",
"node_id": "MDQ6VXNlcjM2MTY5NjQ=",
"organizations_url": "https://api.github.com/users/DCNemesis/orgs",
"received_events_url": "https://api.github.com/users/DCNemesis/received_events",
"repos_url": "https://api.github.com/users/DCNemesis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DCNemesis"
} | [] | closed | false | null | [] | null | [] | 2022-09-09T17:57:56Z | 2022-09-12T14:33:04Z | 2022-09-12T14:33:04Z | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4962.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4962",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4962.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4962"
} | Exclude the broken version of fsspec. See the [related issue](https://github.com/huggingface/datasets/issues/4961). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4962/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4962/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4961/comments | https://api.github.com/repos/huggingface/datasets/issues/4961/events | https://github.com/huggingface/datasets/issues/4961 | 1,368,124,033 | I_kwDODunzps5Ri-qB | 4,961 | fsspec 2022.8.2 breaks xopen in streaming mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4",
"events_url": "https://api.github.com/users/DCNemesis/events{/privacy}",
"followers_url": "https://api.github.com/users/DCNemesis/followers",
"following_url": "https://api.github.com/users/DCNemesis/following{/other_user}",
"gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DCNemesis",
"id": 3616964,
"login": "DCNemesis",
"node_id": "MDQ6VXNlcjM2MTY5NjQ=",
"organizations_url": "https://api.github.com/users/DCNemesis/orgs",
"received_events_url": "https://api.github.com/users/DCNemesis/received_events",
"repos_url": "https://api.github.com/users/DCNemesis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DCNemesis"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-09-09T17:26:55Z | 2022-09-12T17:45:50Z | 2022-09-12T14:32:05Z | NONE | null | null | null | ## Describe the bug
When fsspec 2022.8.2 is installed in your environment, xopen will prematurely close files, making streaming mode inoperable.
## Steps to reproduce the bug
```python
import datasets
data = datasets.load_dataset('MLCommons/ml_spoken_words', 'id_wav', split='train', streaming=True)
```
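A quick sanity check / hedged workaround until a fix lands:
```python
import fsspec

# 2022.8.2 is the broken release reported here; earlier releases stream fine.
assert fsspec.__version__ != "2022.8.2", "pip install 'fsspec<2022.8.2'"
```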
## Expected results
Dataset should load as iterator.
## Actual results
```
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1737 # Return iterable dataset in case of streaming
1738 if streaming:
-> 1739 return builder_instance.as_streaming_dataset(split=split)
1740
1741 # Some datasets are already processed on the HF google storage
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path)
1023 )
1024 self._check_manual_download(dl_manager)
-> 1025 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
1026 # By default, return all splits
1027 if split is None:
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _split_generators(self, dl_manager)
182 name=datasets.Split.TRAIN,
183 gen_kwargs={
--> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages],
185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in
186 self.config.languages] if not dl_manager.is_streaming else None,
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in <listcomp>(.0)
182 name=datasets.Split.TRAIN,
183 gen_kwargs={
--> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages],
185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in
186 self.config.languages] if not dl_manager.is_streaming else None,
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives(dl_manager, lang, format, split)
267 # for streaming case
268 def _download_audio_archives(dl_manager, lang, format, split):
--> 269 archives_paths = _download_audio_archives_paths(dl_manager, lang, format, split)
270 return [dl_manager.iter_archive(archive_path) for archive_path in archives_paths]
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives_paths(dl_manager, lang, format, split)
251 n_files_path = dl_manager.download(n_files_url)
252
--> 253 with open(n_files_path, "r", encoding="utf-8") as file:
254 n_files = int(file.read().strip()) # the file contains a number of archives
255
ValueError: I/O operation on closed file.
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4961/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4961/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4960/comments | https://api.github.com/repos/huggingface/datasets/issues/4960/events | https://github.com/huggingface/datasets/issues/4960 | 1,368,035,159 | I_kwDODunzps5Rio9X | 4,960 | BioASQ AttributeError: 'BuilderConfig' object has no attribute 'schema' | {
"avatar_url": "https://avatars.githubusercontent.com/u/8426290?v=4",
"events_url": "https://api.github.com/users/DSLituiev/events{/privacy}",
"followers_url": "https://api.github.com/users/DSLituiev/followers",
"following_url": "https://api.github.com/users/DSLituiev/following{/other_user}",
"gists_url": "https://api.github.com/users/DSLituiev/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DSLituiev",
"id": 8426290,
"login": "DSLituiev",
"node_id": "MDQ6VXNlcjg0MjYyOTA=",
"organizations_url": "https://api.github.com/users/DSLituiev/orgs",
"received_events_url": "https://api.github.com/users/DSLituiev/received_events",
"repos_url": "https://api.github.com/users/DSLituiev/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DSLituiev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DSLituiev/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DSLituiev"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | open | false | null | [] | null | [] | 2022-09-09T16:06:43Z | 2022-09-13T08:51:03Z | null | NONE | null | null | null | ## Describe the bug
I am trying to load a dataset from a local drive and am running into an error.
## Steps to reproduce the bug
```python
data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b"
bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir)
```
## Actual results
`AttributeError: 'BuilderConfig' object has no attribute 'schema'`
<details>
```
Using custom data configuration default-a1ca3e05be5abf2f
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [8], in <cell line: 2>()
1 data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b"
----> 2 bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir)
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1723, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1720 ignore_verifications = ignore_verifications or save_infos
1722 # Create a dataset builder
-> 1723 builder_instance = load_dataset_builder(
1724 path=path,
1725 name=name,
1726 data_dir=data_dir,
1727 data_files=data_files,
1728 cache_dir=cache_dir,
1729 features=features,
1730 download_config=download_config,
1731 download_mode=download_mode,
1732 revision=revision,
1733 use_auth_token=use_auth_token,
1734 **config_kwargs,
1735 )
1737 # Return iterable dataset in case of streaming
1738 if streaming:
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1526, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1523 raise ValueError(error_msg)
1525 # Instantiate the dataset builder
-> 1526 builder_instance: DatasetBuilder = builder_cls(
1527 cache_dir=cache_dir,
1528 config_name=config_name,
1529 data_dir=data_dir,
1530 data_files=data_files,
1531 hash=hash,
1532 features=features,
1533 use_auth_token=use_auth_token,
1534 **builder_kwargs,
1535 **config_kwargs,
1536 )
1538 return builder_instance
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:1154, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs)
1153 def __init__(self, *args, writer_batch_size=None, **kwargs):
-> 1154 super().__init__(*args, **kwargs)
1155 # Batch size used by the ArrowWriter
1156 # It defines the number of samples that are kept in memory before writing them
1157 # and also the length of the arrow chunks
1158 # None means that the ArrowWriter will use its default value
1159 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:307, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs)
305 if info is None:
306 info = self.get_exported_dataset_info()
--> 307 info.update(self._info())
308 info.builder_name = self.name
309 info.config_name = self.config.name
File ~/.cache/huggingface/modules/datasets_modules/datasets/aps--bioasq_task_b/3d54b1213f7e8001eef755af92877f9efa44161ee83c2a70d5d649defa95759e/bioasq_task_b.py:477, in BioasqTaskBDataset._info(self)
474 def _info(self):
475
476 # BioASQ Task B source schema
--> 477 if self.config.schema == "source":
478 features = datasets.Features(
479 {
480 "id": datasets.Value("string"),
(...)
504 }
505 )
506 # simplified schema for QA tasks
AttributeError: 'BuilderConfig' object has no attribute 'schema'
```
</details>
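A hedged guess at a workaround: since the default `BuilderConfig` has no `schema` attribute, select one of the script's named configs explicitly (the config name below is an assumption; check the script for valid names):
```python
from datasets import load_dataset

data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b"
bioasq_task_b = load_dataset("aps/bioasq_task_b", name="bioasq_9b_source", data_dir=data_dir)  # hypothetical config name
```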
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.4
- PyArrow version: 9.0.0
- Pandas version: 1.4.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4960/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4960/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4959/comments | https://api.github.com/repos/huggingface/datasets/issues/4959/events | https://github.com/huggingface/datasets/pull/4959 | 1,367,924,429 | PR_kwDODunzps4-rx6l | 4,959 | Fix data URLs of compguesswhat dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-09-09T14:36:10Z | 2022-09-09T16:01:34Z | 2022-09-09T15:59:04Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4959.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4959",
"merged_at": "2022-09-09T15:59:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4959.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4959"
} | After we informed the `compguesswhat` dataset authors about an error with their data URLs, they updated them:
- https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1
This PR updates their data URLs in our loading script.
Related to:
- #3191 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4959/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4959/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4958/comments | https://api.github.com/repos/huggingface/datasets/issues/4958/events | https://github.com/huggingface/datasets/issues/4958 | 1,367,695,376 | I_kwDODunzps5RhWAQ | 4,958 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.4.0/datasets/jsonl/jsonl.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/66322047?v=4",
"events_url": "https://api.github.com/users/hasakikiki/events{/privacy}",
"followers_url": "https://api.github.com/users/hasakikiki/followers",
"following_url": "https://api.github.com/users/hasakikiki/following{/other_user}",
"gists_url": "https://api.github.com/users/hasakikiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hasakikiki",
"id": 66322047,
"login": "hasakikiki",
"node_id": "MDQ6VXNlcjY2MzIyMDQ3",
"organizations_url": "https://api.github.com/users/hasakikiki/orgs",
"received_events_url": "https://api.github.com/users/hasakikiki/received_events",
"repos_url": "https://api.github.com/users/hasakikiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hasakikiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasakikiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hasakikiki"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-09-09T11:29:55Z | 2022-09-09T11:38:44Z | 2022-09-09T11:38:44Z | NONE | null | null | null | Hi,
When I use `load_dataset` with local JSON Lines files, the error below occurs, and typing the link into a browser returns `404: Not Found`. I can download the other `.py` files using the same method and they work. It seems that the server is missing the appropriate file, or there is a version mismatch in the code.
```
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x2b08342004c0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
```
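Worth noting: `datasets` ships a packaged `json` builder (which also reads JSON Lines) but no `jsonl` builder, which would explain the 404 on `jsonl/jsonl.py`. A sketch of the usual invocation (the file name is an assumption):
```python
from datasets import load_dataset

ds = load_dataset("json", data_files="train.jsonl")
```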
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4958/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4958/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4957 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4957/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4957/comments | https://api.github.com/repos/huggingface/datasets/issues/4957/events | https://github.com/huggingface/datasets/pull/4957 | 1,366,532,849 | PR_kwDODunzps4-nGIk | 4,957 | Add `Dataset.from_generator` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | 2022-09-08T15:08:25Z | 2022-09-16T14:46:35Z | 2022-09-16T14:44:18Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4957.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4957",
"merged_at": "2022-09-16T14:44:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4957.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4957"
} | Add `Dataset.from_generator` to the API to allow creating datasets from data larger than RAM. The implementation relies on a packaged module, not exposed in `load_dataset`, to tie this method into `datasets`' caching mechanism.
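A minimal usage sketch of the new method:
```python
from datasets import Dataset

def gen():
    for i in range(3):
        yield {"id": i, "text": f"example {i}"}

ds = Dataset.from_generator(gen)
```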
Closes https://github.com/huggingface/datasets/issues/4417 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4957/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4957/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4956/comments | https://api.github.com/repos/huggingface/datasets/issues/4956/events | https://github.com/huggingface/datasets/pull/4956 | 1,366,475,160 | PR_kwDODunzps4-m5NU | 4,956 | Fix TF tests for 2.10 | {
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1"
} | [] | closed | false | null | [] | null | [] | 2022-09-08T14:39:10Z | 2022-09-08T15:16:51Z | 2022-09-08T15:14:44Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4956.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4956",
"merged_at": "2022-09-08T15:14:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4956.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4956"
} | Fixes #4953 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4956/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4956/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4955/comments | https://api.github.com/repos/huggingface/datasets/issues/4955/events | https://github.com/huggingface/datasets/issues/4955 | 1,366,382,314 | I_kwDODunzps5RcVbq | 4,955 | Raise a more precise error when the URL is unreachable in streaming mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2022-09-08T13:52:37Z | 2022-09-08T13:53:36Z | null | CONTRIBUTOR | null | null | null | See for example:
- https://github.com/huggingface/datasets/issues/3191
- https://github.com/huggingface/datasets/issues/3186
It would help provide clearer information on the Hub and help the dataset maintainer solve the issue by themselves quicker. Currently:
- https://huggingface.co/datasets/compguesswhat
<img width="1029" alt="Capture d’écran 2022-09-08 à 15 51 37" src="https://user-images.githubusercontent.com/1676121/189139946-6deffb91-f21b-4281-8825-a98026c69740.png">
- https://huggingface.co/datasets/nli_tr
<img width="1032" alt="Capture d’écran 2022-09-08 à 15 51 44" src="https://user-images.githubusercontent.com/1676121/189139963-d26490ed-ad23-48ea-9cfc-1ab9c4d08d0c.png">
cc @albertvillanova | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4955/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4955/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4954/comments | https://api.github.com/repos/huggingface/datasets/issues/4954/events | https://github.com/huggingface/datasets/pull/4954 | 1,366,369,682 | PR_kwDODunzps4-mhl5 | 4,954 | Pin TensorFlow temporarily | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-09-08T13:46:15Z | 2022-09-08T14:12:33Z | 2022-09-08T14:10:03Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4954.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4954",
"merged_at": "2022-09-08T14:10:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4954.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4954"
} | Temporarily pin TensorFlow until a permanent solution is found.
Related to:
- #4953 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4954/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4954/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4953/comments | https://api.github.com/repos/huggingface/datasets/issues/4953/events | https://github.com/huggingface/datasets/issues/4953 | 1,366,356,514 | I_kwDODunzps5RcPIi | 4,953 | CI test of TensorFlow is failing | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-09-08T13:39:29Z | 2022-09-08T15:14:45Z | 2022-09-08T15:14:45Z | MEMBER | null | null | null | ## Describe the bug
The following CI test fails: https://github.com/huggingface/datasets/runs/8246722693?check_suite_focus=true
```
FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - AssertionError:
```
Details:
```
_________________________ TempSeedTest.test_tensorflow _________________________
[gw0] linux -- Python 3.7.13 /opt/hostedtoolcache/Python/3.7.13/x64/bin/python
self = <tests.test_py_utils.TempSeedTest testMethod=test_tensorflow>

    @require_tf
    def test_tensorflow(self):
        import tensorflow as tf
        from tensorflow.keras import layers

        def gen_random_output():
            model = layers.Dense(2)
            x = tf.random.uniform((1, 3))
            return model(x).numpy()

        with temp_seed(42, set_tensorflow=True):
            out1 = gen_random_output()
        with temp_seed(42, set_tensorflow=True):
            out2 = gen_random_output()
        out3 = gen_random_output()

>       np.testing.assert_equal(out1, out2)
E       AssertionError:
E       Arrays are not equal
E
E       Mismatched elements: 2 / 2 (100%)
E       Max absolute difference: 0.84619296
E       Max relative difference: 16.083529
E        x: array([[-0.793581, 0.333286]], dtype=float32)
E        y: array([[0.052612, 0.539708]], dtype=float32)

tests/test_py_utils.py:149: AssertionError
```
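The failure is consistent with the graph-level seed set by `temp_seed` no longer reaching the Keras layer initializer in TF 2.10. A minimal sketch that makes the snippet deterministic again by seeding the initializer explicitly — this is a workaround illustration, not the `temp_seed` fix:
```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def gen_random_output(seed: int) -> np.ndarray:
    # Seeding the initializer directly keeps the weights deterministic even if
    # the global graph-level seed is not picked up by Keras.
    model = layers.Dense(2, kernel_initializer=tf.keras.initializers.GlorotUniform(seed=seed))
    tf.random.set_seed(seed)  # fixes the seed consumed by tf.random.uniform below
    x = tf.random.uniform((1, 3))
    return model(x).numpy()

np.testing.assert_equal(gen_random_output(42), gen_random_output(42))
```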
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4953/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4953/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4952 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4952/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4952/comments | https://api.github.com/repos/huggingface/datasets/issues/4952/events | https://github.com/huggingface/datasets/pull/4952 | 1,366,354,604 | PR_kwDODunzps4-meM0 | 4,952 | Add test-datasets CI job | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2022-09-08T13:38:30Z | 2022-09-16T13:28:02Z | 2022-09-16T13:25:48Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4952.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4952",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4952.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4952"
} | To avoid having too many conflicts in the dataset and metric dependencies, I split the CI into `test` and `test-catalog`:
`test` runs the tests for the core of the `datasets` lib, while `test-catalog` tests the dataset scripts and metric scripts.
This also makes `pip install -e .[dev]` much smaller for developers
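A rough sketch of what such an extras split could look like in `setup.py` — all names and packages below are hypothetical, not the actual diff:
```python
# Hypothetical extras split: `pip install -e .[dev]` pulls only core test deps,
# while the catalog CI job installs the heavy dataset/metric script deps too.
TESTS_REQUIRE = ["pytest", "pytest-xdist"]
CATALOG_REQUIRE = ["apache-beam", "tensorflow", "torch"]  # hypothetical heavy deps

EXTRAS_REQUIRE = {
    "dev": TESTS_REQUIRE,
    "tests_catalog": TESTS_REQUIRE + CATALOG_REQUIRE,
}
```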
WDYT @albertvillanova ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4952/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4952/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4951/comments | https://api.github.com/repos/huggingface/datasets/issues/4951/events | https://github.com/huggingface/datasets/pull/4951 | 1,365,954,814 | PR_kwDODunzps4-lDqd | 4,951 | Fix license information in qasc dataset card | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-09-08T10:04:39Z | 2022-09-08T14:54:47Z | 2022-09-08T14:52:05Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4951.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4951",
"merged_at": "2022-09-08T14:52:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4951.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4951"
} | This PR adds the license information to `qasc` dataset, once reported via GitHub by Tushar Khot, the dataset is licensed under CC BY 4.0:
- https://github.com/allenai/qasc/issues/5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4951/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4951/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4950/comments | https://api.github.com/repos/huggingface/datasets/issues/4950/events | https://github.com/huggingface/datasets/pull/4950 | 1,365,458,633 | PR_kwDODunzps4-jWZ1 | 4,950 | Update Enwik8 broken link and information | {
"avatar_url": "https://avatars.githubusercontent.com/u/54819091?v=4",
"events_url": "https://api.github.com/users/mtanghu/events{/privacy}",
"followers_url": "https://api.github.com/users/mtanghu/followers",
"following_url": "https://api.github.com/users/mtanghu/following{/other_user}",
"gists_url": "https://api.github.com/users/mtanghu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mtanghu",
"id": 54819091,
"login": "mtanghu",
"node_id": "MDQ6VXNlcjU0ODE5MDkx",
"organizations_url": "https://api.github.com/users/mtanghu/orgs",
"received_events_url": "https://api.github.com/users/mtanghu/received_events",
"repos_url": "https://api.github.com/users/mtanghu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mtanghu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mtanghu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mtanghu"
} | [] | closed | false | null | [] | null | [] | 2022-09-08T03:15:00Z | 2022-09-24T22:14:35Z | 2022-09-08T14:51:00Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4950.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4950",
"merged_at": "2022-09-08T14:51:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4950.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4950"
} | The current enwik8 dataset link gives a 502 Bad Gateway error, which can be seen at https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This PR corrects the links and JSON metadata, and adds a bit more information about enwik8. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4950/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4950/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4949/comments | https://api.github.com/repos/huggingface/datasets/issues/4949/events | https://github.com/huggingface/datasets/pull/4949 | 1,365,251,916 | PR_kwDODunzps4-iqzI | 4,949 | Update enwik8 fixing the broken link | {
"avatar_url": "https://avatars.githubusercontent.com/u/54819091?v=4",
"events_url": "https://api.github.com/users/mtanghu/events{/privacy}",
"followers_url": "https://api.github.com/users/mtanghu/followers",
"following_url": "https://api.github.com/users/mtanghu/following{/other_user}",
"gists_url": "https://api.github.com/users/mtanghu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mtanghu",
"id": 54819091,
"login": "mtanghu",
"node_id": "MDQ6VXNlcjU0ODE5MDkx",
"organizations_url": "https://api.github.com/users/mtanghu/orgs",
"received_events_url": "https://api.github.com/users/mtanghu/received_events",
"repos_url": "https://api.github.com/users/mtanghu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mtanghu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mtanghu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mtanghu"
} | [] | closed | false | null | [] | null | [] | 2022-09-07T22:17:14Z | 2022-09-08T03:14:04Z | 2022-09-08T03:14:04Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4949.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4949",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4949.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4949"
} | The current enwik8 dataset link gives a 502 Bad Gateway error, which can be seen at https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This PR corrects the links and JSON metadata, and adds a bit more information about enwik8. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4949/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4949/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4948/comments | https://api.github.com/repos/huggingface/datasets/issues/4948/events | https://github.com/huggingface/datasets/pull/4948 | 1,364,973,778 | PR_kwDODunzps4-hwsl | 4,948 | Fix minor typo in error message for missing imports | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | 2022-09-07T17:20:51Z | 2022-09-08T14:59:31Z | 2022-09-08T14:57:15Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4948.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4948",
"merged_at": "2022-09-08T14:57:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4948.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4948"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4948/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4948/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4947/comments | https://api.github.com/repos/huggingface/datasets/issues/4947/events | https://github.com/huggingface/datasets/pull/4947 | 1,364,967,957 | PR_kwDODunzps4-hvbq | 4,947 | Try to fix the Windows CI after TF update 2.10 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2022-09-07T17:14:49Z | 2022-09-08T09:13:10Z | 2022-09-08T09:13:10Z | MEMBER | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/4947.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4947",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4947.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4947"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4947/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4947/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4946 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4946/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4946/comments | https://api.github.com/repos/huggingface/datasets/issues/4946/events | https://github.com/huggingface/datasets/pull/4946 | 1,364,692,069 | PR_kwDODunzps4-g0Hz | 4,946 | Introduce regex check when pushing as well | {
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LysandreJik",
"id": 30755778,
"login": "LysandreJik",
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LysandreJik"
} | [] | closed | false | null | [] | null | [] | 2022-09-07T13:45:58Z | 2022-09-13T10:19:01Z | 2022-09-13T10:16:34Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4946.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4946",
"merged_at": "2022-09-13T10:16:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4946.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4946"
} | Closes https://github.com/huggingface/datasets/issues/4945 by adding a regex check when pushing to hub.
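A rough sketch of the kind of check this adds at push time — the pattern is the one `datasets` already enforces when loading splits (see the traceback in #4945); the function name is hypothetical and the actual patch may differ:
```python
import re

_split_re = r"^\w+(\.\w+)*$"  # same pattern used when loading splits

def check_split_name(split_name: str) -> None:
    # Fail fast at push time instead of at load time.
    if re.match(_split_re, split_name) is None:
        raise ValueError(f"Split name should match '{_split_re}' but got '{split_name}'.")
```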
Let me know if this is helpful and if it's the fix you had in mind for the issue; I'm happy to contribute tests. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4946/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4946/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4945/comments | https://api.github.com/repos/huggingface/datasets/issues/4945/events | https://github.com/huggingface/datasets/issues/4945 | 1,364,691,096 | I_kwDODunzps5RV4iY | 4,945 | Push to hub can push splits that do not respect the regex | {
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LysandreJik",
"id": 30755778,
"login": "LysandreJik",
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LysandreJik"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-09-07T13:45:17Z | 2022-09-13T10:16:35Z | 2022-09-13T10:16:35Z | MEMBER | null | null | null | ## Describe the bug
The `push_to_hub` method can push splits that do not respect the regex check that is used for downloads. Therefore, splits may be pushed but never re-used, which can be painful if the split was done after runtime preprocessing.
## Steps to reproduce the bug
```python
>>> from datasets import Dataset, DatasetDict, load_dataset
>>> d = Dataset.from_dict({'x': [1,2,3], 'y': [1,2,3]})
>>> di = DatasetDict()
>>> di['identifier-with-column'] = d
>>> di.push_to_hub('open-source-metrics/test')
Pushing split identifier-with-column to the Hub.
Pushing dataset shards to the dataset hub: 100%|██████████| 1/1 [00:04<00:00, 4.40s/it]
```
Loading it afterwards:
```python
>>> load_dataset('open-source-metrics/test')
Downloading: 100%|██████████| 610/610 [00:00<00:00, 432kB/s]
Using custom data configuration open-source-metrics--test-28b63ec7cde80488
Downloading and preparing dataset None/None (download: 950 bytes, generated: 48 bytes, post-processed: Unknown size, total: 998 bytes) to /home/lysandre/.cache/huggingface/datasets/open-source-metrics___parquet/open-source-metrics--test-28b63ec7cde80488/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 100%|██████████| 950/950 [00:00<00:00, 1.01MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.48s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 2291.97it/s]
Traceback (most recent call last):
File "/home/lysandre/.pyenv/versions/3.10.6/lib/python3.10/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 771, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 48, in _split_generators
splits.append(datasets.SplitGenerator(name=split_name, gen_kwargs={"files": files}))
File "<string>", line 5, in __init__
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 599, in __post_init__
NamedSplit(self.name) # check that it's a valid split name
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 346, in __init__
raise ValueError(f"Split name should match '{_split_re}' but got '{split_name}'.")
ValueError: Split name should match '^\w+(\.\w+)*$' but got 'identifier-with-column'.
```
## Expected results
I would expect `push_to_hub` to stop me in my tracks when trying to upload a split that will not work afterwards.
## Actual results
See above
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.15.64-1-lts-x86_64-with-glibc2.36
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
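Until such a push-time check exists, a workaround is to pick a split name that already matches the pattern (underscores instead of dashes) — a sketch on the same toy data, assuming you are logged in to the Hub:
```python
from datasets import Dataset, DatasetDict

d = Dataset.from_dict({"x": [1, 2, 3], "y": [1, 2, 3]})
di = DatasetDict()
di["identifier_with_column"] = d  # '\w' matches underscores, so this loads back fine
di.push_to_hub("open-source-metrics/test")
```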
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4945/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4945/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4944/comments | https://api.github.com/repos/huggingface/datasets/issues/4944/events | https://github.com/huggingface/datasets/issues/4944 | 1,364,313,569 | I_kwDODunzps5RUcXh | 4,944 | larger dataset, larger GPU memory in the training phase? Is that correct? | {
"avatar_url": "https://avatars.githubusercontent.com/u/38886373?v=4",
"events_url": "https://api.github.com/users/debby1103/events{/privacy}",
"followers_url": "https://api.github.com/users/debby1103/followers",
"following_url": "https://api.github.com/users/debby1103/following{/other_user}",
"gists_url": "https://api.github.com/users/debby1103/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/debby1103",
"id": 38886373,
"login": "debby1103",
"node_id": "MDQ6VXNlcjM4ODg2Mzcz",
"organizations_url": "https://api.github.com/users/debby1103/orgs",
"received_events_url": "https://api.github.com/users/debby1103/received_events",
"repos_url": "https://api.github.com/users/debby1103/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/debby1103/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/debby1103/subscriptions",
"type": "User",
"url": "https://api.github.com/users/debby1103"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-09-07T08:46:30Z | 2022-09-07T12:34:58Z | 2022-09-07T12:34:58Z | NONE | null | null | null | ```python
from datasets import concatenate_datasets, load_from_disk, set_caching_enabled

set_caching_enabled(False)

for ds_name in ["squad", "newsqa", "nqopen", "narrativeqa"]:
    train_ds = load_from_disk("../../../dall/downstream/processedproqa/{}-train.hf".format(ds_name))
    break

train_ds = concatenate_datasets([train_ds, train_ds, train_ds, train_ds])  # operation 1

trainer = QuestionAnsweringTrainer(  # huggingface trainer
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=None,
    eval_examples=None,
    answer_column_name=answer_column,
    dataset_name="squad",
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics if training_args.predict_with_generate else None,
)
```
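`concatenate_datasets` does not copy the underlying Arrow data (tables loaded with `load_from_disk` are memory-mapped), so a quick host-memory probe can help separate dataset growth from trainer/GPU growth — a sketch assuming `psutil` is installed, with a placeholder path:
```python
import os

import psutil
from datasets import concatenate_datasets, load_from_disk

proc = psutil.Process(os.getpid())
train_ds = load_from_disk("path/to/squad-train.hf")  # placeholder for the path above

rss_before = proc.memory_info().rss
train_ds = concatenate_datasets([train_ds, train_ds, train_ds, train_ds])
rss_after = proc.memory_info().rss
# Expected to be close to 0 MB, since the Arrow tables are memory-mapped.
print(f"RSS growth: {(rss_after - rss_before) / 1e6:.1f} MB")
```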
With operation 1, the GPU memory increases from 16 GB to 23 GB. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4944/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4944/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4943/comments | https://api.github.com/repos/huggingface/datasets/issues/4943/events | https://github.com/huggingface/datasets/pull/4943 | 1,363,967,650 | PR_kwDODunzps4-eZd_ | 4,943 | Add splits to MBPP dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/2788526?v=4",
"events_url": "https://api.github.com/users/cwarny/events{/privacy}",
"followers_url": "https://api.github.com/users/cwarny/followers",
"following_url": "https://api.github.com/users/cwarny/following{/other_user}",
"gists_url": "https://api.github.com/users/cwarny/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cwarny",
"id": 2788526,
"login": "cwarny",
"node_id": "MDQ6VXNlcjI3ODg1MjY=",
"organizations_url": "https://api.github.com/users/cwarny/orgs",
"received_events_url": "https://api.github.com/users/cwarny/received_events",
"repos_url": "https://api.github.com/users/cwarny/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cwarny/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cwarny/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cwarny"
} | [] | closed | false | null | [] | null | [] | 2022-09-07T01:18:31Z | 2022-09-13T12:29:19Z | 2022-09-13T12:27:21Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4943.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4943",
"merged_at": "2022-09-13T12:27:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4943.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4943"
} | This PR addresses https://github.com/huggingface/datasets/issues/4795 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4943/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4943/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4942/comments | https://api.github.com/repos/huggingface/datasets/issues/4942/events | https://github.com/huggingface/datasets/issues/4942 | 1,363,869,421 | I_kwDODunzps5RSv7t | 4,942 | Trec Dataset has incorrect labels | {
"avatar_url": "https://avatars.githubusercontent.com/u/6539145?v=4",
"events_url": "https://api.github.com/users/wmpauli/events{/privacy}",
"followers_url": "https://api.github.com/users/wmpauli/followers",
"following_url": "https://api.github.com/users/wmpauli/following{/other_user}",
"gists_url": "https://api.github.com/users/wmpauli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wmpauli",
"id": 6539145,
"login": "wmpauli",
"node_id": "MDQ6VXNlcjY1MzkxNDU=",
"organizations_url": "https://api.github.com/users/wmpauli/orgs",
"received_events_url": "https://api.github.com/users/wmpauli/received_events",
"repos_url": "https://api.github.com/users/wmpauli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wmpauli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wmpauli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wmpauli"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-09-06T22:13:40Z | 2022-09-08T11:12:03Z | 2022-09-08T11:12:03Z | NONE | null | null | null | ## Describe the bug
Both the coarse and fine labels appear to be mapped incorrectly.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = "trec"
raw_datasets = load_dataset(dataset)
df = pd.DataFrame(raw_datasets["test"])
df.head()
```
## Expected results
text (string) | coarse_label (class label) | fine_label (class label)
-- | -- | --
How far is it from Denver to Aspen ? | 5 (NUM) | 40 (NUM:dist)
What county is Modesto , California in ? | 4 (LOC) | 32 (LOC:city)
Who was Galileo ? | 3 (HUM) | 31 (HUM:desc)
What is an atom ? | 2 (DESC) | 24 (DESC:def)
When did Hawaii become a state ? | 5 (NUM) | 39 (NUM:date)
## Actual results
index | label-coarse | label-fine | text
-- | -- | -- | --
0 | 4 | 40 | How far is it from Denver to Aspen ?
1 | 5 | 21 | What county is Modesto , California in ?
2 | 3 | 12 | Who was Galileo ?
3 | 0 | 7 | What is an atom ?
4 | 4 | 8 | When did Hawaii become a state ?
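One way to confirm the mismatch is to decode the integer ids back into their names via the `ClassLabel` features — a sketch, using the column names shown in the actual output above:
```python
from datasets import load_dataset

raw_datasets = load_dataset("trec")
features = raw_datasets["test"].features
# Row 0 should decode to NUM / NUM:dist according to the docs; compare with
# what ids 4 and 40 actually map to in the loaded dataset.
print(features["label-coarse"].int2str(4), features["label-fine"].int2str(40))
```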
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-1086-azure-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4942/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4942/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4941/comments | https://api.github.com/repos/huggingface/datasets/issues/4941/events | https://github.com/huggingface/datasets/pull/4941 | 1,363,622,861 | PR_kwDODunzps4-dQ9F | 4,941 | Add Papers with Code ID to scifact dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-09-06T17:46:37Z | 2022-09-06T18:28:17Z | 2022-09-06T18:26:01Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4941.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4941",
"merged_at": "2022-09-06T18:26:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4941.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4941"
} | This PR:
- adds Papers with Code ID
- forces a sync between GitHub and the Hub, which previously failed due to a Hub validation error on the license tag: https://github.com/huggingface/datasets/runs/8200223631?check_suite_focus=true
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4941/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4941/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4940/comments | https://api.github.com/repos/huggingface/datasets/issues/4940/events | https://github.com/huggingface/datasets/pull/4940 | 1,363,513,058 | PR_kwDODunzps4-c6WY | 4,940 | Fix multilinguality tag and missing sections in xquad_r dataset card | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-09-06T16:05:35Z | 2022-09-12T10:11:07Z | 2022-09-12T10:08:48Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4940.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4940",
"merged_at": "2022-09-12T10:08:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4940.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4940"
} | This PR fixes issue reported on the Hub:
- Label as multilingual: https://huggingface.co/datasets/xquad_r/discussions/1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4940/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4940/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4939/comments | https://api.github.com/repos/huggingface/datasets/issues/4939/events | https://github.com/huggingface/datasets/pull/4939 | 1,363,468,679 | PR_kwDODunzps4-cw4A | 4,939 | Fix NonMatchingChecksumError in adv_glue dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-09-06T15:31:16Z | 2022-09-06T17:42:10Z | 2022-09-06T17:39:16Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4939.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4939",
"merged_at": "2022-09-06T17:39:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4939.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4939"
} | Fix issue reported on the Hub: https://huggingface.co/datasets/adv_glue/discussions/1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4939/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4939/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4938/comments | https://api.github.com/repos/huggingface/datasets/issues/4938/events | https://github.com/huggingface/datasets/pull/4938 | 1,363,429,228 | PR_kwDODunzps4-coaB | 4,938 | Remove main branch rename notice | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2022-09-06T15:03:05Z | 2022-09-06T16:46:11Z | 2022-09-06T16:43:53Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4938.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4938",
"merged_at": "2022-09-06T16:43:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4938.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4938"
} | We added a notice in README.md to show that we renamed the master branch to main, but we can remove it now (it's been 2 months).
I also unpinned the GitHub issue about the branch renaming.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4938/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4938/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4937/comments | https://api.github.com/repos/huggingface/datasets/issues/4937/events | https://github.com/huggingface/datasets/pull/4937 | 1,363,426,946 | PR_kwDODunzps4-cn6W | 4,937 | Remove deprecated identical_ok | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2022-09-06T15:01:24Z | 2022-09-06T22:24:09Z | 2022-09-06T22:21:57Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4937.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4937",
"merged_at": "2022-09-06T22:21:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4937.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4937"
} | `huggingface-hub` says that the `identical_ok` argument of `HfApi.upload_file` is now deprecated and will be removed soon. It already has no effect when passed:
```python
Args:
...
identical_ok (`bool`, *optional*, defaults to `True`):
Deprecated: will be removed in 0.11.0.
Changing this value has no effect.
...
```
There was only one occurrence of `identical_ok=False`, but it's maybe not worth adding a check to verify that the files are the same.
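For reference, a call without the deprecated argument — a sketch with a made-up repo id, assuming you are authenticated:
```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="username/my-dataset",  # hypothetical repo
    repo_type="dataset",
)
```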
cc @mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4937/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4937/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4936/comments | https://api.github.com/repos/huggingface/datasets/issues/4936/events | https://github.com/huggingface/datasets/issues/4936 | 1,363,274,907 | I_kwDODunzps5RQeyb | 4,936 | vivos (Vietnamese speech corpus) dataset not accessible | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-09-06T13:17:55Z | 2022-09-21T06:06:02Z | 2022-09-12T07:14:20Z | CONTRIBUTOR | null | null | null | ## Describe the bug
VIVOS data is not accessible anymore; neither of these links works (at least from France):
* https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (data)
* https://ailab.hcmus.edu.vn/vivos (dataset page)
Therefore `load_dataset` doesn't work.
## Steps to reproduce the bug
```python
ds = load_dataset("vivos")
```
## Expected results
dataset loaded
## Actual results
```
ConnectionError: Couldn't reach https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='ailab.hcmus.edu.vn', port=443): Max retries exceeded with url: /assets/vivos.tar.gz (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9d8a27d190>: Failed to establish a new connection: [Errno -5] No address associated with hostname'))")))
```
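A quick probe confirming the host is unreachable — just a connectivity check, not what `load_dataset` does internally:
```python
import requests

url = "https://ailab.hcmus.edu.vn/assets/vivos.tar.gz"
try:
    response = requests.head(url, timeout=10)
    print(response.status_code)
except requests.exceptions.ConnectionError as err:
    print("unreachable:", err)
```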
Will try to contact the authors, as we wanted to use Vivos as an example in documentation on how to create scripts for audio datasets (https://github.com/huggingface/datasets/pull/4872), because it's small and straightforward and uses tar archives. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4936/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4935/comments | https://api.github.com/repos/huggingface/datasets/issues/4935/events | https://github.com/huggingface/datasets/issues/4935 | 1,363,226,736 | I_kwDODunzps5RQTBw | 4,935 | Dataset Viewer issue for ubuntu_dialogs_corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/87330568?v=4",
"events_url": "https://api.github.com/users/CibinQuadance/events{/privacy}",
"followers_url": "https://api.github.com/users/CibinQuadance/followers",
"following_url": "https://api.github.com/users/CibinQuadance/following{/other_user}",
"gists_url": "https://api.github.com/users/CibinQuadance/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CibinQuadance",
"id": 87330568,
"login": "CibinQuadance",
"node_id": "MDQ6VXNlcjg3MzMwNTY4",
"organizations_url": "https://api.github.com/users/CibinQuadance/orgs",
"received_events_url": "https://api.github.com/users/CibinQuadance/received_events",
"repos_url": "https://api.github.com/users/CibinQuadance/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CibinQuadance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CibinQuadance/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CibinQuadance"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
] | null | [] | 2022-09-06T12:41:50Z | 2022-09-06T12:51:25Z | 2022-09-06T12:51:25Z | NONE | null | null | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4935/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4935/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4934/comments | https://api.github.com/repos/huggingface/datasets/issues/4934/events | https://github.com/huggingface/datasets/issues/4934 | 1,363,034,253 | I_kwDODunzps5RPkCN | 4,934 | Dataset Viewer issue for indonesian-nlp/librivox-indonesia | {
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cahya-wirawan",
"id": 7669893,
"login": "cahya-wirawan",
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cahya-wirawan"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-09-06T10:03:23Z | 2022-09-06T12:46:40Z | 2022-09-06T12:46:40Z | CONTRIBUTOR | null | null | null | ### Link
https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
### Description
I created a new speech dataset https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia, but the dataset preview doesn't work, with the following error message:
```
Server error
Status code: 400
Exception: TypeError
Message: unsupported operand type(s) for +: 'NoneType' and 'str'
```
Please help, I am not sure what the problem here is. Thanks a lot.
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4934/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4934/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4933/comments | https://api.github.com/repos/huggingface/datasets/issues/4933/events | https://github.com/huggingface/datasets/issues/4933 | 1,363,013,023 | I_kwDODunzps5RPe2f | 4,933 | Dataset/DatasetDict.filter() cannot have `batched=True` due to `mask` (numpy array?) being non-iterable. | {
"avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4",
"events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}",
"followers_url": "https://api.github.com/users/tianjianjiang/followers",
"following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}",
"gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tianjianjiang",
"id": 4812544,
"login": "tianjianjiang",
"node_id": "MDQ6VXNlcjQ4MTI1NDQ=",
"organizations_url": "https://api.github.com/users/tianjianjiang/orgs",
"received_events_url": "https://api.github.com/users/tianjianjiang/received_events",
"repos_url": "https://api.github.com/users/tianjianjiang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tianjianjiang"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-09-06T09:47:48Z | 2022-09-06T11:44:27Z | 2022-09-06T11:44:27Z | CONTRIBUTOR | null | null | null | ## Describe the bug
`Dataset/DatasetDict.filter()` cannot have `batched=True` due to `mask` (numpy array?) being non-iterable.
## Steps to reproduce the bug
(In a Python 3.7.12 env, I've tried 2.4.0 and 2.3.2 with both `pyarrow==9.0.0` and `pyarrow==8.0.0`.)
```python
from datasets import load_dataset
ds_mc4_ja = load_dataset("mc4", "ja") # This will take 6+ hours... perhaps test it with a toy dataset instead?
ds_mc4_ja_2020 = ds_mc4_ja.filter(
lambda example: example["timestamp"][:4] == "2020",
batched=True,
)
```
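For reference, a batched variant that returns one boolean per example (my understanding of what `filter` may expect with `batched=True`; this sketch continues the snippet above) would look like:
```python
# with batched=True the function receives a batch, so example["timestamp"] is a
# list; return one boolean per row instead of a single scalar
ds_mc4_ja_2020 = ds_mc4_ja.filter(
    lambda batch: [ts[:4] == "2020" for ts in batch["timestamp"]],
    batched=True,
)
```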
## Expected results
No error
## Actual results
```python
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper
out = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2779, in _map_single
offset=offset,
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated
result = f(decorated_item, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 4946, in get_indices_from_mask_function
indices_array = [i for i, to_keep in zip(indices, mask) if to_keep]
TypeError: zip argument #2 must support iteration
"""
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
/tmp/ipykernel_51348/2345782281.py in <module>
7 batched=True,
8 # batch_size=10_000,
----> 9 num_proc=111,
10 )
11 # ds_mc4_ja_clean_2020 = ds_mc4_ja.filter(
/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc)
878 desc=desc,
879 )
--> 880 for k, dataset in self.items()
881 }
882 )
/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0)
878 desc=desc,
879 )
--> 880 for k, dataset in self.items()
881 }
882 )
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
522 }
523 # apply actual function
--> 524 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
525 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
526 # re-apply format to the output
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
478 # Call actual function
479
--> 480 out = func(self, *args, **kwargs)
481
482 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2920 new_fingerprint=new_fingerprint,
2921 input_columns=input_columns,
-> 2922 desc=desc,
2923 )
2924 new_dataset = copy.deepcopy(self)
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2498
2499 for index, async_result in results.items():
-> 2500 transformed_shards[index] = async_result.get()
2501
2502 assert (
/opt/conda/lib/python3.7/site-packages/multiprocess/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):
TypeError: zip argument #2 must support iteration
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.12
- Python version: 3.7.12
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
(I've tried 2.4.0 and 2.3.2 with both `pyarrow==9.0.0` and `pyarrow==8.0.0`.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4933/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4933/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4932/comments | https://api.github.com/repos/huggingface/datasets/issues/4932/events | https://github.com/huggingface/datasets/issues/4932 | 1,362,522,423 | I_kwDODunzps5RNnE3 | 4,932 | Dataset Viewer issue for bigscience-biomedical/biosses | {
"avatar_url": "https://avatars.githubusercontent.com/u/663051?v=4",
"events_url": "https://api.github.com/users/galtay/events{/privacy}",
"followers_url": "https://api.github.com/users/galtay/followers",
"following_url": "https://api.github.com/users/galtay/following{/other_user}",
"gists_url": "https://api.github.com/users/galtay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/galtay",
"id": 663051,
"login": "galtay",
"node_id": "MDQ6VXNlcjY2MzA1MQ==",
"organizations_url": "https://api.github.com/users/galtay/orgs",
"received_events_url": "https://api.github.com/users/galtay/received_events",
"repos_url": "https://api.github.com/users/galtay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/galtay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/galtay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/galtay"
} | [] | closed | false | null | [] | null | [] | 2022-09-05T22:40:32Z | 2022-09-06T14:24:56Z | 2022-09-06T14:24:56Z | NONE | null | null | null | ### Link
https://huggingface.co/datasets/bigscience-biomedical/biosses
### Description
I've just been working on adding the dataset loader script to this dataset and working with the relative imports. I'm not sure how to interpret the error below (shown where the dataset preview used to be).
```
Status code: 400
Exception: ModuleNotFoundError
Message: No module named 'datasets_modules.datasets.bigscience-biomedical--biosses.ddbd5893bf6c2f4db06f407665eaeac619520ba41f69d94ead28f7cc5b674056.bigbiohub'
```
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4932/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4932/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4931/comments | https://api.github.com/repos/huggingface/datasets/issues/4931/events | https://github.com/huggingface/datasets/pull/4931 | 1,362,298,764 | PR_kwDODunzps4-Y3L6 | 4,931 | Fix missing tags in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-09-05T17:03:04Z | 2022-09-22T12:40:15Z | 2022-09-06T05:39:29Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4931.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4931",
"merged_at": "2022-09-06T05:39:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4931.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4931"
} | Fix missing tags in dataset cards:
- coqa
- hyperpartisan_news_detection
- opinosis
- scientific_papers
- scifact
- search_qa
- wiki_qa
- wiki_split
- wikisql
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
- #4908
- #4921 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4931/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4931/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4930/comments | https://api.github.com/repos/huggingface/datasets/issues/4930/events | https://github.com/huggingface/datasets/pull/4930 | 1,362,193,587 | PR_kwDODunzps4-Yflc | 4,930 | Add cc-by-nc-2.0 to list of licenses | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-09-05T15:37:32Z | 2022-09-06T16:43:32Z | 2022-09-05T17:01:04Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4930.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4930",
"merged_at": "2022-09-05T17:01:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4930.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4930"
} | This PR adds the `cc-by-nc-2.0` license to the list of licenses because it is required by the `scifact` dataset: https://github.com/allenai/scifact/blob/master/LICENSE.md | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4930/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4930/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4929/comments | https://api.github.com/repos/huggingface/datasets/issues/4929/events | https://github.com/huggingface/datasets/pull/4929 | 1,361,508,366 | PR_kwDODunzps4-WK2w | 4,929 | Fixes a typo in loading documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/7144772?v=4",
"events_url": "https://api.github.com/users/sighingnow/events{/privacy}",
"followers_url": "https://api.github.com/users/sighingnow/followers",
"following_url": "https://api.github.com/users/sighingnow/following{/other_user}",
"gists_url": "https://api.github.com/users/sighingnow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sighingnow",
"id": 7144772,
"login": "sighingnow",
"node_id": "MDQ6VXNlcjcxNDQ3NzI=",
"organizations_url": "https://api.github.com/users/sighingnow/orgs",
"received_events_url": "https://api.github.com/users/sighingnow/received_events",
"repos_url": "https://api.github.com/users/sighingnow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sighingnow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sighingnow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sighingnow"
} | [] | closed | false | null | [] | null | [] | 2022-09-05T07:18:54Z | 2022-09-06T02:11:03Z | 2022-09-05T13:06:38Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4929.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4929",
"merged_at": "2022-09-05T13:06:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4929.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4929"
} | As shown in the [documentation page](https://huggingface.co/docs/datasets/loading) here, the `"tr"in` should be `"train"`.

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4929/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4929/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4928/comments | https://api.github.com/repos/huggingface/datasets/issues/4928/events | https://github.com/huggingface/datasets/pull/4928 | 1,360,941,172 | PR_kwDODunzps4-Ubi4 | 4,928 | Add ability to read-write to SQL databases. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360"
} | [] | closed | false | null | [] | null | [] | 2022-09-03T19:09:08Z | 2022-10-03T16:34:36Z | 2022-10-03T16:32:28Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4928.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4928",
"merged_at": "2022-10-03T16:32:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4928.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4928"
} | Fixes #3094
Add ability to read/write to SQLite files and also read from any SQL database supported by SQLAlchemy.
I didn't add SQLAlchemy as a dependency, as it is fairly big and it remains optional.
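A minimal usage sketch (`from_sql`/`to_sql` are the entry points this PR proposes; exact signatures may differ slightly, and the table/database names below are made up):
```python
import sqlite3
from datasets import Dataset

con = sqlite3.connect("my_dataset.db")
ds = Dataset.from_dict({"text": ["foo", "bar"], "label": [0, 1]})
ds.to_sql("data", con)                             # write the dataset to a "data" table
ds2 = Dataset.from_sql("SELECT * FROM data", con)  # read it back with a query
```
With SQLAlchemy installed, the same calls should also accept a database URI string for any supported backend.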
I also recorded a Loom to showcase the feature.
https://www.loom.com/share/f0e602c2de8a46f58bca4b43333d541f | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 4,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4928/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4928/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4927/comments | https://api.github.com/repos/huggingface/datasets/issues/4927/events | https://github.com/huggingface/datasets/pull/4927 | 1,360,428,139 | PR_kwDODunzps4-S0we | 4,927 | fix BLEU metric card | {
"avatar_url": "https://avatars.githubusercontent.com/u/40452030?v=4",
"events_url": "https://api.github.com/users/antoniolanza1996/events{/privacy}",
"followers_url": "https://api.github.com/users/antoniolanza1996/followers",
"following_url": "https://api.github.com/users/antoniolanza1996/following{/other_user}",
"gists_url": "https://api.github.com/users/antoniolanza1996/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/antoniolanza1996",
"id": 40452030,
"login": "antoniolanza1996",
"node_id": "MDQ6VXNlcjQwNDUyMDMw",
"organizations_url": "https://api.github.com/users/antoniolanza1996/orgs",
"received_events_url": "https://api.github.com/users/antoniolanza1996/received_events",
"repos_url": "https://api.github.com/users/antoniolanza1996/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/antoniolanza1996/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antoniolanza1996/subscriptions",
"type": "User",
"url": "https://api.github.com/users/antoniolanza1996"
} | [] | closed | false | null | [] | null | [] | 2022-09-02T17:00:56Z | 2022-09-09T16:28:15Z | 2022-09-09T16:28:15Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4927.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4927",
"merged_at": "2022-09-09T16:28:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4927.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4927"
} | I've fixed some typos in the BLEU metric card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4927/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4927/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4926/comments | https://api.github.com/repos/huggingface/datasets/issues/4926/events | https://github.com/huggingface/datasets/pull/4926 | 1,360,384,484 | PR_kwDODunzps4-Srm1 | 4,926 | Dataset infos in yaml | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [] | 2022-09-02T16:10:05Z | 2022-10-03T09:13:07Z | 2022-10-03T09:11:12Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4926.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4926",
"merged_at": "2022-10-03T09:11:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4926.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4926"
} | To simplify the addition of new datasets, we'd like to have the dataset infos in the YAML and deprecate the dataset_infos.json file. YAML is readable and easy to edit, and the YAML metadata of the README already contains dataset metadata, so we would have everything in one place.
To be more specific, I moved these fields from DatasetInfo to the YAML:
- config_name (if there are several configs)
- download_size
- dataset_size
- features
- splits
Here is what I ended up with for `squad`:
```yaml
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 79346360
num_examples: 87599
- name: validation
num_bytes: 10473040
num_examples: 10570
config_name: plain_text
download_size: 35142551
dataset_size: 89819400
```
and it can be a list if there are several configs
I already did the change for `conll2000` and `crime_and_punish` as an example.
## Implementation details
### Load/Read
This is done via `DatasetInfosDict.write_to_directory/from_directory`
I had to implement custom YAML export logic for `SplitDict`, `Version` and `Features`.
The first two are trivial, but the logic for `Features` is more complicated, because I added a simplification step (or the YAML would be too long and less readable): it's just a formatting step to remove unnecessary nesting of YAML data.
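As a rough sketch of the round trip (the method names come from this PR; the path is a placeholder and the exact behavior may differ):
```python
from datasets.info import DatasetInfosDict

# read the dataset info YAML from README.md (falling back to dataset_infos.json)
infos = DatasetInfosDict.from_directory("path/to/dataset_repo")
# ... modify infos ...
# write the infos back into the YAML block of README.md
infos.write_to_directory("path/to/dataset_repo")
```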
### Other changes
I had to update the DatasetModule factories to also download the README.md alongside the dataset scripts/data files, and not just the dataset_infos.json
## YAML validation
I removed the old validation code that was in metadata.py; now we can just use the Hub YAML validation.
## Datasets-cli
The `datasets-cli test --save_infos` command now creates a README.md file with the dataset_infos in it, instead of a dataset_infos.json file.
## Backward compatibility
`dataset_infos.json` files are still supported and loaded if they exist to have full backward compatibility.
Though I removed the unnecessary keys when the value is the default (like all the `id: null` from the Value feature types) to make them easier to read.
## TODO
- [x] add comments
- [x] tests
- [x] document the new YAML fields
- [x] try to reload the new dataset_infos.json file content with an old version of `datasets`
## EDITS
- removed "config_name" when there's only one config
- removed "version" for now (?), because it's not useful in general
- renamed the YAML field to "dataset_info" instead of "dataset_infos", since it only has one by default (and because "infos" is not English)
Fix https://github.com/huggingface/datasets/issues/4876 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4926/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4926/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4925/comments | https://api.github.com/repos/huggingface/datasets/issues/4925/events | https://github.com/huggingface/datasets/pull/4925 | 1,360,007,616 | PR_kwDODunzps4-RbP5 | 4,925 | Add note about loading image / audio files to docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | [] | 2022-09-02T10:31:58Z | 2022-09-26T12:21:30Z | 2022-09-23T13:59:07Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4925.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4925",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4925.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4925"
} | This PR adds a small note about how to load image / audio datasets that have multiple splits in their dataset structure.
Related forum thread: https://discuss.huggingface.co/t/loading-train-and-test-splits-with-audiofolder/22447
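For reference, the layout the note targets presumably looks like this (paths are hypothetical):
```python
from datasets import load_dataset

# assuming a structure like data_dir/train/... and data_dir/test/...,
# the folder-based builders infer the splits from the folder names
ds_images = load_dataset("imagefolder", data_dir="path/to/images")
ds_audio = load_dataset("audiofolder", data_dir="path/to/audio")
```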
cc @NielsRogge | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4925/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4925/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4924/comments | https://api.github.com/repos/huggingface/datasets/issues/4924/events | https://github.com/huggingface/datasets/issues/4924 | 1,358,611,513 | I_kwDODunzps5Q-sQ5 | 4,924 | Concatenate_datasets loads everything into RAM | {
"avatar_url": "https://avatars.githubusercontent.com/u/39416047?v=4",
"events_url": "https://api.github.com/users/louisdeneve/events{/privacy}",
"followers_url": "https://api.github.com/users/louisdeneve/followers",
"following_url": "https://api.github.com/users/louisdeneve/following{/other_user}",
"gists_url": "https://api.github.com/users/louisdeneve/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/louisdeneve",
"id": 39416047,
"login": "louisdeneve",
"node_id": "MDQ6VXNlcjM5NDE2MDQ3",
"organizations_url": "https://api.github.com/users/louisdeneve/orgs",
"received_events_url": "https://api.github.com/users/louisdeneve/received_events",
"repos_url": "https://api.github.com/users/louisdeneve/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/louisdeneve/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/louisdeneve/subscriptions",
"type": "User",
"url": "https://api.github.com/users/louisdeneve"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-09-01T10:25:17Z | 2022-09-01T11:50:54Z | 2022-09-01T11:50:54Z | NONE | null | null | null | ## Describe the bug
When loading the datasets separately and saving them on disk, I want to concatenate them. But `concatenate_datasets` is filling up my RAM and the process gets killed. Is there a way to prevent this from happening, or is this intended behaviour? Thanks in advance
## Steps to reproduce the bug
```python
import gcsfs
from datasets import load_from_disk, concatenate_datasets

gcs = gcsfs.GCSFileSystem(project='project')
datasets = [load_from_disk(f'path/to/slice/of/data/{i}', fs=gcs, keep_in_memory=False) for i in range(10)]
dataset = concatenate_datasets(datasets)
```
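One workaround I'm considering (a sketch, assuming the slices fit on local disk; untested):
```python
# copy a shard to local disk first, so the Arrow files can be
# memory-mapped instead of materialized in RAM
gcs.get('path/to/slice/of/data/0', 'local/slice_0', recursive=True)
ds_0 = load_from_disk('local/slice_0')
```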
## Expected results
A concatenated dataset which is stored on my disk.
## Actual results
Concatenated dataset gets loaded into RAM and overflows it which gets the process killed.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 8.0.1
- Pandas version: 1.4.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4924/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4924/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4923/comments | https://api.github.com/repos/huggingface/datasets/issues/4923/events | https://github.com/huggingface/datasets/pull/4923 | 1,357,735,287 | PR_kwDODunzps4-Jv7C | 4,923 | decode mp3 with librosa if torchaudio is > 0.12 as a temporary workaround | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | null | [] | null | [] | 2022-08-31T18:57:59Z | 2022-11-02T11:54:33Z | 2022-09-20T13:12:52Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4923.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4923",
"merged_at": "2022-09-20T13:12:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4923.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4923"
} | `torchaudio>0.12` fails at decoding mp3 files if `ffmpeg<4`. Currently we ask users to downgrade torchaudio, but sometimes that's not possible, as the torchaudio version is bound to the torch version. As a temporary workaround we can decode mp3 with librosa (though it is 60 times slower, at least it works).
Another option would be to ask users to install the required version of `ffmpeg`, but this is non-trivial on Colab: it's not among the apt packages of Ubuntu 18 and `conda` is not preinstalled (with `conda` it would be easily installable).
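A minimal sketch of the librosa fallback (illustrative only; `path_to_mp3` is a placeholder, and the real change lives in the `Audio` feature's decoding code):
```python
import librosa

# fallback decode path; sr=None keeps the file's native sampling rate
array, sampling_rate = librosa.load(path_to_mp3, sr=None)
```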
- [x] decode with torchaudio anyway if the version of ffmpeg is correct? it's 60 times faster
- [x] tests
- [x] DO NOT FORGET to get back all the tests
see https://github.com/huggingface/datasets/issues/4776 and https://github.com/huggingface/datasets/issues/3663#issuecomment-1225797165 (there is a Colab notebook to reproduce the error) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4923/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4923/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4922/comments | https://api.github.com/repos/huggingface/datasets/issues/4922/events | https://github.com/huggingface/datasets/issues/4922 | 1,357,684,018 | I_kwDODunzps5Q7J0y | 4,922 | I/O error on Google Colab in streaming mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/5595043?v=4",
"events_url": "https://api.github.com/users/jotterbach/events{/privacy}",
"followers_url": "https://api.github.com/users/jotterbach/followers",
"following_url": "https://api.github.com/users/jotterbach/following{/other_user}",
"gists_url": "https://api.github.com/users/jotterbach/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jotterbach",
"id": 5595043,
"login": "jotterbach",
"node_id": "MDQ6VXNlcjU1OTUwNDM=",
"organizations_url": "https://api.github.com/users/jotterbach/orgs",
"received_events_url": "https://api.github.com/users/jotterbach/received_events",
"repos_url": "https://api.github.com/users/jotterbach/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jotterbach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jotterbach/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jotterbach"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-08-31T18:08:26Z | 2022-08-31T18:15:48Z | 2022-08-31T18:15:48Z | NONE | null | null | null | ## Describe the bug
When trying to load a streaming dataset in Google Colab, the loading fails with an I/O error.
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION)
list(hf_ds.take(5))
```
## Expected results
It should load five data points
## Actual results
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-13-7b5b8b1e7e58>](https://localhost:8080/#) in <module>
2 from datasets import load_dataset
3 hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION)
----> 4 list(hf_ds.take(5))
6 frames
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self)
716
717 def __iter__(self):
--> 718 for key, example in self._iter():
719 if self.features:
720 # `IterableDataset` automatically fills missing columns with None.
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in _iter(self)
706 else:
707 ex_iterable = self._ex_iterable
--> 708 yield from ex_iterable
709
710 def _iter_shard(self, shard_idx: int):
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self)
582
583 def __iter__(self):
--> 584 yield from islice(self.ex_iterable, self.n)
585
586 def shuffle_data_sources(self, generator: np.random.Generator) -> "TakeExamplesIterable":
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self)
110
111 def __iter__(self):
--> 112 yield from self.generate_examples_fn(**self.kwargs)
113
114 def shuffle_data_sources(self, generator: np.random.Generator) -> "ExamplesIterable":
[~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in _generate_examples(self, split_subsets, extraction_map, with_translation)
845 raise ValueError("Invalid number of files: %d" % len(files))
846
--> 847 for sub_key, ex in sub_generator(*sub_generator_args):
848 if not all(ex.values()):
849 continue
[~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in _parse_parallel_sentences(f1, f2, filename1, filename2)
923 l2_sentences, l2 = parse_file(f2_i, filename2)
924
--> 925 for line_id, (s1, s2) in enumerate(zip(l1_sentences, l2_sentences)):
926 key = f"{f_id}/{line_id}"
927 yield key, {l1: s1, l2: s2}
[~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in gen()
895
896 def gen():
--> 897 with open(path, encoding="utf-8") as f:
898 for line in f:
899 seg_match = re.match(seg_re, line)
ValueError: I/O operation on closed file.
```
## Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.4.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 9.0.0. (the same error happened with PyArrow version 6.0.0)
- Pandas version: 1.3.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4922/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4922/timeline | null | not_planned | true |
https://api.github.com/repos/huggingface/datasets/issues/4921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4921/comments | https://api.github.com/repos/huggingface/datasets/issues/4921/events | https://github.com/huggingface/datasets/pull/4921 | 1,357,609,003 | PR_kwDODunzps4-JVFV | 4,921 | Fix missing tags in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-31T16:52:27Z | 2022-09-22T14:34:11Z | 2022-09-01T05:04:53Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4921.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4921",
"merged_at": "2022-09-01T05:04:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4921.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4921"
} | Fix missing tags in dataset cards:
- eraser_multi_rc
- hotpot_qa
- metooma
- movie_rationales
- qanta
- quora
- quoref
- race
- ted_hrlr
- ted_talks_iwslt
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
- #4908 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4921/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4921/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4920/comments | https://api.github.com/repos/huggingface/datasets/issues/4920/events | https://github.com/huggingface/datasets/issues/4920 | 1,357,564,589 | I_kwDODunzps5Q6sqt | 4,920 | Unable to load local tsv files through load_dataset method | {
"avatar_url": "https://avatars.githubusercontent.com/u/44038517?v=4",
"events_url": "https://api.github.com/users/DataNoob0723/events{/privacy}",
"followers_url": "https://api.github.com/users/DataNoob0723/followers",
"following_url": "https://api.github.com/users/DataNoob0723/following{/other_user}",
"gists_url": "https://api.github.com/users/DataNoob0723/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DataNoob0723",
"id": 44038517,
"login": "DataNoob0723",
"node_id": "MDQ6VXNlcjQ0MDM4NTE3",
"organizations_url": "https://api.github.com/users/DataNoob0723/orgs",
"received_events_url": "https://api.github.com/users/DataNoob0723/received_events",
"repos_url": "https://api.github.com/users/DataNoob0723/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DataNoob0723/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DataNoob0723/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DataNoob0723"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-08-31T16:13:39Z | 2022-09-01T05:31:30Z | 2022-09-01T05:31:30Z | NONE | null | null | null | ## Describe the bug
Unable to load local tsv files through load_dataset method.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset

data_files = {
'train': 'train.tsv',
'test': 'test.tsv'
}
raw_datasets = load_dataset('tsv', data_files=data_files)
```
## Expected results
I am pretty sure the data files exist in the current directory. The above code should load them as Datasets, but it threw exceptions instead.
## Actual results
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-9-24207899c1af>](https://localhost:8080/#) in <module>
----> 1 raw_datasets = load_dataset('tsv', data_files='train.tsv')
2 frames
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1244 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1245 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1246 ) from None
1247 raise e1 from None
1248 else:
FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/main/datasets/tsv/tsv.py
```
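For what it's worth, a workaround sketch that seems plausible (assuming the `csv` builder forwards `delimiter` to pandas.read_csv) would be:
```python
from datasets import load_dataset

# there is no packaged 'tsv' builder; the 'csv' builder with a tab
# delimiter should handle .tsv files
raw_datasets = load_dataset('csv', data_files=data_files, delimiter='\t')
```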
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4920/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4920/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4919/comments | https://api.github.com/repos/huggingface/datasets/issues/4919/events | https://github.com/huggingface/datasets/pull/4919 | 1,357,441,599 | PR_kwDODunzps4-IxDZ | 4,919 | feat: improve error message on Keys mismatch. closes #4917 | {
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PaulLerner",
"id": 25532159,
"login": "PaulLerner",
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PaulLerner"
} | [] | closed | false | null | [] | null | [] | 2022-08-31T14:41:36Z | 2022-09-05T08:46:01Z | 2022-09-05T08:43:33Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4919.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4919",
"merged_at": "2022-09-05T08:43:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4919.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4919"
} | Hi @lhoestq what do you think?
Let me give you a code sample:
```py
>>> import datasets
>>> foo = datasets.Dataset.from_dict({'foo':[0,1], 'bar':[2,3]})
>>> foo.save_to_disk('foo')
# edit foo/dataset_info.json e.g. rename the 'foo' feature to 'baz'
>>> datasets.load_from_disk('foo')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-4863e606b330> in <module>
----> 1 datasets.load_from_disk('foo')
~/code/datasets/src/datasets/load.py in load_from_disk(dataset_path, fs, keep_in_memory)
1851 raise FileNotFoundError(f"Directory {dataset_path} not found")
1852 if fs.isfile(Path(dest_dataset_path, config.DATASET_INFO_FILENAME).as_posix()):
-> 1853 return Dataset.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)
1854 elif fs.isfile(Path(dest_dataset_path, config.DATASETDICT_JSON_FILENAME).as_posix()):
1855 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)
~/code/datasets/src/datasets/arrow_dataset.py in load_from_disk(dataset_path, fs, keep_in_memory)
1230 info=dataset_info,
1231 split=split,
-> 1232 fingerprint=state["_fingerprint"],
1233 )
1234
~/code/datasets/src/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)
687 self.info.features = inferred_features
688 else: # make sure the nested columns are in the right order
--> 689 self.info.features = self.info.features.reorder_fields_as(inferred_features)
690
691 # Infer fingerprint if None
~/code/datasets/src/datasets/features/features.py in reorder_fields_as(self, other)
1771 return source
1772
-> 1773 return Features(recursive_reorder(self, other))
1774
1775 def flatten(self, max_depth=16) -> "Features":
~/code/datasets/src/datasets/features/features.py in recursive_reorder(source, target, stack)
1760 f"{source.keys()-target.keys()} are missing from dataset.arrow "
1761 f"and {target.keys()-source.keys()} are missing from dataset_info.json"+stack_position)
-> 1762 raise ValueError(message)
1763 return {key: recursive_reorder(source[key], target[key], stack + f".{key}") for key in target}
1764 elif isinstance(source, list):
ValueError: Keys mismatch: between {'baz': Value(dtype='int64', id=None), 'bar': Value(dtype='int64', id=None)} (dataset_info.json) and {'foo': Value(dtype='int64', id=None), 'bar': Value(dtype='int64', id=None)} (inferred from dataset.arrow).
{'baz'} are missing from dataset.arrow and {'foo'} are missing from dataset_info.json
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4919/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4919/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4918/comments | https://api.github.com/repos/huggingface/datasets/issues/4918/events | https://github.com/huggingface/datasets/issues/4918 | 1,357,242,757 | I_kwDODunzps5Q5eGF | 4,918 | Dataset Viewer issue for pysentimiento/spanish-targeted-sentiment-headlines | {
"avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4",
"events_url": "https://api.github.com/users/finiteautomata/events{/privacy}",
"followers_url": "https://api.github.com/users/finiteautomata/followers",
"following_url": "https://api.github.com/users/finiteautomata/following{/other_user}",
"gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/finiteautomata",
"id": 167943,
"login": "finiteautomata",
"node_id": "MDQ6VXNlcjE2Nzk0Mw==",
"organizations_url": "https://api.github.com/users/finiteautomata/orgs",
"received_events_url": "https://api.github.com/users/finiteautomata/received_events",
"repos_url": "https://api.github.com/users/finiteautomata/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions",
"type": "User",
"url": "https://api.github.com/users/finiteautomata"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
] | null | [] | 2022-08-31T12:09:07Z | 2022-09-05T21:36:34Z | 2022-09-05T16:32:44Z | NONE | null | null | null | ### Link
https://huggingface.co/datasets/pysentimiento/spanish-targeted-sentiment-headlines
### Description
After moving the dataset from my user (`finiteautomata`) to the `pysentimiento` organization, the dataset viewer says that it doesn't exist.
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4918/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4918/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4917/comments | https://api.github.com/repos/huggingface/datasets/issues/4917/events | https://github.com/huggingface/datasets/issues/4917 | 1,357,193,841 | I_kwDODunzps5Q5SJx | 4,917 | Keys mismatch: make error message more informative | {
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PaulLerner",
"id": 25532159,
"login": "PaulLerner",
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PaulLerner"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | [] | 2022-08-31T11:24:34Z | 2022-09-05T08:43:38Z | 2022-09-05T08:43:38Z | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
When loading a dataset from disk with a defect in its `dataset_info.json` describing its features (I don’t know when/why/how this happens but it deserves its own issue), you will get an error message like:
`ValueError: Keys mismatch: between {'bar': Value(dtype='int64', id=None)} and {'foo': Value(dtype='int64', id=None)}`
This is fine when you have only a few features, as in the example, but it gets very hard to read when your dataset has a lot of features.
**Describe the solution you'd like**
The error message should give the difference between the features (what keys are in A but missing in B and vice-versa). It should also tell which keys are inferred from `dataset.arrow` and which come from `dataset_info.json`.
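For illustration, a minimal sketch of what such a message could compute (a hypothetical helper, not the actual `datasets` code; plain dicts stand in for the real `Features` objects):
```py
# Hypothetical helper sketching the proposed error message
def keys_mismatch_message(info_features: dict, arrow_features: dict) -> str:
    missing_from_arrow = info_features.keys() - arrow_features.keys()
    missing_from_json = arrow_features.keys() - info_features.keys()
    return (
        f"Keys mismatch: between {info_features} (dataset_info.json) "
        f"and {arrow_features} (inferred from dataset.arrow).\n"
        f"{missing_from_arrow} are missing from dataset.arrow "
        f"and {missing_from_json} are missing from dataset_info.json"
    )
```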
Willing to help :)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4917/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4917/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4916/comments | https://api.github.com/repos/huggingface/datasets/issues/4916/events | https://github.com/huggingface/datasets/issues/4916 | 1,357,076,940 | I_kwDODunzps5Q41nM | 4,916 | Apache Beam unable to write the downloaded wikipedia dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/71849081?v=4",
"events_url": "https://api.github.com/users/Shilpac20/events{/privacy}",
"followers_url": "https://api.github.com/users/Shilpac20/followers",
"following_url": "https://api.github.com/users/Shilpac20/following{/other_user}",
"gists_url": "https://api.github.com/users/Shilpac20/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Shilpac20",
"id": 71849081,
"login": "Shilpac20",
"node_id": "MDQ6VXNlcjcxODQ5MDgx",
"organizations_url": "https://api.github.com/users/Shilpac20/orgs",
"received_events_url": "https://api.github.com/users/Shilpac20/received_events",
"repos_url": "https://api.github.com/users/Shilpac20/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Shilpac20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shilpac20/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Shilpac20"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-08-31T09:39:25Z | 2022-08-31T10:53:19Z | 2022-08-31T10:53:19Z | NONE | null | null | null | ## Describe the bug
Hi, I am currently trying to download the Wikipedia dataset using
load_dataset("wikipedia", language="aa", date="20220401", split="train", beam_runner='DirectRunner'). However, I end up getting a FileNotFoundError. I get this error for any language I try to download. It downloads the file, but it fails to write it to the Hugging Face cache. This happens for any available date of any language in the Wikipedia dump. I had raised another issue earlier (#4915), but it probably was not clear enough and the responder misunderstood my problem, hence this new issue. Any help is appreciated.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner')
```
## Expected results
to load the dataset
## Actual results
I am pasting the error trace here:
Downloading builder script: 35.9kB [00:00, ?B/s]
Downloading metadata: 30.4kB [00:00, 1.94MB/s]
Using custom data configuration 20220401.aa-date=20220401,language=aa
Downloading and preparing dataset wikipedia/20220401.aa to C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...
Downloading data: 100%|████████████████████████████████████████████████████████████| 11.1k/11.1k [00:00<00:00, 712kB/s]
Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.82s/it]
Extracting data files: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s]
Downloading data: 100%|███████████████████████████████████████████████████████████| 35.6k/35.6k [00:00<00:00, 84.3kB/s]
Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.93s/it]
Traceback (most recent call last):
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process
self.writer = self.sink.open_writer(init_result, str(uuid.uuid4()))
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer
return FileBasedSinkWriter(self, writer_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in init
self.temp_handle = self.sink.open(temp_shard_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open
self._file_handle = super().open(temp_path)
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open
temp_path, self.mime_type, self.compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create
return filesystem.create(path, mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create
return self._path_open(path, 'wb', mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open
raw_file = io.open(path, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "G:/abc/temp.py", line 32, in
beam_runner='DirectRunner')
File "G:\Python3.7\lib\site-packages\datasets\load.py", line 1751, in load_dataset
use_auth_token=use_auth_token,
File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 705, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 1394, in _download_and_prepare
pipeline_results = pipeline.run()
File "G:\Python3.7\lib\site-packages\apache_beam\pipeline.py", line 574, in run
return self.runner.run_pipeline(self, self._options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\direct\direct_runner.py", line 131, in run_pipeline
return runner.run_pipeline(pipeline, options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 201, in run_pipeline
options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 212, in run_via_runner_api
return self.run_stages(stage_context, stages)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 443, in run_stages
runner_execution_context, bundle_context_manager, bundle_input)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 776, in _execute_bundle
bundle_manager))
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1000, in _run_bundle
data_input, data_output, input_timers, expected_timer_output)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1309, in process_bundle
result_future = self._worker_handler.control_conn.push(process_bundle_req)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\worker_handlers.py", line 380, in push
response = self.worker.do_instruction(request)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 598, in do_instruction
getattr(request, request_type), request.instruction_id)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 635, in process_bundle
bundle_processor.process_bundle(instruction_id))
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 1004, in process_bundle
element.data)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 227, in process_encoded
self.output(decoded_value)
File "apache_beam\runners\worker\operations.py", line 526, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam\runners\worker\operations.py", line 528, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam\runners\worker\operations.py", line 237, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 324, in apache_beam.runners.worker.operations.GeneralPurposeConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 905, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process
self.writer = self.sink.open_writer(init_result, str(uuid.uuid4()))
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer
return FileBasedSinkWriter(self, writer_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in init
self.temp_handle = self.sink.open(temp_shard_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open
self._file_handle = super().open(temp_path)
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open
temp_path, self.mime_type, self.compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create
return filesystem.create(path, mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create
return self._path_open(path, 'wb', mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open
raw_file = io.open(path, mode)
RuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles']
## Environment info
Python: 3.7.6
Windows 10 Pro
datasets: 2.4.0
apache_beam: 2.41.0
mwparserfromhell: 0.6.4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4916/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4916/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4915/comments | https://api.github.com/repos/huggingface/datasets/issues/4915/events | https://github.com/huggingface/datasets/issues/4915 | 1,356,009,042 | I_kwDODunzps5Q0w5S | 4,915 | FileNotFoundError while downloading wikipedia dataset for any language | {
"avatar_url": "https://avatars.githubusercontent.com/u/71849081?v=4",
"events_url": "https://api.github.com/users/Shilpac20/events{/privacy}",
"followers_url": "https://api.github.com/users/Shilpac20/followers",
"following_url": "https://api.github.com/users/Shilpac20/following{/other_user}",
"gists_url": "https://api.github.com/users/Shilpac20/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Shilpac20",
"id": 71849081,
"login": "Shilpac20",
"node_id": "MDQ6VXNlcjcxODQ5MDgx",
"organizations_url": "https://api.github.com/users/Shilpac20/orgs",
"received_events_url": "https://api.github.com/users/Shilpac20/received_events",
"repos_url": "https://api.github.com/users/Shilpac20/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Shilpac20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shilpac20/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Shilpac20"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-08-30T16:15:46Z | 2022-12-04T22:20:33Z | null | NONE | null | null | null | ## Describe the bug
Hi, I am currently trying to download the Wikipedia dataset using
load_dataset("wikipedia", language="aa", date="20220401", split="train", beam_runner='DirectRunner'). However, I end up getting a FileNotFoundError. I get this error for any language I try to download.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner')
```
## Expected results
to load the dataset
## Actual results
I am pasting the error trace here:
Downloading builder script: 35.9kB [00:00, ?B/s]
Downloading metadata: 30.4kB [00:00, 1.94MB/s]
Using custom data configuration 20220401.aa-date=20220401,language=aa
Downloading and preparing dataset wikipedia/20220401.aa to C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...
Downloading data: 100%|████████████████████████████████████████████████████████████| 11.1k/11.1k [00:00<00:00, 712kB/s]
Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.82s/it]
Extracting data files: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s]
Downloading data: 100%|███████████████████████████████████████████████████████████| 35.6k/35.6k [00:00<00:00, 84.3kB/s]
Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.93s/it]
Traceback (most recent call last):
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process
self.writer = self.sink.open_writer(init_result, str(uuid.uuid4()))
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer
return FileBasedSinkWriter(self, writer_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in __init__
self.temp_handle = self.sink.open(temp_shard_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open
self._file_handle = super().open(temp_path)
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open
temp_path, self.mime_type, self.compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create
return filesystem.create(path, mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create
return self._path_open(path, 'wb', mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open
raw_file = io.open(path, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Shilpa\\.cache\\huggingface\\datasets\\wikipedia\\20220401.aa-date=20220401,language=aa\\2.0.0\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "G:/abc/temp.py", line 32, in <module>
beam_runner='DirectRunner')
File "G:\Python3.7\lib\site-packages\datasets\load.py", line 1751, in load_dataset
use_auth_token=use_auth_token,
File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 705, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 1394, in _download_and_prepare
pipeline_results = pipeline.run()
File "G:\Python3.7\lib\site-packages\apache_beam\pipeline.py", line 574, in run
return self.runner.run_pipeline(self, self._options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\direct\direct_runner.py", line 131, in run_pipeline
return runner.run_pipeline(pipeline, options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 201, in run_pipeline
options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 212, in run_via_runner_api
return self.run_stages(stage_context, stages)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 443, in run_stages
runner_execution_context, bundle_context_manager, bundle_input)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 776, in _execute_bundle
bundle_manager))
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1000, in _run_bundle
data_input, data_output, input_timers, expected_timer_output)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1309, in process_bundle
result_future = self._worker_handler.control_conn.push(process_bundle_req)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\worker_handlers.py", line 380, in push
response = self.worker.do_instruction(request)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 598, in do_instruction
getattr(request, request_type), request.instruction_id)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 635, in process_bundle
bundle_processor.process_bundle(instruction_id))
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 1004, in process_bundle
element.data)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 227, in process_encoded
self.output(decoded_value)
File "apache_beam\runners\worker\operations.py", line 526, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam\runners\worker\operations.py", line 528, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam\runners\worker\operations.py", line 237, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 324, in apache_beam.runners.worker.operations.GeneralPurposeConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 905, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process
self.writer = self.sink.open_writer(init_result, str(uuid.uuid4()))
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer
return FileBasedSinkWriter(self, writer_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in __init__
self.temp_handle = self.sink.open(temp_shard_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open
self._file_handle = super().open(temp_path)
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open
temp_path, self.mime_type, self.compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create
return filesystem.create(path, mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create
return self._path_open(path, 'wb', mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open
raw_file = io.open(path, mode)
RuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Shilpa\\.cache\\huggingface\\datasets\\wikipedia\\20220401.aa-date=20220401,language=aa\\2.0.0\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles']
## Environment info
Python: 3.7.6
Windows 10 Pro
datasets: 2.4.0
apache_beam: 2.41.0
mwparserfromhell: 0.6.4
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4915/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4915/timeline | null | reopened | true |
https://api.github.com/repos/huggingface/datasets/issues/4914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4914/comments | https://api.github.com/repos/huggingface/datasets/issues/4914/events | https://github.com/huggingface/datasets/pull/4914 | 1,355,482,624 | PR_kwDODunzps4-CFyN | 4,914 | Support streaming swda dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-30T09:46:28Z | 2022-08-30T11:16:33Z | 2022-08-30T11:14:16Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4914.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4914",
"merged_at": "2022-08-30T11:14:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4914.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4914"
} | Support streaming swda dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4914/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4914/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4913/comments | https://api.github.com/repos/huggingface/datasets/issues/4913/events | https://github.com/huggingface/datasets/pull/4913 | 1,355,232,007 | PR_kwDODunzps4-BP00 | 4,913 | Add license and citation information to cosmos_qa dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-30T06:23:19Z | 2022-08-30T09:49:31Z | 2022-08-30T09:47:35Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4913.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4913",
"merged_at": "2022-08-30T09:47:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4913.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4913"
This PR adds the license information to the `cosmos_qa` dataset: as reported via email by Yejin Choi, the dataset is licensed under CC BY 4.0.
This PR also updates the citation information. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4913/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4913/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4912 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4912/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4912/comments | https://api.github.com/repos/huggingface/datasets/issues/4912/events | https://github.com/huggingface/datasets/issues/4912 | 1,355,078,864 | I_kwDODunzps5QxNzQ | 4,912 | datasets map() handles all data at a stroke and takes long time | {
"avatar_url": "https://avatars.githubusercontent.com/u/40711748?v=4",
"events_url": "https://api.github.com/users/BruceStayHungry/events{/privacy}",
"followers_url": "https://api.github.com/users/BruceStayHungry/followers",
"following_url": "https://api.github.com/users/BruceStayHungry/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceStayHungry/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BruceStayHungry",
"id": 40711748,
"login": "BruceStayHungry",
"node_id": "MDQ6VXNlcjQwNzExNzQ4",
"organizations_url": "https://api.github.com/users/BruceStayHungry/orgs",
"received_events_url": "https://api.github.com/users/BruceStayHungry/received_events",
"repos_url": "https://api.github.com/users/BruceStayHungry/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BruceStayHungry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceStayHungry/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BruceStayHungry"
} | [] | closed | false | null | [] | null | [] | 2022-08-30T02:25:56Z | 2022-09-06T09:23:35Z | 2022-09-06T09:23:35Z | NONE | null | null | null | **1. Background**
The Hugging Face `datasets` package advises using `map()` to process data in batches. In the example code for pretraining a masked language model, `map()` is used to tokenize all the data in one stroke before the training loop.
The corresponding code:
```
with accelerator.main_process_first():
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not args.overwrite_cache,
desc="Running tokenizer on every text in dataset"
)
```
**2. The problem**
Thus, when I try the same pretraining code with a much larger corpus, it takes quite a long time to tokenize.
Alternatively, we can choose to tokenize the data in the data collator. In this way, the program only tokenizes one batch for the next training step and avoids getting stuck in tokenization up front.
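For illustration, a minimal sketch of per-batch tokenization in a PyTorch collate function (the checkpoint name and the "text" column are assumptions; `raw_datasets` refers to the snippet above):
```
from torch.utils.data import DataLoader
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed checkpoint

def collate_fn(batch):
    # Tokenize one batch lazily, right before it is fed to the model
    texts = [example["text"] for example in batch]
    return tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

loader = DataLoader(raw_datasets["train"], batch_size=32, collate_fn=collate_fn)
```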
**3. My question**
As described above, my questions are:
* **Which is better? Processing in `map()` or in the data collator?**
* **Why does Hugging Face advise using the `map()` function?** There should be some advantages to using `map()`.
Thanks for your answers! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4912/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4912/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4911 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4911/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4911/comments | https://api.github.com/repos/huggingface/datasets/issues/4911/events | https://github.com/huggingface/datasets/issues/4911 | 1,354,426,978 | I_kwDODunzps5Quupi | 4,911 | [Tests] Ensure `datasets` supports renamed repositories | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] | open | false | null | [] | null | [] | 2022-08-29T14:46:14Z | 2022-08-29T15:31:03Z | null | MEMBER | null | null | null | On https://hf.co/datasets you can rename a dataset (or sometimes move it to another user/org). The website handles redirections correctly and AFAIK `datasets` does as well.
However, it would be nice to have an integration test to make sure we don't break support for renamed datasets, as sketched below.
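A minimal sketch of such a test (the hub-ci base URL, the fixtures, and the exact request payload are assumptions, not the final implementation):
```python
import requests
from datasets import load_dataset

HUB_CI_ENDPOINT = "https://hub-ci.huggingface.co"  # assumed base URL

def test_load_dataset_after_rename(hf_token, temporary_repo_id):
    # rename the repo via the move endpoint (payload shape is an assumption)
    renamed_repo_id = temporary_repo_id + "-renamed"
    response = requests.post(
        f"{HUB_CI_ENDPOINT}/api/repos/move",
        headers={"authorization": f"Bearer {hf_token}"},
        json={"fromRepo": temporary_repo_id, "toRepo": renamed_repo_id, "type": "dataset"},
    )
    response.raise_for_status()
    # loading with the old repo id should still work thanks to redirections
    dataset = load_dataset(temporary_repo_id, use_auth_token=hf_token)
    assert "train" in dataset
```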
To implement this, we can use the `/api/repos/move` endpoint on hub-ci to rename/move a repo, as the sketch does (the endpoint is documented at https://huggingface.co/docs/hub/api) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4911/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4911/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4910/comments | https://api.github.com/repos/huggingface/datasets/issues/4910/events | https://github.com/huggingface/datasets/issues/4910 | 1,354,374,328 | I_kwDODunzps5Quhy4 | 4,910 | Identical keywords in build_kwargs and config_kwargs lead to TypeError in load_dataset_builder() | {
"avatar_url": "https://avatars.githubusercontent.com/u/57184353?v=4",
"events_url": "https://api.github.com/users/bablf/events{/privacy}",
"followers_url": "https://api.github.com/users/bablf/followers",
"following_url": "https://api.github.com/users/bablf/following{/other_user}",
"gists_url": "https://api.github.com/users/bablf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bablf",
"id": 57184353,
"login": "bablf",
"node_id": "MDQ6VXNlcjU3MTg0MzUz",
"organizations_url": "https://api.github.com/users/bablf/orgs",
"received_events_url": "https://api.github.com/users/bablf/received_events",
"repos_url": "https://api.github.com/users/bablf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bablf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bablf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bablf"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/21123710?v=4",
"events_url": "https://api.github.com/users/thepurpleowl/events{/privacy}",
"followers_url": "https://api.github.com/users/thepurpleowl/followers",
"following_url": "https://api.github.com/users/thepurpleowl/following{/other_user}",
"gists_url": "https://api.github.com/users/thepurpleowl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thepurpleowl",
"id": 21123710,
"login": "thepurpleowl",
"node_id": "MDQ6VXNlcjIxMTIzNzEw",
"organizations_url": "https://api.github.com/users/thepurpleowl/orgs",
"received_events_url": "https://api.github.com/users/thepurpleowl/received_events",
"repos_url": "https://api.github.com/users/thepurpleowl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thepurpleowl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thepurpleowl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thepurpleowl"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/21123710?v=4",
"events_url": "https://api.github.com/users/thepurpleowl/events{/privacy}",
"followers_url": "https://api.github.com/users/thepurpleowl/followers",
"following_url": "https://api.github.com/users/thepurpleowl/following{/other_user}",
"gists_url": "https://api.github.com/users/thepurpleowl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thepurpleowl",
"id": 21123710,
"login": "thepurpleowl",
"node_id": "MDQ6VXNlcjIxMTIzNzEw",
"organizations_url": "https://api.github.com/users/thepurpleowl/orgs",
"received_events_url": "https://api.github.com/users/thepurpleowl/received_events",
"repos_url": "https://api.github.com/users/thepurpleowl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thepurpleowl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thepurpleowl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thepurpleowl"
}
] | null | [] | 2022-08-29T14:11:48Z | 2022-09-13T11:58:46Z | null | NONE | null | null | null | ## Describe the bug
In `load_dataset_builder()`, `builder_kwargs` and `config_kwargs` can contain the same keywords, leading to a `TypeError: type object got multiple values for keyword argument 'xyz'`.
I ran into this problem with the keyword: `base_path`. It might happen with other kwargs as well. I think a quickfix would be
```python
builder_cls = import_main_class(dataset_module.module_path)
builder_kwargs = dataset_module.builder_kwargs
data_files = builder_kwargs.pop("data_files", data_files)
config_name = builder_kwargs.pop("config_name", name)
hash = builder_kwargs.pop("hash")
base_path = builder_kwargs.pop("base_path")
```
and then pass `base_path` into `builder_cls`.
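For illustration, the completed quickfix could then instantiate the builder along these lines (a sketch of the call in `load.py`; the surrounding variables come from the snippet above and the exact signature may differ):
```python
builder_instance: DatasetBuilder = builder_cls(
    cache_dir=cache_dir,
    config_name=config_name,
    data_dir=data_dir,
    data_files=data_files,
    hash=hash,
    base_path=base_path,  # popped above, so it can no longer collide with **config_kwargs
    features=features,
    **builder_kwargs,
    **config_kwargs,
)
```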
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("rotten_tomatoes", base_path="./sample_data")
```
## Expected results
The docs state: `**config_kwargs` — Keyword arguments to be passed to the [BuilderConfig](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.BuilderConfig) and used in the [DatasetBuilder](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.DatasetBuilder).
So I would expect to be able to pass `base_path` into `load_dataset()`.
## Actual results
`TypeError: type object got multiple values for keyword argument 'base_path'`.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.8.9
- PyArrow version: 9.0.0
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4910/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4910/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4909 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4909/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4909/comments | https://api.github.com/repos/huggingface/datasets/issues/4909/events | https://github.com/huggingface/datasets/pull/4909 | 1,353,997,788 | PR_kwDODunzps499Fhe | 4,909 | Update GLUE evaluation metadata | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | [] | 2022-08-29T09:43:44Z | 2022-08-29T14:53:29Z | 2022-08-29T14:51:18Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4909.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4909",
"merged_at": "2022-08-29T14:51:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4909.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4909"
} | This PR updates the evaluation metadata for GLUE to:
* Include defaults for all configs except `ax` (which only has a `test` split with no known labels)
* Fix the default split from `test` to `validation` since `test` splits in GLUE have no labels (they're private)
* Fix the `task_id` for some existing defaults
cc @sashavor @douwekiela | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4909/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4909/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4908 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4908/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4908/comments | https://api.github.com/repos/huggingface/datasets/issues/4908/events | https://github.com/huggingface/datasets/pull/4908 | 1,353,995,574 | PR_kwDODunzps499FDS | 4,908 | Fix missing tags in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-29T09:41:53Z | 2022-09-22T14:35:56Z | 2022-08-29T16:13:07Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4908.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4908",
"merged_at": "2022-08-29T16:13:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4908.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4908"
} | Fix missing tags in dataset cards:
- asnq
- clue
- common_gen
- cosmos_qa
- guardian_authorship
- hindi_discourse
- py_ast
- x_stance
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4908/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4908/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4907/comments | https://api.github.com/repos/huggingface/datasets/issues/4907/events | https://github.com/huggingface/datasets/issues/4907 | 1,353,808,348 | I_kwDODunzps5QsXnc | 4,907 | None Type error for swda datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/8229163?v=4",
"events_url": "https://api.github.com/users/hannan72/events{/privacy}",
"followers_url": "https://api.github.com/users/hannan72/followers",
"following_url": "https://api.github.com/users/hannan72/following{/other_user}",
"gists_url": "https://api.github.com/users/hannan72/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hannan72",
"id": 8229163,
"login": "hannan72",
"node_id": "MDQ6VXNlcjgyMjkxNjM=",
"organizations_url": "https://api.github.com/users/hannan72/orgs",
"received_events_url": "https://api.github.com/users/hannan72/received_events",
"repos_url": "https://api.github.com/users/hannan72/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hannan72/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hannan72/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hannan72"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-08-29T07:05:20Z | 2022-08-30T14:43:41Z | 2022-08-30T14:43:41Z | NONE | null | null | null | ## Describe the bug
I got a `'NoneType' object is not callable` error while loading the swda dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("swda")
```
## Expected results
Run without error
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Python version: 3.8.10
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4907/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4907/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4906/comments | https://api.github.com/repos/huggingface/datasets/issues/4906/events | https://github.com/huggingface/datasets/issues/4906 | 1,353,223,925 | I_kwDODunzps5QqI71 | 4,906 | Can't import datasets AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import) | {
"avatar_url": "https://avatars.githubusercontent.com/u/63536981?v=4",
"events_url": "https://api.github.com/users/OPterminator/events{/privacy}",
"followers_url": "https://api.github.com/users/OPterminator/followers",
"following_url": "https://api.github.com/users/OPterminator/following{/other_user}",
"gists_url": "https://api.github.com/users/OPterminator/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/OPterminator",
"id": 63536981,
"login": "OPterminator",
"node_id": "MDQ6VXNlcjYzNTM2OTgx",
"organizations_url": "https://api.github.com/users/OPterminator/orgs",
"received_events_url": "https://api.github.com/users/OPterminator/received_events",
"repos_url": "https://api.github.com/users/OPterminator/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/OPterminator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OPterminator/subscriptions",
"type": "User",
"url": "https://api.github.com/users/OPterminator"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-08-28T02:23:24Z | 2022-10-03T12:22:50Z | 2022-10-03T12:22:50Z | NONE | null | null | null | ## Describe the bug
Not able to import `datasets`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import os
os.environ["WANDB_API_KEY"] = "0" ## to silence warning
import numpy as np
import random
import sklearn
import matplotlib.pyplot as plt
import pandas as pd
import sys
import tensorflow as tf
import plotly.express as px
import transformers
import tokenizers
import nlp as nlp
import utils
import datasets
```
## Expected results
The import should work normally.
## Actual results
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-21-b3b5b0b62103> in <module>
13 import nlp as nlp
14 import utils
---> 15 import datasets
~\anaconda3\lib\site-packages\datasets\__init__.py in <module>
44 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled
45 from .info import DatasetInfo, MetricInfo
---> 46 from .inspect import (
47 get_dataset_config_info,
48 get_dataset_config_names,
~\anaconda3\lib\site-packages\datasets\inspect.py in <module>
28 from .download.streaming_download_manager import StreamingDownloadManager
29 from .info import DatasetInfo
---> 30 from .load import dataset_module_factory, import_main_class, load_dataset_builder, metric_module_factory
31 from .utils.file_utils import relative_to_absolute_path
32 from .utils.logging import get_logger
~\anaconda3\lib\site-packages\datasets\load.py in <module>
53 from .iterable_dataset import IterableDataset
54 from .metric import Metric
---> 55 from .packaged_modules import (
56 _EXTENSION_TO_MODULE,
57 _MODULE_SUPPORTS_METADATA,
~\anaconda3\lib\site-packages\datasets\packaged_modules\__init__.py in <module>
4 from typing import List
5
----> 6 from .csv import csv
7 from .imagefolder import imagefolder
8 from .json import json
~\anaconda3\lib\site-packages\datasets\packaged_modules\csv\csv.py in <module>
13
14
---> 15 logger = datasets.utils.logging.get_logger(__name__)
16
17 _PANDAS_READ_CSV_NO_DEFAULT_PARAMETERS = ["names", "prefix"]
AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.8.8
- PyArrow version: 9.0.0
- Pandas version: 1.2.4
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4906/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4906/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4904/comments | https://api.github.com/repos/huggingface/datasets/issues/4904/events | https://github.com/huggingface/datasets/pull/4904 | 1,353,002,837 | PR_kwDODunzps4959Ad | 4,904 | [LibriSpeech] Fix dev split local_extracted_archive for 'all' config | {
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi"
} | [] | closed | false | null | [] | null | [] | 2022-08-27T10:04:57Z | 2022-08-30T10:06:21Z | 2022-08-30T10:03:25Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4904.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4904",
"merged_at": "2022-08-30T10:03:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4904.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4904"
} | We define the keys for the `_DL_URLS` of the dev split as `dev.clean` and `dev.other`:
https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L60-L61
These keys get forwarded to the `dl_manager` and thus the `local_extracted_archive`.
However, when calling `SplitGenerator` for the dev sets, we query the `local_extracted_archive` keys `validation.clean` and `validation.other`:
https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L212
https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L219
The consequence of this is that the `local_extracted_archive` arg passed to `_generate_examples` is always `None`, as the keys `validation.clean` and `validation.other` do not exist in the `local_extracted_archive`.
When defining the `audio_file` in `_generate_examples`, since `local_extracted_archive` is always `None`, we always omit the `local_extracted_archive` path from the `audio_file` path, **even** in non-streaming mode:
https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L259-L263
Thus, `audio_file` will only ever be the streaming path (`audio_file`, not `os.path.join(local_extracted_archive, audio_file)`).
This PR fixes the `.get()` keys for the `local_extracted_archive` for the dev splits.
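Concretely, the change amounts to something like this (a sketch of the fix, not the literal diff):
```python
# before: these keys never exist, so the result is always None
local_extracted_archive.get("validation.clean")
local_extracted_archive.get("validation.other")

# after: keys matching the _DL_URLS definition
local_extracted_archive.get("dev.clean")
local_extracted_archive.get("dev.other")
```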
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4904/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4904/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4903/comments | https://api.github.com/repos/huggingface/datasets/issues/4903/events | https://github.com/huggingface/datasets/pull/4903 | 1,352,539,075 | PR_kwDODunzps494aud | 4,903 | Fix CI reporting | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-26T17:16:30Z | 2022-08-26T17:49:33Z | 2022-08-26T17:46:59Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4903.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4903",
"merged_at": "2022-08-26T17:46:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4903.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4903"
} | Fix CI so that it reports the default outcomes (failed and error) in addition to the custom ones (xfailed and xpassed) in the test summary.
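For context, pytest's short test summary is controlled by the `-r` flag; a minimal illustration of reporting all four outcome types (the exact flags used in CI are an assumption, not the literal diff):
```python
import pytest

# f = failed, E = error, x = xfailed, X = xpassed
pytest.main(["-rfExX", "tests/"])
```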
This PR fixes a regression introduced by:
- #4845
That PR introduced the reporting of xfailed and xpassed, but wrongly removed the reporting of the default failed and error outcomes. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4903/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4903/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4902/comments | https://api.github.com/repos/huggingface/datasets/issues/4902/events | https://github.com/huggingface/datasets/issues/4902 | 1,352,469,196 | I_kwDODunzps5QnQrM | 4,902 | Name the default config `default` | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | open | false | null | [] | null | [] | 2022-08-26T16:16:22Z | 2022-08-26T16:16:38Z | null | CONTRIBUTOR | null | null | null | Currently, if a dataset has no configuration, a default configuration is created from the dataset name.
For example, for a dataset loaded from the hub repository, such as https://huggingface.co/datasets/user/dataset (repo id is `user/dataset`), the default configuration will be `user--dataset`.
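For illustration (`user/dataset` is the hypothetical repo id from the example above):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("user/dataset")  # hypothetical repo id
print(builder.config.name)
# currently prints "user--dataset"; this issue proposes "default" instead
```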
It might be easier to handle if it were set to `default`, or another reserved word. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4902/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4902/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4901/comments | https://api.github.com/repos/huggingface/datasets/issues/4901/events | https://github.com/huggingface/datasets/pull/4901 | 1,352,438,915 | PR_kwDODunzps494FNX | 4,901 | Raise ManualDownloadError from get_dataset_config_info | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-26T15:45:56Z | 2022-08-30T10:42:21Z | 2022-08-30T10:40:04Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4901.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4901",
"merged_at": "2022-08-30T10:40:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4901.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4901"
} | This PR raises a specific `ManualDownloadError` when `get_dataset_config_info` is called for a dataset that requires manual download.
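For illustration, the intended behavior would be along these lines (the import path of the error class is an assumption):
```python
from datasets import get_dataset_config_info
from datasets.builder import ManualDownloadError  # assumed import path

try:
    get_dataset_config_info("timit_asr")  # a dataset requiring manual download
except ManualDownloadError as err:
    print(err)  # the message explains how to download the data manually
```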
Related to:
- #4898
CC: @severo | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4901/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4901/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4900 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4900/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4900/comments | https://api.github.com/repos/huggingface/datasets/issues/4900/events | https://github.com/huggingface/datasets/issues/4900 | 1,352,405,855 | I_kwDODunzps5QnBNf | 4,900 | Dataset Viewer issue for asaxena1990/Dummy_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/56627657?v=4",
"events_url": "https://api.github.com/users/ankurcl/events{/privacy}",
"followers_url": "https://api.github.com/users/ankurcl/followers",
"following_url": "https://api.github.com/users/ankurcl/following{/other_user}",
"gists_url": "https://api.github.com/users/ankurcl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ankurcl",
"id": 56627657,
"login": "ankurcl",
"node_id": "MDQ6VXNlcjU2NjI3NjU3",
"organizations_url": "https://api.github.com/users/ankurcl/orgs",
"received_events_url": "https://api.github.com/users/ankurcl/received_events",
"repos_url": "https://api.github.com/users/ankurcl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ankurcl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankurcl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ankurcl"
} | [] | open | false | null | [] | null | [] | 2022-08-26T15:15:44Z | 2022-08-26T16:48:11Z | null | NONE | null | null | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4900/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4900/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4899/comments | https://api.github.com/repos/huggingface/datasets/issues/4899/events | https://github.com/huggingface/datasets/pull/4899 | 1,352,031,286 | PR_kwDODunzps492uTO | 4,899 | Re-add code and und language tags | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-26T09:48:57Z | 2022-08-26T10:27:18Z | 2022-08-26T10:24:20Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4899.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4899",
"merged_at": "2022-08-26T10:24:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4899.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4899"
} | This PR fixes the removal of 2 language tags done by:
- #4882
The tags are:
- "code": this is not a IANA tag but needed
- "und": this is one of the special scoped tags removed by 0d53202b9abce6fd0358cb00d06fcfd904b875af
- used in "mc4" and "udhr" datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4899/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4899/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4898/comments | https://api.github.com/repos/huggingface/datasets/issues/4898/events | https://github.com/huggingface/datasets/issues/4898 | 1,351,851,254 | I_kwDODunzps5Qk5z2 | 4,898 | Dataset Viewer issue for timit_asr | {
"avatar_url": "https://avatars.githubusercontent.com/u/91126978?v=4",
"events_url": "https://api.github.com/users/InayatUllah932/events{/privacy}",
"followers_url": "https://api.github.com/users/InayatUllah932/followers",
"following_url": "https://api.github.com/users/InayatUllah932/following{/other_user}",
"gists_url": "https://api.github.com/users/InayatUllah932/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/InayatUllah932",
"id": 91126978,
"login": "InayatUllah932",
"node_id": "MDQ6VXNlcjkxMTI2OTc4",
"organizations_url": "https://api.github.com/users/InayatUllah932/orgs",
"received_events_url": "https://api.github.com/users/InayatUllah932/received_events",
"repos_url": "https://api.github.com/users/InayatUllah932/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/InayatUllah932/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/InayatUllah932/subscriptions",
"type": "User",
"url": "https://api.github.com/users/InayatUllah932"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-08-26T07:12:05Z | 2022-10-03T12:40:28Z | 2022-10-03T12:40:27Z | NONE | null | null | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4898/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4898/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4897/comments | https://api.github.com/repos/huggingface/datasets/issues/4897/events | https://github.com/huggingface/datasets/issues/4897 | 1,351,784,727 | I_kwDODunzps5QkpkX | 4,897 | datasets generate large arrow file | {
"avatar_url": "https://avatars.githubusercontent.com/u/18533904?v=4",
"events_url": "https://api.github.com/users/osayes/events{/privacy}",
"followers_url": "https://api.github.com/users/osayes/followers",
"following_url": "https://api.github.com/users/osayes/following{/other_user}",
"gists_url": "https://api.github.com/users/osayes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osayes",
"id": 18533904,
"login": "osayes",
"node_id": "MDQ6VXNlcjE4NTMzOTA0",
"organizations_url": "https://api.github.com/users/osayes/orgs",
"received_events_url": "https://api.github.com/users/osayes/received_events",
"repos_url": "https://api.github.com/users/osayes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osayes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osayes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osayes"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-08-26T05:51:16Z | 2022-09-18T05:07:52Z | 2022-09-18T05:07:52Z | NONE | null | null | null | Checking the large file in disk, and found the large cache file in the cifar10 data directory:

As we know, the cifar10 dataset is only ~130 MB, but the cache file is almost 30 GB, so there may be a problem here. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4897/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4897/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4896 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4896/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4896/comments | https://api.github.com/repos/huggingface/datasets/issues/4896/events | https://github.com/huggingface/datasets/pull/4896 | 1,351,180,409 | PR_kwDODunzps49z4fU | 4,896 | Fix missing tags in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-25T16:41:43Z | 2022-09-22T14:37:16Z | 2022-08-26T04:41:48Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4896.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4896",
"merged_at": "2022-08-26T04:41:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4896.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4896"
} | Fix missing tags in dataset cards:
- anli
- coarse_discourse
- commonsense_qa
- cos_e
- ilist
- lc_quad
- web_questions
- xsum
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4896/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4896/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4895/comments | https://api.github.com/repos/huggingface/datasets/issues/4895/events | https://github.com/huggingface/datasets/issues/4895 | 1,350,798,527 | I_kwDODunzps5Qg4y_ | 4,895 | load_dataset method returns Unknown split "validation" even if this dir exists | {
"avatar_url": "https://avatars.githubusercontent.com/u/13418507?v=4",
"events_url": "https://api.github.com/users/SamSamhuns/events{/privacy}",
"followers_url": "https://api.github.com/users/SamSamhuns/followers",
"following_url": "https://api.github.com/users/SamSamhuns/following{/other_user}",
"gists_url": "https://api.github.com/users/SamSamhuns/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SamSamhuns",
"id": 13418507,
"login": "SamSamhuns",
"node_id": "MDQ6VXNlcjEzNDE4NTA3",
"organizations_url": "https://api.github.com/users/SamSamhuns/orgs",
"received_events_url": "https://api.github.com/users/SamSamhuns/received_events",
"repos_url": "https://api.github.com/users/SamSamhuns/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SamSamhuns/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamSamhuns/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SamSamhuns"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-08-25T12:11:00Z | 2022-10-06T17:49:28Z | 2022-09-29T08:07:50Z | NONE | null | null | null | ## Describe the bug
The `datasets.load_dataset` method returns a `ValueError: Unknown split "validation". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split="validation")`, even though the `validation` sub-directory exists in the local data path.
The data directories are as follows and attached to this issue:
```
test_data1
|_ train
|_ 1012.png
|_ metadata.jsonl
...
|_ test
...
|_ validation
|_ 234.png
|_ metadata.jsonl
...
test_data2
|_ train
|_ train_1012.png
|_ metadata.jsonl
...
|_ test
...
|_ validation
|_ val_234.png
|_ metadata.jsonl
...
```
They contain the same image files and `metadata.jsonl`, but the images in `test_data2` have the split names prepended (i.e. `train_1012.png`, `val_234.png`), while the images in `test_data1` do not (i.e. `1012.png`, `234.png`).
I saw in another issue that `val` was not recognized as a split name, but here I would expect the files to take their split from the parent directory name, i.e. the `val_*` files should become part of the validation split.
## Steps to reproduce the bug
```python
import datasets
datasets.logging.set_verbosity_error()
from datasets import load_dataset, get_dataset_split_names
# the following only finds train, validation and test splits correctly
path = "./test_data1"
print("######################", get_dataset_split_names(path), "######################")
dataset_list = []
for spt in ["train", "test", "validation"]:
dataset = load_dataset(path, split=spt)
dataset_list.append(dataset)
# the following only finds train and test splits
path = "./test_data2"
print("######################", get_dataset_split_names(path), "######################")
dataset_list = []
for spt in ["train", "test", "validation"]:
dataset = load_dataset(path, split=spt)
dataset_list.append(dataset)
```
## Expected results
```
###################### ['train', 'test', 'validation'] ######################
###################### ['train', 'test', 'validation'] ######################
```
## Actual results
```
Traceback (most recent call last):
File "test_data_loader.py", line 11, in <module>
dataset = load_dataset(path, split=spt)
File "/home/venv/lib/python3.8/site-packages/datasets/load.py", line 1758, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 893, in as_dataset
datasets = map_nested(
File "/home/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 385, in map_nested
return function(data_struct)
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 924, in _build_single_dataset
ds = self._as_dataset(
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 993, in _as_dataset
dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 211, in read
files = self.get_file_instructions(name, instructions, split_infos)
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 184, in get_file_instructions
file_instructions = make_file_instructions(
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 107, in make_file_instructions
absolute_instructions = instruction.to_absolute(name2len)
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in to_absolute
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in <listcomp>
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 433, in _rel_to_abs_instr
raise ValueError(f'Unknown split "{split}". Should be one of {list(name2len)}.')
ValueError: Unknown split "validation". Should be one of ['train', 'test'].
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux Ubuntu 18.04
- Python version: 3.8.12
- PyArrow version: 9.0.0
Data files
[test_data1.zip](https://github.com/huggingface/datasets/files/9424463/test_data1.zip)
[test_data2.zip](https://github.com/huggingface/datasets/files/9424468/test_data2.zip)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4895/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4895/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4894/comments | https://api.github.com/repos/huggingface/datasets/issues/4894/events | https://github.com/huggingface/datasets/pull/4894 | 1,350,667,270 | PR_kwDODunzps49yIvr | 4,894 | Add citation information to makhzan dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-25T10:16:40Z | 2022-08-30T06:21:54Z | 2022-08-25T13:19:41Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4894.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4894",
"merged_at": "2022-08-25T13:19:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4894.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4894"
} | This PR adds the citation information to the `makhzan` dataset, now that the authors have replied to our request for that information:
- https://github.com/zeerakahmed/makhzan/issues/43 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4894/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4894/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4893/comments | https://api.github.com/repos/huggingface/datasets/issues/4893/events | https://github.com/huggingface/datasets/issues/4893 | 1,350,655,674 | I_kwDODunzps5QgV66 | 4,893 | Oversampling strategy for iterable datasets in `interleave_datasets` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ylacombe",
"id": 52246514,
"login": "ylacombe",
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ylacombe"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ylacombe",
"id": 52246514,
"login": "ylacombe",
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ylacombe"
}
] | null | [] | 2022-08-25T10:06:55Z | 2022-10-03T12:37:46Z | 2022-10-03T12:37:46Z | MEMBER | null | null | null | In https://github.com/huggingface/datasets/pull/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. However, right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects.
It would be nice to expand `interleave_datasets` for iterable datasets as well to support this oversampling strategy:
```python
>>> from datasets.iterable_dataset import IterableDataset, ExamplesIterable
>>> d1 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [0, 1, 2]], {}))
>>> d2 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [10, 11, 12, 13]], {}))
>>> d3 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [20, 21, 22, 23, 24]], {}))
>>> dataset = interleave_datasets([d1, d2, d3]) # is supported
>>> [x["a"] for x in dataset]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") # is not supported yet
>>> [x["a"] for x in dataset]
[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]
```
This can be implemented by adding the strategy to both `CyclingMultiSourcesExamplesIterable` and `RandomlyCyclingMultiSourcesExamplesIterable`, which are used in `_interleave_iterable_datasets` in `iterable_dataset.py`.
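For intuition, here is a minimal, hedged sketch of the cycling logic; the function name and structure are illustrative only, not the actual `CyclingMultiSourcesExamplesIterable` code:
```python
# Minimal sketch (not the real `datasets` internals): round-robin cycling with
# an "all_exhausted" stopping strategy. Exhausted sources restart from the
# beginning, and iteration stops once every source has been fully consumed at
# least once. Assumes each source is re-iterable and non-empty; the exact
# yield order around restarts may differ from the example above.
def cycle_all_exhausted(sources):
    iterators = [iter(source) for source in sources]
    exhausted = [False] * len(sources)
    i = 0
    while True:
        try:
            yield next(iterators[i])
        except StopIteration:
            exhausted[i] = True
            if all(exhausted):
                return
            iterators[i] = iter(sources[i])  # restart and keep cycling
            yield next(iterators[i])
        i = (i + 1) % len(sources)
```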
I would be happy to share some guidance if anyone would like to give it a shot :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4893/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4893/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4892/comments | https://api.github.com/repos/huggingface/datasets/issues/4892/events | https://github.com/huggingface/datasets/pull/4892 | 1,350,636,499 | PR_kwDODunzps49yCD3 | 4,892 | Add citation to ro_sts and ro_sts_parallel datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-25T09:51:06Z | 2022-08-25T10:49:56Z | 2022-08-25T10:49:56Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4892.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4892",
"merged_at": "2022-08-25T10:49:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4892.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4892"
} | This PR adds the citation information to the `ro_sts` and `ro_sts_parallel` datasets, once they have replied to our request for that information:
- https://github.com/dumitrescustefan/RO-STS/issues/4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4892/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4892/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4891/comments | https://api.github.com/repos/huggingface/datasets/issues/4891/events | https://github.com/huggingface/datasets/pull/4891 | 1,350,589,813 | PR_kwDODunzps49x382 | 4,891 | Fix missing tags in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-25T09:14:17Z | 2022-09-22T14:39:02Z | 2022-08-25T13:43:34Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4891.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4891",
"merged_at": "2022-08-25T13:43:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4891.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4891"
} | Fix missing tags in dataset cards:
- aslg_pc12
- librispeech_lm
- mwsc
- opus100
- qasc
- quail
- squadshifts
- winograd_wsc
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4891/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4891/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4890 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4890/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4890/comments | https://api.github.com/repos/huggingface/datasets/issues/4890/events | https://github.com/huggingface/datasets/pull/4890 | 1,350,578,029 | PR_kwDODunzps49x1YC | 4,890 | add Dataset.from_list | {
"avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4",
"events_url": "https://api.github.com/users/sanderland/events{/privacy}",
"followers_url": "https://api.github.com/users/sanderland/followers",
"following_url": "https://api.github.com/users/sanderland/following{/other_user}",
"gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanderland",
"id": 48946947,
"login": "sanderland",
"node_id": "MDQ6VXNlcjQ4OTQ2OTQ3",
"organizations_url": "https://api.github.com/users/sanderland/orgs",
"received_events_url": "https://api.github.com/users/sanderland/received_events",
"repos_url": "https://api.github.com/users/sanderland/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanderland/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanderland"
} | [] | closed | false | null | [] | null | [] | 2022-08-25T09:05:58Z | 2022-09-02T10:22:59Z | 2022-09-02T10:20:33Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4890.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4890",
"merged_at": "2022-09-02T10:20:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4890.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4890"
} | As discussed in #4885
I initially added this bit at the end, thinking that filling this field was necessary, as is done in `from_dict`.
However, it seems the constructor takes care of filling `info` when it is empty:
```python
if info.features is None:
    info.features = Features(
        {
            col: generate_from_arrow_type(coldata.type)
            for col, coldata in zip(pa_table.column_names, pa_table.columns)
        }
    )
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4890/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4890/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4889 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4889/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4889/comments | https://api.github.com/repos/huggingface/datasets/issues/4889/events | https://github.com/huggingface/datasets/issues/4889 | 1,349,758,525 | I_kwDODunzps5Qc649 | 4,889 | torchaudio 11.0 yields different results than torchaudio 12.1 when loading MP3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [] | 2022-08-24T16:54:43Z | 2022-10-05T13:54:04Z | null | MEMBER | null | null | null | ## Describe the bug
When loading Common Voice with torchaudio 0.11.0, the results differ from those with 0.12.1, which leads to problems in Transformers; see: https://github.com/huggingface/transformers/pull/18749
## Steps to reproduce the bug
If you run the following code once with `torchaudio==0.11.0+cu102` and once with `torchaudio==0.12.1+cu102`, you can see that the tensors differ. This is a pretty big breaking change and makes some integration tests fail in Transformers.
```python
#!/usr/bin/env python3
from datasets import load_dataset
import datasets
import numpy as np
import torch
import torchaudio
print("torch vesion", torch.__version__)
print("torchaudio vesion", torchaudio.__version__)
save_audio = True
load_audios = False
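# Two-pass workflow: run once per torchaudio version with save_audio=True,
# then set load_audios=True (and save_audio=False) to compare the saved arrays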
if save_audio:
    ds = load_dataset("common_voice", "en", split="train", streaming=True)
    ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000))
    ds_iter = iter(ds)
    sample = next(ds_iter)
    np.save(f"audio_sample_{torch.__version__}", sample["audio"]["array"])
    print(sample["audio"]["array"])
if load_audios:
    array_torch_11 = np.load("/home/patrick/audio_sample_1.11.0+cu102.npy")
    print("Array 11 Shape", array_torch_11.shape)
    print("Array 11 abs sum", np.sum(np.abs(array_torch_11)))
    array_torch_12 = np.load("/home/patrick/audio_sample_1.12.1+cu102.npy")
    print("Array 12 Shape", array_torch_12.shape)
    print("Array 12 abs sum", np.sum(np.abs(array_torch_12)))
```
Having saved the tensors, the print output yields:
```
torch version 1.12.1+cu102
torchaudio version 0.12.1+cu102
Array 11 Shape (122880,)
Array 11 abs sum 1396.4988
Array 12 Shape (123264,)
Array 12 abs sum 1396.5193
```
## Expected results
torchaudio 0.11.0 and 0.12.1 should yield the same results.
## Actual results
See above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.1.1.dev0
- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.1
- Pandas version: 1.4.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4889/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4889/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4888 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4888/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4888/comments | https://api.github.com/repos/huggingface/datasets/issues/4888/events | https://github.com/huggingface/datasets/issues/4888 | 1,349,447,521 | I_kwDODunzps5Qbu9h | 4,888 | Dataset Viewer issue for subjqa | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
] | null | [] | 2022-08-24T13:26:20Z | 2022-09-08T08:23:42Z | 2022-09-08T08:23:42Z | MEMBER | null | null | null | ### Link
https://huggingface.co/datasets/subjqa
### Description
Getting the following error for this dataset:
```
Status code: 500
Exception: Status500Error
Message: 2 or more items returned, instead of 1
```
Not sure what's causing it though 🤔
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4888/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4888/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4887/comments | https://api.github.com/repos/huggingface/datasets/issues/4887/events | https://github.com/huggingface/datasets/pull/4887 | 1,349,426,693 | PR_kwDODunzps49t_PM | 4,887 | Add "cc-by-nc-sa-2.0" to list of licenses | {
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osanseviero",
"id": 7246357,
"login": "osanseviero",
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osanseviero"
} | [] | closed | false | null | [] | null | [] | 2022-08-24T13:11:49Z | 2022-08-26T10:31:32Z | 2022-08-26T10:29:20Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4887.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4887",
"merged_at": "2022-08-26T10:29:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4887.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4887"
} | Datasets side of https://github.com/huggingface/hub-docs/pull/285 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4887/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4887/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4886/comments | https://api.github.com/repos/huggingface/datasets/issues/4886/events | https://github.com/huggingface/datasets/issues/4886 | 1,349,285,569 | I_kwDODunzps5QbHbB | 4,886 | Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid | {
"avatar_url": "https://avatars.githubusercontent.com/u/11850255?v=4",
"events_url": "https://api.github.com/users/JeanKaddour/events{/privacy}",
"followers_url": "https://api.github.com/users/JeanKaddour/followers",
"following_url": "https://api.github.com/users/JeanKaddour/following{/other_user}",
"gists_url": "https://api.github.com/users/JeanKaddour/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JeanKaddour",
"id": 11850255,
"login": "JeanKaddour",
"node_id": "MDQ6VXNlcjExODUwMjU1",
"organizations_url": "https://api.github.com/users/JeanKaddour/orgs",
"received_events_url": "https://api.github.com/users/JeanKaddour/received_events",
"repos_url": "https://api.github.com/users/JeanKaddour/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JeanKaddour/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JeanKaddour/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JeanKaddour"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [] | 2022-08-24T11:24:21Z | 2022-09-08T16:29:04Z | null | NONE | null | null | null | ## Describe the bug
Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('huggan/CelebA-HQ')
```
## Expected results
See https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing#scrollTo=N3ml_7f8kzDd
## Actual results
```
File "/home/jean/projects/cold_diffusion/celebA.py", line 4, in <module>
dataset = load_dataset('huggan/CelebA-HQ')
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/load.py", line 1793, in load_dataset
builder_instance.download_and_prepare(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 1274, in _prepare_split
for key, table in logging.tqdm(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in _generate_tables
parquet_file = pq.ParquetFile(f)
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/pyarrow/parquet/__init__.py", line 286, in __init__
self.reader.open(
File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets-2.4.1.dev0
- Platform: Ubuntu 18.04
- Python version: 3.10
- PyArrow version: pyarrow 9.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4886/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4886/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4885 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4885/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4885/comments | https://api.github.com/repos/huggingface/datasets/issues/4885/events | https://github.com/huggingface/datasets/issues/4885 | 1,349,181,448 | I_kwDODunzps5QauAI | 4,885 | Create dataset from list of dicts | {
"avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4",
"events_url": "https://api.github.com/users/sanderland/events{/privacy}",
"followers_url": "https://api.github.com/users/sanderland/followers",
"following_url": "https://api.github.com/users/sanderland/following{/other_user}",
"gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanderland",
"id": 48946947,
"login": "sanderland",
"node_id": "MDQ6VXNlcjQ4OTQ2OTQ3",
"organizations_url": "https://api.github.com/users/sanderland/orgs",
"received_events_url": "https://api.github.com/users/sanderland/received_events",
"repos_url": "https://api.github.com/users/sanderland/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanderland/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanderland"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [] | 2022-08-24T10:01:24Z | 2022-09-08T16:02:52Z | 2022-09-08T16:02:52Z | CONTRIBUTOR | null | null | null | I often find myself with data from a variety of sources, and a list of dicts is very common among these.
However, converting this to a Dataset is a little awkward, requiring either:
```Dataset.from_pandas(pd.DataFrame(formatted_training_data))```
Which can error out on some more exotic values, such as 2-d arrays, for reasons that are not entirely clear:
> ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column labels with type object')
Alternatively:
```Dataset.from_dict({k: [s[k] for s in formatted_training_data] for k in formatted_training_data[0].keys()})```
Which works, but is a little ugly.
**Describe the solution you'd like**
Either `.from_dict` accepting a list of dicts, or a `.from_records` function accepting such.
I am happy to PR this; I just wanted to check that you are happy to accept it, that I haven't missed something obvious, and which of the solutions would be preferred.
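For illustration, here is a hedged sketch of what such a helper could look like; the name `dataset_from_records` is made up, and it simply transposes the records into the column-oriented dict that `Dataset.from_dict` already accepts:
```python
from datasets import Dataset

def dataset_from_records(records):
    # Assumes a non-empty list where all records share the same keys;
    # missing keys would need explicit handling.
    columns = {key: [record[key] for record in records] for key in records[0]}
    return Dataset.from_dict(columns)

ds = dataset_from_records([{"text": "good", "label": 1}, {"text": "bad", "label": 0}])
```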
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4885/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4885/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4884/comments | https://api.github.com/repos/huggingface/datasets/issues/4884/events | https://github.com/huggingface/datasets/pull/4884 | 1,349,105,946 | PR_kwDODunzps49s6Aj | 4,884 | Fix documentation card of math_qa dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-24T09:00:56Z | 2022-08-24T11:33:17Z | 2022-08-24T11:33:16Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4884.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4884",
"merged_at": "2022-08-24T11:33:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4884.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4884"
} | Fix documentation card of math_qa dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4884/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4884/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4883/comments | https://api.github.com/repos/huggingface/datasets/issues/4883/events | https://github.com/huggingface/datasets/issues/4883 | 1,349,083,235 | I_kwDODunzps5QaWBj | 4,883 | With dataloader RSS memory consumed by HF datasets monotonically increases | {
"avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4",
"events_url": "https://api.github.com/users/apsdehal/events{/privacy}",
"followers_url": "https://api.github.com/users/apsdehal/followers",
"following_url": "https://api.github.com/users/apsdehal/following{/other_user}",
"gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/apsdehal",
"id": 3616806,
"login": "apsdehal",
"node_id": "MDQ6VXNlcjM2MTY4MDY=",
"organizations_url": "https://api.github.com/users/apsdehal/orgs",
"received_events_url": "https://api.github.com/users/apsdehal/received_events",
"repos_url": "https://api.github.com/users/apsdehal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/apsdehal"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [] | 2022-08-24T08:42:54Z | 2022-09-29T16:16:31Z | null | MEMBER | null | null | null | ## Describe the bug
When HF Datasets is used in conjunction with a PyTorch DataLoader, the RSS memory of the process keeps increasing when it should stay constant.
## Steps to reproduce the bug
Run this snippet, which logs RSS memory, and observe the output.
```python
import psutil
import os
from transformers import BertTokenizer
from datasets import load_dataset
from torch.utils.data import DataLoader
BATCH_SIZE = 32
NUM_TRIES = 10
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
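# set_transform (used below) applies this lazily at access time, so the
# tokenization happens on the fly inside the DataLoader workers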
def transform(x):
    x.update(tokenizer(x["text"], return_tensors="pt", max_length=64, padding="max_length", truncation=True))
    x.pop("text")
    x.pop("label")
    return x
dataset = load_dataset("imdb", split="train")
dataset.set_transform(transform)
train_loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)
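# Baseline RSS (in MiB) before iterating; the deltas printed below would be
# expected to stay roughly flat instead of growing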
mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
count = 0
while count < NUM_TRIES:
    for idx, batch in enumerate(train_loader):
        mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
        print(count, idx, mem_after - mem_before)
    count += 1
```
## Expected results
Memory should not increase after the initial setup and loading of the dataset.
## Actual results
Memory continuously increases as can be seen in the log.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 7.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4883/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4883/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4882/comments | https://api.github.com/repos/huggingface/datasets/issues/4882/events | https://github.com/huggingface/datasets/pull/4882 | 1,348,913,665 | PR_kwDODunzps49sRtv | 4,882 | Fix language tags resource file | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-24T06:06:01Z | 2022-08-24T13:58:33Z | 2022-08-24T13:58:30Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4882.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4882",
"merged_at": "2022-08-24T13:58:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4882.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4882"
} | This PR fixes/updates/adds ALL language tags from IANA (as of 2022-08-08).
This PR also removes all BCP47 suffixes (the languages file only contains language subtags, i.e. ISO 639-1 or 639-2 codes; no script/region/variant suffixes). See:
- #4753 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4882/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4882/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4881/comments | https://api.github.com/repos/huggingface/datasets/issues/4881/events | https://github.com/huggingface/datasets/issues/4881 | 1,348,495,777 | I_kwDODunzps5QYGmh | 4,881 | Language names and language codes: connecting to a big database (rather than slow enrichment of custom list) | {
"avatar_url": "https://avatars.githubusercontent.com/u/6072524?v=4",
"events_url": "https://api.github.com/users/alexis-michaud/events{/privacy}",
"followers_url": "https://api.github.com/users/alexis-michaud/followers",
"following_url": "https://api.github.com/users/alexis-michaud/following{/other_user}",
"gists_url": "https://api.github.com/users/alexis-michaud/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alexis-michaud",
"id": 6072524,
"login": "alexis-michaud",
"node_id": "MDQ6VXNlcjYwNzI1MjQ=",
"organizations_url": "https://api.github.com/users/alexis-michaud/orgs",
"received_events_url": "https://api.github.com/users/alexis-michaud/received_events",
"repos_url": "https://api.github.com/users/alexis-michaud/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alexis-michaud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexis-michaud/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alexis-michaud"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2022-08-23T20:14:24Z | 2022-09-14T07:32:30Z | null | NONE | null | null | null | **The problem:**
Language diversity is an important dimension of the diversity of datasets. To find one's way around datasets, being able to search by language name and by standardized codes appears crucial.
Currently the list of language codes is [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json), right? At about 1,500 entries, it is roughly at 1/4th of the world's diversity of extant languages. (Probably less, as the list of 1,418 contains variants that are linguistically very close: 108 varieties of English, for instance.)
Looking forward to ever-increasing coverage, how will the list of language names and language codes improve over time?
Enrichment of the custom list by HFT contributors (like [here](https://github.com/huggingface/datasets/pull/4880)) has several issues:
* progress is likely to be slow:

(input required from reviewers, etc.)
* the more contributors, the less consistency can be expected among contributions. No need to elaborate on how much confusion is likely to ensue as datasets accumulate.
* there is no information on which language relates to which: no encoding of the special closeness between the languages of the Northwestern Germanic branch (English+Dutch+German etc.), for instance. Information on phylogenetic closeness can be relevant for running experiments on transferring technology from one language to its close relatives.
**A solution that seems desirable:**
Connecting to an established database that (i) aims at full coverage of the world's languages and (ii) has information on higher-level groupings, alternative names, etc.
It takes a lot of hard work to build such databases. Two important initiatives are [Ethnologue](https://www.ethnologue.com/) (ISO standard) and [Glottolog](https://glottolog.org/). Both have pros and cons. Glottolog contains references to Ethnologue identifiers, so adopting Glottolog entails getting the advantages of both sets of language codes.
Both seem technically accessible & 'developer-friendly'. Glottolog has a [GitHub repo](https://github.com/glottolog/glottolog). For Ethnologue, harvesting tools have been devised (see [here](https://github.com/lyy1994/ethnologue); I did not try it out).
In case a conversation with linguists seemed in order here, I'd be happy to participate ('pro bono', of course), & to rustle up more colleagues as needed, to help this useful development happen.
With appreciation of HFT, | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4881/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4881/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4880/comments | https://api.github.com/repos/huggingface/datasets/issues/4880/events | https://github.com/huggingface/datasets/pull/4880 | 1,348,452,776 | PR_kwDODunzps49qyJr | 4,880 | Added names of less-studied languages | {
"avatar_url": "https://avatars.githubusercontent.com/u/23100612?v=4",
"events_url": "https://api.github.com/users/BenjaminGalliot/events{/privacy}",
"followers_url": "https://api.github.com/users/BenjaminGalliot/followers",
"following_url": "https://api.github.com/users/BenjaminGalliot/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminGalliot/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BenjaminGalliot",
"id": 23100612,
"login": "BenjaminGalliot",
"node_id": "MDQ6VXNlcjIzMTAwNjEy",
"organizations_url": "https://api.github.com/users/BenjaminGalliot/orgs",
"received_events_url": "https://api.github.com/users/BenjaminGalliot/received_events",
"repos_url": "https://api.github.com/users/BenjaminGalliot/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BenjaminGalliot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminGalliot/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BenjaminGalliot"
} | [] | closed | false | null | [] | null | [] | 2022-08-23T19:32:38Z | 2022-08-24T12:52:46Z | 2022-08-24T12:52:46Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4880.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4880",
"merged_at": "2022-08-24T12:52:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4880.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4880"
} | Added names of less-studied languages (nru – Narua and jya – Japhug) for existing datasets. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4880/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4880/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4879/comments | https://api.github.com/repos/huggingface/datasets/issues/4879/events | https://github.com/huggingface/datasets/pull/4879 | 1,348,346,407 | PR_kwDODunzps49qbOl | 4,879 | Fix Citation Information section in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-23T18:06:43Z | 2022-09-27T14:04:45Z | 2022-08-24T04:09:07Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4879.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4879",
"merged_at": "2022-08-24T04:09:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4879.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4879"
} | Fix Citation Information section in dataset cards:
- cc_news
- conllpp
- datacommons_factcheck
- gnad10
- id_panl_bppt
- jigsaw_toxicity_pred
- kinnews_kirnews
- kor_sarcasm
- makhzan
- reasoning_bg
- ro_sts
- ro_sts_parallel
- sanskrit_classic
- telugu_news
- thaiqa_squad
- wiki_movies
This PR partially fixes the Citation Information section in dataset cards. Subsequent PRs will follow to complete this task. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4879/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4879/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4878/comments | https://api.github.com/repos/huggingface/datasets/issues/4878/events | https://github.com/huggingface/datasets/issues/4878 | 1,348,270,141 | I_kwDODunzps5QXPg9 | 4,878 | [not really a bug] `identical_ok` is deprecated in huggingface-hub's `upload_file` | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"color": "008672",
"default": true,
"description": "Extra attention is needed",
"id": 1935892884,
"name": "help wanted",
"node_id": "MDU6TGFiZWwxOTM1ODkyODg0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted"
},
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | [] | null | [] | 2022-08-23T17:09:55Z | 2022-09-13T14:00:06Z | 2022-09-13T14:00:05Z | CONTRIBUTOR | null | null | null | In the huggingface-hub dependency, the `identical_ok` argument has no effect in `upload_file` (and it will be removed soon)
See
https://github.com/huggingface/huggingface_hub/blob/43499582b19df1ed081a5b2bd7a364e9cacdc91d/src/huggingface_hub/hf_api.py#L2164-L2169
It's used here:
https://github.com/huggingface/datasets/blob/fcfcc951a73efbc677f9def9a8707d0af93d5890/src/datasets/dataset_dict.py#L1373-L1381
https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4354-L4362
https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4197-L4213
We should remove it.
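For illustration, a call site without the deprecated keyword could look like this (hedged sketch; the argument values are placeholders, not taken from the linked code):
```python
from huggingface_hub import HfApi

HfApi().upload_file(
    path_or_fileobj=b"file content",   # placeholder payload
    path_in_repo="data/train.parquet",
    repo_id="username/my-dataset",
    repo_type="dataset",
    token="hf_...",                    # placeholder token
)
```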
Maybe the third code sample has unexpected behavior, since it uses the non-default value `identical_ok = False` but the argument is ignored. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4878/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4878/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4877/comments | https://api.github.com/repos/huggingface/datasets/issues/4877/events | https://github.com/huggingface/datasets/pull/4877 | 1,348,246,755 | PR_kwDODunzps49qF-w | 4,877 | Fix documentation card of covid_qa_castorini dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-08-23T16:52:33Z | 2022-08-23T18:05:01Z | 2022-08-23T18:05:00Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4877.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4877",
"merged_at": "2022-08-23T18:05:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4877.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4877"
} | Fix documentation card of covid_qa_castorini dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4877/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4877/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4876/comments | https://api.github.com/repos/huggingface/datasets/issues/4876/events | https://github.com/huggingface/datasets/issues/4876 | 1,348,202,678 | I_kwDODunzps5QW_C2 | 4,876 | Move DatasetInfo from `datasets_infos.json` to the YAML tags in `README.md` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | 2022-08-23T16:16:41Z | 2022-10-03T09:11:13Z | 2022-10-03T09:11:13Z | MEMBER | null | null | null | Currently there are two places to find metadata for datasets:
- `datasets_infos.json`, which contains **per dataset config**:
  - description
  - citation
  - license
  - splits and sizes
  - checksums of the data files
  - feature types
  - and more
- YAML tags, which contain:
  - license
  - language
  - train-eval-index
  - and more
It would be nice to have a single place instead. We can rely on the YAML tags more than the JSON file for consistency with models. And it would all be indexed by our back-end directly, which is nice to have.
One way would be to move everything to the YAML tags except the checksums (there can be tens of thousands of them). The description/citation is already in the dataset card, so we probably don't need to have them in the YAML tags; it would be redundant.
Here is an example for SQuAD
```yaml
download_size: 35142551
dataset_size: 89789763
version: 1.0.0
splits:
- name: train
  num_examples: 87599
  num_bytes: 79317110
- name: validation
  num_examples: 10570
  num_bytes: 10472653
features:
- name: id
  dtype: string
- name: title
  dtype: string
- name: context
  dtype: string
- name: question
  dtype: string
- name: answers
  struct:
  - name: text
    list:
      dtype: string
  - name: answer_start
    list:
      dtype: int32
```
Since there is only one configuration for SQuAD, this structure is ok. For datasets with several configs we can figure that out in a second step, but IMO it would be ok to have these fields per config using another syntax:
```yaml
configs:
- config: unlabeled
  splits:
  - name: train
    num_examples: 10000
  features:
  - name: text
    dtype: string
- config: labeled
  splits:
  - name: train
    num_examples: 100
  features:
  - name: text
    dtype: string
  - name: label
    dtype: ClassLabel
    names:
    - negative
    - positive
```
So in the end you could specify a YAML tag either at the top level (for all configs) or per config in the `configs` field.
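For instance, a mixed layout could look like this (hypothetical sketch of the proposed syntax, not a finalized schema):
```yaml
# Hypothetical: the top-level license applies to all configs,
# while split sizes are specified per config
license: mit
configs:
- config: unlabeled
  splits:
  - name: train
    num_examples: 10000
- config: labeled
  splits:
  - name: train
    num_examples: 100
```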
Alternatively, we could keep config-specific stuff in the `dataset_infos.json` as it is today.
Not sure yet what the best approach is here, but cc @julien-c @mariosasko @albertvillanova @polinaeterna for feedback :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 4,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4876/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4876/timeline | null | completed | true |
https://api.github.com/repos/huggingface/datasets/issues/4875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4875/comments | https://api.github.com/repos/huggingface/datasets/issues/4875/events | https://github.com/huggingface/datasets/issues/4875 | 1,348,095,686 | I_kwDODunzps5QWk7G | 4,875 | `_resolve_features` ignores the token | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | 2022-08-23T14:57:36Z | 2022-10-17T13:45:47Z | null | CONTRIBUTOR | null | null | null | ## Describe the bug
When calling [`_resolve_features()`](https://github.com/huggingface/datasets/blob/54b532a8a2f5353fdb0207578162153f7b2da2ec/src/datasets/iterable_dataset.py#L1255) on a gated dataset, i.e. a dataset that requires a token to be loaded, the token seems to be ignored even when it has been provided to `load_dataset`.
## Steps to reproduce the bug
```python
import os

os.environ["HF_ENDPOINT"] = "https://hub-ci.huggingface.co/"
hf_token = "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"

from datasets import load_dataset

# public
dataset_name = "__DUMMY_DATASETS_SERVER_USER__/repo_csv_data-16612654226756"
config_name = "__DUMMY_DATASETS_SERVER_USER__--repo_csv_data-16612654226756"
split_name = "train"
iterable_dataset = load_dataset(
    dataset_name,
    name=config_name,
    split=split_name,
    streaming=True,
    use_auth_token=hf_token,
)
iterable_dataset = iterable_dataset._resolve_features()
print(iterable_dataset.features)

# gated
dataset_name = "__DUMMY_DATASETS_SERVER_USER__/repo_csv_data-16612654317644"
config_name = "__DUMMY_DATASETS_SERVER_USER__--repo_csv_data-16612654317644"
split_name = "train"
iterable_dataset = load_dataset(
    dataset_name,
    name=config_name,
    split=split_name,
    streaming=True,
    use_auth_token=hf_token,
)
try:
    iterable_dataset = iterable_dataset._resolve_features()
except FileNotFoundError as e:
    print("FAILS")
```
## Expected results
I expect to have the same result on a public dataset and on a gated (or private) dataset, if the token has been provided.
## Actual results
An exception is thrown on gated datasets.
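A possible workaround until the token is forwarded (a hedged sketch, not verified against gated repos): pass `features` explicitly to `load_dataset`, so that `_resolve_features()` never has to download a data file to infer the schema. The column names below are placeholders, not the real schema of the test repo.
```python
from datasets import Features, Value, load_dataset

# Hedged workaround sketch: declare the schema up front so that
# _resolve_features() does not need to stream a file to infer it.
# "col_1"/"col_2" are placeholder column names.
features = Features({"col_1": Value("string"), "col_2": Value("int64")})
iterable_dataset = load_dataset(
    dataset_name,
    name=config_name,
    split=split_name,
    streaming=True,
    use_auth_token=hf_token,
    features=features,
)
print(iterable_dataset.features)  # already set, no extra request needed
```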
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-1017-aws-x86_64-with-glibc2.35
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.4.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4875/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4875/timeline | null | reopened | true |
https://api.github.com/repos/huggingface/datasets/issues/4874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4874/comments | https://api.github.com/repos/huggingface/datasets/issues/4874/events | https://github.com/huggingface/datasets/pull/4874 | 1,347,618,197 | PR_kwDODunzps49n_nI | 4,874 | [docs] Some tiny doc tweaks | {
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c"
} | [] | closed | false | null | [] | null | [] | 2022-08-23T09:19:40Z | 2022-08-24T17:27:57Z | 2022-08-24T17:27:56Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4874.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4874",
"merged_at": "2022-08-24T17:27:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4874.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4874"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4874/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4874/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4873/comments | https://api.github.com/repos/huggingface/datasets/issues/4873/events | https://github.com/huggingface/datasets/issues/4873 | 1,347,592,022 | I_kwDODunzps5QUp9W | 4,873 | Multiple dataloader memory error | {
"avatar_url": "https://avatars.githubusercontent.com/u/13767887?v=4",
"events_url": "https://api.github.com/users/cyk1337/events{/privacy}",
"followers_url": "https://api.github.com/users/cyk1337/followers",
"following_url": "https://api.github.com/users/cyk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/cyk1337/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cyk1337",
"id": 13767887,
"login": "cyk1337",
"node_id": "MDQ6VXNlcjEzNzY3ODg3",
"organizations_url": "https://api.github.com/users/cyk1337/orgs",
"received_events_url": "https://api.github.com/users/cyk1337/received_events",
"repos_url": "https://api.github.com/users/cyk1337/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cyk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyk1337/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cyk1337"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [] | 2022-08-23T08:59:50Z | 2022-09-09T03:02:57Z | null | NONE | null | null | null | For a setup with multiple datasets and tasks, we use more than 200 dataloaders and pass them all into `dataloader1, dataloader2, ..., dataloader200 = accelerate.prepare(dataloader1, dataloader2, ..., dataloader200)`.
This causes a memory error when generating batches. Any solutions?
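One possible direction (an untested sketch, not a confirmed fix): interleave the streaming datasets into a single `IterableDataset` and prepare one dataloader, instead of keeping 200+ iterators alive at once. The dataset names below are placeholders for the real task datasets, and the sketch assumes the streams share a schema.
```python
from accelerate import Accelerator
from datasets import interleave_datasets, load_dataset
from torch.utils.data import DataLoader

# Untested sketch: merge many streaming datasets into one round-robin stream
# so that accelerate only has to wrap a single dataloader.
streams = [
    load_dataset(name, split="train", streaming=True)
    for name in ["dataset_a", "dataset_b"]  # placeholders; 200+ in practice
]
merged = interleave_datasets(streams).with_format("torch")
dataloader = Accelerator().prepare(DataLoader(merged, batch_size=8))
```
For reference, the failing traceback: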
```bash
File "/home/xxx/my_code/src/utils/data_utils.py", line 54, in generate_batch
x = next(iterator)
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 301, in __iter__
for batch in super().__iter__():
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 28, in fetch
data.append(next(self.dataset_iter))
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 249, in __iter__
for element in self.dataset:
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 503, in __iter__
for key, example in self._iter():
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 500, in _iter
yield from ex_iterable
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 231, in __iter__
new_key = "_".join(str(key) for key in keys)
MemoryError
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4873/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4873/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4872/comments | https://api.github.com/repos/huggingface/datasets/issues/4872/events | https://github.com/huggingface/datasets/pull/4872 | 1,347,180,765 | PR_kwDODunzps49mjU9 | 4,872 | Docs for creating an audio dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | [] | 2022-08-23T01:07:09Z | 2022-09-22T17:19:13Z | 2022-09-21T10:27:04Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4872.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4872",
"merged_at": "2022-09-21T10:27:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4872.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4872"
} | This PR is a first draft of how to create audio datasets (`AudioFolder` and loading script). Feel free to let me know if there are any specificities I'm missing for this. 🙂 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4872/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4872/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4871/comments | https://api.github.com/repos/huggingface/datasets/issues/4871/events | https://github.com/huggingface/datasets/pull/4871 | 1,346,703,568 | PR_kwDODunzps49k9Rm | 4,871 | Fix: wmt datasets - fix CWMT zh subsets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2022-08-22T16:42:09Z | 2022-08-23T10:00:20Z | 2022-08-23T10:00:19Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4871.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4871",
"merged_at": "2022-08-23T10:00:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4871.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4871"
} | Fix https://github.com/huggingface/datasets/issues/4575
TODO: run `datasets-cli test`:
- [x] wmt17
- [x] wmt18
- [x] wmt19 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4871/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4871/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4870/comments | https://api.github.com/repos/huggingface/datasets/issues/4870/events | https://github.com/huggingface/datasets/pull/4870 | 1,346,160,498 | PR_kwDODunzps49jGxD | 4,870 | audio folder check CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | null | [] | null | [] | 2022-08-22T10:15:53Z | 2022-11-02T11:54:35Z | 2022-08-22T12:19:40Z | CONTRIBUTOR | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/4870.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4870",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4870.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4870"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4870/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4870/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4869/comments | https://api.github.com/repos/huggingface/datasets/issues/4869/events | https://github.com/huggingface/datasets/pull/4869 | 1,345,513,758 | PR_kwDODunzps49hBGY | 4,869 | Fix typos in documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/85993954?v=4",
"events_url": "https://api.github.com/users/fl-lo/events{/privacy}",
"followers_url": "https://api.github.com/users/fl-lo/followers",
"following_url": "https://api.github.com/users/fl-lo/following{/other_user}",
"gists_url": "https://api.github.com/users/fl-lo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fl-lo",
"id": 85993954,
"login": "fl-lo",
"node_id": "MDQ6VXNlcjg1OTkzOTU0",
"organizations_url": "https://api.github.com/users/fl-lo/orgs",
"received_events_url": "https://api.github.com/users/fl-lo/received_events",
"repos_url": "https://api.github.com/users/fl-lo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fl-lo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fl-lo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fl-lo"
} | [] | closed | false | null | [] | null | [] | 2022-08-21T15:10:03Z | 2022-08-22T09:25:39Z | 2022-08-22T09:09:58Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4869.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4869",
"merged_at": "2022-08-22T09:09:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4869.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4869"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4869/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4869/timeline | null | null | true |