url (string, 58-61 chars) | repository_url (string, 1 class) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 46-51 chars) | id (int64, 599M-1.23B) | node_id (string, 18-32 chars) | number (int64, 1-4.3k) | title (string, 1-276 chars) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64, 1,587B-1,652B) | updated_at (int64, 1,587B-1,652B) | closed_at (int64, 1,587B-1,652B, nullable) | author_association (string, 3 classes) | active_lock_reason (null) | draft (bool, 2 classes) | pull_request (dict) | body (string, 0-228k chars, nullable) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (null) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3989/comments | https://api.github.com/repos/huggingface/datasets/issues/3989/events | https://github.com/huggingface/datasets/pull/3989 | 1,176,955,078 | PR_kwDODunzps400l1S | 3,989 | Remove old wikipedia leftovers | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> This makes me think we shouldn't advise the use of load_dataset in dataset scripts, since it doesn't guarantee that the cache will work as expected (the cache directory is not set correctly, and the required disk space for downloaded files is not recorded)\r\n\r\n@lhoestq, do you think it could be a good idea to add a comment in this script WARNING that using load_dataset in a script is not good practice and that people should avoid using that script as a template to create other scripts? ",
"good idea ! :)"
] | 1,647,962,746,000 | 1,648,740,926,000 | 1,648,740,616,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3989",
"html_url": "https://github.com/huggingface/datasets/pull/3989",
"diff_url": "https://github.com/huggingface/datasets/pull/3989.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3989.patch",
"merged_at": 1648740616000
} | After updating the Wikipedia dataset, remove the old Wikipedia leftovers from the docs.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3989/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3988/comments | https://api.github.com/repos/huggingface/datasets/issues/3988/events | https://github.com/huggingface/datasets/pull/3988 | 1,176,858,540 | PR_kwDODunzps400RGb | 3,988 | More consistent references in docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks good, thanks for working on this!"
] | 1,647,958,721,000 | 1,647,968,792,000 | 1,647,967,844,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3988",
"html_url": "https://github.com/huggingface/datasets/pull/3988",
"diff_url": "https://github.com/huggingface/datasets/pull/3988.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3988.patch",
"merged_at": 1647967843000
} | Aligns the internal references with the style discussed in https://github.com/huggingface/datasets/pull/3980.
cc @stevhliu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3988/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3988/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3987/comments | https://api.github.com/repos/huggingface/datasets/issues/3987/events | https://github.com/huggingface/datasets/pull/3987 | 1,176,481,659 | PR_kwDODunzps40zAxF | 3,987 | Fix Faiss custom_index device | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,940,284,000 | 1,648,124,339,000 | 1,648,124,052,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3987",
"html_url": "https://github.com/huggingface/datasets/pull/3987",
"diff_url": "https://github.com/huggingface/datasets/pull/3987.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3987.patch",
"merged_at": 1648124052000
} | Currently, if both `custom_index` and `device` are passed to `FaissIndex`, `device` is silently ignored.
This PR fixes this by raising a ValueError if both arguments are passed.
Alternatively, the `custom_index` could be transferred to the target `device`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3987/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3986/comments | https://api.github.com/repos/huggingface/datasets/issues/3986/events | https://github.com/huggingface/datasets/issues/3986 | 1,176,429,565 | I_kwDODunzps5GHuP9 | 3,986 | Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface) | {
"login": "kelvinAI",
"id": 10686779,
"node_id": "MDQ6VXNlcjEwNjg2Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/10686779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kelvinAI",
"html_url": "https://github.com/kelvinAI",
"followers_url": "https://api.github.com/users/kelvinAI/followers",
"following_url": "https://api.github.com/users/kelvinAI/following{/other_user}",
"gists_url": "https://api.github.com/users/kelvinAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kelvinAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kelvinAI/subscriptions",
"organizations_url": "https://api.github.com/users/kelvinAI/orgs",
"repos_url": "https://api.github.com/users/kelvinAI/repos",
"events_url": "https://api.github.com/users/kelvinAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/kelvinAI/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! I didn't managed to reproduce the issue. When you kill the process, is there any stacktrace that shows at what point in the code python is hanging ?",
"Hi @lhoestq , I've traced the issue back to file locking. It's similar to this thread, using Lustre filesystem as well. https://github.com/huggingface/datasets/issues/329 . In this case the user was able to modify and add -o flock option while mounting and it solved the problem. \r\nHowever in other cases such as mine, we do not have the permissions to modify the commands while mounting. I'm still trying to figure out a workaround. Any ideas how can we use a mounted Lustre filesystem with no flock option?\r\n"
] | 1,647,937,401,000 | 1,648,698,691,000 | null | NONE | null | null | null | ## Describe the bug
Dataset loads indefinitely after modifying cache path (~/.cache/huggingface)
If none of the environment variables are set, this custom dataset loads fine (a JSON-based dataset with a custom dataset loading script).
**Update:** Transformer modules face the same issue during loading.
## A clear and concise description of what the bug is.
Issue:
- Dataset loading stalls / freezes indefinitely when HF_HOME is changed to a custom directory
- No error code, had to terminate the process
- There are some files created in the cache directory:
```
custom_cache_dir
| -- modules
| -- __init__.py
| -- datasets_modules
| -- __init__.py
| -- datasets
| -- __init__.py
| -- script.py (Dataset loading script)
| -- script.lock
```
There's no error nor any logs thrown, so I'm out of ideas on how to debug this. The custom dataset works fine if the default ~/.cache dir is used, but unfortunately it's out of space and we do not have permissions to modify the disk.
## Steps to reproduce the bug
What I've tried:
- Modifying HF_HOME (https://github.com/huggingface/transformers/issues/8703)
- Modifying HF_DATASETS_CACHE (https://huggingface.co/docs/datasets/v1.12.0/cache.html)
- Modifying cache_dir param during runtime
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset('test_dataset', cache_dir='/path/to/new/cache')
```
- Disabling dataset cache
```python
>>> from datasets import set_caching_enabled
>>> set_caching_enabled(False)
```
## Expected results
Datasets should load / cache as usual, with the only exception that the cache directory is different.
## Actual results
Any of the actions taken above to change the cache directory result in loading indefinitely without terminating.
## Environment info
- `transformers` version: 4.18.0.dev0
- Platform: Linux-4.15.0-54-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3986/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3985/comments | https://api.github.com/repos/huggingface/datasets/issues/3985/events | https://github.com/huggingface/datasets/issues/3985 | 1,175,982,937 | I_kwDODunzps5GGBNZ | 3,985 | [image feature] Too many files open error when image feature is returned as a path | {
"login": "apsdehal",
"id": 3616806,
"node_id": "MDQ6VXNlcjM2MTY4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apsdehal",
"html_url": "https://github.com/apsdehal",
"followers_url": "https://api.github.com/users/apsdehal/followers",
"following_url": "https://api.github.com/users/apsdehal/following{/other_user}",
"gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions",
"organizations_url": "https://api.github.com/users/apsdehal/orgs",
"repos_url": "https://api.github.com/users/apsdehal/repos",
"events_url": "https://api.github.com/users/apsdehal/events{/privacy}",
"received_events_url": "https://api.github.com/users/apsdehal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,647,899,645,000 | 1,648,059,567,000 | 1,648,059,567,000 | MEMBER | null | null | null | ## Describe the bug
PR in context: #3967. If I load the dataset in this PR (TextVQA) and do a simple list comprehension on the dataset, I get a `Too many open files` error. This is happening due to the way we load the image feature when a str path is returned from `_generate_examples`. Specifically, at https://github.com/huggingface/datasets/blob/508eb4ab5d52f590baa677b4f64b1cc069139f7b/src/datasets/features/image.py#L110, we open the file handle to the image but never close it. This, in my understanding, is causing the issue.
## Steps to reproduce the bug
Pull the PR locally and run the following code
```python
from datasets import load_dataset
dataset = load_dataset("./datasets/textvqa")["train"]
data = [item for item in dataset]
# Error happens
```
## Expected results
List comprehension should work smoothly
## Actual results
`Too many open files error`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.1.dev0
- Platform: macOS-12.2-arm64-arm-64bit
- Python version: 3.10.0
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3985/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3984/comments | https://api.github.com/repos/huggingface/datasets/issues/3984/events | https://github.com/huggingface/datasets/issues/3984 | 1,175,822,117 | I_kwDODunzps5GFZ8l | 3,984 | Local and automatic tests fail | {
"login": "MarkusSagen",
"id": 20767068,
"node_id": "MDQ6VXNlcjIwNzY3MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/20767068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkusSagen",
"html_url": "https://github.com/MarkusSagen",
"followers_url": "https://api.github.com/users/MarkusSagen/followers",
"following_url": "https://api.github.com/users/MarkusSagen/following{/other_user}",
"gists_url": "https://api.github.com/users/MarkusSagen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarkusSagen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarkusSagen/subscriptions",
"organizations_url": "https://api.github.com/users/MarkusSagen/orgs",
"repos_url": "https://api.github.com/users/MarkusSagen/repos",
"events_url": "https://api.github.com/users/MarkusSagen/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarkusSagen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! To be able to run the tests, you need to install all the test dependencies and additional ones with\r\n```\r\npip install -e .[tests]\r\npip install -r additional-tests-requirements.txt --no-deps\r\n```\r\n\r\nIn particular, you probably need to `sacrebleu`. It looks like it wasn't able to instantiate `sacrebleu.TER` properly."
] | 1,647,889,657,000 | 1,648,473,525,000 | null | NONE | null | null | null | ## Describe the bug
Running the tests from CircleCI on a PR or locally fails, even with no changes. The tests seem to fail on `test_metric_common.py`.
## Steps to reproduce the bug
```shell
git clone https://github.com/huggingface/datasets.git
cd datasets
```
```python
python -m pip install -e .
pytest
```
## Expected results
All tests passing
## Actual results
```
tests/test_metric_common.py:91:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.pyenv/versions/3.8.5/lib/python3.8/doctest.py:1336: in __run
exec(compile(example.source, filename, "single",
<doctest datasets_modules.metrics.ter.c0cfb5adedac7eb15ffa47bba6a70fabd80f3eb906ee508abf5e1906285d1155.ter.Ter[3]>:1: in <module>
???
../datasets/src/datasets/metric.py:430: in compute
output = self._compute(**inputs, **compute_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Metric(name: "ter", features: {'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Val...ences=references)
>>> print(results)
{'score': 0.0, 'num_edits': 0, 'ref_length': 6.5}
""", stored examples: 0)
predictions = ['hello there general kenobi', 'foo bar foobar']
references = [['hello there general kenobi', 'hello there !'], ['foo bar foobar', 'foo bar foobar']]
normalized = False, no_punct = False, asian_support = False, case_sensitive = False
def _compute(
self,
predictions,
references,
normalized: bool = False,
no_punct: bool = False,
asian_support: bool = False,
case_sensitive: bool = False,
):
references_per_prediction = len(references[0])
if any(len(refs) != references_per_prediction for refs in references):
raise ValueError("Sacrebleu requires the same number of references for each prediction")
transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)]
> sb_ter = TER(normalized, no_punct, asian_support, case_sensitive)
E TypeError: __init__() takes 2 positional arguments but 5 were given
/tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/metrics/ter/c0cfb5adedac7eb15ffa47bba6a70fabd80f3eb906ee508abf5e1906285d1155/ter.py:130: TypeError
------------------------------ Captured stdout call -------------------------------
Trying:
predictions = ["hello there general kenobi", "foo bar foobar"]
Expecting nothing
ok
Trying:
references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]]
Expecting nothing
ok
Trying:
ter = datasets.load_metric("ter")
Expecting nothing
ok
Trying:
results = ter.compute(predictions=predictions, references=references)
Expecting nothing
================================ warnings summary =================================
../.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/hdfs/config.py:15
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/hdfs/config.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
from imp import load_source
../datasets/src/datasets/commands/test.py:35
/home/markussagen/datasets/src/datasets/commands/test.py:35: PytestCollectionWarning: cannot collect test class 'TestCommand' because it has a __init__ constructor (from: tests/commands/test_test.py)
class TestCommand(BaseDatasetsCLICommand):
tests/commands/test_test.py:33
/home/markussagen/mydataset/tests/commands/test_test.py:33: PytestCollectionWarning: cannot collect test class 'TestCommandArgs' because it has a __new__ constructor (from: tests/commands/test_test.py)
class TestCommandArgs:
tests/test_arrow_dataset.py: 760 warnings
tests/test_formatting.py: 60 warnings
tests/test_search.py: 31 warnings
tests/features/test_array_xd.py: 117 warnings
/home/markussagen/datasets/src/datasets/formatting/formatting.py:197: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
(isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape))
tests/test_arrow_dataset.py: 154 warnings
tests/features/test_array_xd.py: 1 warning
/home/markussagen/datasets/src/datasets/formatting/formatting.py:201: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object})
tests/test_arrow_dataset.py: 60 warnings
/home/markussagen/datasets/src/datasets/arrow_dataset.py:3105: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
elif np.issubdtype(values.dtype, np.str):
tests/test_arrow_dataset.py: 138 warnings
tests/test_formatting.py: 21 warnings
/home/markussagen/datasets/src/datasets/formatting/tf_formatter.py:69: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
data_struct.dtype == np.object
tests/test_arrow_dataset.py: 240 warnings
tests/test_formatting.py: 20 warnings
/home/markussagen/datasets/src/datasets/formatting/torch_formatter.py:49: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects
tests/test_arrow_dataset.py: 12 warnings
tests/test_search.py: 2 warnings
tests/features/test_array_xd.py: 6 warnings
tests/features/test_image.py: 4 warnings
/home/markussagen/datasets/src/datasets/features/features.py:1129: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
[0] + [len(arr) for arr in l_arr], dtype=np.object
tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_banking77
/tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/datasets/banking77/aec0289529599d4572d76ab00c8944cb84f88410ad0c9e7da26189d31f62a55b/banking77.py:24: DeprecationWarning: invalid escape sequence \~
_CITATION = """\
tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_universal_dependencies
/tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/datasets/universal_dependencies/065e728dfe9a8371434a6e87132c2386a6eacab1a076d3a12aa417b994e6ef7d/universal_dependencies.py:6: DeprecationWarning: invalid escape sequence \=
_CITATION = """\
tests/test_filesystem.py: 105 warnings
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/responses/__init__.py:398: DeprecationWarning: stream argument is deprecated. Use stream parameter in request directly
warn(
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs
tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs
tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs
tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs
/home/markussagen/datasets/src/datasets/formatting/jax_formatter.py:57: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
if data_struct.dtype == np.object: # jax arrays cannot be instantied from an array of objects
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
tests/test_formatting.py::FormatterTest::test_jax_formatter
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py:3567: UserWarning: Explicitly requested dtype <class 'jax._src.numpy.lax_numpy.int64'> requested in array is not available, and will be truncated to dtype int32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more.
lax._check_user_dtype_supported(dtype, "array")
tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/apscheduler/util.py:95: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html
if obj.zone == 'local':
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features
_audio
/home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/librosa/core/constantq.py:1059: DeprecationWarning: `np.complex` is a deprecated alias for the builtin `complex`. To silence this warning, use `complex` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.complex128` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
dtype=np.complex,
tests/features/test_array_xd.py::test_array_xd_with_none
/home/markussagen/mydataset/tests/features/test_array_xd.py:338: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
assert isinstance(arr, np.ndarray) and arr.dtype == np.object and arr.shape == (3,)
-- Docs: https://docs.pytest.org/en/stable/warnings.html
============================= short test summary info =============================
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bleurt - I...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_chrf - Att...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_code_eval
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_comet - Im...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_coval - Im...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_perplexity
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_ter - Type...
```
## Environment info
- `datasets` version: 2.0.1.dev0
- Platform: Linux-5.16.11-76051611-generic-x86_64-with-glibc2.33
- Python version: 3.8.5
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3984/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3983/comments | https://api.github.com/repos/huggingface/datasets/issues/3983/events | https://github.com/huggingface/datasets/issues/3983 | 1,175,759,412 | I_kwDODunzps5GFKo0 | 3,983 | Infinitely attempting lock | {
"login": "jyrr",
"id": 11869652,
"node_id": "MDQ6VXNlcjExODY5NjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/11869652?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jyrr",
"html_url": "https://github.com/jyrr",
"followers_url": "https://api.github.com/users/jyrr/followers",
"following_url": "https://api.github.com/users/jyrr/following{/other_user}",
"gists_url": "https://api.github.com/users/jyrr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jyrr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jyrr/subscriptions",
"organizations_url": "https://api.github.com/users/jyrr/orgs",
"repos_url": "https://api.github.com/users/jyrr/repos",
"events_url": "https://api.github.com/users/jyrr/events{/privacy}",
"received_events_url": "https://api.github.com/users/jyrr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting. We're using `py-filelock` as our locking mechanism.\r\n\r\nCan you try deleting the .lock file mentioned in the logs and try again ? Make sure that no other process is generating the `cnn_dailymail` dataset.\r\n\r\nIf it doesn't work, could you try to set up a lock using the latest version of `py-filelock` and see if it works ?\r\n\r\n```\r\npip install filelock\r\n```\r\nhere is a code example from the `py-filelock` documentation that you can try:\r\n```python\r\nfrom filelock import Timeout, FileLock\r\n\r\nlock = FileLock(\"high_ground.txt.lock\")\r\nwith lock:\r\n with open(\"high_ground.txt\", \"a\") as f:\r\n f.write(\"You were the chosen one.\")\r\n```"
] | 1,647,886,317,000 | 1,651,853,538,000 | 1,651,853,538,000 | NONE | null | null | null | I am trying to run one of the examples of the `transformers` repo, which makes use of `datasets`.
It is important to note that I am trying to run this via a Databricks notebook, and all the files reside in the Databricks Filesystem (DBFS).
```
%sh
python /dbfs/transformers/examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /dbfs/transformers/tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--log_level debug \
--cache_dir /dbfs/transformers/cache
```
All goes well until acquiring a lock --
```
03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ...
```
and so on.
I imagine this has to do with DBFS -- is there a way to tackle this? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3983/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3982/comments | https://api.github.com/repos/huggingface/datasets/issues/3982/events | https://github.com/huggingface/datasets/pull/3982 | 1,175,478,099 | PR_kwDODunzps40vrR_ | 3,982 | Exclude Google Drive tests of the CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I was thinking exactly the same: running unit tests that request continuously a third-party API is not a good idea."
] | 1,647,873,256,000 | 1,648,744,682,000 | 1,647,874,295,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3982",
"html_url": "https://github.com/huggingface/datasets/pull/3982",
"diff_url": "https://github.com/huggingface/datasets/pull/3982.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3982.patch",
"merged_at": 1647874295000
} | These tests make the CI spam the Google Drive API, the CI now gets banned by Google Drive very often.
I think we can just skip these tests from the CI for now.
In the future we could have a CI job that runs only once a day or once a week for such cases
cc @albertvillanova @mariosasko @severo
Close #3415
![image](https://user-images.githubusercontent.com/42851186/159283608-fdeca1ac-b57f-4fa3-bf09-6fa5361c494f.png)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3982/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3982/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3981/comments | https://api.github.com/repos/huggingface/datasets/issues/3981/events | https://github.com/huggingface/datasets/pull/3981 | 1,175,423,517 | PR_kwDODunzps40vfra | 3,981 | Add TER metric card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,870,876,000 | 1,648,562,231,000 | 1,648,561,900,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3981",
"html_url": "https://github.com/huggingface/datasets/pull/3981",
"diff_url": "https://github.com/huggingface/datasets/pull/3981.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3981.patch",
"merged_at": 1648561900000
} | Add TER metric card
This card is still missing content for the following sections:
- **Limitations & Biases**
- **Values from Papers**
If anyone has any ideas for either of the above, feel free to either add them or point me to them and I'll add them! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3981/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3980/comments | https://api.github.com/repos/huggingface/datasets/issues/3980/events | https://github.com/huggingface/datasets/pull/3980 | 1,175,412,905 | PR_kwDODunzps40vdcH | 3,980 | Add tip on how to speed up loading with ImageFolder | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for adding that tip! 👍 \r\n\r\nFor the docs syntax, it might be better if we hide the package name/full path to the class or function and only show the name of it. I think it's easier for users to read the function name (eg,`cast_column`) instead of the full path which can be a bit lengthy for some functions like `datasets.IterableDataset.remove_columns` (and if we like this idea, we can align the rest of the docs on it). ",
"> For the docs syntax, it might be better if we hide the package name/full path to the class or function and only show the name of it. I think it's easier for users to read the function name (eg,cast_column) instead of the full path which can be a bit lengthy for some functions like datasets.IterableDataset.remove_columns (and if we like this idea, we can align the rest of the docs on it).\r\n\r\nThat's also OK, as long as we are consistent.\r\n\r\n@lhoestq @albertvillanova @polinaeterna Which one of these two styles do you prefer?",
"Agree on hiding `datasets` name. Not sure about hiding class name as it's anyway not visible for users if they use `Dataset.cast_column` or `IterableDataset.cast_column` when working with their datasets. But I agree that the most important thing is to be consistent :)",
"Good points! :)\r\n\r\nI think it'll be good to show the class name since some functions have different parameters. For example, if users click on `IterableDataset.map` and then `Dataset.map`, they'll see different parameters and have to figure out why (which isn't too difficult I guess lol). But showing the class name avoids any confusion upfront. "
] | 1,647,870,358,000 | 1,647,956,385,000 | 1,647,956,096,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3980",
"html_url": "https://github.com/huggingface/datasets/pull/3980",
"diff_url": "https://github.com/huggingface/datasets/pull/3980.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3980.patch",
"merged_at": 1647956096000
} | This PR does two things:
* adds a tip on how to speed up loading of a large number of files with ImageFolder (motivated by [this issue](https://github.com/huggingface/datasets/issues/3960))
* replaces the current references to the `Dataset` methods in the Image Processing doc with their fully qualified counterparts (to align it with the Audio Processing doc)
cc @stevhliu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3980/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3979/comments | https://api.github.com/repos/huggingface/datasets/issues/3979/events | https://github.com/huggingface/datasets/pull/3979 | 1,175,258,969 | PR_kwDODunzps40u8NY | 3,979 | Fix google drive streaming for small files | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Actually the CI fails because of this\r\n![image](https://user-images.githubusercontent.com/42851186/159281771-78e611b1-6b04-4a87-8324-b6ba2d8c6a6a.png)\r\n\r\nIt looks like we can't have a proper way to test google drive in the CI right now. Though it seems to work locally if you're not banned. I think I'll just disable those tests for now",
"this fix will not be included?",
"No we can't do anything except stop using google drive when possible"
] | 1,647,862,726,000 | 1,648,141,151,000 | 1,647,872,758,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3979",
"html_url": "https://github.com/huggingface/datasets/pull/3979",
"diff_url": "https://github.com/huggingface/datasets/pull/3979.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3979.patch",
"merged_at": null
} | Google Drive made another change recently, following #3787 and #3843.
In particular, Google Drive now returns 403 for GET requests with `confirm=t` when a file doesn't have a virus warning message. I fixed this by passing `confirm=t` if and only if there is one (i.e. when the status code is 200 for HEAD). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3979/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3978/comments | https://api.github.com/repos/huggingface/datasets/issues/3978/events | https://github.com/huggingface/datasets/issues/3978 | 1,175,226,456 | I_kwDODunzps5GDIhY | 3,978 | I can't view HFcallback dataset for ASR Space | {
"login": "kingabzpro",
"id": 36753484,
"node_id": "MDQ6VXNlcjM2NzUzNDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/36753484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kingabzpro",
"html_url": "https://github.com/kingabzpro",
"followers_url": "https://api.github.com/users/kingabzpro/followers",
"following_url": "https://api.github.com/users/kingabzpro/following{/other_user}",
"gists_url": "https://api.github.com/users/kingabzpro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kingabzpro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingabzpro/subscriptions",
"organizations_url": "https://api.github.com/users/kingabzpro/orgs",
"repos_url": "https://api.github.com/users/kingabzpro/repos",
"events_url": "https://api.github.com/users/kingabzpro/events{/privacy}",
"received_events_url": "https://api.github.com/users/kingabzpro/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"the dataset viewer is working on this dataset. I imagine the issue is that we would expect to be able to listen to the audio files in the `Please Record Your Voice file` column, right?\r\n\r\nmaybe @lhoestq or @albertvillanova could help\r\n\r\n<img width=\"1019\" alt=\"Capture d’écran 2022-03-24 à 17 36 20\" src=\"https://user-images.githubusercontent.com/1676121/159966006-57dcf8f7-b65f-4200-ac8c-66859318a8bb.png\">\r\n",
"The structure of the dataset is not supported. Only the CSV file is parsed and the audio files are ignored.\r\n\r\nWe're working on supporting audio datasets with a specific structure in #3963 ",
"Got it."
] | 1,647,860,869,000 | 1,649,079,278,000 | null | NONE | null | null | null | ## Dataset viewer issue for '*Urdu-ASR-flags*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags)*
*I think the dataset should show something, and if you want me to add a script, please show me the documentation. I thought this was supposed to be an automatic task.*
Am I the one who added this dataset? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3978/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3977 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3977/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3977/comments | https://api.github.com/repos/huggingface/datasets/issues/3977/events | https://github.com/huggingface/datasets/issues/3977 | 1,175,049,927 | I_kwDODunzps5GCdbH | 3,977 | Adapt `docs/README.md` for datasets | {
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | open | false | null | [] | null | [
"Thanks for reporting @qqaatw.\r\n\r\nYes, we should definitely adapt that file for `datasets`. "
] | 1,647,851,209,000 | 1,647,852,855,000 | null | CONTRIBUTOR | null | null | null | ## Describe the bug
Currently, `docs/README.md` is a direct copy from `transformers`; we should probably adapt this file for `datasets`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3977/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3977/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3976/comments | https://api.github.com/repos/huggingface/datasets/issues/3976/events | https://github.com/huggingface/datasets/pull/3976 | 1,175,043,780 | PR_kwDODunzps40uOY6 | 3,976 | Fix main classes reference in docs | {
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3976). All of your documentation changes will be reflected on that endpoint.",
"Not sure why some section titles end with `[[datasets.xxx]]`, like this: https://moon-ci-docs.huggingface.co/docs/datasets/pr_3976/en/package_reference/main_classes#datasetdict[[datasets.datasetdict]]",
"Thanks ! I think this has been fixed already in https://github.com/huggingface/datasets/pull/3925 though\r\n\r\nI'm closing this one then if it's fine for you"
] | 1,647,850,786,000 | 1,649,773,179,000 | 1,649,773,178,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3976",
"html_url": "https://github.com/huggingface/datasets/pull/3976",
"diff_url": "https://github.com/huggingface/datasets/pull/3976.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3976.patch",
"merged_at": null
} | Currently the section index (on the page's right side) of the [main classes reference](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes) incorrectly displays `Tensor returned:`; this PR fixes the issue by wrapping the code examples on that page in markdown code blocks.
There are other examples in the datasets library with this issue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3976/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3975/comments | https://api.github.com/repos/huggingface/datasets/issues/3975/events | https://github.com/huggingface/datasets/pull/3975 | 1,174,678,942 | PR_kwDODunzps40tKdS | 3,975 | Update many missing tags to dataset README's | {
"login": "MarkusSagen",
"id": 20767068,
"node_id": "MDQ6VXNlcjIwNzY3MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/20767068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarkusSagen",
"html_url": "https://github.com/MarkusSagen",
"followers_url": "https://api.github.com/users/MarkusSagen/followers",
"following_url": "https://api.github.com/users/MarkusSagen/following{/other_user}",
"gists_url": "https://api.github.com/users/MarkusSagen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarkusSagen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarkusSagen/subscriptions",
"organizations_url": "https://api.github.com/users/MarkusSagen/orgs",
"repos_url": "https://api.github.com/users/MarkusSagen/repos",
"events_url": "https://api.github.com/users/MarkusSagen/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarkusSagen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,647,808,947,000 | 1,647,887,992,000 | 1,647,887,992,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3975",
"html_url": "https://github.com/huggingface/datasets/pull/3975",
"diff_url": "https://github.com/huggingface/datasets/pull/3975.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3975.patch",
"merged_at": null
} | I've started to go through the available datasets and noticed that there are 127 datasets that do not have all the tags, so I started filling them in, starting with some of the most common and QA datasets
Not 100% certain that the task_id is correct for SuperGLUE
If anyone is browsing the issues and would like to help make Hugging Face datasets even more feature-complete and awesome, feel free to use this tool I wrote to find the missing tags in the [datacards](https://github.com/Hugging-Face-Supporter/datacards). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3975/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3974/comments | https://api.github.com/repos/huggingface/datasets/issues/3974/events | https://github.com/huggingface/datasets/pull/3974 | 1,174,485,044 | PR_kwDODunzps40ssrA | 3,974 | Add XFUN dataset | {
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3974). All of your documentation changes will be reflected on that endpoint.",
"Not sure how to generate dummy data.\r\n\r\nThe downloaded file structure is \r\n\r\n- document file paths\r\n - (a json file containing all documents info, document images folder)\r\n - (a json file containing all documents info, document images folder)\r\n - ...",
"Hey @mariosasko, thanks for the review. I'm not sure how to suggest these changes to the owner @ranpox, and I did spend some time to write the model card and hope to get it on the official repo. Is that possible?",
"Since the author is not responding, maybe we can go ahead with this PR ?",
"Go for it!\n\nOn Tue, Apr 12, 2022 at 10:24 AM Quentin Lhoest ***@***.***>\nwrote:\n\n> Since the author is not responding, maybe we can go ahead with this PR ?\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/3974#issuecomment-1096797650>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ATFNL66EVUFWS3P2FOAS7SLVEWBP3ANCNFSM5RFH3MXA>\n> .\n> You are receiving this because you are subscribed to this thread.Message\n> ID: ***@***.***>\n>\n",
"@qqaatw Do you plan to finish this PR? I can give you some pointers and help you with the code if needed.",
"@mariosasko Yes, I'll apply all of the suggestions when I have some time."
] | 1,647,768,294,000 | 1,650,385,871,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3974",
"html_url": "https://github.com/huggingface/datasets/pull/3974",
"diff_url": "https://github.com/huggingface/datasets/pull/3974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3974.patch",
"merged_at": null
} | This PR adds XFUN dataset.
Home page and repository: https://github.com/doc-analysis/XFUND
Source code: https://github.com/microsoft/unilm/blob/master/layoutlmft/layoutlmft/data/datasets/xfun.py | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3974/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3973/comments | https://api.github.com/repos/huggingface/datasets/issues/3973/events | https://github.com/huggingface/datasets/issues/3973 | 1,174,455,431 | I_kwDODunzps5GAMSH | 3,973 | ConnectionError and SSLError | {
"login": "yanyu2015",
"id": 11142054,
"node_id": "MDQ6VXNlcjExMTQyMDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/11142054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanyu2015",
"html_url": "https://github.com/yanyu2015",
"followers_url": "https://api.github.com/users/yanyu2015/followers",
"following_url": "https://api.github.com/users/yanyu2015/following{/other_user}",
"gists_url": "https://api.github.com/users/yanyu2015/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanyu2015/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanyu2015/subscriptions",
"organizations_url": "https://api.github.com/users/yanyu2015/orgs",
"repos_url": "https://api.github.com/users/yanyu2015/repos",
"events_url": "https://api.github.com/users/yanyu2015/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanyu2015/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! You can download the `oscar.py` file from this repository at `/datasets/oscar/oscar.py`.\r\n\r\nThen you can load the dataset by passing the local path to `oscar.py` to `load_dataset`:\r\n```python\r\nload_dataset(\"path/to/oscar.py\", \"unshuffled_deduplicated_it\")\r\n```",
"it works,but another error occurs.\r\n```\r\nConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt (SSLError(MaxRetryError(\"HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))\")))\r\n```\r\nI can access `https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt` and `https://aws.amazon.com/cn/s3/` directly, so why it reports a SSLError, should I need tomodify the host file?",
"Could it be an issue with your python environment or your version of OpenSSL ?",
"you are so wise!\r\nit report [ConnectionError] in python 3.9.7\r\nand works well in python 3.8.12\r\n\r\nI need you help again: how can I specify the path for download files?\r\nthe data is too large and my C hardware is not enough",
"Cool ! And you can specify the path for download files with to the `cache_dir` parameter:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('oscar', 'unshuffled_deduplicated_it', cache_dir='path/to/directory')",
"It takes me some days to download data completely, Despise sometimes it occurs again, change py version is feasible way to avoid this ConnectionEror.\r\nparameter `cache_dir` works well, thanks for your kindness again!"
] | 1,647,758,737,000 | 1,648,628,012,000 | 1,648,628,012,000 | NONE | null | null | null | code
```
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
```
bug report
```
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_29788/2615425180.py in <module>
----> 1 dataset = load_dataset('oscar', 'unshuffled_deduplicated_it')
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1658
1659 # Create a dataset builder
-> 1660 builder_instance = load_dataset_builder(
1661 path=path,
1662 name=name,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1484 download_config = download_config.copy() if download_config else DownloadConfig()
1485 download_config.use_auth_token = use_auth_token
-> 1486 dataset_module = dataset_module_factory(
1487 path,
1488 revision=revision,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1236 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1237 ) from None
-> 1238 raise e1 from None
1239 else:
1240 raise FileNotFoundError(
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1173 if path.count("/") == 0: # even though the dataset is on the Hub, we get it from GitHub for now
1174 # TODO(QL): use a Hub dataset module factory instead of GitHub
-> 1175 return GithubDatasetModuleFactory(
1176 path,
1177 revision=revision,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in get_module(self)
531 revision = self.revision
532 try:
--> 533 local_path = self.download_loading_script(revision)
534 except FileNotFoundError:
535 if revision is not None or os.getenv("HF_SCRIPTS_VERSION", None) is not None:
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in download_loading_script(self, revision)
511 if download_config.download_desc is None:
512 download_config.download_desc = "Downloading builder script"
--> 513 return cached_path(file_path, download_config=download_config)
514
515 def download_dataset_infos_file(self, revision: Optional[str]) -> str:
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
232 if is_remote_url(url_or_filename):
233 # URL, so get it from the cache (downloading if necessary)
--> 234 output_path = get_from_cache(
235 url_or_filename,
236 cache_dir=cache_dir,
D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\utils\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
580 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
581 if head_error is not None:
--> 582 raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
583 elif response is not None:
584 raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/oscar/oscar.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.0.0/datasets/oscar/oscar.py (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))")))
```
It may be caused by an SSLError (possibly because I'm in China?), since it works well on Google Colab.
So how can I download this dataset manually?
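For reference, here is a minimal sketch of the workaround suggested in the comments above (the local script path and the cache directory are placeholders, not verified values):

```python
from datasets import load_dataset

# Workaround sketch: load the dataset from a locally downloaded copy of
# `oscar.py` (taken from datasets/oscar/oscar.py in this repository) and
# point the cache to a drive with enough free space.
dataset = load_dataset(
    "path/to/oscar.py",
    "unshuffled_deduplicated_it",
    cache_dir="D:/hf_datasets_cache",
)
```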
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3973/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3972 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3972/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3972/comments | https://api.github.com/repos/huggingface/datasets/issues/3972/events | https://github.com/huggingface/datasets/pull/3972 | 1,174,402,033 | PR_kwDODunzps40sdVu | 3,972 | Adding Roman Urdu Hate Speech dataset | {
"login": "bp-high",
"id": 53102161,
"node_id": "MDQ6VXNlcjUzMTAyMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/53102161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bp-high",
"html_url": "https://github.com/bp-high",
"followers_url": "https://api.github.com/users/bp-high/followers",
"following_url": "https://api.github.com/users/bp-high/following{/other_user}",
"gists_url": "https://api.github.com/users/bp-high/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bp-high/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bp-high/subscriptions",
"organizations_url": "https://api.github.com/users/bp-high/orgs",
"repos_url": "https://api.github.com/users/bp-high/repos",
"events_url": "https://api.github.com/users/bp-high/events{/privacy}",
"received_events_url": "https://api.github.com/users/bp-high/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq can you review when you have some time? Also were the previous CI fails due to the Google Drive tests which were excluded by #3982 ?",
"> were the previous CI fails due to the Google Drive tests which were excluded by https://github.com/huggingface/datasets/pull/3982 ?\r\n\r\nYes exactly, merging `master` into your branch fixed the CI ;)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,735,566,000 | 1,648,223,779,000 | 1,648,223,480,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3972",
"html_url": "https://github.com/huggingface/datasets/pull/3972",
"diff_url": "https://github.com/huggingface/datasets/pull/3972.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3972.patch",
"merged_at": 1648223480000
} | This pull request will add the Roman Urdu Hate Speech dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3972/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3971 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3971/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3971/comments | https://api.github.com/repos/huggingface/datasets/issues/3971/events | https://github.com/huggingface/datasets/pull/3971 | 1,174,329,442 | PR_kwDODunzps40sS4W | 3,971 | Applied index-filters on scores in search.py. | {
"login": "vishalsrao",
"id": 36671559,
"node_id": "MDQ6VXNlcjM2NjcxNTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishalsrao",
"html_url": "https://github.com/vishalsrao",
"followers_url": "https://api.github.com/users/vishalsrao/followers",
"following_url": "https://api.github.com/users/vishalsrao/following{/other_user}",
"gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions",
"organizations_url": "https://api.github.com/users/vishalsrao/orgs",
"repos_url": "https://api.github.com/users/vishalsrao/repos",
"events_url": "https://api.github.com/users/vishalsrao/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishalsrao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,715,422,000 | 1,649,774,903,000 | 1,649,774,518,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3971",
"html_url": "https://github.com/huggingface/datasets/pull/3971",
"diff_url": "https://github.com/huggingface/datasets/pull/3971.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3971.patch",
"merged_at": 1649774518000
} | Updated search.py to resolve the issue mentioned in https://github.com/huggingface/datasets/issues/3961.
Applied index-filters on scores in get_nearest_examples and get_nearest_examples_batch methods of search.py. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3971/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3970/comments | https://api.github.com/repos/huggingface/datasets/issues/3970/events | https://github.com/huggingface/datasets/pull/3970 | 1,174,327,367 | PR_kwDODunzps40sSfx | 3,970 | Apply index-filters on scores in get_nearest_examples and get_nearest… | {
"login": "vishalsrao",
"id": 36671559,
"node_id": "MDQ6VXNlcjM2NjcxNTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishalsrao",
"html_url": "https://github.com/vishalsrao",
"followers_url": "https://api.github.com/users/vishalsrao/followers",
"following_url": "https://api.github.com/users/vishalsrao/following{/other_user}",
"gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions",
"organizations_url": "https://api.github.com/users/vishalsrao/orgs",
"repos_url": "https://api.github.com/users/vishalsrao/repos",
"events_url": "https://api.github.com/users/vishalsrao/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishalsrao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,647,714,751,000 | 1,647,715,092,000 | 1,647,715,092,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3970",
"html_url": "https://github.com/huggingface/datasets/pull/3970",
"diff_url": "https://github.com/huggingface/datasets/pull/3970.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3970.patch",
"merged_at": null
} | Updated search.py to resolve the issue mentioned in https://github.com/huggingface/datasets/issues/3961.
Applied index-filters on scores in get_nearest_examples and get_nearest_examples_batch methods of search.py. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3970/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3969 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3969/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3969/comments | https://api.github.com/repos/huggingface/datasets/issues/3969/events | https://github.com/huggingface/datasets/issues/3969 | 1,174,273,824 | I_kwDODunzps5F_f8g | 3,969 | Cannot preview cnn_dailymail dataset | {
"login": "hasan-besh",
"id": 75482871,
"node_id": "MDQ6VXNlcjc1NDgyODcx",
"avatar_url": "https://avatars.githubusercontent.com/u/75482871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hasan-besh",
"html_url": "https://github.com/hasan-besh",
"followers_url": "https://api.github.com/users/hasan-besh/followers",
"following_url": "https://api.github.com/users/hasan-besh/following{/other_user}",
"gists_url": "https://api.github.com/users/hasan-besh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hasan-besh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasan-besh/subscriptions",
"organizations_url": "https://api.github.com/users/hasan-besh/orgs",
"repos_url": "https://api.github.com/users/hasan-besh/repos",
"events_url": "https://api.github.com/users/hasan-besh/events{/privacy}",
"received_events_url": "https://api.github.com/users/hasan-besh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I guess the cache got corrupted due to a previous issue with Google Drive service.\r\n\r\nThe cache should be regenerated, e.g. by passing `download_mode=\"force_redownload\"`.\r\n\r\nCC: @severo ",
"Note that the dataset preview uses its own cache, not `datasets`' cache. So `download_mode=\"force_redownload\"` doesn't help. But yes indeed the cache must be refreshed.\r\n\r\nThe CNN Dailymail dataste is currently hosted on Google Drive, which is an unreliable host and we've had many issues with it. Unless we found another most reliable host for the data, we will keep running into issues from time to time.\r\n\r\nAt Hugging Face we're not allowed to host the CNN Dailymail data by ourselves AFAIK",
"Yes @lhoestq, I didn't explain myself well: my previous message was addressed to @severo. ",
"I remove the tag dataset-viewer, since it's more an issue with the hosting on Google Drive",
"Sounds good. I was looking for another host of this dataset but couldn't find any (yet)",
"It seems like the issue is with the streaming mode, not with the hosting:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('cnn_dailymail', name=\"3.0.0\", split=\"train\", streaming=True, download_mode=\"force_redownload\")\r\nDownloading builder script: 9.35kB [00:00, 10.2MB/s]\r\nDownloading metadata: 9.50kB [00:00, 12.2MB/s]\r\n>>> len(list(dataset))\r\n0\r\n>>> dataset = datasets.load_dataset('cnn_dailymail', name=\"3.0.0\", split=\"train\", streaming=False)\r\nReusing dataset cnn_dailymail (/home/slesage/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234)\r\n>>> len(dataset)\r\n287113\r\n```\r\n\r\nNote, in particular, that the streaming mode is failing silently, returning 0 row while I would have expected an exception instead. The result is that the dataset viewer shows `No data` instead of a detailed error.\r\n\r\n<img width=\"1511\" alt=\"Capture d’écran 2022-04-12 à 11 50 46\" src=\"https://user-images.githubusercontent.com/1676121/162935341-d50f1e73-d053-41d4-917f-e79708a0ca23.png\">\r\n",
"Well this is because the host (Google Drive) returns a document that is not the actual data, but an error page",
"Do you think that `datasets` should detect this anyway and throw an exception?",
"Yes it definitely should ! I don't have the bandwidth to work on this right now though",
"Indeed, streaming was not supported: tgz archives were not properly iterated.\r\n\r\nI've opened a PR to support streaming.\r\n\r\nHowever, keep in mind that Google Drive will keep generating issues from time to time, like 403,..."
] | 1,647,698,937,000 | 1,650,469,969,000 | 1,650,469,969,000 | NONE | null | null | null | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3969/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3968 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3968/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3968/comments | https://api.github.com/repos/huggingface/datasets/issues/3968/events | https://github.com/huggingface/datasets/issues/3968 | 1,174,193,962 | I_kwDODunzps5F_Mcq | 3,968 | Cannot preview 'indonesian-nlp/eli5_id' dataset | {
"login": "cahya-wirawan",
"id": 7669893,
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cahya-wirawan",
"html_url": "https://github.com/cahya-wirawan",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @cahya-wirawan, thanks for reporting.\r\n\r\nYour dataset is working OK in streaming mode:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset(\"indonesian-nlp/eli5_id\", split=\"train\", streaming=True)\r\n ...: item = next(iter(ds))\r\n ...: item\r\nUsing custom data configuration indonesian-nlp--eli5_id-9fe728a7e760fb7b\r\n\r\nOut[1]: \r\n{'q_id': '1oy5tc',\r\n 'title': 'dalam sepak bola apa gunanya menyia-nyiakan dua permainan pertama dengan terburu-buru - di tengah - bukan permainan terburu-buru biasa saya mendapatkannya',\r\n 'selftext': '',\r\n 'document': '',\r\n 'subreddit': 'explainlikeimfive',\r\n 'answers': {'a_id': ['ccwtgnz', 'ccwtmho', 'ccwt946', 'ccwvj0u'],\r\n 'text': ['Jaga pertahanan tetap jujur, rasakan operan terburu-buru, buka permainan yang lewat. Pelanggaran yang terlalu satu dimensi akan gagal. Dan mereka yang bergegas ke tengah kadang-kadang dapat dibuka lebar-lebar untuk ukuran yard yang besar.',\r\n 'Jika Anda melempar bola sepanjang waktu, maka pertahanan akan beradaptasi untuk selalu menutupi umpan. Dengan melakukan permainan lari sederhana sesekali, Anda memaksa pertahanan untuk tetap dekat dan menjaga dari lari. Terkadang, pelanggaran dapat membuat pertahanan lengah dengan berpura-pura berlari dan membebaskan penerima mereka. Selain itu, Anda tidak perlu mendapatkan yard besar di setiap permainan. Terkadang, paling baik mendapatkan beberapa yard sekaligus. Selama Anda mendapatkan yang pertama, Anda dalam kondisi yang baik.',\r\n 'Dalam kebanyakan kasus, O-Line seharusnya membuat lubang untuk dilalui kembali. Jika Anda menjalankan terlalu banyak permainan ke luar / melempar, pertahanan akan mengejar. Juga, 2 permainan 5 yard memberi Anda satu set down baru.',\r\n 'Saya Anda tidak suka jenis drama itu, tonton CFL. Kami hanya mendapatkan 3 down sehingga Anda tidak bisa menyia-nyiakannya. Lebih banyak lagi yang lewat.'],\r\n 'score': [3, 2, 2, 2]},\r\n 'title_urls': {'url': []},\r\n 'selftext_urls': {'url': []},\r\n 'answers_urls': {'url': []}}\r\n```\r\nTherefore, it should be properly rendered in the previewer. Let me ping @severo to have a look at it.",
"Thanks @albertvillanova for checking it. Btw, I have another dataset indonesian-nlp/lfqa_id which has the same issue. However, this dataset is still private, is it the reason why the preview doesn't work?",
"Yes, preview is not supported on private datasets yet. We are working on that though...",
"Thanks for the confirmation ",
"Fixed. Thanks for your feedback."
] | 1,647,672,849,000 | 1,648,139,664,000 | 1,648,139,664,000 | CONTRIBUTOR | null | null | null | ## Dataset viewer issue for '*indonesian-nlp/eli5_id*'
**Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id
I can not see the dataset preview.
```
Server Error
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist.
```
Am I the one who added this dataset ? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3968/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3967/comments | https://api.github.com/repos/huggingface/datasets/issues/3967/events | https://github.com/huggingface/datasets/pull/3967 | 1,174,107,128 | PR_kwDODunzps40rpny | 3,967 | [feat] Add TextVQA dataset | {
"login": "apsdehal",
"id": 3616806,
"node_id": "MDQ6VXNlcjM2MTY4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apsdehal",
"html_url": "https://github.com/apsdehal",
"followers_url": "https://api.github.com/users/apsdehal/followers",
"following_url": "https://api.github.com/users/apsdehal/following{/other_user}",
"gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions",
"organizations_url": "https://api.github.com/users/apsdehal/orgs",
"repos_url": "https://api.github.com/users/apsdehal/repos",
"events_url": "https://api.github.com/users/apsdehal/events{/privacy}",
"received_events_url": "https://api.github.com/users/apsdehal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey :) Have you had a chance to continue this PR ? Let me know if you have questions or if I can help",
"Hey @lhoestq, let me wrap this up soon. I will resolve your comments in next push."
] | 1,647,646,179,000 | 1,651,733,491,000 | 1,651,733,069,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3967",
"html_url": "https://github.com/huggingface/datasets/pull/3967",
"diff_url": "https://github.com/huggingface/datasets/pull/3967.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3967.patch",
"merged_at": 1651733069000
} | This would be the first classification-based vision-and-language dataset in the datasets library.
Currently, the dataset downloads everything you need beforehand. See the [paper](https://arxiv.org/abs/1904.08920) for more details.
Test Plan:
- Ran the full and the dummy data test locally | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3967/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3966 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3966/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3966/comments | https://api.github.com/repos/huggingface/datasets/issues/3966/events | https://github.com/huggingface/datasets/pull/3966 | 1,173,883,084 | PR_kwDODunzps40rBNE | 3,966 | Create metric card for BERTScore | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,627,716,000 | 1,647,956,128,000 | 1,647,955,856,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3966",
"html_url": "https://github.com/huggingface/datasets/pull/3966",
"diff_url": "https://github.com/huggingface/datasets/pull/3966.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3966.patch",
"merged_at": 1647955856000
} | Proposing a metric card for BERTScore | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3966/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3965/comments | https://api.github.com/repos/huggingface/datasets/issues/3965/events | https://github.com/huggingface/datasets/issues/3965 | 1,173,708,739 | I_kwDODunzps5F9V_D | 3,965 | TypeError: Couldn't cast array of type for JSONLines dataset | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,647,616,673,000 | 1,651,853,631,000 | 1,651,853,631,000 | MEMBER | null | null | null | ## Describe the bug
One of the [course participants](https://discuss.huggingface.co/t/chapter-5-questions/11744/20?u=lewtun) is having trouble loading a JSONLines dataset that's composed of the GitHub issues from `spacy` (see stack trace below).
This reminds me a bit of #2799 where one can load the dataset in `pandas` but not in `datasets` and perhaps increasing the `block_size` is needed again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from huggingface_hub import hf_hub_url
import pandas as pd
# returns 'https://huggingface.co/datasets/Evan/spaCy-github-issues/resolve/main/spacy-issues.jsonl'
data_files = hf_hub_url(repo_id="Evan/spaCy-github-issues", filename="spacy-issues.jsonl", repo_type="dataset")
# throws TypeError: Couldn't cast array of type
dset = load_dataset("json", data_files=data_files, split="test")
# no problem with pandas - note this take a while as the file is >2GB
df = pd.read_json(data_files, orient="records", lines=True)
df.head()
```
## Expected results
I can load any line-separated JSON file, similar to pandas.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 683, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 1136, in _prepare_split
writer.write_table(table)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_writer.py", line 511, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1121, in table_cast
return cast_table_to_features(table, Features.from_arrow_schema(schema))
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in cast_table_to_features
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1086, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 920, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1019, in array_cast
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}")
TypeError: Couldn't cast array of type
struct<url: string, html_url: string, labels_url: string, id: int64, node_id: string, number: int64, title: string, description: string, creator: struct<login: string, id: int64, node_id: string, avatar_url: string, gravatar_id: string, url: string, html_url: string, followers_url: string, following_url: string, gists_url: string, starred_url: string, subscriptions_url: string, organizations_url: string, repos_url: string, events_url: string, received_events_url: string, type: string, site_admin: bool>, open_issues: int64, closed_issues: int64, state: string, created_at: timestamp[s], updated_at: timestamp[s], due_on: null, closed_at: timestamp[s]>
to
null
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.7
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3965/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3965/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3964/comments | https://api.github.com/repos/huggingface/datasets/issues/3964/events | https://github.com/huggingface/datasets/issues/3964 | 1,173,564,993 | I_kwDODunzps5F8y5B | 3,964 | Add default Audio Loader | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,647,608,335,000 | 1,647,610,379,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Writing a custom dataset loading script might be a bit challenging for users.
**Describe the solution you'd like**
Add a default Audio loader (analogous to ImageFolder) for small datasets with a standard directory structure.
**Describe alternatives you've considered**
Create a custom loading script? That's what users are doing now.
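For illustration only, a rough sketch of how such a loader might be used, mirroring the API discussed in PR #3963 (the `audiofolder` name, the `sampling_rate` argument, and the file paths are assumptions, not a released API):

```python
from datasets import load_dataset

# Hypothetical usage of a default audio loader (API not final):
# one archive of audio files per split, organized in a standard layout.
ds = load_dataset(
    "audiofolder",
    data_files={"train": "train.zip", "test": "test.zip"},
    sampling_rate=16_000,
)
```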
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3964/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3963/comments | https://api.github.com/repos/huggingface/datasets/issues/3963/events | https://github.com/huggingface/datasets/pull/3963 | 1,173,492,562 | PR_kwDODunzps40puyZ | 3,963 | Add Audio Folder | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3963). All of your documentation changes will be reflected on that endpoint.",
"Feel free to merge `master` into this branch to fix the CI errors related to Google Drive :)\r\n\r\nI think we can just remove the test that is based on dummy data, or make it have the `sampling_rate` parameter hardcoded in the test",
"IMO it's important to keep this loader aligned with `imagefolder`. I'm aware that the current `imagefolder` API is limiting because only labels can be inferred from the directory structure, which means it can only be used for classification and self-supervised pretraining. However, to make the loader more generic, we plan to support [metadata files](https://huggingface.slack.com/archives/C02JB9L6JKF/p1645450017434029?thread_ts=1645157416.389499&cid=C02JB9L6JKF) (will work on that this week), and in the audio case, these files can store transcripts.\r\n\r\nStreaming TAR archives (`iter_archive`) is not supported by any of the loaders currently, so we can add that in a separate PR for all of them (to keep this PR simple).\r\n\r\nWDYT?",
"> Streaming TAR archives (iter_archive) is not supported by any of the loaders currently, so we can add that in a separate PR for all of them (to keep this PR simple).\r\n\r\nYes definitely, we can see that later\r\n\r\n> to make the loader more generic, we plan to support [metadata files](https://huggingface.slack.com/archives/C02JB9L6JKF/p1645450017434029?thread_ts=1645157416.389499&cid=C02JB9L6JKF) (will work on that this week), and in the audio case, these files can store transcripts.\r\n\r\nCould you share an example of what the structure would look like in this case ?\r\n\r\nNote that for audio we ultimately should be able to load several splits at once (common voice, librispeech, etc. all have splits), unlike the current imagefolder implementation that puts everything in `train` (EDIT: I mean, when we pass `data_dir`). If we want consistency then we would need the same for imagefolder.",
"> I think we can just remove the test that is based on dummy data, or make it have the sampling_rate parameter hardcoded in the test\r\n\r\nNot sure what to do with `test_builder_class` and `test_load_dataset_offline`, I don't really want to drop these tests completely but do you think it's a good idea to hardcode builder loading like this: 🤔\r\n```\r\nif dataset_name == \"audiofolder\":\r\n builder = builder_cls(name=name, cache_dir=tmp_cache_dir, sampling_rate=16_000)\r\nelse:\r\n builder = builder_cls(name=name, cache_dir=tmp_cache_dir)\r\n```\r\n@mariosasko totally agree on that APIs should be aligned, do you think we should implement metadata support first? Or maybe we can merge this PR with explicit single transcript file and add full metadata support further.\r\n\r\nSplits support is definitely a required feature too, I think we can implement it in the future PR too. \r\n",
"btw i've found a workaround for splits generation :D\r\n\r\n```\r\nfrom datasets.data_files import DataFilesDict\r\n\r\nds = load_dataset(\r\n \"audiofolder\",\r\n data_files=DataFilesDict(\r\n {\r\n \"train\":\"../audiofolder/AudioTestSplits/train.zip\",\r\n \"test\": \"../audiofolder/AudioTestSplits/test.zip\"\r\n }\r\n ),\r\n sampling_rate=16_000\r\n)\r\n```",
"> Not sure what to do with test_builder_class and test_load_dataset_offline, I don't really want to drop these tests completely but do you think it's a good idea to hardcode builder loading like this: 🤔\r\n\r\nYes it's fine. If you you're not a fan of having such parameters directly at the core of the code you can declare a global variable `PACKAGED_MODULES_TEST_KWARGS = {\"audiofolder\": {\"sampling_rate\": 16_000}}` and do\r\n```python\r\nbuilder_kwargs = PACKAGED_MODULES_TEST_KWARGS.get(name, {})\r\nbuilder = builder_cls(name=name, cache_dir=tmp_cache_dir, **builder_kwargs)\r\n```\r\n\r\n> btw i've found a workaround for splits generation :D\r\n\r\nYes that works :) Note that you don't have to use `DataFilesDict` and you can pass a python dict directly (`DataFilesDict` is for internal usage only)",
"@lhoestq @mariosasko please take a look at the code and feel free to add your comments and discuss the potential issues\r\n \r\nafter we are satisfied with the code, I'll write the documentation ",
"@lhoestq it appeared that this PR already exists... https://github.com/huggingface/datasets/pull/3364",
"> The current problem with this loader is that it supports the ASR task by default, which could be surprising for the users thinking that this is the Image Folder counterpart for audio. To avoid this, we should support the audio classification task by default instead (we can add a template for it in this PR), where the label column is inferred from the directory structure.\r\n\r\nRight indeed, good catch. It's better to keep polishing the API rather than pushing fast something that can be confusing for users. Let's go for maximum alignment between the two then @polinaeterna ?",
"@mariosasko sorry, I didn't understand from your previous message that by aligning with the ImageFolder you mean inferring labels from directories names. Sure, that's not a problem, I can add the corresponding code. Do you also mean that in this version we should get rid of transcription file and feature and add it in the future when the metadata support https://github.com/huggingface/datasets/pull/4069 will be merged? \r\nMy understanding was that support for ASR task is more crucial than audio classification as it's more \"common\", but I would ask @anton-l and @patrickvonplaten about this. Anyway, it's not a problem to implement the classification task first, and the ASR one later. ",
"> Do you also mean that in this version we should get rid of transcription file and feature and add it in the future when the metadata support https://github.com/huggingface/datasets/pull/4069 will be merged?\r\n\r\nWe can wait for the linked PR to be merged first and then add the changes to this PR to have support for ASR from the get-go.",
"Don't follow 100% here, but as @polinaeterna said I think ASR is much more common than audio classification. Also, do you guys think a lot of users will use both the audio and image folder functionality ? Is it very important to have audio and image aligned here? Note that in Transformers while all models follow a common API, audio and vision models can be very different with respect to pre- and post-processing",
"> I think ASR is much more common than audio classification\r\n\r\nI agree, the main focus is ASR\r\n\r\n> do you guys think a lot of users will use both the audio and image folder functionality ?\r\n\r\nYup I think so, people don't just use public academic datasets right ? `imagefolder` is almost used 1k times a week, and it's just the beginning.\r\n\r\n> Is it very important to have audio and image aligned here?\r\n\r\nIf we can get some consistency for free, let's take it ^^ This way it will be easy for users to go from one modality to another, and documentation will be simpler.\r\n\r\n> Note that in Transformers while all models follow a common API, audio and vision models can be very different with respect to pre- and post-processing\r\n\r\nThat make total sense. Here this is mainly about raw data loading (before preprocessing) so we just need to make something generic, no matter what task the data is used for. Even though actually we know that ASR will be the main usage for now :p\r\n\r\nLet me know if it's clearer now or if you have other questions !"
] | 1,647,603,609,000 | 1,652,096,981,000 | null | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3963",
"html_url": "https://github.com/huggingface/datasets/pull/3963",
"diff_url": "https://github.com/huggingface/datasets/pull/3963.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3963.patch",
"merged_at": null
} | Would resolve #3964
AudioFolder loads a .txt file with transcriptions and creates a dataset from all audio files in the provided directory that have a transcription (independently of the directory structure), as a single split (train).
Can be loaded via:
```python
# for local dirs
dataset = load_dataset("audiofolder", data_dir="/path/to/folder", transcripts_filename="transcripts.txt")
```
```python
# for local and remote zip archives
dataset = load_dataset("audiofolder", data_files="path/to/archive/archive.zip", transcripts_filename="transcripts.txt")
```
The default transcriptions filename is `transcripts.txt`. It should have the following structure:
```
audio_id_1 transcription text 1
audio_id_2 transcription text 2
```
The separator is `\t`!
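For illustration only, a minimal sketch of how such a tab-separated transcripts file could be read; the helper name and return type are assumptions and not part of the loader's actual implementation:
```python
# Hypothetical helper: build a mapping audio_id -> transcription from a tab-separated file.
def read_transcripts(path, sep="\t"):
    transcriptions = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            audio_id, text = line.split(sep, maxsplit=1)
            transcriptions[audio_id] = text
    return transcriptions
```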
---
Sorry for the first old commits from another branch; I don't know how that happened... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3963/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3963/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3962/comments | https://api.github.com/repos/huggingface/datasets/issues/3962/events | https://github.com/huggingface/datasets/pull/3962 | 1,173,482,291 | PR_kwDODunzps40psq2 | 3,962 | Fix flatten of Sequence feature type | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,602,862,000 | 1,647,873,647,000 | 1,647,873,372,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3962",
"html_url": "https://github.com/huggingface/datasets/pull/3962",
"diff_url": "https://github.com/huggingface/datasets/pull/3962.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3962.patch",
"merged_at": 1647873372000
} | The `Sequence` feature type is not correctly flattened if it contains a dictionary. This PR fixes that and adds a test case for it.
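A small sketch of the behavior in question (illustrative only; the column names and expected output are assumptions based on this description, not taken from the PR's test case):
```python
from datasets import Dataset, Features, Sequence, Value

# A Sequence feature wrapping a dict of sub-features.
features = Features({"answers": Sequence({"text": Value("string"), "start": Value("int32")})})
ds = Dataset.from_dict({"answers": [{"text": ["a"], "start": [0]}]}, features=features)

# After the fix, flattening should expose the nested keys as top-level columns.
print(ds.flatten().column_names)  # e.g. ['answers.text', 'answers.start']
```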
Close https://github.com/huggingface/datasets/issues/3795 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3962/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3961/comments | https://api.github.com/repos/huggingface/datasets/issues/3961/events | https://github.com/huggingface/datasets/issues/3961 | 1,173,223,086 | I_kwDODunzps5F7fau | 3,961 | Scores from Index at extra positions are not filtered out | {
"login": "vishalsrao",
"id": 36671559,
"node_id": "MDQ6VXNlcjM2NjcxNTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishalsrao",
"html_url": "https://github.com/vishalsrao",
"followers_url": "https://api.github.com/users/vishalsrao/followers",
"following_url": "https://api.github.com/users/vishalsrao/following{/other_user}",
"gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions",
"organizations_url": "https://api.github.com/users/vishalsrao/orgs",
"repos_url": "https://api.github.com/users/vishalsrao/repos",
"events_url": "https://api.github.com/users/vishalsrao/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishalsrao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! Yes, that makes sense! Would you like to submit a PR to fix this?",
"Created PR https://github.com/huggingface/datasets/pull/3971"
] | 1,647,584,003,000 | 1,649,774,518,000 | 1,649,774,518,000 | CONTRIBUTOR | null | null | null | If a FAISS index has fewer records than the requested number of top results (k), then it returns -1 in indices for the additional positions. The get_nearest_examples method only filters out the extra results from the dataset samples. It would be better to filter out extra scores too.
Reference: https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/search.py#L693
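A minimal sketch of the proposed filtering (the helper below is hypothetical, not the library's actual code; it assumes FAISS pads missing positions with -1):
```python
import numpy as np

def filter_padded_results(scores, indices):
    scores, indices = np.asarray(scores), np.asarray(indices)
    keep = indices >= 0  # positions padded with -1 have no real match
    return scores[keep], indices[keep]

# Example: an index with only 2 records queried with k=4
print(filter_padded_results([0.9, 0.7, -1.0, -1.0], [1, 0, -1, -1]))
# -> (array([0.9, 0.7]), array([1, 0]))
```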
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3961/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3960/comments | https://api.github.com/repos/huggingface/datasets/issues/3960/events | https://github.com/huggingface/datasets/issues/3960 | 1,173,148,884 | I_kwDODunzps5F7NTU | 3,960 | Load local dataset error | {
"login": "TXacs",
"id": 60869411,
"node_id": "MDQ6VXNlcjYwODY5NDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/60869411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TXacs",
"html_url": "https://github.com/TXacs",
"followers_url": "https://api.github.com/users/TXacs/followers",
"following_url": "https://api.github.com/users/TXacs/following{/other_user}",
"gists_url": "https://api.github.com/users/TXacs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TXacs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TXacs/subscriptions",
"organizations_url": "https://api.github.com/users/TXacs/orgs",
"repos_url": "https://api.github.com/users/TXacs/repos",
"events_url": "https://api.github.com/users/TXacs/events{/privacy}",
"received_events_url": "https://api.github.com/users/TXacs/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\n```\r\n\r\n\r\nLet us know if that resolves the issue.",
"> Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:\r\n> \r\n> ```python\r\n> >>> from datasets import load_dataset\r\n> >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n> >>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\n> ```\r\n> \r\n> Let us know if that resolves the issue.\r\n\r\nSorry, replied late.\r\nThanks a lot! It's worked for me. But it seems much slower than before, and now gets stuck.....\r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\nResolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1281167/1281167 [00:02<00:00, 437283.97it/s]\r\nResolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50001/50001 [00:00<00:00, 89094.29it/s]\r\nUsing custom data configuration default-baebca6347576b33\r\nDownloading and preparing dataset image_folder/default to ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091...\r\nDownloading data files #0: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 82289.56obj/s]\r\nDownloading data files #1: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 73559.11obj/s]\r\nDownloading data files #2: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 81600.46obj/s]\r\nDownloading data files #3: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 79691.56obj/s]\r\nDownloading data files #4: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 82341.37obj/s]\r\nDownloading data files #5: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 75784.46obj/s]\r\nDownloading data files #6: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 81466.18obj/s]\r\nDownloading data files #7: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 82320.27obj/s]\r\nDownloading data files #8: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 78094.00obj/s]\r\nDownloading data files #9: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 84057.59obj/s]\r\nDownloading data files #10: 
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 83082.31obj/s]\r\nDownloading data files #11: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 79944.21obj/s]\r\nDownloading data files #12: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 84569.77obj/s]\r\nDownloading data files #13: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 84949.63obj/s]\r\nDownloading data files #14: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 80666.53obj/s]\r\nDownloading data files #15: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80072/80072 [00:01<00:00, 76723.20obj/s]\r\n^[[Bloading data files #8: 94%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 75061/80073 [00:00<00:00, 82609.89obj/s]\r\nDownloading data files #9: 85%|████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 68120/80073 [00:00<00:00, 83868.54obj/s]\r\nDownloading data files #9: 96%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ | 76784/80073 [00:00<00:00, 84722.34obj/s]\r\nDownloading data files #10: 75%|███████████████████████████████████████████████████████████████████████████████████████▋ | 59995/80073 [00:00<00:00, 84148.19obj/s]\r\nDownloading data files #10: 97%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 77412/80073 [00:00<00:00, 85724.53obj/s]\r\nDownloading data files #11: 71%|███████████████████████████████████████████████████████████████████████████████████▎ | 57032/80073 [00:00<00:00, 79930.58obj/s]\r\nDownloading data files #11: 92%|███████████████████████████████████████████████████████████████████████████████████████████████████████████ | 73277/80073 [00:00<00:00, 78091.27obj/s]\r\nDownloading data files #12: 86%|█████████████████████████████████████████████████████████████████████████████████████████████████████ | 69125/80073 [00:00<00:00, 84723.02obj/s]\r\nDownloading data files #12: 97%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋ | 77803/80073 [00:00<00:00, 85351.59obj/s]\r\nDownloading data files #13: 75%|████████████████████████████████████████████████████████████████████████████████████████▏ | 60356/80073 [00:00<00:00, 84833.35obj/s]\r\nDownloading data files #13: 97%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 77368/80073 [00:00<00:00, 84475.10obj/s]\r\nDownloading data files #14: 72%|████████████████████████████████████████████████████████████████████████████████████▍ | 57751/80073 [00:00<00:00, 80727.33obj/s]\r\nDownloading data files #14: 92%|████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ | 74022/80073 [00:00<00:00, 78703.16obj/s]\r\nDownloading data files #15: 
78%|███████████████████████████████████████████████████████████████████████████████████████████▋ | 62724/80072 [00:00<00:00, 78387.33obj/s]\r\nDownloading data files #15: 99%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████▎ | 78933/80072 [00:01<00:00, 79353.63obj/s]\r\n```",
"Wait a long time, it completed. I don't know why it's so slow...",
"You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.",
"> You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.\r\n\r\nThanks!It's worked well.",
"> You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.\r\n\r\nI find current `load_dataset` loads ImageNet still slowly, even add `ignore_verifications=True`.\r\nFirst loading, it costs about 20 min in my servers.\r\n```\r\nreal\t19m23.023s\r\nuser\t21m18.360s\r\nsys\t7m59.080s\r\n```\r\n\r\nSecond reusing, it costs about 15 min in my servers.\r\n```\r\nreal\t15m20.735s\r\nuser\t12m22.979s\r\nsys\t5m46.960s\r\n```\r\n\r\nI think it's too much slow, is there other method to make it faster?",
"And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n```python\r\ndef collate_fn(examples):\r\n pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n labels = torch.tensor([example[\"labels\"] for example in examples])\r\n return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n```\r\nHow to know the keys of example?",
"Loading the image files slowly, is it because the multiple processes load files at the same time?",
"Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs. \r\n\r\n> And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n> \r\n> ```python\r\n> def collate_fn(examples):\r\n> pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n> labels = torch.tensor([example[\"labels\"] for example in examples])\r\n> return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n> ```\r\n> \r\n> How to know the keys of example?\r\n\r\nWhat do you mean by \"could you make some changes\".The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.\r\n\r\n",
"> Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs.\r\n> \r\n> > And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n> > ```python\r\n> > def collate_fn(examples):\r\n> > pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n> > labels = torch.tensor([example[\"labels\"] for example in examples])\r\n> > return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > How to know the keys of example?\r\n> \r\n> What do you mean by \"could you make some changes\".The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.\r\n\r\nThanks for your reply!\r\n\r\n1. I did not record the second output, so I run it again. \r\n```\r\n(merak) txacs@master:/dat/txacs/test$ time python test.py \r\nResolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1281167/1281167 [00:02<00:00, 469497.89it/s]\r\nResolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50001/50001 [00:00<00:00, 70123.73it/s]\r\nUsing custom data configuration default-baebca6347576b33\r\nReusing dataset image_folder (./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091)\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:10<00:00, 5.37s/it]\r\nLoading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-cd3fbdc025e03f8c.arrow\r\nLoading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-b5a9de701bbdbb2b.arrow\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'labels'],\r\n num_rows: 1281167\r\n })\r\n validation: Dataset({\r\n features: ['image', 'labels'],\r\n num_rows: 50000\r\n })\r\n})\r\n\r\nreal\t10m10.413s\r\nuser\t9m33.195s\r\nsys\t2m47.528s\r\n```\r\nAlthough it cost less time than the last, but still slowly.\r\n\r\n2. Sorry, forgive my poor statement. I solved it, updating to new script 'run_image_classification.py'.",
"Thanks for rerunning the code to record the output. Is it the `\"Resolving data files\"` part on your machine that takes a long time to complete, or is it `\"Loading cached processed dataset at ...\"˙`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue.",
"> Thanks for rerunning the code to record the output. Is it the `\"Resolving data files\"` part on your machine that takes a long time to complete, or is it `\"Loading cached processed dataset at ...\"˙`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue.\r\n\r\nSounds good! The main position, which costs long time, is from program starting to `\"Resolving data files\"`. I hope you can solve it early, thanks!"
] | 1,647,574,369,000 | 1,648,691,974,000 | null | NONE | null | null | null | When I used datasets==1.11.0, everything was all right. After updating to the latest version, I get an error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
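For reference, the workaround suggested by the maintainers in the comments above can be sketched as follows (the paths are the reporter's; `ignore_verifications=True` skips checksum verification to speed up loading):
```python
from datasets import load_dataset

data_files = {
    "train": ["/ssd/datasets/imagenet/pytorch/train/**"],
    "validation": ["/ssd/datasets/imagenet/pytorch/val/**"],
}
ds = load_dataset(
    "imagefolder",
    data_files=data_files,
    cache_dir="./",
    task="image-classification",
    ignore_verifications=True,
)
```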
I need some help to solve the problem, thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3960/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3959/comments | https://api.github.com/repos/huggingface/datasets/issues/3959/events | https://github.com/huggingface/datasets/issues/3959 | 1,172,872,695 | I_kwDODunzps5F6J33 | 3,959 | Medium-sized dataset conversion from pandas causes a crash | {
"login": "Antymon",
"id": 641005,
"node_id": "MDQ6VXNlcjY0MTAwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/641005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Antymon",
"html_url": "https://github.com/Antymon",
"followers_url": "https://api.github.com/users/Antymon/followers",
"following_url": "https://api.github.com/users/Antymon/following{/other_user}",
"gists_url": "https://api.github.com/users/Antymon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Antymon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Antymon/subscriptions",
"organizations_url": "https://api.github.com/users/Antymon/orgs",
"repos_url": "https://api.github.com/users/Antymon/repos",
"events_url": "https://api.github.com/users/Antymon/events{/privacy}",
"received_events_url": "https://api.github.com/users/Antymon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! It looks like an issue with pyarrow, could you try updating pyarrow and try again ?"
] | 1,647,548,435,000 | 1,650,458,137,000 | 1,650,458,137,000 | NONE | null | null | null | Hi, I am suffering from the following issue:
## Describe the bug
Converting a pandas DataFrame of a certain size to an Arrow dataset deterministically causes the following crash:
```
File "/home/datasets_crash.py", line 7, in <module>
arrow=datasets.Dataset.from_pandas(d)
File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 783, in from_pandas
table = InMemoryTable.from_pandas(
File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/table.py", line 379, in from_pandas
return cls(pa.Table.from_pandas(*args, **kwargs))
File "pyarrow/table.pxi", line 1487, in pyarrow.lib.Table.from_pandas
File "pyarrow/table.pxi", line 1532, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1181, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458)
```
## Steps to reproduce the bug
I have a dataset made from a single replicated example mocking a dict representation of a publication.
I copy this example over 140k times and create a pandas DataFrame.
I use `Dataset.from_pandas` and boom:
```python
# Sample code to reproduce the bug
import copy
import datasets
import pandas
# serialized dict is quite long to be realistic representation of a publication content
paper_as_dict=eval("{'article_id': '2020-11-05T14:25:05.321Z02bc3286-91b7-486a-9c74-4f457fbc586a', 'sections': [{'section_id': 'body.0', 'paragraphs': [{'sentences': ['11010111001000000011010011110011101110111011000100001010011100101001111010110111101011101111101010101110001111011110111010111', '1101100110110010010101010100110011000111001100100000011100010111010000011100001101111000000011010111001111001010101111110011010010111011000110100110010', '101011011000010100000010011001011011000000110011011110000101001110110000010001100110111100011100110101010010110000101', '1101101110101010101000000010101011111001111000101000110001110100111000100000011001110100110000110100111011001010110011101001001110']}]}, {'section_id': 'body.1', 'paragraphs': [{'sentences': ['11111100100100111000101001011110100110011001011011001001100110100111011010000110011000010001010100101110001001101011110111110101111100001001001000011110110010110011100110110111110011100011111000101010111010101011001110000100000001001010010010011101111100011010', '10101000110000110111110011101111000101010010001001010000001111001100000010001000001110111110010011101000000111011', '111010011111101111110011111110110001000111100101001000100110101111110000111000111111110000101001101000110011010111011101001010110110001000100000001110001111100110110001110001001100011010100110100010100111000110110100010010100101011110000110000101010010001110101100000']}, {'sentences': ['111110011110110110001111001101011110010110100011101010110101011001101110110111100000111101010110011110111101001111000101110001001010010101100111111001001000011101000100110000101', '011101101101111101001100101010000010111101100101110100101000001100010100110011010010100001101001110111100011010011011111000111111101110001010111010011010110001000010101100110000100010110101110110011001010011001100111101100001001', '1110001011011010101001100001110001110001000111111111101110100001011101101001110100000110000011010001101010101110101110101101001010100100010000000010110010010010', '11101111000111111100111110010000111101110010010101001111011001111110011000011100110001010010000100101010', '111000110110110010101100010010100001100100110010101000001000011101000100101011011010000011001011011111001101100001110010100001111110111001001010101100100110001011011100000101010010000000001100010000101100110110111101110010100010011101110110111010011011000011001010111011100000000010101001011000100000011010100011101001011001010010011110100100']}, {'sentences': ['001101111100001101001001001110000110010101011101001001111111011000111001111011101011110111000000100001110110101110001010001111110100010', '0000110010110101001100011011000011001101001110001000000110010101000011101011110110000000100111000001010000101011111011110001001100001110101010101110101011111000000011001111011110001010010111010000100100000001111001011100101111010101111001001101100101001101111000111011010110010001010010010111010000001101101111100101000111101011001000101', '00000101100101100111101010000101011100101100001100011001100100001100001010001010010011001001111001000010100010000110100111110000001000101000111100010111110011000100000111100010000100010111100010101', '111100110010100110000010010101010101110011110100000101110000000111010101111001011110010101001110000001001000010110010010011110111110010110100101110011001101110111001111100011100100011110010010100101011111111']}, {'sentences': 
['1100001110101111000001011001100110001011100011110110010011001000101000011110010101010011011000111010000101010011010000000111011001000010100101000011111101000000000101111000', '1110101000100110001111000011000101110111001100101010011001100011010011111111111010101011010101010011000101001100100000110010100110110110110001101100', '00010001100100101100100111111110111111101000100110101111101111110101110001010001011100000000000011010101101001111010001110101101110011001011111101110100010000111101', '011100011101011001000110010110100100000010100010010110011000000010101110011111111101010010010001100110101010010001100010110011110001011011101010111111100100110110010111101001100101010111001', '10111000011010101111110110011010101011111001000001010010111111010010111111100100010100110100101101110100110011001000110100000111000100110000001000111010', '0010011111111011100111010001111001011101001010000010110000010111000101001101000011101110100100000000100100010010101010100011100101001000100110110000010111111110000011011101111000111010']}]}, {'section_id': 'body.2.0', 'paragraphs': [{'sentences': ['110010010011001110100100011001111100010011110111101011011011001010010010010011101011', '000110101110011011101011000000100011111000001100011011110101101011000110011010001010001101101100000111100101001011111001001101111', '1000011100100000100100100010010000111011000100110010000011110111100110110001101001010100011111010100101000111', '11110111111000110010000000000100010010110001100010001010000111011000101100011010010101110110011010110101001101110011101011101100000001000100101011010110110100101011101010010101101000011110000010101011001011000001000000001010110000100010000100011110101001111100001000100000111000001010011111111110101010100011011000010000111000110', '1001000111011000111110001111111001100001000000101000111011101101100101010110001101000000001111010111100011111000000100001001110', '100110010111010101111010100000010001110101111001010010001100001110100100100101110011010101001000100101000100100011001110001100111000010010011011000010011010010000110001000000100011110010110110011010001100111010111110011']}, {'sentences': ['10010101011100010111011111001001001010100011001001111101101001000000001111101110000111101011000001001011101110101001100010010001101111001110000100010010001001101111011111110010011011110011', '110001110010110000101111000000110010010010100000010100001111101101000101100000000110000000011111011001111000010110110001011010011011101100100110011000100110101010111010111111000111001111010110010001001110100001011011000110000000111101110000001111011011101110100000100010000110001000000110100000', '101010000000010000110110111000110000100111000001110100101101101010001010010010101010100111010110001001000101011110010011001001001110111001101101100100011110011011110101100010110111001010000001000110100000001010011111111110111010011110001001110100011011000101011000110110011011010110100100011111111011100111110110000110011011110110110011101010101111001101010110101000000001100101111010000101110', '1010100110111111111000110110111110010100000100001110101110111001011000010001110110001111111110000101001001110010001110000111010101111010111111011100100011100111111101101111000010001100101000010001100110110100110111111100100011001011000001111110010100110111000010011110111011001101100000101011111110101000011000010', '00000001110000101001110101110011101001110011000111111101111101111000010011100000101000001011001110', '101000111010010000011010011010011010010010100010110100011100100111011101010100101110100111010001000000', 
'01101000110001101011001101100010100011011010000000001010101000010101000110100010000000110001110001010010000000101101000011000100000110011101100001010100011111101010010110001101110101010111101100001110000011001101', '0010010111000011110010011110001010100000111100001011010100100010101010010011101101100110001001111001000110000111011110010000110101010110111111010110100000011010001001010001000110001101101000101110001011110000101101110000110010110010111001100010011011100011', '00110111110000000100110111101011000100100110001000001001101011001000010100100001100111100110000110110101111010000010101000000101000011001011101001', '0100100001000111001110110110000001000100111001101101110100100111010111110001110010110111100110011111001001000011101110100101111011000110100000111010011101']}, {'sentences': ['100001001011101111111100110111011110001101111101100001000110110000100101011000000100000', '10101001001111110101001010100110011110101101001']}]}, {'section_id': 'body.2.0.0', 'paragraphs': [{'sentences': ['1110101100001100011000101000010000100010101101010110101011100101110110110111010101001100100000000111011001000100011110101011111010100101001010000010001001101010100011110010101110011001100010000100110011000011101010001000111001000001100', '101000000011001001110101000100101010000111000111100010010001111111100110001100000100011010011010010101101111010101010000110011101001111001111011111001110001010000110101101011101111010000001100', '01100001011110010100000101001101111101010011100010011001011110110010010011100101000', '0011100111000101111000010001111100000111000101110001111010001100001000111010000101100001110101100111111', '00001100000011110001011010010110000000111110110001111000110000011011001110000000100011001010110000010000010001101010101100000010011011000101011111100010010', '1011101011101111000001100100111000011000010010011110011000110111010010111100111101100110011010000110000111000110111110101111000001000010011101111000110000100011110101101101001101000110010000001000010011011010101100', '1000010011100011100000010011011111111110101101111011101010010111000000101011000000110101111000010011', '01100000110011001110101111101101011001011101000010001100101010100011010101010100111011011110100010100111', '011011010100011011110010101000110001111110110']}]}, {'section_id': 'body.2.0.1', 'paragraphs': [{'sentences': ['00111011011101000100100111000001101001011000111100100010101001010011001011000010011111001100000100010001100101110011001000110001101011010111011111011000010011010010111010011111101000110111011100010011100111111110110111011', '011011010101101101010000001011010110011111011110100111010101010110001101000010011111000011100', '110001000110010000000111101110111110101110111000101000010001110101000101001000111000010001011101010000110001010001101001001110111110111010111010011101000101101010000', '001000111110100110000001111100000111001110111001110111001000111010001001100111001101000001001001010111000111011100001111011001111110001011000111110011111101011101000100101001111011100001000110101010101111111110011111111011000101110001000000000100111011111011001100111', '11010101100010010100010010010101001011001011000001100010101111111101001101110011001010010100000111010101', '01110000110011111000110010011010000011100000010010001111100010010100100001011011111110001100', '011101111100011101100111110101111001101010010001001110101100001101000000111000']}]}, {'section_id': 'body.2.0.2', 'paragraphs': [{'sentences': 
['0111011000110100110000001011001110111000011110100111011000000001000010001111111001101111011100101110101101000111000101000010000111011010110000011101111110111110100111000111000011', '00100110111000110101100111000110100010011010010101001010011000000101000110100110011010011111000100000011000000010001010000100111101011111111101010001111010000001011100001110100000101001101101010011011101000', '000001110001010010100101010100010101001100011001001101101101110111011111101010010111010110110111011110101100001000011110111011001', '0001110010111110100110110011000001111100100100110101011010010101010100101000010101000100101000011011', '1000010010010101001100101110010111010100000110101110000000111001111111001011111010000011110001011001001001000101', '0001111100111010010100010111010110011011000000001111010010110001000011010001100111101110001110000011010101111100001000011010110100000100100001111011110110000000101000010001111001010010110101110111101101110111000100', '1000101100001000100001101110111110000100000001000010101111010011010010010111011010100011001000100100001010001100110']}]}, {'section_id': 'body.2.0.3', 'paragraphs': [{'sentences': ['1010100111100011110110101011100001011010011010100100010011000110111000001010010110111001001101111000010100100110101001010001010001000110010000001', '100010101010100111000011111101010100101110011000100011100100100111000010000011001010010111011010000101010011011110111001010110', '0110000110110110110011011000011010010000001010011000010001011110110010000100011111010100110111111010010111000101111', '10100100000011100010110110011111011011101101111000001001010100001001011010000011001010101100000', '1011111111100001001100000010000100110010101000010100111111110010110011101110000101101011101', '10001111110000011100100000101100000000010000100000011100110000011110111010011101010111101001111000100000000110000011010010001100110111100001001011101011001111110010100111001001010001010011010010010111001101110101110000101011', '101101111111101101010010000110111110000110000111001001010011111101011001011010101100010100110101101011100111100100110010001011110001110010000011101100100100001001110010000010011111100110101']}]}, {'section_id': 'body.2.1', 'paragraphs': [{'sentences': ['1010010011010011001111111001000110010001101111101011001011011000101001010101010001000110100011110101110001110110111010010010100100111000101100100101111110100000011111001101010111101010100101011011110111111110', '000010101101111100000110010110011001111100001101011101000100010001001001000000101101000001110000011010111100000010010000010101110101100010011000101110110111111001000101000111000110100001001100001010101010100011', '0000000011101110111100100010111100101010110001111101110110010000100100010000101001101111001111001001100110010011010000101001110010000000100101011101001010100100011101101001011000010111110100101010110110011001110000110010010111110110101100001011101001100111010001000010111010001010000100010010011110111100110011100011111101101000011100111110101010100110001100100000100011011010111000111110010110100010111101001001101000001100100010000111110000011101111100111101000000000']}, {'sentences': ['01011000010110011000000101101000110101011010100111011001001001100001101101111101111001101111100101111001101011011001011110110110110100001100111111010100101110111111101000101100101010110011111011100101101010100110111001111100100011001110011101000110100000001100001100110001110101001000011010000110101011010000001111100100000100101110011000001001010011011101100011000001100000011', 
'1001100000101000000011110100110001100001101001100011010000111111010110101111001000100111000011010100100000110110001', '10010011000110110111010110000010010000000111101000100101100111101101001100111110101001001111100001110011110000010101000001000000010100011011110011000100110101001100110111111001101000011010100110000000011110001000101010101000110010010']}]}, {'section_id': 'body.2.2', 'paragraphs': [{'sentences': ['000011000000010011000001101111000101000111111111111010001011110000011001010111010101010110001111110000010', '10101001101011101010001111011000110100000100011110010001100111111101101100010010111110110101101011000011000001101110010111011111100111110000000101110010111', '100001011110010111010110001101101001100000000001000010110101011001111100101101101111010010111111000000111001111010011111000100010001111011110001010000110010101010111110100101011011100001010101000001011011111111101', '1000110111111011101000110101001111111111000100011001000011010100001010011110001111010011011111000111011100101001011111001000010101110110101000111011111111010010001101001010110111000011110101011000010000110', '1011100000100000010101101111001001100110111000010001011010111111000000001010101001111011101011010101101001111101101100101001011101000011011010001001101100100111101111111100010011010101111011100001100001000100101100100110101000010000011000000011001100000110000001', '0001001101111001111111010000001101010110110110100110110100000100110101101010010101011000010010111011000010111110000001110101110111000010011000100110111001000111011000100101110111111', '0110010010011000011010001111001100101001100001001000010100101100010110000000101010110001001010001100111101010001110010010000111011100101101010111111101001100010001011100110010100110111010101000100001110000101110011111011111000010101010110101100010010111100100010010100111110111100101010100011101001110110010000011110001010101010000100010000100111001111011101', '000001010000010001100000101011000000110101000100010111111100101111111000110111001001110110101111110011100001001000011001010000011011', '0101101001010101001101010100011000111011001000100001110100110011100000001001010110001101010110011100111111100101101111101111011001111111110010111010011011011111011011110000101011010', '11000001110111000001100100001110000111001010000101011011101010111001011100010010010111111111000011111110010111100011100110001001100011111010100111110111001110010', '0100010110100001010101110111100011100100010111111011101001100101111110101011010010101111001000101001111000001110001100011001110010100110101100110100100000001010101101011110011001000101100111001001001110100', '100000100010011111001101010000100110011110001100000010010110110100000111111011010100101111010111001110101000100001111101001110000011010110000010100', '00100110000011100101000110110001000011101000011010101000010001111011100001111111001011100111101000001000000110110001000101111010010010001100111', '0110110100011001110011001111100010101001011111011001011001101101010010101101110101010100001000100100000111101110001001110111000110011101101010100000101', '0011111010010011011101010110100110000011000011100100101011011001110110001110001111000011010111011000110100111111011101110111000010010000011011010011011100000011101100110110100100000010110101110100110101001100111011101001010111011011110100110101110010011011010001010111110011001000010100010101010010110010010110000100110001000011010011000100101011010100100111010']}]}]}")
d=pandas.DataFrame.from_records(copy.deepcopy(paper_as_dict) for _ in range(140_100))
arrow=datasets.Dataset.from_pandas(d)
```
## Expected results
The dataset should be converted without error.
## Actual results
Error `pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458)`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets==1.18.4 pandas==1.3.5
- Platform: macOS 11.6 or CentOS Linux 7 (Core)
- Python version: Python 3.9.7
- PyArrow version: pyarrow==3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3959/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3958/comments | https://api.github.com/repos/huggingface/datasets/issues/3958/events | https://github.com/huggingface/datasets/pull/3958 | 1,172,657,981 | PR_kwDODunzps40nQU2 | 3,958 | Update Wikipedia metadata | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3958). All of your documentation changes will be reflected on that endpoint.",
"Once this last PR validated, I can take care of the integration of all the wikipedia update branch into master, @lhoestq. "
] | 1,647,539,405,000 | 1,647,865,608,000 | 1,647,865,607,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3958",
"html_url": "https://github.com/huggingface/datasets/pull/3958",
"diff_url": "https://github.com/huggingface/datasets/pull/3958.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3958.patch",
"merged_at": 1647865607000
} | This PR updates:
- dataset card
- metadata JSON | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3958/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3957 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3957/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3957/comments | https://api.github.com/repos/huggingface/datasets/issues/3957/events | https://github.com/huggingface/datasets/pull/3957 | 1,172,401,455 | PR_kwDODunzps40magW | 3,957 | Fix xtreme s metrics | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Sorry for the commit history mess, but will be squashed anyways so should be fine",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,524,344,000 | 1,647,611,179,000 | 1,647,610,936,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3957",
"html_url": "https://github.com/huggingface/datasets/pull/3957",
"diff_url": "https://github.com/huggingface/datasets/pull/3957.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3957.patch",
"merged_at": 1647610936000
} | We in fact do need BABEL in xtreme-s | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3957/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3956/comments | https://api.github.com/repos/huggingface/datasets/issues/3956/events | https://github.com/huggingface/datasets/issues/3956 | 1,172,272,327 | I_kwDODunzps5F33TH | 3,956 | TypeError: __init__() missing 1 required positional argument: 'scheme' | {
"login": "amirj",
"id": 1645137,
"node_id": "MDQ6VXNlcjE2NDUxMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1645137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amirj",
"html_url": "https://github.com/amirj",
"followers_url": "https://api.github.com/users/amirj/followers",
"following_url": "https://api.github.com/users/amirj/following{/other_user}",
"gists_url": "https://api.github.com/users/amirj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amirj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amirj/subscriptions",
"organizations_url": "https://api.github.com/users/amirj/orgs",
"repos_url": "https://api.github.com/users/amirj/repos",
"events_url": "https://api.github.com/users/amirj/events{/privacy}",
"received_events_url": "https://api.github.com/users/amirj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @amirj, thanks for reporting.\r\n\r\nAt first sight, your issue seems a version incompatibility between your Elasticsearch client and your Elasticsearch server.\r\n\r\nFeel free to have a look at Elasticsearch client docs: https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/overview.html#_compatibility\r\n> Language clients are forward compatible; meaning that clients support communicating with greater or equal minor versions of Elasticsearch. Elasticsearch language clients are only backwards compatible with default distributions and without guarantees made.",
"@albertvillanova It doesn't seem a version incompatibility between the client and server, since the following code is working:\r\n\r\n```\r\nfrom elasticsearch import Elasticsearch\r\nes_client = Elasticsearch(\"http://localhost:9200\")\r\ndataset.add_elasticsearch_index(column=\"e1\", es_client=es_client, es_index_name=\"e1_index\")\r\n```",
"Hi @amirj, \r\n\r\nI really think it is a version incompatibility issue between your Elasticsearch client and server:\r\n- Your Elasticsearch server NodeConfig expects a positional argument named 'scheme'\r\n- Whereas your Elasticsearch client passes only keyword arguments: `NodeConfig(**options)`\r\n\r\nMoreover:\r\n- Looking at your stack trace, I deduce you are using Elasticsearch client **\"8\"** major version:\r\n - the Elasticsearch file \"elasticsearch/_sync/client/utils.py\" was created in version \"8.0.0a1\": https://github.com/elastic/elasticsearch-py/commit/21fa13b0f03b7b27ace9e19a1f763d40bd2e2ba4\r\n - you can check your Elasticsearch client version by running this Python code:\r\n ```python\r\n import elasticsearch\r\n print(elasticsearch.__version__)\r\n ```\r\n\r\n- However, in the *Environment info*, you informed that the major version of your Eleasticsearch cluster server is **\"7\"** (\"7.10.2-SNAPSHOT\")\r\n\r\nCould you please align the Elasticsearch client/server major versions (as pointed out in Elasticsearch docs) and check if the problem persists?",
"I'm closing this issue, @amirj.\r\n\r\nFeel free to re-open it if the problem persists. \r\n\r\n",
"```\r\nfrom elasticsearch import Elasticsearch\r\nes = Elasticsearch([{'host': 'localhost', 'port': 9200}])\r\n```\r\n```\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-8-675c6ffe5293> in <module>\r\n 1 #es = Elasticsearch([{'host':'localhost', 'port':9200}])\r\n 2 from elasticsearch import Elasticsearch\r\n----> 3 es = Elasticsearch([{'host': 'localhost', 'port': 9200}])\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)\r\n 310 \r\n 311 if _transport is None:\r\n--> 312 node_configs = client_node_configs(\r\n 313 hosts,\r\n 314 cloud_id=cloud_id,\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\utils.py in client_node_configs(hosts, cloud_id, **kwargs)\r\n 99 else:\r\n 100 assert hosts is not None\r\n--> 101 node_configs = hosts_to_node_configs(hosts)\r\n 102 \r\n 103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\utils.py in hosts_to_node_configs(hosts)\r\n 142 \r\n 143 elif isinstance(host, Mapping):\r\n--> 144 node_configs.append(host_mapping_to_node_config(host))\r\n 145 else:\r\n 146 raise ValueError(\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\utils.py in host_mapping_to_node_config(host)\r\n 209 options[\"path_prefix\"] = options.pop(\"url_prefix\")\r\n 210 \r\n--> 211 return NodeConfig(**options) # type: ignore\r\n 212 \r\n 213 \r\n\r\nTypeError: __init__() missing 1 required positional argument: 'scheme'\r\n```",
"I am facing the same issue, and version is same for the both i.e(8.1.3)",
"@raj713335, thanks for reporting.\r\n\r\nPlease note that in your code example, you are not using our `datasets` library. \r\n\r\nThus, I think you should report that issue to `elasticsearch` library: https://github.com/elastic/elasticsearch-py\r\n\r\n"
] | 1,647,517,393,000 | 1,651,682,230,000 | 1,648,454,401,000 | NONE | null | null | null | ## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch), the provided code should add an Elasticsearch index, but it raised the following error. The new Elasticsearch version is probably not compatible, though the tutorial doesn't provide any information about the supported Elasticsearch versions.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
squad = load_dataset('squad', split='validation')
squad.add_elasticsearch_index("context", host="localhost", port="9200")
```
## Expected results
[Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch)
## Actual results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-8fb51aa33961> in <module>
1 from datasets import load_dataset
2 squad = load_dataset('squad', split='validation')
----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200")
~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
3777 """
3778 with self.formatted_as(type=None, columns=[column]):
-> 3779 super().add_elasticsearch_index(
3780 column=column,
3781 index_name=index_name,
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
587 """
588 index_name = index_name if index_name is not None else column
--> 589 es_index = ElasticSearchIndex(
590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
591 )
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config)
123 from elasticsearch import Elasticsearch # noqa: F811
124
--> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}])
126 self.es_index_name = (
127 es_index_name
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)
310
311 if _transport is None:
--> 312 node_configs = client_node_configs(
313 hosts,
314 cloud_id=cloud_id,
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs)
99 else:
100 assert hosts is not None
--> 101 node_configs = hosts_to_node_configs(hosts)
102
103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts)
142
143 elif isinstance(host, Mapping):
--> 144 node_configs.append(host_mapping_to_node_config(host))
145 else:
146 raise ValueError(
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
```
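For reference, here is a minimal sketch of the workaround mentioned in the comments above — passing a pre-built client instead of `host`/`port` (this assumes the Elasticsearch client and server major versions match; the index name is arbitrary):
```python
from datasets import load_dataset
from elasticsearch import Elasticsearch

# Build the client explicitly so `datasets` does not construct one from a host/port mapping
es_client = Elasticsearch("http://localhost:9200")

squad = load_dataset("squad", split="validation")
squad.add_elasticsearch_index("context", es_client=es_client, es_index_name="context_index")
```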
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Mac
- Python version: 3.8.0
- PyArrow version: 7.0.0
- ElasticSearch Info:
{
"name" : "byname",
"cluster_name" : "elasticsearch_brew",
"cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA",
"version" : {
"number" : "7.10.2-SNAPSHOT",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "unknown",
"build_date" : "2021-01-16T01:41:27.115673Z",
"build_snapshot" : true,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3956/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3955/comments | https://api.github.com/repos/huggingface/datasets/issues/3955/events | https://github.com/huggingface/datasets/pull/3955 | 1,172,246,647 | PR_kwDODunzps40l5kG | 3,955 | Remove unncessary 'pylint disable' message in ReadMe | {
"login": "Datta0",
"id": 39181234,
"node_id": "MDQ6VXNlcjM5MTgxMjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39181234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Datta0",
"html_url": "https://github.com/Datta0",
"followers_url": "https://api.github.com/users/Datta0/followers",
"following_url": "https://api.github.com/users/Datta0/following{/other_user}",
"gists_url": "https://api.github.com/users/Datta0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Datta0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Datta0/subscriptions",
"organizations_url": "https://api.github.com/users/Datta0/orgs",
"repos_url": "https://api.github.com/users/Datta0/repos",
"events_url": "https://api.github.com/users/Datta0/events{/privacy}",
"received_events_url": "https://api.github.com/users/Datta0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,647,515,815,000 | 1,649,773,715,000 | 1,649,773,715,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3955",
"html_url": "https://github.com/huggingface/datasets/pull/3955",
"diff_url": "https://github.com/huggingface/datasets/pull/3955.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3955.patch",
"merged_at": 1649773715000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3955/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3954/comments | https://api.github.com/repos/huggingface/datasets/issues/3954/events | https://github.com/huggingface/datasets/issues/3954 | 1,172,141,664 | I_kwDODunzps5F3XZg | 3,954 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset | {
"login": "MatanBenChorin",
"id": 49593805,
"node_id": "MDQ6VXNlcjQ5NTkzODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/49593805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MatanBenChorin",
"html_url": "https://github.com/MatanBenChorin",
"followers_url": "https://api.github.com/users/MatanBenChorin/followers",
"following_url": "https://api.github.com/users/MatanBenChorin/following{/other_user}",
"gists_url": "https://api.github.com/users/MatanBenChorin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MatanBenChorin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MatanBenChorin/subscriptions",
"organizations_url": "https://api.github.com/users/MatanBenChorin/orgs",
"repos_url": "https://api.github.com/users/MatanBenChorin/repos",
"events_url": "https://api.github.com/users/MatanBenChorin/events{/privacy}",
"received_events_url": "https://api.github.com/users/MatanBenChorin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @MatanBenChorin, thanks for reporting.\r\n\r\nPlease, take into account that the preview may take some time until it properly renders (we are working to reduce this time).\r\n\r\nMaybe @severo can give more details on this.",
"Hi, \r\nThank you",
"Thanks for reporting. We are looking at it and will give updates here.",
"I imagine the dataset has been moved to https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1, which still has an issue:\r\n\r\n```\r\nServer Error\r\n\r\nStatus code: 400\r\nException: NameError\r\nMessage: name 'HebrewSquad' is not defined\r\n```",
"The issue is not related to the dataset viewer but to the loading script (cc @albertvillanova @lhoestq @mariosasko)\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> hf_token = \"hf_...\" # <- required because the dataset is gated\r\n>>> d = ds.load_dataset('tdklab/Hebrew_Squad_v1', use_auth_token=hf_token)\r\n...\r\nNameError: name 'HebrewSquad' is not defined\r\n```",
"Yes indeed there is an error in [Hebrew_Squad_v1.py:L40](https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1/blob/main/Hebrew_Squad_v1.py#L40)\r\n\r\nHere is the fix @MatanBenChorin :\r\n\r\n```diff\r\n- HebrewSquad(\r\n+ HebrewSquadConfig(\r\n```"
] | 1,647,509,891,000 | 1,650,458,347,000 | 1,650,458,347,000 | NONE | null | null | null | ## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset? Yes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3954/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3953/comments | https://api.github.com/repos/huggingface/datasets/issues/3953/events | https://github.com/huggingface/datasets/issues/3953 | 1,172,123,736 | I_kwDODunzps5F3TBY | 3,953 | Add ImageNet Sketch | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | null | [] | null | [
"Can you assign this task to me? @nreimers @mariosasko ",
"Hi! Sure! Let us know if you need any pointers."
] | 1,647,508,831,000 | 1,651,577,270,000 | null | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** ImageNet Sketch
- **Description:** ImageNet-Sketch is a dataset consisting of sketch-like images; it matches the ImageNet classification validation set in categories and scale.
- **Paper:** [Learning Robust Global Representations by Penalizing Local Predictive Power](https://arxiv.org/abs/1905.13549)
- **Data:** https://github.com/HaohanWang/ImageNet-Sketch
- **Motivation:** Allows for evaluating the robustness of vision models.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3953/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3953/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3952 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3952/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3952/comments | https://api.github.com/repos/huggingface/datasets/issues/3952/events | https://github.com/huggingface/datasets/issues/3952 | 1,171,895,531 | I_kwDODunzps5F2bTr | 3,952 | Checksum error for glue sst2, stsb, rte etc datasets | {
"login": "ravindra-ut",
"id": 22090962,
"node_id": "MDQ6VXNlcjIyMDkwOTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/22090962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ravindra-ut",
"html_url": "https://github.com/ravindra-ut",
"followers_url": "https://api.github.com/users/ravindra-ut/followers",
"following_url": "https://api.github.com/users/ravindra-ut/following{/other_user}",
"gists_url": "https://api.github.com/users/ravindra-ut/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ravindra-ut/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ravindra-ut/subscriptions",
"organizations_url": "https://api.github.com/users/ravindra-ut/orgs",
"repos_url": "https://api.github.com/users/ravindra-ut/repos",
"events_url": "https://api.github.com/users/ravindra-ut/events{/privacy}",
"received_events_url": "https://api.github.com/users/ravindra-ut/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi, @ravindra-ut.\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"glue\", \"sst2\")\r\nDownloading builder script: 28.8kB [00:00, 11.6MB/s] \r\nDownloading metadata: 28.7kB [00:00, 12.9MB/s] \r\nDownloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown size, total: 11.90 MiB) to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.44M/7.44M [00:01<00:00, 5.82MB/s]\r\nDataset glue downloaded and prepared to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data. \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 895.96it/s]\r\n\r\nIn [3]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1821\r\n })\r\n})\r\n``` \r\n\r\nMoreover, I see in your traceback that your error was for an URL at https://firebasestorage.googleapis.com\r\nHowever, the URLs were updated on Sep 16, 2020 (`datasets` version 1.0.2) to https://dl.fbaipublicfiles.com: https://github.com/huggingface/datasets/commit/2f03041a21c03abaececb911760c3fe4f420c229\r\n\r\nCould you please try to update `datasets`\r\n```shell\r\npip install -U datasets\r\n```\r\nand then force redownload\r\n```python\r\nds = load_dataset(\"glue\", \"sst2\", download_mode=\"force_redownload\")\r\n```\r\nto update the cache?\r\n\r\nPlease, feel free to reopen this issue if the problem persists."
] | 1,647,488,747,000 | 1,647,501,015,000 | 1,647,501,014,000 | NONE | null | null | null | ## Describe the bug
Checksum error for the glue sst2, stsb, rte, etc. datasets
## Steps to reproduce the bug
```python
>>> nlp.load_dataset('glue', 'sst2')
Downloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown size, total: 11.90 MiB) to
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 73.0/73.0 [00:00<00:00, 18.2kB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare
verify_checksums(
File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8']
```
## Expected results
The dataset load should succeed without a checksum error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare
verify_checksums(
File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8']
```
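For reference, a sketch of the fix suggested in the comments above (the download URLs moved off `firebasestorage` in `datasets` 1.0.2, so an old cache holds stale checksums) — upgrade the library and force a re-download; `ds` is just an illustrative name:
```python
# pip install -U datasets
from datasets import load_dataset

# Force a fresh download so the cached files and recorded checksums are rebuilt from the new URLs
ds = load_dataset("glue", "sst2", download_mode="force_redownload")
```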
## Environment info
- `datasets` version: '1.18.3'
- Platform: Mac OS
- Python version: Python 3.8.9
- PyArrow version: '7.0.0'
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3952/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3951/comments | https://api.github.com/repos/huggingface/datasets/issues/3951/events | https://github.com/huggingface/datasets/issues/3951 | 1,171,568,814 | I_kwDODunzps5F1Liu | 3,951 | Forked streaming datasets try to `open` data urls rather than use network | {
"login": "dlwh",
"id": 9633,
"node_id": "MDQ6VXNlcjk2MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dlwh",
"html_url": "https://github.com/dlwh",
"followers_url": "https://api.github.com/users/dlwh/followers",
"following_url": "https://api.github.com/users/dlwh/following{/other_user}",
"gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dlwh/subscriptions",
"organizations_url": "https://api.github.com/users/dlwh/orgs",
"repos_url": "https://api.github.com/users/dlwh/repos",
"events_url": "https://api.github.com/users/dlwh/events{/privacy}",
"received_events_url": "https://api.github.com/users/dlwh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting this second issue as well. We definitely want to make streaming datasets fully working in a distributed setup and with the best performance. Right now it only supports single process.\r\n\r\nIn this issue it seems that the streaming capabilities that we offer to dataset builders are not transferred to the forked process (so it fails to open remote files and start streaming data from them). In particular `open` is supposed to be mocked by our `xopen` function that is an extended open that supports remote files. Let me try to fix this"
] | 1,647,465,662,000 | 1,648,472,470,000 | null | NONE | null | null | null | ## Describe the bug
Building on #3950, if you bypass the pickling problem you still can't use the dataset: the streaming hooks are not carried over to the forked worker processes, so they call the builtin `open` on the data URLs instead of streaming them over the network.
## Steps to reproduce the bug
```python
from multiprocessing import freeze_support
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
import torch.utils.data
# work around #3950
class TorchIterableDataset(datasets.IterableDataset, torch.utils.data.IterableDataset):
pass
def _ensure_format(v: datasets.IterableDataset) -> datasets.IterableDataset:
return TorchIterableDataset(v._ex_iterable, v.info, v.split, "torch", v._shuffling)
if __name__ == '__main__':
freeze_support()
ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True)
ds = _ensure_format(ds)
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
```
## Expected results
I'd expect the dataset to load the url correctly and produce examples.
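For comparison, iterating the stream in the main process (the single-process path that the maintainer comment above says is currently supported) does yield examples — a rough check of mine, not part of the original report:
```python
import datasets

ds = datasets.load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)

# No DataLoader workers involved: streaming over the network works here
for i, example in enumerate(ds):
    print(example["text"][:80])
    if i == 2:
        break
```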
## Actual results
```
warnings.warn(
***** Running training *****
Num examples = 8000
Num Epochs = 9223372036854775807
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 1000
0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last):
File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 22, in <module>
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train
for step, inputs in enumerate(epoch_iterator):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
data.append(next(self.dataset_iter))
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 497, in __iter__
for key, example in self._iter():
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 494, in _iter
yield from ex_iterable
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 87, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/Users/dlwh/.cache/huggingface/modules/datasets_modules/datasets/oscar/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2/oscar.py", line 358, in _generate_examples
with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/en/en_part_1.txt.gz'
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
pid, sts = os.waitpid(self.pid, flag)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 6932) is killed by signal: Terminated: 15.
0%| | 0/1000 [00:02<?, ?it/s]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-12.2-arm64-arm-64bit
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3951/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3950/comments | https://api.github.com/repos/huggingface/datasets/issues/3950/events | https://github.com/huggingface/datasets/issues/3950 | 1,171,560,585 | I_kwDODunzps5F1JiJ | 3,950 | Streaming Datasets don't work with Transformers Trainer when dataloader_num_workers>1 | {
"login": "dlwh",
"id": 9633,
"node_id": "MDQ6VXNlcjk2MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dlwh",
"html_url": "https://github.com/dlwh",
"followers_url": "https://api.github.com/users/dlwh/followers",
"following_url": "https://api.github.com/users/dlwh/following{/other_user}",
"gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dlwh/subscriptions",
"organizations_url": "https://api.github.com/users/dlwh/orgs",
"repos_url": "https://api.github.com/users/dlwh/repos",
"events_url": "https://api.github.com/users/dlwh/events{/privacy}",
"received_events_url": "https://api.github.com/users/dlwh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, thanks for reporting. This could be related to https://github.com/huggingface/datasets/issues/3148 too\r\n\r\nWe should definitely make `TorchIterableDataset` picklable by moving it in the main code instead of inside a function. If you'd like to contribute, feel free to open a Pull Request :)\r\n\r\nI'm also taking a look at your second issue, which is more technical"
] | 1,647,465,251,000 | 1,649,076,320,000 | null | NONE | null | null | null | ## Describe the bug
Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash.
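A minimal illustration of just the serialization step (my own sketch, assuming `datasets` 2.0.0) — the same local-class error shows up without involving the `Trainer` at all:
```python
import pickle
import datasets

ds = datasets.load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True).with_format("torch")

# The torch-formatted iterable dataset is built from a class defined inside a function,
# so the standard pickler used by multiprocessing cannot serialize it.
pickle.dumps(ds)  # AttributeError: Can't pickle local object 'iterable_dataset.<locals>.TorchIterableDataset'
```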
## Steps to reproduce the bug
```python
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("torch")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
```
## Expected results
For this code I'd expect a crash related to not having preprocessed the data, but instead we get a pickling error.
## Actual results
```
0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last):
File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 7, in <module>
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train
for step, inputs in enumerate(epoch_iterator):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 359, in __iter__
return self._get_iterator()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 918, in __init__
w.start()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'iterable_dataset.<locals>.TorchIterableDataset'
0%| | 0/1000 [00:00<?, ?it/s]
```
This immediate crash can be fixed by not using a local class to build the `TorchIterableDataset` (note that you have to call `with_format("torch")`, or you get an exception because the dataset has no `len`). However, any lambdas etc. used as maps will also trigger this crash. A more permanent fix would be to move away from `multiprocessing` and instead use something like pathos or multiprocessing_on_dill (https://stackoverflow.com/questions/19984152/what-can-multiprocessing-and-dill-do-together).
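For illustration, the module-level workaround sketched out (this mirrors the snippet from issue #3951 above; it leans on private attributes, so treat it as a stopgap):
```python
import datasets
import torch.utils.data

# Defined at module level (not inside a function), so multiprocessing can pickle it
class TorchIterableDataset(datasets.IterableDataset, torch.utils.data.IterableDataset):
    pass

def ensure_torch_format(ds: datasets.IterableDataset) -> datasets.IterableDataset:
    # Relies on private attributes of IterableDataset as of datasets 2.0.0
    return TorchIterableDataset(ds._ex_iterable, ds.info, ds.split, "torch", ds._shuffling)
```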
Note that if you bypass this crash you get another crash. (I'll file a separate bug).
## Environment info
- `datasets` version: 2.0.0
- Platform: macOS-12.2-arm64-arm-64bit
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3950/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3949/comments | https://api.github.com/repos/huggingface/datasets/issues/3949/events | https://github.com/huggingface/datasets/pull/3949 | 1,171,467,981 | PR_kwDODunzps40jia- | 3,949 | Remove GLEU metric | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,459,331,000 | 1,649,796,206,000 | 1,649,795,829,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3949",
"html_url": "https://github.com/huggingface/datasets/pull/3949",
"diff_url": "https://github.com/huggingface/datasets/pull/3949.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3949.patch",
"merged_at": 1649795829000
} | Remove the GLEU metric as it is not actually implemented. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3949/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3949/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3948/comments | https://api.github.com/repos/huggingface/datasets/issues/3948/events | https://github.com/huggingface/datasets/pull/3948 | 1,171,460,560 | PR_kwDODunzps40jg1F | 3,948 | Google BLEU Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"A few things that aren't clear for me:\r\n- \"Because it performs better on individual sentence pairs as compared to BLEU, Google BLEU has also been used in RL experiments.\" -- why is this the case? why would that make it more usable for RL? (also, you should put \"Reinforcement Learning\" explicitly, not just the acronym)\r\n- (Minor issue) -- I put inputs before the first example code, I think that's clearer somehow\r\n\r\nOtherwise, it looks great, good job @emibaylor !\r\n"
] | 1,647,458,837,000 | 1,647,878,666,000 | 1,647,878,665,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3948",
"html_url": "https://github.com/huggingface/datasets/pull/3948",
"diff_url": "https://github.com/huggingface/datasets/pull/3948.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3948.patch",
"merged_at": 1647878665000
} | Add metric card for Google BLEU (GLEU) metric
One thing I noticed while writing this up is that, although this metric was designed specifically to work better than BLEU at the sentence level rather than the corpus level, the current implementation only allows computing the corpus-level statistic. I think changing this would be a good item for the to-do list. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3948/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3947/comments | https://api.github.com/repos/huggingface/datasets/issues/3947/events | https://github.com/huggingface/datasets/pull/3947 | 1,171,452,854 | PR_kwDODunzps40jfLq | 3,947 | BLEU metric card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Some thoughts:\r\n- For values, e.g. \"Defaults to False\", I would put False in code: `False`. Same for : \"Defaults to `4`.\"\r\n- I would put the following remark in \"Limitations\": \r\n> \"BLEU's output is always a number between 0 and 1. This value indicates how similar the candidate text is to the reference texts, with values closer to 1 representing more similar texts. Few human translations will attain a score of 1, since this would indicate that the candidate is identical to one of the reference translations. For this reason, it is not necessary to attain a score of 1. Because there are more opportunities to match, adding additional reference translations will increase the BLEU score.\"\r\n\r\n- Add some values from the original BLEU paper (https://aclanthology.org/P02-1040.pdf)"
] | 1,647,458,407,000 | 1,648,565,990,000 | 1,648,565,654,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3947",
"html_url": "https://github.com/huggingface/datasets/pull/3947",
"diff_url": "https://github.com/huggingface/datasets/pull/3947.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3947.patch",
"merged_at": 1648565653000
} | Add BLEU metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3947/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3946 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3946/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3946/comments | https://api.github.com/repos/huggingface/datasets/issues/3946/events | https://github.com/huggingface/datasets/pull/3946 | 1,171,239,287 | PR_kwDODunzps40i1L3 | 3,946 | Add newline to text dataset builder for controlling universal newlines mode | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3946). All of your documentation changes will be reflected on that endpoint.",
"The failing CI test has nothing to do with this PR."
] | 1,647,447,071,000 | 1,649,774,505,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3946",
"html_url": "https://github.com/huggingface/datasets/pull/3946",
"diff_url": "https://github.com/huggingface/datasets/pull/3946.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3946.patch",
"merged_at": null
} | Fix #3804. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3946/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3945/comments | https://api.github.com/repos/huggingface/datasets/issues/3945/events | https://github.com/huggingface/datasets/pull/3945 | 1,171,222,257 | PR_kwDODunzps40ixmc | 3,945 | Fix comet metric | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Finally I'm done updating the dependencies ^^'\r\n\r\ncc @sashavor can you review my changes in the metric card please ?",
"Looks good to me! Just fixed a tiny typo :wink: ",
"Thanks !"
] | 1,647,446,207,000 | 1,647,961,812,000 | 1,647,961,530,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3945",
"html_url": "https://github.com/huggingface/datasets/pull/3945",
"diff_url": "https://github.com/huggingface/datasets/pull/3945.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3945.patch",
"merged_at": 1647961530000
} | The COMET metric has been broken for a while since its dependencies introduced big breaking changes. We did not catch them in the CI because the slow test mocks the `download_model` function, which was changed.
This PR fixes the metric, updates the download_model mock and updates the doctest. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3945/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3944/comments | https://api.github.com/repos/huggingface/datasets/issues/3944/events | https://github.com/huggingface/datasets/pull/3944 | 1,171,209,510 | PR_kwDODunzps40iu4n | 3,944 | Create README.md | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,445,586,000 | 1,647,539,454,000 | 1,647,539,225,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3944",
"html_url": "https://github.com/huggingface/datasets/pull/3944",
"diff_url": "https://github.com/huggingface/datasets/pull/3944.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3944.patch",
"merged_at": 1647539225000
} | Proposing COMET metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3944/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3943/comments | https://api.github.com/repos/huggingface/datasets/issues/3943/events | https://github.com/huggingface/datasets/pull/3943 | 1,171,185,070 | PR_kwDODunzps40ipnu | 3,943 | [Doc] Don't use v for version tags on GitHub | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3943). All of your documentation changes will be reflected on that endpoint."
] | 1,647,444,510,000 | 1,647,517,586,000 | 1,647,517,585,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3943",
"html_url": "https://github.com/huggingface/datasets/pull/3943",
"diff_url": "https://github.com/huggingface/datasets/pull/3943.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3943.patch",
"merged_at": 1647517585000
} | This removes the `v` automatically used by `doc-builder` for versions. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3943/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3942/comments | https://api.github.com/repos/huggingface/datasets/issues/3942/events | https://github.com/huggingface/datasets/issues/3942 | 1,171,177,122 | I_kwDODunzps5Fzr6i | 3,942 | reddit_tifu dataset: Checksums didn't match for dataset source files | {
"login": "XingxingZhang",
"id": 8507585,
"node_id": "MDQ6VXNlcjg1MDc1ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8507585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XingxingZhang",
"html_url": "https://github.com/XingxingZhang",
"followers_url": "https://api.github.com/users/XingxingZhang/followers",
"following_url": "https://api.github.com/users/XingxingZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/XingxingZhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XingxingZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XingxingZhang/subscriptions",
"organizations_url": "https://api.github.com/users/XingxingZhang/orgs",
"repos_url": "https://api.github.com/users/XingxingZhang/repos",
"events_url": "https://api.github.com/users/XingxingZhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/XingxingZhang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | null | [] | null | [
"Hi @XingxingZhang, \r\n\r\nWe have already fixed this. You should update `datasets` version to at least 1.18.4:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then force the redownload:\r\n```python\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```\r\n\r\nDuplicate of:\r\n- #3773",
"thanks @albertvillanova . by upgrading to 1.18.4 and using `load_dataset(\"...\", download_mode=\"force_redownload\")` fixed \r\n the bug.\r\n\r\nusing the following as you suggested in another thread can also fixed the bug\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n",
"The latter solution (installing from GitHub) was proposed because the fix was not released yet. But last week we made the 1.18.4 patch release (with the fix), so no longer necessary to install from GitHub.\r\n\r\nYou can now install from PyPI, as usual:\r\n```shell\r\npip install -U datasets\r\n```\r\n"
] | 1,647,444,210,000 | 1,647,446,263,000 | 1,647,445,165,000 | NONE | null | null | null | ## Describe the bug
Loading the `reddit_tifu` dataset throws the exception "Checksums didn't match for dataset source files".
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
# load_dataset('billsum')
load_dataset('reddit_tifu', 'short')
```
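For reference, the workaround suggested by the maintainers (quoted above) is to upgrade `datasets` to at least 1.18.4 and force a fresh download — a sketch:
```python
from datasets import load_dataset

# Workaround suggested by the maintainers: requires datasets >= 1.18.4 so that the
# updated checksums are used, then force a fresh download of the source files.
load_dataset("reddit_tifu", "short", download_mode="force_redownload")
```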
## Environment info
- `datasets` version: 1.17.0
- Platform: mac os
- Python version: Python 3.7.6
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3942/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3941/comments | https://api.github.com/repos/huggingface/datasets/issues/3941/events | https://github.com/huggingface/datasets/issues/3941 | 1,171,132,709 | I_kwDODunzps5FzhEl | 3,941 | billsum dataset: Checksums didn't match for dataset source files: | {
"login": "XingxingZhang",
"id": 8507585,
"node_id": "MDQ6VXNlcjg1MDc1ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8507585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XingxingZhang",
"html_url": "https://github.com/XingxingZhang",
"followers_url": "https://api.github.com/users/XingxingZhang/followers",
"following_url": "https://api.github.com/users/XingxingZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/XingxingZhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XingxingZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XingxingZhang/subscriptions",
"organizations_url": "https://api.github.com/users/XingxingZhang/orgs",
"repos_url": "https://api.github.com/users/XingxingZhang/repos",
"events_url": "https://api.github.com/users/XingxingZhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/XingxingZhang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @XingxingZhang, thanks for reporting.\r\n\r\nThis was due to a change in Google Drive service:\r\n- #3786 \r\n\r\nWe have already fixed it:\r\n- #3787\r\n\r\nYou should update `datasets` version to at least 1.18.4:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then force the redownload:\r\n```python\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```",
"thanks @albertvillanova "
] | 1,647,442,328,000 | 1,647,446,228,000 | 1,647,445,604,000 | NONE | null | null | null | ## Describe the bug
Loading the `billsum` dataset throws the exception "Checksums didn't match for dataset source files":
```
File "virtualenv_projects/codex/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1g89WgFHMRbr4QrvA0ngh26PY081Nv3lx']
```
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
load_dataset('billsum')
```
## Environment info
- `datasets` version: 1.17.0
- Platform: mac os
- Python version: Python 3.7.6
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3941/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3940/comments | https://api.github.com/repos/huggingface/datasets/issues/3940/events | https://github.com/huggingface/datasets/pull/3940 | 1,171,106,853 | PR_kwDODunzps40iYxr | 3,940 | Create CoVAL metric card | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,441,109,000 | 1,647,625,079,000 | 1,647,624,914,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3940",
"html_url": "https://github.com/huggingface/datasets/pull/3940",
"diff_url": "https://github.com/huggingface/datasets/pull/3940.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3940.patch",
"merged_at": 1647624914000
} | Initial CoVAL metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3940/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3939/comments | https://api.github.com/repos/huggingface/datasets/issues/3939/events | https://github.com/huggingface/datasets/issues/3939 | 1,170,882,331 | I_kwDODunzps5Fyj8b | 3,939 | Source links broken | {
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting @qqaatw.\r\n\r\n@mishig25 @sgugger do you think this can be tweaked in the new doc framework?\r\n- From: https://github.com/huggingface/datasets/blob/v2.0.0/\r\n- To: https://github.com/huggingface/datasets/blob/2.0.0/",
"@qqaatw thanks a lot for notifying about this issue!\r\n\r\nin comparison, transformers tags start with `v` like [this one](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/bert/configuration_bert.py#L54).\r\n\r\nTherefore, we have to do one of 2 options below:\r\n1. Make necessary changes on doc-builder side\r\nOR\r\n2. Make [datasets tags](https://github.com/huggingface/datasets/tags) start with `v`, just like [transformers](https://github.com/huggingface/transformers/tags) (so that tag naming can be consistent amongst hf repos)\r\n\r\nI'll let you decide @albertvillanova @lhoestq @sgugger ",
"I think option 2 is the easiest and would provide harmony in the HF ecosystem but we can also add a doc config parameter to decide whether the default version has a v or not if `datasets` folks prefer their tags without a v :-)",
"For me it is OK to conform to the rest of libraries and tag/release with a preceding \"v\", rather than adding an extra argument to the doc builder just for `datasets`.\r\n\r\nLet me know if it is also OK for you @lhoestq. ",
"https://github.com/huggingface/doc-build/commit/f41c1e8ff900724213af4c75d287d8b61ecf6141\r\n\r\nhotfix so that `datasets` docs source button works correctly on hf.co/docs/datasets",
"We could add a tag for each release without a 'v' but it could be confusing on github to see both tags `v2.0.0` and `2.0.0` IMO (not sure if many users check them though). Removing the tags without 'v' would break our versioning for github datasets: the library looks for dataset scripts at the URLs like `https://raw.githubusercontent.com/huggingface/datasets/{revision}/datasets/{path}/{name}` where `revision` is equal to `datasets.__version__` (which doesn't start with a 'v') for all released versions of `datasets`.\r\n\r\nI think we could just have a parameter for the documentation - and having different URLs schemes for the source links that the users don't even see (they simply click on a button) is probably fine",
"This is done in #3943 to go along with [doc-builder#146](https://github.com/huggingface/doc-builder/pull/146).\r\n\r\nNote that this will only work for future versions, so once those two are merged, the actual v2.0.0 doc should be fixed. The easiest is to cherry-pick this commit on the v2.0.0 release branch (or on a new branch created from the 2.0.0 tag, with a name that triggers the doc building job, for instance v2.0.0-release)",
"Thanks for fixing @sgugger."
] | 1,647,429,467,000 | 1,647,664,892,000 | 1,647,664,892,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) directs users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`,
where the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
Open the v2.0.0 API reference linked above and click the "source" button next to any class.
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3939/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3938/comments | https://api.github.com/repos/huggingface/datasets/issues/3938/events | https://github.com/huggingface/datasets/pull/3938 | 1,170,875,417 | PR_kwDODunzps40hnjM | 3,938 | Avoid info log messages from transformers in FrugalScore metric | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3938). All of your documentation changes will be reflected on that endpoint."
] | 1,647,429,089,000 | 1,647,506,245,000 | 1,647,506,244,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3938",
"html_url": "https://github.com/huggingface/datasets/pull/3938",
"diff_url": "https://github.com/huggingface/datasets/pull/3938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3938.patch",
"merged_at": 1647506244000
} | Fix #3928. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3938/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3937/comments | https://api.github.com/repos/huggingface/datasets/issues/3937/events | https://github.com/huggingface/datasets/issues/3937 | 1,170,832,006 | I_kwDODunzps5FyXqG | 3,937 | Missing languages in lvwerra/github-code dataset | {
"login": "Eytan-S",
"id": 38702500,
"node_id": "MDQ6VXNlcjM4NzAyNTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/38702500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Eytan-S",
"html_url": "https://github.com/Eytan-S",
"followers_url": "https://api.github.com/users/Eytan-S/followers",
"following_url": "https://api.github.com/users/Eytan-S/following{/other_user}",
"gists_url": "https://api.github.com/users/Eytan-S/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Eytan-S/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Eytan-S/subscriptions",
"organizations_url": "https://api.github.com/users/Eytan-S/orgs",
"repos_url": "https://api.github.com/users/Eytan-S/repos",
"events_url": "https://api.github.com/users/Eytan-S/events{/privacy}",
"received_events_url": "https://api.github.com/users/Eytan-S/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067401494,
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion",
"name": "Dataset discussion",
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets"
}
] | closed | false | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for contacting @Eytan-S.\r\n\r\nI think @lvwerra could better answer this. ",
"That seems to be an oversight - I originally planned to include them in the dataset and for some reason they were in the list of languages but not in the query. Since there is an issue with the deduplication step I'll rerun the pipeline anyway and will double check the query.\r\n\r\nThanks for reporting this @Eytan-S!",
"Can confirm that the two languages are indeed missing from the dataset. Here are the file counts per language:\r\n```Python\r\n{'Assembly': 82847,\r\n 'Batchfile': 236755,\r\n 'C': 14127969,\r\n 'C#': 6793439,\r\n 'C++': 7368473,\r\n 'CMake': 175076,\r\n 'CSS': 1733625,\r\n 'Dockerfile': 331966,\r\n 'FORTRAN': 141963,\r\n 'GO': 2259363,\r\n 'Haskell': 340521,\r\n 'HTML': 11165464,\r\n 'Java': 19515696,\r\n 'JavaScript': 11829024,\r\n 'Julia': 58177,\r\n 'Lua': 576279,\r\n 'Makefile': 679338,\r\n 'Markdown': 8454049,\r\n 'PHP': 11181930,\r\n 'Perl': 497490,\r\n 'PowerShell': 136827,\r\n 'Python': 7203553,\r\n 'Ruby': 4479767,\r\n 'Rust': 321765,\r\n 'SQL': 655657,\r\n 'Scala': 0,\r\n 'Shell': 1382786,\r\n 'TypeScript': 0,\r\n 'TeX': 250764,\r\n 'Visual Basic': 155371}\r\n ```",
"@Eytan-S check out v1.1 of the `github-code` dataset where issue should be fixed:\r\n\r\n| | Language |File Count| Size (GB)|\r\n|---:|:-------------|---------:|-------:|\r\n| 0 | Java | 19548190 | 107.7 |\r\n| 1 | C | 14143113 | 183.83 |\r\n| 2 | JavaScript | 11839883 | 87.82 |\r\n| 3 | HTML | 11178557 | 118.12 |\r\n| 4 | PHP | 11177610 | 61.41 |\r\n| 5 | Markdown | 8464626 | 23.09 |\r\n| 6 | C++ | 7380520 | 87.73 |\r\n| 7 | Python | 7226626 | 52.03 |\r\n| 8 | C# | 6811652 | 36.83 |\r\n| 9 | Ruby | 4473331 | 10.95 |\r\n| 10 | GO | 2265436 | 19.28 |\r\n| 11 | TypeScript | 1940406 | 24.59 |\r\n| 12 | CSS | 1734406 | 22.67 |\r\n| 13 | Shell | 1385648 | 3.01 |\r\n| 14 | Scala | 835755 | 3.87 |\r\n| 15 | Makefile | 679430 | 2.92 |\r\n| 16 | SQL | 656671 | 5.67 |\r\n| 17 | Lua | 578554 | 2.81 |\r\n| 18 | Perl | 497949 | 4.7 |\r\n| 19 | Dockerfile | 366505 | 0.71 |\r\n| 20 | Haskell | 340623 | 1.85 |\r\n| 21 | Rust | 322431 | 2.68 |\r\n| 22 | TeX | 251015 | 2.15 |\r\n| 23 | Batchfile | 236945 | 0.7 |\r\n| 24 | CMake | 175282 | 0.54 |\r\n| 25 | Visual Basic | 155652 | 1.91 |\r\n| 26 | FORTRAN | 142038 | 1.62 |\r\n| 27 | PowerShell | 136846 | 0.69 |\r\n| 28 | Assembly | 82905 | 0.78 |\r\n| 29 | Julia | 58317 | 0.29 |",
"Thanks @lvwerra. "
] | 1,647,426,723,000 | 1,647,932,963,000 | 1,647,874,247,000 | NONE | null | null | null | Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the original code.
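For context, this is how the two languages would be requested — a sketch that assumes the loader's `languages` filter argument behaves as described in the dataset card:
```python
from datasets import load_dataset

# Assumes the dataset script accepts a `languages` filter as documented in its card
ds = load_dataset("lvwerra/github-code", split="train", streaming=True,
                  languages=["TypeScript", "Scala"])
# Iterating `ds` yields no examples, since neither language is present in the current data
```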
Are there any plans to add them in the future?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3937/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3936/comments | https://api.github.com/repos/huggingface/datasets/issues/3936/events | https://github.com/huggingface/datasets/pull/3936 | 1,170,713,473 | PR_kwDODunzps40hE-P | 3,936 | Fix Wikipedia version and re-add tests | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3936). All of your documentation changes will be reflected on that endpoint."
] | 1,647,420,484,000 | 1,647,450,247,000 | 1,647,450,245,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3936",
"html_url": "https://github.com/huggingface/datasets/pull/3936",
"diff_url": "https://github.com/huggingface/datasets/pull/3936.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3936.patch",
"merged_at": 1647450245000
} | To keep backward compatibility when loading using "wikipedia" dataset ID (https://huggingface.co/datasets/wikipedia), we have created the pre-processed data for the same languages we were offering before, but with updated date "20220301":
- de
- en
- fr
- frr
- it
- simple
These pre-processed data can be accessed, e.g.:
```python
ds = load_dataset("wikipedia", "20220301.frr", split="train")
```
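As a rough sketch of what loading one of the additional languages from the upcoming "wikimedia/wikipedia" repository (mentioned just below) might look like — the repository and config names here are assumptions, not a released API:
```python
from datasets import load_dataset

# Assumption: pre-processed dumps published under the "wikimedia/wikipedia" Hub
# repository, reusing the date-prefixed config naming shown above.
ds = load_dataset("wikimedia/wikipedia", "20220301.ca", split="train")
```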
The next step will be to offer the pre-processed data for many other languages, but when loading using "wikimedia/wikipedia": https://huggingface.co/datasets/wikimedia/wikipedia | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3936/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3934/comments | https://api.github.com/repos/huggingface/datasets/issues/3934/events | https://github.com/huggingface/datasets/pull/3934 | 1,170,292,492 | PR_kwDODunzps40ftiC | 3,934 | Create MAUVE metric card | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,380,167,000 | 1,647,625,094,000 | 1,647,624,853,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3934",
"html_url": "https://github.com/huggingface/datasets/pull/3934",
"diff_url": "https://github.com/huggingface/datasets/pull/3934.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3934.patch",
"merged_at": 1647624853000
} | Proposing a MAUVE metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3934/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3933/comments | https://api.github.com/repos/huggingface/datasets/issues/3933/events | https://github.com/huggingface/datasets/pull/3933 | 1,170,253,605 | PR_kwDODunzps40flNM | 3,933 | Update README.md | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,377,525,000 | 1,647,539,484,000 | 1,647,539,257,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3933",
"html_url": "https://github.com/huggingface/datasets/pull/3933",
"diff_url": "https://github.com/huggingface/datasets/pull/3933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3933.patch",
"merged_at": 1647539257000
} | Fixing missing triple quote | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3933/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3932/comments | https://api.github.com/repos/huggingface/datasets/issues/3932/events | https://github.com/huggingface/datasets/pull/3932 | 1,170,221,773 | PR_kwDODunzps40fd0T | 3,932 | Create SARI metric card | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,376,643,000 | 1,647,625,021,000 | 1,647,624,775,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3932",
"html_url": "https://github.com/huggingface/datasets/pull/3932",
"diff_url": "https://github.com/huggingface/datasets/pull/3932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3932.patch",
"merged_at": 1647624775000
} | SARI metric card! (do we have an expert in text simplification to validate?.. :sweat_smile: ) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3932/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3931/comments | https://api.github.com/repos/huggingface/datasets/issues/3931/events | https://github.com/huggingface/datasets/pull/3931 | 1,170,097,208 | PR_kwDODunzps40fBjx | 3,931 | Add align_labels_with_mapping docs | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,372,297,000 | 1,647,620,911,000 | 1,647,620,673,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3931",
"html_url": "https://github.com/huggingface/datasets/pull/3931",
"diff_url": "https://github.com/huggingface/datasets/pull/3931.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3931.patch",
"merged_at": 1647620673000
} | This PR documents the `align_labels_with_mapping` function to ensure predicted labels are aligned with the dataset, or to assign a different mapping of labels to ids (requested by @mariosasko 🎉 ).
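For reference, a minimal usage sketch (the `poem_sentiment` label names and ids below are assumptions used purely for illustration):
```python
from datasets import load_dataset

# Assumed label mapping, purely for illustration; check the dataset card for the real one
label2id = {"negative": 0, "positive": 1, "no_impact": 2, "mixed": 3}
ds = load_dataset("poem_sentiment", split="train")
ds = ds.align_labels_with_mapping(label2id, label_column="label")
```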
For this specific code sample, the current dataset has a `mixed` label that the original [dataset](https://huggingface.co/datasets/poem_sentiment#data-fields) didn't. Is there a way to remove this label so it is completely aligned with the original dataset mappings? Otherwise, I'll just leave it as it is. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3931/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3930/comments | https://api.github.com/repos/huggingface/datasets/issues/3930/events | https://github.com/huggingface/datasets/pull/3930 | 1,170,087,793 | PR_kwDODunzps40e_fb | 3,930 | Create README.md | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,371,819,000 | 1,649,085,795,000 | 1,649,085,448,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3930",
"html_url": "https://github.com/huggingface/datasets/pull/3930",
"diff_url": "https://github.com/huggingface/datasets/pull/3930.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3930.patch",
"merged_at": 1649085448000
} | Creating a README for IndicGLUE
cc @mcmillanmajora for fact checking in terms of languages (also, are there any limitations of the dataset or eval metric that I'm not aware of?) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3930/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3929/comments | https://api.github.com/repos/huggingface/datasets/issues/3929/events | https://github.com/huggingface/datasets/issues/3929 | 1,170,066,235 | I_kwDODunzps5Fvcs7 | 3,929 | Load a local dataset twice | {
"login": "caush",
"id": 28349961,
"node_id": "MDQ6VXNlcjI4MzQ5OTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/28349961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caush",
"html_url": "https://github.com/caush",
"followers_url": "https://api.github.com/users/caush/followers",
"following_url": "https://api.github.com/users/caush/following{/other_user}",
"gists_url": "https://api.github.com/users/caush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/caush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/caush/subscriptions",
"organizations_url": "https://api.github.com/users/caush/orgs",
"repos_url": "https://api.github.com/users/caush/repos",
"events_url": "https://api.github.com/users/caush/events{/privacy}",
"received_events_url": "https://api.github.com/users/caush/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @caush, thanks for reporting:\r\n\r\nIn order to load local CSV files, you can use our \"csv\" loading script: https://huggingface.co/docs/datasets/loading#csv\r\n```python\r\ndataset = load_dataset(\"csv\", data_files=[\"data/file1.csv\", \"data/file2.csv\"])\r\n```\r\nOR:\r\n```python\r\ndataset = load_dataset(\"csv\", data_dir=\"data\")\r\n```\r\n\r\nAlternatively, you may also use:\r\n```python\r\ndataset = load_dataset(\"data\")"
] | 1,647,370,766,000 | 1,647,424,509,000 | 1,647,424,446,000 | NONE | null | null | null | ## Describe the bug
Loading a local "dataset" composed of two CSV files loads every row twice.
## Steps to reproduce the bug
Put the two attached files in a folder named "Data".
Then in python:
import datasets as ds
ds.load_dataset('Data', data_files = {'file1.csv', 'file2.csv'})
## Expected results
Should give something like (because files have only one data row):
Title, clicks
Truc et astuce, 123
Machin, 12
## Actual results
Gives
Title, clicks
Truc et astuce, 123
Machin, 12
Truc et astuce, 123
Machin, 12
## Environment info
[file1.csv](https://github.com/huggingface/datasets/files/8256322/file1.csv)
[file2.csv](https://github.com/huggingface/datasets/files/8256323/file2.csv)
- `datasets` version: 2.0.0
- Platform: Linux-5.4.0-65-generic-x86_64-with-glibc2.10
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3929/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3928/comments | https://api.github.com/repos/huggingface/datasets/issues/3928/events | https://github.com/huggingface/datasets/issues/3928 | 1,170,017,132 | I_kwDODunzps5FvQts | 3,928 | Frugal score deprecations | {
"login": "Ierezell",
"id": 30974685,
"node_id": "MDQ6VXNlcjMwOTc0Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ierezell",
"html_url": "https://github.com/Ierezell",
"followers_url": "https://api.github.com/users/Ierezell/followers",
"following_url": "https://api.github.com/users/Ierezell/following{/other_user}",
"gists_url": "https://api.github.com/users/Ierezell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ierezell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ierezell/subscriptions",
"organizations_url": "https://api.github.com/users/Ierezell/orgs",
"repos_url": "https://api.github.com/users/Ierezell/repos",
"events_url": "https://api.github.com/users/Ierezell/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ierezell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @Ierezell, thanks for reporting.\r\n\r\nI'm making a PR to suppress those logs from the terminal. "
] | 1,647,367,842,000 | 1,647,506,244,000 | 1,647,506,244,000 | NONE | null | null | null | ## Describe the bug
The FrugalScore metric produces very verbose output, with warnings and progress logs that could easily be silenced.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets.load import load_metric
frugal = load_metric("frugalscore")
frugal.compute(predictions=["Do you like spinachis"], references=["Do you like spinach"])
```
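As a stop-gap, most of the extra messages can be silenced from user code — a sketch, assuming the noise comes from `transformers` logging (which the output below suggests):
```python
import transformers

# Silence the informational messages emitted by transformers while FrugalScore runs
transformers.logging.set_verbosity_error()
```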
## Expected results
```
{'scores': [0.9946]}
```
## Actual results
```
PyTorch: setting up devices
The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 864.09ba/s]
Using amp half precision backend
The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence2, sentence1. If sentence2, sentence1 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Prediction *****
Num examples = 1
Batch size = 64
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 4644.85it/s]
{'scores': [0.9946]}
```
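For reference, one possible workaround sketch (not the actual fix, and assuming the extra output comes from the `transformers` Trainer that FrugalScore runs internally) is to lower the `transformers` logging verbosity before computing the metric:
```python
# Workaround sketch: silence the transformers warnings/banners that the
# FrugalScore metric triggers internally (assumption: they come from the
# underlying Trainer, as the captured log suggests).
import transformers

from datasets.load import load_metric

transformers.logging.set_verbosity_error()

frugal = load_metric("frugalscore")
print(frugal.compute(
    predictions=["Do you like spinachis"],
    references=["Do you like spinach"],
))  # ideally prints only {'scores': [0.9946]}
```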
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3928/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3927/comments | https://api.github.com/repos/huggingface/datasets/issues/3927/events | https://github.com/huggingface/datasets/pull/3927 | 1,170,016,465 | PR_kwDODunzps40ewN2 | 3,927 | Update main readme | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"What do you think @albertvillanova ?"
] | 1,647,367,799,000 | 1,648,548,827,000 | 1,648,548,500,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3927",
"html_url": "https://github.com/huggingface/datasets/pull/3927",
"diff_url": "https://github.com/huggingface/datasets/pull/3927.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3927.patch",
"merged_at": 1648548500000
} | The main readme was still focused on text datasets - I extended it by mentioning that we also support image and audio datasets | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3927/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3926/comments | https://api.github.com/repos/huggingface/datasets/issues/3926/events | https://github.com/huggingface/datasets/pull/3926 | 1,169,945,052 | PR_kwDODunzps40ehVP | 3,926 | Doc maintenance | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3926). All of your documentation changes will be reflected on that endpoint."
] | 1,647,363,646,000 | 1,647,372,435,000 | 1,647,372,432,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3926",
"html_url": "https://github.com/huggingface/datasets/pull/3926",
"diff_url": "https://github.com/huggingface/datasets/pull/3926.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3926.patch",
"merged_at": 1647372432000
} | This PR adds some minor maintenance to the docs. The main fix is properly linking to pages in the callouts because some of the links would just redirect to a non-existent section on the same page. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3926/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3925/comments | https://api.github.com/repos/huggingface/datasets/issues/3925/events | https://github.com/huggingface/datasets/pull/3925 | 1,169,913,769 | PR_kwDODunzps40eaq8 | 3,925 | Fix main_classes docs index | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hmm it's still not good \r\n![image](https://user-images.githubusercontent.com/42851186/158429361-e19ce25b-c259-4ded-8473-075deafdbb96.png)\r\n\r\nany idea what could cause this ?",
"Ok fixed :)"
] | 1,647,362,026,000 | 1,647,956,951,000 | 1,647,956,644,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3925",
"html_url": "https://github.com/huggingface/datasets/pull/3925",
"diff_url": "https://github.com/huggingface/datasets/pull/3925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3925.patch",
"merged_at": 1647956644000
} | Currently the `main_classes` documentation has a wrong index. I believe this comes from issues in the examples of the Translation feature types
![image](https://user-images.githubusercontent.com/42851186/158426345-2ee1ceef-ddf3-4a6f-a93e-d1a8f38a44f5.png)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3925/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3924/comments | https://api.github.com/repos/huggingface/datasets/issues/3924/events | https://github.com/huggingface/datasets/pull/3924 | 1,169,805,813 | PR_kwDODunzps40eED5 | 3,924 | Document cases for github datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3924). All of your documentation changes will be reflected on that endpoint.",
"Yay!"
] | 1,647,357,010,000 | 1,649,183,595,000 | 1,647,358,883,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3924",
"html_url": "https://github.com/huggingface/datasets/pull/3924",
"diff_url": "https://github.com/huggingface/datasets/pull/3924.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3924.patch",
"merged_at": 1647358883000
} | In general we recommend adding a new dataset under a username or organization on the Hugging Face Hub at [hf.co/datasets](https://hf.co/datasets), but users can still add a dataset on GitHub in some cases.
I added a paragraph to the documentation to explain in which cases it can make more sense to open a PR on GitHub:
- when you need the dataset to be reviewed
- when you need long-term maintenance from the HF team
- when there’s no clear org name / namespace that you can put the dataset under | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3924/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3924/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3923/comments | https://api.github.com/repos/huggingface/datasets/issues/3923/events | https://github.com/huggingface/datasets/pull/3923 | 1,169,773,869 | PR_kwDODunzps40d9YU | 3,923 | Add methods to IterableDatasetDict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3923). All of your documentation changes will be reflected on that endpoint."
] | 1,647,355,563,000 | 1,647,362,708,000 | 1,647,362,706,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3923",
"html_url": "https://github.com/huggingface/datasets/pull/3923",
"diff_url": "https://github.com/huggingface/datasets/pull/3923.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3923.patch",
"merged_at": 1647362706000
} | Following the new methods added in #3826 and https://github.com/huggingface/datasets/pull/3862, I added several methods to IterableDatasetDict (a minimal usage sketch follows the list below):
- map
- filter
- shuffle
- with_format
- cast
- cast_column
- remove_columns
- rename_column
- rename_columns
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3923/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3922/comments | https://api.github.com/repos/huggingface/datasets/issues/3922/events | https://github.com/huggingface/datasets/pull/3922 | 1,169,761,293 | PR_kwDODunzps40d6vm | 3,922 | Fix NonMatchingChecksumError in MultiWOZ 2.2 dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3922). All of your documentation changes will be reflected on that endpoint.",
"Unrelated CI test failure. This PR can be merged."
] | 1,647,354,988,000 | 1,647,360,424,000 | 1,647,360,423,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3922",
"html_url": "https://github.com/huggingface/datasets/pull/3922",
"diff_url": "https://github.com/huggingface/datasets/pull/3922.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3922.patch",
"merged_at": 1647360422000
} | Fix #2957 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3922/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3921/comments | https://api.github.com/repos/huggingface/datasets/issues/3921/events | https://github.com/huggingface/datasets/pull/3921 | 1,169,749,338 | PR_kwDODunzps40d4Mk | 3,921 | Fix NonMatchingChecksumError in CRD3 dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3921). All of your documentation changes will be reflected on that endpoint.",
"Unrelated test failure. This PR can be merged."
] | 1,647,354,434,000 | 1,647,359,667,000 | 1,647,359,666,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3921",
"html_url": "https://github.com/huggingface/datasets/pull/3921",
"diff_url": "https://github.com/huggingface/datasets/pull/3921.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3921.patch",
"merged_at": 1647359666000
} | Fix #3051 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3921/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3920/comments | https://api.github.com/repos/huggingface/datasets/issues/3920/events | https://github.com/huggingface/datasets/issues/3920 | 1,169,532,807 | I_kwDODunzps5FtaeH | 3,920 | 'datasets.features' is not a package | {
"login": "Arij-Aladel",
"id": 68355048,
"node_id": "MDQ6VXNlcjY4MzU1MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arij-Aladel",
"html_url": "https://github.com/Arij-Aladel",
"followers_url": "https://api.github.com/users/Arij-Aladel/followers",
"following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}",
"gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions",
"organizations_url": "https://api.github.com/users/Arij-Aladel/orgs",
"repos_url": "https://api.github.com/users/Arij-Aladel/repos",
"events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arij-Aladel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @Arij-Aladel,\r\n\r\nYou are using a very old version of our library `datasets`: 1.8.0\r\nCurrent version is 2.0.0 (and the previous one was 1.18.4)\r\n\r\nPlease, try to update `datasets` library and check if the problem persists:\r\n```shell\r\n/env/bin/pip install -U datasets",
"The problem I can no I have build my project on this version and old version on transformers. I have preprocessed the data again to use it. Thank for your reply"
] | 1,647,342,863,000 | 1,647,422,232,000 | 1,647,422,232,000 | NONE | null | null | null | @albertvillanova
python 3.9
os: ubuntu 20.04
In conda environment
torch was installed with:
```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html```
the datasets package was installed with:
```
/env/bin/pip install datasets==1.8.0
```
While running the code I get this error:
```
[6]<stderr>: File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
[6]<stderr>: return super().find_class(mod_name, name)
[6]<stderr>:ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package
```
Precisely, this error appears when calling `torch.load('data_file.pt')`:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package
```
Why am I getting this error?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3920/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3919/comments | https://api.github.com/repos/huggingface/datasets/issues/3919/events | https://github.com/huggingface/datasets/issues/3919 | 1,169,497,210 | I_kwDODunzps5FtRx6 | 3,919 | AttributeError: 'DatasetDict' object has no attribute 'features' | {
"login": "jswapnil10",
"id": 48145785,
"node_id": "MDQ6VXNlcjQ4MTQ1Nzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/48145785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jswapnil10",
"html_url": "https://github.com/jswapnil10",
"followers_url": "https://api.github.com/users/jswapnil10/followers",
"following_url": "https://api.github.com/users/jswapnil10/following{/other_user}",
"gists_url": "https://api.github.com/users/jswapnil10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jswapnil10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jswapnil10/subscriptions",
"organizations_url": "https://api.github.com/users/jswapnil10/orgs",
"repos_url": "https://api.github.com/users/jswapnil10/repos",
"events_url": "https://api.github.com/users/jswapnil10/events{/privacy}",
"received_events_url": "https://api.github.com/users/jswapnil10/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"You are likely trying to get the `features` from a `DatasetDict`, a dictionary containing `Datasets`. You probably first want to index into a particular split from your `DatasetDict` i.e. `dataset['train'].features`. \r\n\r\nFor example \r\n\r\n```python \r\nds = load_dataset('mnist')\r\nds.features\r\n```\r\nReturns \r\n```python\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n[<ipython-input-39-791c1f9df6c2>](https://localhost:8080/#) in <module>()\r\n----> 1 ds.features\r\n\r\nAttributeError: 'DatasetDict' object has no attribute 'features'\r\n```\r\n\r\nIf we look at the dataset variable, we see it is a `DatasetDict`:\r\n\r\n```python \r\nprint(ds)\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 60000\r\n })\r\n test: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 10000\r\n })\r\n})\r\n```\r\n\r\nWe can grab the features from a split by indexing into `train`:\r\n```python\r\nds['train'].features\r\n{'image': Image(decode=True, id=None),\r\n 'label': ClassLabel(num_classes=10, names=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], id=None)}\r\n```\r\n\r\nHope that helps ",
"Yes, Thanks for that clarification,"
] | 1,647,341,219,000 | 1,647,490,574,000 | 1,647,490,574,000 | NONE | null | null | null | ## Describe the bug
Receiving the error when trying to check for Dataset features
## Steps to reproduce the bug
```python
from datasets import Dataset
dataset = Dataset.from_pandas(df[['id', 'words', 'bboxes', 'ner_tags', 'image_path']])
dataset.features
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Getting the following error:
```
AttributeError: 'DatasetDict' object has no attribute 'features'
```
## Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 1.18.4
- Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.13
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3919/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3918/comments | https://api.github.com/repos/huggingface/datasets/issues/3918/events | https://github.com/huggingface/datasets/issues/3918 | 1,169,366,117 | I_kwDODunzps5Fsxxl | 3,918 | datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files | {
"login": "willowdong",
"id": 51409295,
"node_id": "MDQ6VXNlcjUxNDA5Mjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/51409295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/willowdong",
"html_url": "https://github.com/willowdong",
"followers_url": "https://api.github.com/users/willowdong/followers",
"following_url": "https://api.github.com/users/willowdong/following{/other_user}",
"gists_url": "https://api.github.com/users/willowdong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/willowdong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/willowdong/subscriptions",
"organizations_url": "https://api.github.com/users/willowdong/orgs",
"repos_url": "https://api.github.com/users/willowdong/repos",
"events_url": "https://api.github.com/users/willowdong/events{/privacy}",
"received_events_url": "https://api.github.com/users/willowdong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | null | [] | null | [
"Hi @willowdong! These issues were fixed on master. We will have a new release of `datasets` later today. In the meantime, you can avoid these issues by installing `datasets` from master as follows:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets.git\r\n```",
"You should force redownload:\r\n```python\r\ndataset = load_dataset(\"multi_news\", download_mode=\"force_redownload\")\r\ndataset_2 = load_dataset(\"reddit_tifu\", \"long\", download_mode=\"force_redownload\")",
"Fixed by:\r\n- #3787 \r\n- #3843"
] | 1,647,334,425,000 | 1,647,445,018,000 | 1,647,352,885,000 | NONE | null | null | null | ## Describe the bug
Can't load the dataset
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
dataset = load_dataset('multi_news')
dataset_2 = load_dataset("reddit_tifu", "long")
```
## Actual results
```
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF']
```
## Environment info
- `datasets` version: 1.18.4
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.0
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3918/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3918/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3917/comments | https://api.github.com/repos/huggingface/datasets/issues/3917/events | https://github.com/huggingface/datasets/pull/3917 | 1,168,906,154 | PR_kwDODunzps40bGZA | 3,917 | Create README.md | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3917). All of your documentation changes will be reflected on that endpoint."
] | 1,647,292,090,000 | 1,647,539,139,000 | 1,647,539,139,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3917",
"html_url": "https://github.com/huggingface/datasets/pull/3917",
"diff_url": "https://github.com/huggingface/datasets/pull/3917.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3917.patch",
"merged_at": 1647539139000
} | This follows the same structure as the GLUE metric card, hope that works for everyone :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3917/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3916/comments | https://api.github.com/repos/huggingface/datasets/issues/3916/events | https://github.com/huggingface/datasets/pull/3916 | 1,168,869,191 | PR_kwDODunzps40a-cR | 3,916 | Create README.md for GLUE | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3916). All of your documentation changes will be reflected on that endpoint."
] | 1,647,289,642,000 | 1,647,364,017,000 | 1,647,364,016,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3916",
"html_url": "https://github.com/huggingface/datasets/pull/3916",
"diff_url": "https://github.com/huggingface/datasets/pull/3916.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3916.patch",
"merged_at": 1647364016000
} | I still have some hesitation regarding the format of the inputs -- whether it's a list or a list of lists? -- hopefully @lhoestq will be able to clarify (a minimal usage sketch is included just below).
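For context, here is a minimal sketch of how I currently read the expected input format for one config (`mrpc`): flat lists of predictions and references rather than lists of lists (the values below are made up):
```python
from datasets import load_metric

glue_metric = load_metric("glue", "mrpc")
results = glue_metric.compute(predictions=[0, 1], references=[0, 1])
print(results)  # something like {'accuracy': 1.0, 'f1': 1.0}
```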
Also tagging @yjernite for the Limitations section. Happy to hear your thoughts! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3916/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3915/comments | https://api.github.com/repos/huggingface/datasets/issues/3915/events | https://github.com/huggingface/datasets/pull/3915 | 1,168,848,101 | PR_kwDODunzps40a54e | 3,915 | Metric card template | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks like a great start! I have a general comment and a few specific comments.\r\n\r\nMy general comment is I wonder if we need a post for this template and the data and model card templates (or a combined one?) explaining why this documentation is needed and how it serves both the writer and the audience.\r\n\r\nSpecific comments:\r\n- Maybe we can add some more desiderata to the overview instructions like: what task was the metric originally developed for, what tasks is it used for now, what is the range of possible outputs?\r\n- In the data card, we call the data instances inputs `fields`. It might be good to synchronize on that across the templates and change `input_name` to `input_field`? Also are the instructions for the `input_name` complete? It ends with 'In the *' and I'm not sure what that refers to.\r\n- 'Values' seems ambiguous to me, maybe 'scores' would be more explicit? Also could add a request for the range of possible outputs.\r\n- We could add a reference in the examples section to the overview section if that's where further explanation should go. Suggestion to add: 'Provide a range of examples that show both typical and atypical results' or something similar.\r\n- I'm not sure if we'd want to add this to the example section or make a new section, but it would be good to prompt somewhere for links to specific use cases in HF\r\n- In the limitations and bias section, add 'with links'\r\n",
"Looks like a great start! I have a general comment and a few specific comments.\r\n\r\nMy general comment is I wonder if we need a post for this template and the data and model card templates (or a combined one?) explaining why this documentation is needed and how it serves both the writer and the audience.\r\n\r\nSpecific comments:\r\n- Maybe we can add some more desiderata to the overview instructions like: what task was the metric originally developed for, what tasks is it used for now, what is the range of possible outputs?\r\n- In the data card, we call the data instances `fields`. It might be good to synchronize on that across the templates and change `input_name` to `input_field`? Also are the instructions for the `input_name` complete? It ends with 'In the *' and I'm not sure what that refers to.\r\n- 'Values' seems ambiguous to me, maybe 'scores' would be more explicit? Also could add a request for the range of possible outputs.\r\n- We could add a reference to the examples section to the overview section if that's where further explanation should go. Suggestion to add: 'Provide a range of examples that show both typical and atypical results' or something similar.\r\n- I'm not sure if we'd want to add this to the example section or make a new section, but it would be good to prompt somewhere for links to specific use cases in HF\r\n- In the limitations and bias section, add 'with links'\r\n",
"Thanks for your feedback, @mcmillanmajora ! I totally agree that we should write a post -- we were going to write one up when we are done with a good chunk of the metric cards, but we can also do that earlier :smile: \r\n\r\nWith regards to your more specific comments:\r\n\r\n- It is our intention to put what the metric was developed for (whether it is a specific task or dataset, for example). You can see the [WER](https://github.com/huggingface/datasets/tree/master/metrics/wer) metric card for that.\r\n- `input_field` works for me!\r\n- the values aren't always scores, it's more like the values the metric can take. And it does include the range of possible values, including the max and min, that are outputted.\r\n- I like the suggestion to add: 'Provide a range of examples that show both typical and atypical results' :hugs: \r\n- I have been putting specific use cases in 'Further references', just because there isn't always something to put there, especially for less popular metrics",
"Oh cool! I was just looking at the template, it definitely helps seeing an example metric card. Based on just the instructions, I had assumed that examples meant research papers where the metric was used to evaluate a model, but I like the explicit coding examples! ",
"Oh cool! I was just looking at the template, it definitely helps seeing an example metric card. Based on just the instructions, I had assumed that examples meant research papers where the metric was used to evaluate a model, but I like the explicit coding examples! "
] | 1,647,288,428,000 | 1,651,661,049,000 | 1,651,660,626,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3915",
"html_url": "https://github.com/huggingface/datasets/pull/3915",
"diff_url": "https://github.com/huggingface/datasets/pull/3915.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3915.patch",
"merged_at": 1651660626000
} | Adding a metric card template, based on ideas and edits from @sashavor and me, as well as on comments from @lhoestq and others (thank you!).
All feedback is welcome, but I am especially curious about feedback on:
- things that should be included but aren't
- things that are included but should be changed or removed
- the instructions I included, and whether they should be added to, clarified, or deleted altogether | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3915/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3914/comments | https://api.github.com/repos/huggingface/datasets/issues/3914/events | https://github.com/huggingface/datasets/pull/3914 | 1,168,777,880 | PR_kwDODunzps40aq2r | 3,914 | Use templates for doc-building jobs | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3914). All of your documentation changes will be reflected on that endpoint.",
"You can ignore the CI failures btw, they're unrelated to this PR"
] | 1,647,283,986,000 | 1,647,529,379,000 | 1,647,529,378,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3914",
"html_url": "https://github.com/huggingface/datasets/pull/3914",
"diff_url": "https://github.com/huggingface/datasets/pull/3914.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3914.patch",
"merged_at": 1647529378000
} | This PR updates the jobs for all doc-building related things by using the templates introduced in `doc-builder`. By defining those once there, we make sure every repo gets the latest fixes to the doc-building GitHub actions :-)
Note: all libraries must share the same docker image for those doc-building jobs. For now, the one used (`huggingface/transformers-doc-builder`) contains all the extra steps of the `datasets` install needed for doc building (mainly libsndfile), but if in the future some additional steps are necessary on top of `pip install -e .[dev]`, this docker image will need to be updated with the extra deps.
"url": "https://api.github.com/repos/huggingface/datasets/issues/3914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3914/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3913/comments | https://api.github.com/repos/huggingface/datasets/issues/3913/events | https://github.com/huggingface/datasets/pull/3913 | 1,168,723,950 | PR_kwDODunzps40afYJ | 3,913 | Deterministic split order in DatasetDict.map | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3913). All of your documentation changes will be reflected on that endpoint.",
"I'm surprised this is needed because the order of the `dict` keys is deterministic as of Python 3.6 (documented in 3.7). Is there a reproducer for this behavior? I wouldn't make this change unless it's absolutely needed because `sorted` modifies the initial order of the keys.",
"Indeed this doesn't fix the issue apparently. Actually this is probably because the tokenizer used to process the second split is in a state that has been modified by the first split.\r\n\r\nTherefore after reloading the first split from the cache, then the second split can't be reloaded since the tokenizer hasn't seen the first split (and therefore is considered a different tokenizer)."
] | 1,647,280,717,000 | 1,647,341,115,000 | 1,647,341,115,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3913",
"html_url": "https://github.com/huggingface/datasets/pull/3913",
"diff_url": "https://github.com/huggingface/datasets/pull/3913.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3913.patch",
"merged_at": null
} | The order in which the splits are processed by `map` is not deterministic in DatasetDict.map. This can cause caching issues when the processing function is stateful and sensitive to the order in which examples are processed (see the sketch below).
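To illustrate with a hypothetical example: when the mapping function keeps state across calls, the output for a given split depends on which splits were processed before it, so a non-deterministic split order breaks caching:
```python
from datasets import load_dataset

seen_words = set()  # state shared across splits

def count_new_words(example):
    words = example["text"].split()
    new_words = [w for w in words if w not in seen_words]
    seen_words.update(words)
    return {"num_new_words": len(new_words)}

dsets = load_dataset("imdb")  # any DatasetDict with a "text" column
# If "test" is sometimes mapped before "train", its "num_new_words" values
# (and hence its cache fingerprint) change from one run to the next.
dsets = dsets.map(count_new_words)
```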
Close https://github.com/huggingface/datasets/issues/3847 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3913/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3912 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3912/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3912/comments | https://api.github.com/repos/huggingface/datasets/issues/3912/events | https://github.com/huggingface/datasets/pull/3912 | 1,168,720,098 | PR_kwDODunzps40aekr | 3,912 | add draft of registering function for pandas | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3912). All of your documentation changes will be reflected on that endpoint.",
"That's cool ! Though I would expect such an integration to only require `huggingface_hub`, not the full `datasets` library. \r\n Indeed if users want to use the `datasets` lib they could just to `Dataset.from_pandas(df).push_to_hub()` already. Therefore I would explore something that doesn't not necessarily requires `datasets`.\r\n\r\nFor other could storage solutions (S3, GCS, etc.), pandas allows users to pass URIs like `s3://bucket-name/path/data.csv` to the `read_xxx` and `to_xxx` (for csv, parquet, json, etc). It also support passing the **root directory** like `s3://bucket-name/dataset-dir` instead of a single file name.\r\n\r\nIn the Hugging Face Hub case, we have one dataset = one repository. We can enter pandas' paradigm by saying one dataset = one repository = one root directory. Here is what we could have:\r\n\r\n### push to Hub:\r\n```python\r\n\"\"\"\r\nDemo script for writing a pandas data frame to a CSV file on HF using fsspec-supported pandas APIs\r\n\"\"\"\r\nimport pandas as pd\r\n\r\nHF_USER = os.getenv(\"HF_USER\")\r\nHF_TOKEN = os.getenv(\"HF_TOKEN\")\r\n\r\nbooks_df = pd.DataFrame(\r\n data={\"Title\": [\"Book I\", \"Book II\", \"Book III\"], \"Price\": [56.6, 59.87, 74.54]},\r\n columns=[\"Title\", \"Price\"],\r\n)\r\n\r\ndataset_name = \"books1\"\r\n\r\nbooks_df.to_csv(\r\n f\"hf://{HF_USER}/{dataset_name}\",\r\n index=False,\r\n storage_options={\r\n \"repo_type\": \"dataset\",\r\n \"token\": HF_TOKEN,\r\n },\r\n)\r\n\r\n```\r\n\r\n### load from Hub:\r\n```python\r\n\"\"\"\r\nDemo script for reading a CSV file from HF into a pandas data frame using fsspec-supported pandas\r\nAPIs\r\n\"\"\"\r\nimport pandas as pd\r\n\r\nHF_USER = os.getenv(\"HF_USER\")\r\nHF_TOKEN = os.getenv(\"HF_TOKEN\")\r\n\r\ndataset_name = \"books1\"\r\n\r\nbooks_df = pd.read_csv(\r\n f\"hf://{HF_USER}/{dataset_name}\",\r\n storage_options={\r\n \"repo_type\": \"dataset\",\r\n \"token\": HF_TOKEN,\r\n },\r\n)\r\n\r\nprint(books_df)\r\n```\r\n\r\nAnd you could do the same with Parquet data using `read/to_parquet` or other formats. Formats like CSV, Parquet or JSON Lines would work out of the box with `datasets`. This API would also allow anyone to use Dask with the Hugging Face Hub for example.\r\n\r\nWhat do you think ?"
] | 1,647,280,469,000 | 1,647,877,299,000 | null | MEMBER | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3912",
"html_url": "https://github.com/huggingface/datasets/pull/3912",
"diff_url": "https://github.com/huggingface/datasets/pull/3912.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3912.patch",
"merged_at": null
} | This PR adds a register function for `pandas`. It allows users to push `DataFrame` objects directly to the Hub and, conversely, to load datasets from the Hub into a `DataFrame`. The motivation for this integration is to make it easy for the vast number of `pandas` users to push `DataFrame`s to the Hub.
Here is an example:
```python
import pandas as pd
from datasets import register_pandas
register_pandas()
# push to hub
df = pd.DataFrame.from_dict({"test": [1,2,3]})
df.push_to_hub("my_test")
# load from hub
df_retrieved = pd.DataFrame.load_from_hub("lvwerra/my_test")
```
It follows a similar philosophy as the `tqdm` [integration](https://github.com/tqdm/tqdm#pandas-integration). Also see [this issue](https://github.com/pandas-dev/pandas/issues/46000) on the `pandas` repository.
This is just a rough draft of what such an integration could look like, but I would appreciate some feedback on this: is this something you would like to add to the library, and is this the way to go? cc @lhoestq @albertvillanova @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3912/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3911 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3911/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3911/comments | https://api.github.com/repos/huggingface/datasets/issues/3911/events | https://github.com/huggingface/datasets/pull/3911 | 1,168,652,374 | PR_kwDODunzps40aQHz | 3,911 | Create README.md for CER metric | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,276,891,000 | 1,647,539,380,000 | 1,647,539,154,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3911",
"html_url": "https://github.com/huggingface/datasets/pull/3911",
"diff_url": "https://github.com/huggingface/datasets/pull/3911.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3911.patch",
"merged_at": 1647539154000
} | Initial proposal for a CER metric card
cc @patrickvonplaten - wdyt this time around? :smile: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3911/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3910/comments | https://api.github.com/repos/huggingface/datasets/issues/3910/events | https://github.com/huggingface/datasets/pull/3910 | 1,168,579,694 | PR_kwDODunzps40aAiX | 3,910 | Fix text loader to split only on universal newlines | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3910). All of your documentation changes will be reflected on that endpoint.",
"Looks like the test needs to be updated for windows ^^'",
"I don't think this is the same issue as in https://github.com/oscar-corpus/corpus/issues/18, where the OSCAR metadata has line offsets that use only `\\n` as the newline marker to count lines, not `\\r\\n` or `\\r`.\r\n\r\nIt looks like the OSCAR data loader is opening the data files with `gzip.open` directly and I don't think this text loader is used, but I'm not familiar with a lot of `datasets` internals so I could be mistaken?",
"You are right @adrianeboyd.\r\n\r\nThis PR fixes #3729.\r\n\r\nAdditionally, this PR is somehow related to the OSCAR issue. However, the OSCAR issue have multiple root causes: one is the offset initialization (as you pointed out); other is similar to this case: Unicode newlines are not properly handled.\r\n\r\nI will make a change proposal for OSCAR this afternoon.",
"@lhoestq I'm working on fixing the Windows tests on my Windows machine...",
"I finally changed the approach in order to avoid having \"\\r\\n\" and \"\\r\" line breaks in Python `str` read from files on Windows/old Macintosh machines."
] | 1,647,273,298,000 | 1,647,360,971,000 | 1,647,360,969,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3910",
"html_url": "https://github.com/huggingface/datasets/pull/3910",
"diff_url": "https://github.com/huggingface/datasets/pull/3910.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3910.patch",
"merged_at": 1647360969000
} | Currently, the `text` loader splits lines on a superset of universal newlines, which also includes Unicode line boundaries. See: https://docs.python.org/3/library/stdtypes.html#str.splitlines
However, the expected behavior is to get the lines split only on universal newlines: "\n", "\r\n" and "\r".
See: oscar-corpus/corpus#18
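To make the difference concrete, here is a small illustrative snippet (not part of the loader itself) contrasting `str.splitlines()`, which splits on the full set of Unicode line boundaries, with splitting only on universal newlines:
```python
import re

text = "first\u2028still first\nsecond\r\nthird"

# str.splitlines() also splits on the U+2028 LINE SEPARATOR, producing an extra line.
print(text.splitlines())
# ['first', 'still first', 'second', 'third']

# Splitting only on universal newlines ("\r\n", "\r", "\n") keeps U+2028 inside the line.
print(re.split(r"\r\n|\r|\n", text))
# ['first\u2028still first', 'second', 'third']
```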
Fix #3729. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3910/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3909 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3909/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3909/comments | https://api.github.com/repos/huggingface/datasets/issues/3909/events | https://github.com/huggingface/datasets/issues/3909 | 1,168,578,058 | I_kwDODunzps5FpxYK | 3,909 | Error loading file audio when downloading the Common Voice dataset directly from the Hub | {
"login": "aliceinland",
"id": 30385910,
"node_id": "MDQ6VXNlcjMwMzg1OTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/30385910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aliceinland",
"html_url": "https://github.com/aliceinland",
"followers_url": "https://api.github.com/users/aliceinland/followers",
"following_url": "https://api.github.com/users/aliceinland/following{/other_user}",
"gists_url": "https://api.github.com/users/aliceinland/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aliceinland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aliceinland/subscriptions",
"organizations_url": "https://api.github.com/users/aliceinland/orgs",
"repos_url": "https://api.github.com/users/aliceinland/repos",
"events_url": "https://api.github.com/users/aliceinland/events{/privacy}",
"received_events_url": "https://api.github.com/users/aliceinland/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! It could an issue with torchaudio, which version of torchaudio are you using ? Can you also try updating `datasets` to 2.0.0 and see if it works ?",
"I _might_ have a similar issue. I'm trying to use the librispeech_asr dataset and read it with soundfile.\r\n\r\n```python\r\nfrom datasets import load_dataset, load_metric\r\nfrom transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor\r\nimport soundfile as sf\r\n\r\nlibrispeech_eval = load_dataset(\"librispeech_asr\", \"clean\", split=\"test\") # change to \"other\" for other test dataset\r\nwer = load_metric(\"wer\")\r\n\r\nmodel = Speech2TextForConditionalGeneration.from_pretrained(\"facebook/s2t-small-librispeech-asr\").to(\"cuda\")\r\nprocessor = Speech2TextProcessor.from_pretrained(\"facebook/s2t-small-librispeech-asr\", do_upper_case=True)\r\n\r\ndef map_to_array(batch):\r\n speech, _ = sf.read(batch[\"file\"])\r\n batch[\"speech\"] = speech\r\n return batch\r\n\r\nlibrispeech_eval = librispeech_eval.map(map_to_array)\r\n\r\ndef map_to_pred(batch):\r\n features = processor(batch[\"speech\"], sampling_rate=16000, padding=True, return_tensors=\"pt\")\r\n input_features = features.input_features.to(\"cuda\")\r\n attention_mask = features.attention_mask.to(\"cuda\")\r\n\r\n gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)\r\n batch[\"transcription\"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)\r\n return batch\r\n\r\nresult = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=[\"speech\"])\r\n\r\nprint(\"WER:\", wer(predictions=result[\"transcription\"], references=result[\"text\"]))\r\n```\r\n\r\nThe code is taken directly from \"https://huggingface.co/facebook/s2t-small-librispeech-asr\".\r\n\r\nThe short error code is \"RuntimeError: Error opening '6930-75918-0000.flac': System error.\" (it can't find the first file), and I agree, I can't find the file either. 
The dataset has downloaded correctly (it says), but on the location, there are only \".arrow\" files, no \".flac\" files.\r\n\r\n**Error message:**\r\n\r\n```python\r\nRuntimeError Traceback (most recent call last)\r\nInput In [15], in <cell line: 16>()\r\n 13 batch[\"speech\"] = speech\r\n 14 return batch\r\n---> 16 librispeech_eval = librispeech_eval.map(map_to_array)\r\n 18 def map_to_pred(batch):\r\n 19 features = processor(batch[\"speech\"], sampling_rate=16000, padding=True, return_tensors=\"pt\")\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:1953, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)\r\n 1950 disable_tqdm = not logging.is_progress_bar_enabled()\r\n 1952 if num_proc is None or num_proc == 1:\r\n-> 1953 return self._map_single(\r\n 1954 function=function,\r\n 1955 with_indices=with_indices,\r\n 1956 with_rank=with_rank,\r\n 1957 input_columns=input_columns,\r\n 1958 batched=batched,\r\n 1959 batch_size=batch_size,\r\n 1960 drop_last_batch=drop_last_batch,\r\n 1961 remove_columns=remove_columns,\r\n 1962 keep_in_memory=keep_in_memory,\r\n 1963 load_from_cache_file=load_from_cache_file,\r\n 1964 cache_file_name=cache_file_name,\r\n 1965 writer_batch_size=writer_batch_size,\r\n 1966 features=features,\r\n 1967 disable_nullable=disable_nullable,\r\n 1968 fn_kwargs=fn_kwargs,\r\n 1969 new_fingerprint=new_fingerprint,\r\n 1970 disable_tqdm=disable_tqdm,\r\n 1971 desc=desc,\r\n 1972 )\r\n 1973 else:\r\n 1975 def format_cache_file_name(cache_file_name, rank):\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:519, in transmit_tasks.<locals>.wrapper(*args, **kwargs)\r\n 517 self: \"Dataset\" = kwargs.pop(\"self\")\r\n 518 # apply actual function\r\n--> 519 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 520 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 521 for dataset in datasets:\r\n 522 # Remove task templates if a column mapping of the template is no longer valid\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:486, in transmit_format.<locals>.wrapper(*args, **kwargs)\r\n 479 self_format = {\r\n 480 \"type\": self._format_type,\r\n 481 \"format_kwargs\": self._format_kwargs,\r\n 482 \"columns\": self._format_columns,\r\n 483 \"output_all_columns\": self._output_all_columns,\r\n 484 }\r\n 485 # apply actual function\r\n--> 486 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 487 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 488 # re-apply format to the output\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\fingerprint.py:458, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)\r\n 452 kwargs[fingerprint_name] = update_fingerprint(\r\n 453 self._fingerprint, transform, kwargs_for_fingerprint\r\n 454 )\r\n 456 # Call actual function\r\n--> 458 out = func(self, *args, **kwargs)\r\n 460 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n 462 if inplace: # update after calling func so that the fingerprint doesn't change if the 
function fails\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:2318, in Dataset._map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)\r\n 2316 if not batched:\r\n 2317 for i, example in enumerate(pbar):\r\n-> 2318 example = apply_function_on_filtered_inputs(example, i, offset=offset)\r\n 2319 if update_data:\r\n 2320 if i == 0:\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:2218, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)\r\n 2216 if with_rank:\r\n 2217 additional_args += (rank,)\r\n-> 2218 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)\r\n 2219 if update_data is None:\r\n 2220 # Check if the function returns updated examples\r\n 2221 update_data = isinstance(processed_inputs, (Mapping, pa.Table))\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:1913, in Dataset.map.<locals>.decorate.<locals>.decorated(item, *args, **kwargs)\r\n 1909 decorated_item = (\r\n 1910 Example(item, features=self.features) if not batched else Batch(item, features=self.features)\r\n 1911 )\r\n 1912 # Use the LazyDict internally, while mapping the function\r\n-> 1913 result = f(decorated_item, *args, **kwargs)\r\n 1914 # Return a standard dict\r\n 1915 return result.data if isinstance(result, LazyDict) else result\r\n\r\nInput In [15], in map_to_array(batch)\r\n 11 def map_to_array(batch):\r\n---> 12 speech, _ = sf.read(batch[\"file\"])\r\n 13 batch[\"speech\"] = speech\r\n 14 return batch\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:256, in read(file, frames, start, stop, dtype, always_2d, fill_value, out, samplerate, channels, format, subtype, endian, closefd)\r\n 170 def read(file, frames=-1, start=0, stop=None, dtype='float64', always_2d=False,\r\n 171 fill_value=None, out=None, samplerate=None, channels=None,\r\n 172 format=None, subtype=None, endian=None, closefd=True):\r\n 173 \"\"\"Provide audio data from a sound file as NumPy array.\r\n 174 \r\n 175 By default, the whole file is read from the beginning, but the\r\n (...)\r\n 254 \r\n 255 \"\"\"\r\n--> 256 with SoundFile(file, 'r', samplerate, channels,\r\n 257 subtype, endian, format, closefd) as f:\r\n 258 frames = f._prepare_read(start, stop, frames)\r\n 259 data = f.read(frames, dtype, always_2d, fill_value, out)\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)\r\n 626 self._mode = mode\r\n 627 self._info = _create_info_struct(file, mode, samplerate, channels,\r\n 628 format, subtype, endian)\r\n--> 629 self._file = self._open(file, mode_int, closefd)\r\n 630 if set(mode).issuperset('r+') and self.seekable():\r\n 631 # Move write position to 0 (like in Python file objects)\r\n 632 self.seek(0)\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:1183, in SoundFile._open(self, file, mode_int, closefd)\r\n 1181 else:\r\n 1182 raise TypeError(\"Invalid file: {0!r}\".format(self.name))\r\n-> 1183 
_error_check(_snd.sf_error(file_ptr),\r\n 1184 \"Error opening {0!r}: \".format(self.name))\r\n 1185 if mode_int == _snd.SFM_WRITE:\r\n 1186 # Due to a bug in libsndfile version <= 1.0.25, frames != 0\r\n 1187 # when opening a named pipe in SFM_WRITE mode.\r\n 1188 # See http://github.com/erikd/libsndfile/issues/77.\r\n 1189 self._info.frames = 0\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:1357, in _error_check(err, prefix)\r\n 1355 if err != 0:\r\n 1356 err_str = _snd.sf_error_number(err)\r\n-> 1357 raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))\r\n\r\nRuntimeError: Error opening '6930-75918-0000.flac': System error.\r\n```\r\n\r\n**Package versions:**\r\n```python\r\npython: 3.9\r\ntransformers: 4.17.0\r\ndatasets: 2.0.0\r\nSoundFile: 0.10.3.post1\r\n```\r\n",
"Hi ! In `datasets` 2.0 can access the audio array with `librispeech_eval[0][\"audio\"][\"array\"]` already, no need to use `map_to_array`. See our documentation on [how to process audio data](https://huggingface.co/docs/datasets/audio_process) :)\r\n\r\ncc @patrickvonplaten we will need to update the readme at [facebook/s2t-small-librispeech-asr](https://huggingface.co/facebook/s2t-small-librispeech-asr) as well as https://huggingface.co/docs/transformers/model_doc/speech_to_text",
"Thanks!\r\n\r\nAnd sorry for posting this problem in what turned on to be an unrelated thread.\r\n\r\nI rewrote the code, and the model works. The WER is 0.137 however, so I'm not sure if I have missed a step. I will look further into that at a later point. The transcriptions look good through manual inspection.\r\n\r\nThe rewritten code:\r\n```python\r\nfrom datasets import load_dataset, load_metric\r\nfrom transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor, Wav2Vec2Processor\r\n\r\nlibrispeech_eval = load_dataset(\"librispeech_asr\", \"clean\", split=\"test\") # change to \"other\" for other test dataset\r\nwer = load_metric(\"wer\")\r\n\r\nmodel = Speech2TextForConditionalGeneration.from_pretrained(\"facebook/s2t-small-librispeech-asr\").to(\"cuda\")\r\nprocessor = Speech2TextProcessor.from_pretrained(\"facebook/s2t-small-librispeech-asr\", do_upper_case=True)\r\n\r\ndef map_to_pred(batch):\r\n audio = batch[\"audio\"]\r\n features = processor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"], padding=True, return_tensors=\"pt\")\r\n input_features = features.input_features.to(\"cuda\")\r\n attention_mask = features.attention_mask.to(\"cuda\")\r\n\r\n gen_tokens = model.generate(input_features=input_features, attention_mask=attention_mask)\r\n batch[\"transcription\"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)\r\n return batch\r\n\r\nresult = librispeech_eval.map(map_to_pred)#, batched=True, batch_size=8)\r\n\r\nprint(\"WER:\", wer.compute(predictions=result[\"transcription\"], references=result[\"text\"]))\r\n```",
"I think the issue comes from the fact that you set `batched=False` while `map_to_pred` still returns a list of strings for \"transcription\". You can fix it by adding `[0]` at the end of this line to get the string:\r\n```python\r\nbatch[\"transcription\"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)[0]\r\n```",
"Updating as many model cards now as I can find",
"https://github.com/huggingface/transformers/pull/16611"
] | 1,647,273,230,000 | 1,649,176,618,000 | null | NONE | null | null | null | ## Describe the bug
When loading the Common_Voice dataset by downloading it directly from the Hugging Face Hub, some files cannot be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
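As a side note, a sketch of how the audio could be decoded and resampled through the `Audio` feature of `datasets` (assuming version 1.18 or newer), instead of loading each `path` with torchaudio:
```python
from datasets import Audio, load_dataset

test_dataset = load_dataset("common_voice", "it", split="test")

# Decode the mp3 files and resample to 16 kHz on access, without touching the paths.
test_dataset = test_dataset.cast_column("audio", Audio(sampling_rate=16_000))

sample = test_dataset[0]["audio"]
print(sample["array"].shape, sample["sampling_rate"])
```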
## Expected results
The Common Voice dataset is downloaded and loaded correctly with the Hugging Face datasets library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3
```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3909/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3908 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3908/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3908/comments | https://api.github.com/repos/huggingface/datasets/issues/3908/events | https://github.com/huggingface/datasets/pull/3908 | 1,168,576,963 | PR_kwDODunzps40Z_9F | 3,908 | Update README.md for SQuAD v2 metric | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3908). All of your documentation changes will be reflected on that endpoint."
] | 1,647,273,190,000 | 1,647,363,851,000 | 1,647,363,851,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3908",
"html_url": "https://github.com/huggingface/datasets/pull/3908",
"diff_url": "https://github.com/huggingface/datasets/pull/3908.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3908.patch",
"merged_at": 1647363850000
} | Putting "Values from popular papers" as a subsection of "Output values" | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3908/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3907/comments | https://api.github.com/repos/huggingface/datasets/issues/3907/events | https://github.com/huggingface/datasets/pull/3907 | 1,168,575,998 | PR_kwDODunzps40Z_vd | 3,907 | Update README.md for SQuAD metric | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3907). All of your documentation changes will be reflected on that endpoint."
] | 1,647,273,151,000 | 1,647,363,860,000 | 1,647,363,859,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3907",
"html_url": "https://github.com/huggingface/datasets/pull/3907",
"diff_url": "https://github.com/huggingface/datasets/pull/3907.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3907.patch",
"merged_at": 1647363859000
} | Putting "Values from popular papers" as a subsection of "Output values" | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3907/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3906/comments | https://api.github.com/repos/huggingface/datasets/issues/3906/events | https://github.com/huggingface/datasets/issues/3906 | 1,168,496,328 | I_kwDODunzps5FpdbI | 3,906 | NonMatchingChecksumError on Spider dataset | {
"login": "kolk",
"id": 9049591,
"node_id": "MDQ6VXNlcjkwNDk1OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9049591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kolk",
"html_url": "https://github.com/kolk",
"followers_url": "https://api.github.com/users/kolk/followers",
"following_url": "https://api.github.com/users/kolk/following{/other_user}",
"gists_url": "https://api.github.com/users/kolk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kolk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolk/subscriptions",
"organizations_url": "https://api.github.com/users/kolk/orgs",
"repos_url": "https://api.github.com/users/kolk/repos",
"events_url": "https://api.github.com/users/kolk/events{/privacy}",
"received_events_url": "https://api.github.com/users/kolk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @kolk, thanks for reporting.\r\n\r\nIndeed, Google Drive service recently changed their service and we had to add a fix to our library to cope with that change:\r\n- #3787 \r\n\r\nWe just made patch release last week: 1.18.4 https://github.com/huggingface/datasets/releases/tag/1.18.4\r\n\r\nPlease, feel free to update your local `datasets` version, so that you get the fix:\r\n```shell\r\npip install -U datasets\r\n```"
] | 1,647,269,693,000 | 1,647,328,191,000 | 1,647,328,191,000 | NONE | null | null | null | ## Describe the bug
Failure to generate the ```spider``` dataset because of a checksum error for the dataset source files.
## Steps to reproduce the bug
```
from datasets import load_dataset
spider = load_dataset("spider")
```
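A sketch of the workaround suggested in the thread comments (upgrade `datasets` to pick up the Google Drive fix released in 1.18.4, then force a fresh download); the exact call is an assumption for illustration:
```python
# First: pip install -U "datasets>=1.18.4"
from datasets import load_dataset

# Force a re-download so the previously cached (bad) file is not reused.
spider = load_dataset("spider", download_mode="force_redownload")
```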
## Expected results
Checksums should match for files from url ['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0']
## Actual results
```
>>> load_dataset("spider")
load_dataset("spider")
Downloading and preparing dataset spider/spider (download: 95.12 MiB, generated: 5.17 MiB, post-processed: Unknown size, total: 100.29 MiB) to /home/user/.cache/huggingface/datasets/spider/spider/1.0.0/79778ebea87c59b19411f1eb3eda317e9dd5f7788a556d837ef25c3ae6e5e8b7...
Traceback (most recent call last):
File "/home/user/py3_env/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-d4cb54197348>", line 1, in <module>
load_dataset("spider")
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/builder.py", line 665, in _download_and_prepare
verify_checksums(
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0']
```
## Environment info
datasets version: 1.18.3
Platform: Ubuntu 20 LTS
Python version: 3.8.10
PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3906/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3905 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3905/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3905/comments | https://api.github.com/repos/huggingface/datasets/issues/3905/events | https://github.com/huggingface/datasets/pull/3905 | 1,168,320,568 | PR_kwDODunzps40ZJQJ | 3,905 | Perplexity Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3905). All of your documentation changes will be reflected on that endpoint.",
"I'm wondering if we should add that perplexity can be used for analyzing datasets as well",
"Otherwise, looks good! Good job, @emibaylor !"
] | 1,647,261,580,000 | 1,647,459,536,000 | 1,647,459,536,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3905",
"html_url": "https://github.com/huggingface/datasets/pull/3905",
"diff_url": "https://github.com/huggingface/datasets/pull/3905.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3905.patch",
"merged_at": 1647459536000
} | Add Perplexity metric card
Note that it is currently still missing the citation, but I plan to add it later today. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3905/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3904/comments | https://api.github.com/repos/huggingface/datasets/issues/3904/events | https://github.com/huggingface/datasets/issues/3904 | 1,167,730,095 | I_kwDODunzps5FmiWv | 3,904 | CONLL2003 Dataset not available | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @omarespejel.\r\n\r\nI'm sorry but I can't reproduce the issue: the loading of the dataset works perfecto for me and I can reach the data URL: https://data.deepai.org/conll2003.zip\r\n\r\nMight it be due to a temporary problem in the data owner site (https://data.deepai.org/) that is fixed now?\r\nCould you please try loading the dataset again and tell if the problem persists?",
"@omarespejel I'm closing this issue. Feel free to reopen it if the problem persists."
] | 1,647,215,175,000 | 1,647,505,292,000 | 1,647,505,292,000 | NONE | null | null | null | ## Describe the bug
[CONLL2003](https://huggingface.co/datasets/conll2003) Dataset can no longer reach 'https://data.deepai.org/conll2003.zip'
![image](https://user-images.githubusercontent.com/4755430/158084483-ff83631c-5154-4823-892d-577bf1166db0.png)
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("conll2003")
```
## Expected results
Download the conll2003 dataset.
## Actual results
Error: `ConnectionError: Couldn't reach https://data.deepai.org/conll2003.zip (error 502)`
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3904/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3903/comments | https://api.github.com/repos/huggingface/datasets/issues/3903/events | https://github.com/huggingface/datasets/pull/3903 | 1,167,521,627 | PR_kwDODunzps40WnkI | 3,903 | Add Biwi Kinect Head Pose dataset. | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3903). All of your documentation changes will be reflected on that endpoint.",
"Thanks for the detailed explanation of the structure!\r\n\r\n1. IMO it makes the most sense to yield one example for each person (so the total of 24 examples), so the features dict should be similar to this:\r\n \r\n ```python\r\n features = Features({\r\n \"rgb\": Sequence(Image()), # for the png frames\r\n \"rgb_cal\": {\"intrisic_mat\": Array2D(shape=(3, 3), dtype=\"float32\"), \"extrinsic_mat\": {\"rotation\": Array2D(shape=(3, 3), dtype=\"float32\"), \"translation\": Sequence(Value(\"float32\", length=3)}},\r\n \"depth\": Sequence(Value(\"string\")), # for the depth frames\r\n \"depth_cal\": the same as \"rgb_cal\",\r\n \"head_pose_gt\": Sequence({\"center\": Sequence(Value(\"float32\", length=3), \"rotation\": Array2D(shape=(3, 3), dtype=\"float32\")}),\r\n \"head_template\": Value(\"string\"), # for the person's obj file\r\n\r\n })\r\n ```\r\n We can add a \"Data Processing\" section to the card to explain how to parse the files.\r\n\r\n\r\n2. Yes, it's ok to parse the files as long as it doesn't take too much time/memory (e.g., it's ok to parse the `*_pose.txt` or `*.cal` files, but it's better to leave the `*_depth.bin` or `*.obj` files unprocessed and yield the paths to them)",
"Thanks for the suggestions @mariosasko, yielding one example for each person would make things much easier.\r\nOkay. I'll look at parsing the files and then displaying the information.",
"Added the following : \r\n- Features, I have included sequence_number and subject_id along with the features you had suggested.\r\n- Tested loading of the dataset along with dummy_data and full_data tests.\r\n- Created the dataset_infos.json file.\r\n\r\nTo-Do :\r\n- [x] Update Dataset Cards with more details.\r\n- [x] \"Data Processing\" section\r\n\r\nAny inputs on what to include in the \"Data Processing\" section ?\r\n",
"@mariosasko Please could you review this when you get time. Thank you.",
"In the Data Processing section, I've added example code for a compressed binary depth image file. Updated the Readme as well. ",
"@mariosasko / @lhoestq , Please could you review this when you get time. Thank you.",
"Created an issue here: https://github.com/huggingface/datasets/issues/4152",
"Got it. Thanks for the comments. I've collapsed the C++ code in the readme and added the suggestions.",
"Hi ! The `AttributeError ` bug has been fixed, feel free to merge `master` into your branch ;)"
] | 1,647,161,961,000 | 1,651,856,759,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3903",
"html_url": "https://github.com/huggingface/datasets/pull/3903",
"diff_url": "https://github.com/huggingface/datasets/pull/3903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3903.patch",
"merged_at": null
} | This PR adds the Biwi Kinect Head Pose dataset.
Dataset Request : Add Biwi Kinect Head Pose Database [#3822](https://github.com/huggingface/datasets/issues/3822)
The Biwi Kinect Head Pose Database is acquired with the Microsoft Kinect sensor, a structured IR light device. It contains 15K images of 20 people (6 females and 14 males), where 4 people were recorded twice.
For each frame, there is:
- a depth image (.bin file),
- a corresponding RGB image (both 640x480 pixels),
- an annotation (present inside a .txt file).
The ground truth is the 3D location of the head and its rotation.
The dataset structure is as follows :
```
- 01.obj
- 01
- frame_00003_depth.bin
- frame_00003_pose.txt
- frame_00003_rgb.png
.
.
.
- 02.obj
- 02
- frame_00003_depth.bin
- frame_00003_pose.txt
- frame_00003_rgb.png
.
.
.
```
Preview of frame_00003_pose.txt :
```
0.988397 0.0731349 0.133128
-0.0441539 0.976945 -0.208876
-0.145334 0.200575 0.968838
126.665 40.4515 876.198
```
I have used the following dataset features :
```
features=datasets.Features(
{
"person_id": datasets.Value("string"),
"frame_number": datasets.Value("string"),
"depth_image": datasets.Value("string"),
"rgb_image": datasets.Image(),
"3D_head_center": datasets.Array2D(shape=(3, 3), dtype="float"),
"3D_head_rotation": datasets.Value("float"),
}
```
I am giving the path to the depth_image here.
I need some inputs for the following :
1. For each person, the dataset has the following additional information :
```
For each sequence, the corresponding .obj file represents a head template deformed to match the neutral face of that specific person. [*.obj file]
In each folder, two .cal files contain calibration information for the depth and the color camera, e.g., the intrinsic camera matrix of the depth camera and the global rotation and translation to the rgb camera.
```
I wanted to know how we can represent these features.
2. For `_generate_examples`, do I parse the directories and fetch the required information? This would mean reading the .txt file to obtain the "3D_head_center" and "3D_head_rotation" details. Alternatively, we could precompute the feature information into a metadata file and use that metadata file to yield the information in `_generate_examples`. What would be the best approach for this?
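For reference, a minimal sketch of how a `*_pose.txt` file like the preview above could be parsed (the helper name and returned keys are assumptions for illustration; the first three rows are taken to be the 3x3 rotation matrix and the last row the 3D head center):
```python
def parse_pose_file(path):
    """Parse a Biwi frame_XXXXX_pose.txt file into head rotation and head center."""
    with open(path, encoding="utf-8") as f:
        # Each non-empty line holds three space-separated floats.
        rows = [[float(value) for value in line.split()] for line in f if line.strip()]
    rotation, center = rows[:3], rows[3]
    return {"3D_head_rotation": rotation, "3D_head_center": center}
```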
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3903/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3902/comments | https://api.github.com/repos/huggingface/datasets/issues/3902/events | https://github.com/huggingface/datasets/issues/3902 | 1,167,403,377 | I_kwDODunzps5FlSlx | 3,902 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils' | {
"login": "arunasank",
"id": 3166852,
"node_id": "MDQ6VXNlcjMxNjY4NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3166852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arunasank",
"html_url": "https://github.com/arunasank",
"followers_url": "https://api.github.com/users/arunasank/followers",
"following_url": "https://api.github.com/users/arunasank/following{/other_user}",
"gists_url": "https://api.github.com/users/arunasank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arunasank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arunasank/subscriptions",
"organizations_url": "https://api.github.com/users/arunasank/orgs",
"repos_url": "https://api.github.com/users/arunasank/repos",
"events_url": "https://api.github.com/users/arunasank/events{/privacy}",
"received_events_url": "https://api.github.com/users/arunasank/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Update: `\"python3 -c \"from from datasets import Dataset, DatasetDict\"` works, but not if I import without the `python3 -c`",
"Hi @arunasank, thanks for reporting.\r\n\r\nIt seems that this can be caused because you are using an old version of `fsspec`: the reason why it works if you run `python3` seems to be that `python3` runs in a Python virtual env (with an updated version of `fsspec`); whereas the error arises when you run the import from other Python virtual env (with an old version of `fsspec`).\r\n\r\nIn order to fix this, you should update `fsspec` from within the \"problematic\" Python virtual env:\r\n```\r\npip install -U \"fsspec[http]>=2021.05.0\"",
"I'm closing this issue, @arunasank.\r\n\r\nFeel free to re-open it if the problem persists. "
] | 1,647,120,123,000 | 1,647,933,042,000 | 1,647,933,041,000 | NONE | null | null | null | ## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeError Traceback (most recent call last)
<ipython-input-37-c8cfcbe62127> in <module>
11 # from tqdm import tqdm
12 # import torch
---> 13 from datasets import Dataset
14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling
15 # from sentence_transformers import SentenceTransformer
~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module>
31 )
32
---> 33 from .arrow_dataset import Dataset, concatenate_datasets
34 from .arrow_reader import ArrowReader, ReadInstruction
35 from .arrow_writer import ArrowWriter
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module>
46 )
47
---> 48 import fsspec
49 import numpy as np
50 import pandas as pd
~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module>
10 from . import _version, caching
11 from .callbacks import Callback
---> 12 from .core import get_fs_token_paths, open, open_files, open_local
13 from .exceptions import FSTimeoutError
14 from .mapping import FSMap, get_mapper
~/.local/lib/python3.8/site-packages/fsspec/core.py in <module>
16 caches,
17 )
---> 18 from .compression import compr
19 from .registry import filesystem, get_filesystem_class
20 from .utils import (
~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module>
68
69
---> 70 register_compression("zip", unzip, "zip")
71 register_compression("bz2", BZ2File, "bz2")
72
~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force)
44
45 for ext in extensions:
---> 46 if ext in fsspec.utils.compressions and not force:
47 raise ValueError(
48 "Duplicate compression file extension: %s (%s)" % (ext, name)
AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Jupyter notebook
- Python version: 3.8.10
- PyArrow version: 7.0.0
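For reference, a quick way to check which `fsspec` version the failing environment actually resolves (the fix suggested in the comments is upgrading to `fsspec[http]>=2021.05.0`):
```python
import importlib.metadata

# An old fsspec is the usual cause of this circular-import error;
# upgrade with: pip install -U "fsspec[http]>=2021.05.0"
print(importlib.metadata.version("fsspec"))
```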
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3902/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3901/comments | https://api.github.com/repos/huggingface/datasets/issues/3901/events | https://github.com/huggingface/datasets/issues/3901 | 1,167,339,773 | I_kwDODunzps5FlDD9 | 3,901 | Dataset viewer issue for IndicParaphrase- the preview doesn't show | {
"login": "ratishsp",
"id": 3006607,
"node_id": "MDQ6VXNlcjMwMDY2MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3006607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ratishsp",
"html_url": "https://github.com/ratishsp",
"followers_url": "https://api.github.com/users/ratishsp/followers",
"following_url": "https://api.github.com/users/ratishsp/following{/other_user}",
"gists_url": "https://api.github.com/users/ratishsp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ratishsp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratishsp/subscriptions",
"organizations_url": "https://api.github.com/users/ratishsp/orgs",
"repos_url": "https://api.github.com/users/ratishsp/repos",
"events_url": "https://api.github.com/users/ratishsp/events{/privacy}",
"received_events_url": "https://api.github.com/users/ratishsp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"It seems to have been fixed:\r\n\r\n<img width=\"1534\" alt=\"Capture d’écran 2022-04-12 à 14 10 07\" src=\"https://user-images.githubusercontent.com/1676121/162959599-6b7fef7c-8411-4e03-8f00-90040a658079.png\">\r\n"
] | 1,647,104,165,000 | 1,649,765,450,000 | 1,649,765,449,000 | NONE | null | null | null | ## Dataset viewer issue for '*IndicParaphrase*'
**Link:** *[IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase/viewer/hi/validation)*
*The preview of the dataset doesn't come up.
The error on the console is:
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/home/hf/datasets-preview-backend/hi_IndicParaphrase_v1.0.tar'*
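As a side check (not part of the original report), the split can be loaded directly to confirm the data itself is reachable; the dataset id and the `hi` config are taken from the link above, everything else is an assumption:
```python
# Hedged check: load the reported config/split locally, independently of the viewer backend.
from datasets import load_dataset

ds = load_dataset("ai4bharat/IndicParaphrase", "hi", split="validation")
print(ds[0])
```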
Am I the one who added this dataset ? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3901/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3900 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3900/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3900/comments | https://api.github.com/repos/huggingface/datasets/issues/3900/events | https://github.com/huggingface/datasets/pull/3900 | 1,167,224,903 | PR_kwDODunzps40VxRh | 3,900 | Add MetaShift dataset | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq Please could you review this when you get time. Thank you.",
"Thanks a lot for your inputs @mariosasko .\r\n> Maybe we can add the generated meta-graphs to the card as images (with attributions)?\r\n\r\nYes. We can do this for the default set of classes. Will add this.\r\n\r\n> Would be cool if we could have them as additional configs. Also, maybe we could have configs that expose [image metadata](https://github.com/Weixin-Liang/MetaShift/tree/main/dataset/meta_data) from the https://nlp.stanford.edu/data/gqa/sceneGraphs.zip file (this file is downloaded in the script but not used).\r\n\r\nI'll try adding the bonus section as additional config. \r\nRegarding exposing the image metadata with a config parameter, how will we showcase/display this information ?\r\n",
"> Regarding exposing the image metadata with a config parameter, how will we showcase/display this information ?\r\n\r\nOh, I forgot to mention that. Let's add a `Dataset Usage` section to the card to document the params (similar to this: https://huggingface.co/datasets/electricity_load_diagrams#dataset-usage). Also, feel free to add the constants that can be tuned as config params (e.g. `IMAGE_SUBSET_SIZE_THRESHOLD` or the `5` in `len(subject_data) <= 5`).",
"Okay. Got it. Will add these and constants as config parameters.\r\n\r\nThe image metadata from scene graphs looks like this : \r\n```json\r\n{\r\n \"2407890\": {\r\n \"width\": 640,\r\n \"height\": 480,\r\n \"location\": \"living room\",\r\n \"weather\": none,\r\n \"objects\": {\r\n \"271881\": {\r\n \"name\": \"chair\",\r\n \"x\": 220,\r\n \"y\": 310,\r\n \"w\": 50,\r\n \"h\": 80,\r\n \"attributes\": [\"brown\", \"wooden\", \"small\"],\r\n \"relations\": {\r\n \"32452\": {\r\n \"name\": \"on\",\r\n \"object\": \"275312\"\r\n },\r\n \"32452\": {\r\n \"name\": \"near\",\r\n \"object\": \"279472\"\r\n } \r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n``load_dataset(\"metashift\", selected_classes=[\"cat\", \"dog\", ...], image_metadata=True)``\r\nHow do we showcase/display the image metadata(json) information ?\r\n",
"> How do we showcase/display the image metadata(json) information ?\r\n\r\nWe can add the JSON fields as keys to the features dict:\r\n```python\r\n if self.config.image_metadata:\r\n features.update({\"width\": Value(\"int\"), \"height\": Value(\"int\"), \"location\": Value(\"string\"), ...}) \r\n```\r\n\r\nP.S. Would rename `image_metadata` to `with_image_metadata` ",
"I have added the following : \r\n- Added the meta-graphs to the card as images under the Section \"Dataset Meta-Graphs\".\r\n- Generate the Attributes-Dataset using config parameter. [ [Link](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) ]\r\n- Expose image metadata using config parameter.\r\nFormat of the image metadata is as follows : [Link](https://cs.stanford.edu/people/dorarad/gqa/download.html)\r\nI have modified the \"Objects\" which is dict to a list of dicts with an additional parameter named object_id. \r\nI have defined the structure as follows : \r\n```\r\n{\r\n \"width\": datasets.Value(\"int64\"),\r\n \"height\": datasets.Value(\"int64\"),\r\n \"location\": datasets.Value(\"string\"),\r\n \"weather\": datasets.Value(\"string\"),\r\n \"objects\": datasets.Sequence(\r\n {\r\n \"object_id\": datasets.Value(\"string\"),\r\n \"name\": datasets.Value(\"string\"),\r\n \"x\": datasets.Value(\"int64\"),\r\n \"y\": datasets.Value(\"int64\"),\r\n \"w\": datasets.Value(\"int64\"),\r\n \"h\": datasets.Value(\"int64\"),\r\n \"attributes\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"relations\": datasets.Sequence(\r\n {\r\n \"name\": datasets.Value(\"string\"),\r\n \"object\": datasets.Value(\"string\"),\r\n }\r\n ),\r\n }\r\n ),\r\n}\r\n```\r\nProblem is that objects is not being shown as list of dicts. The output looks as follows : \r\n\r\n> metashift_dataset['train'][0]\r\n\r\n```json \r\n{'image_id': '2338755', 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x281 at 0x7F066C5A49D0>, 'label': 0, 'context': 'ground', 'width': 500, 'height': 281, 'location': None, 'weather': None, 'objects': {'object_id': ['3070704', '3070705', '3070706', '2416713', '3070702', '2790660', '3063157', '2354960', '2037127', '2392939', '2912743', '2125407', '2735257', '3260906', '2351018', '3288269', '3699852', '2734378', '3421201', '2863115'], 'name': ['bicycle', 'bicycle', 'bicycle', 'boot', 'bicycle', 'motorcycle', 'pepperoni', 'head', 'building', 'wall', 'shorts', 'people', 'wheel', 'bricks', 'man', 'cat', 'boot', 'door', 'ground', 'building'], 'x': [137, 371, 458, 215, 468, 399, 368, 245, 0, 140, 260, 284, 138, 451, 339, 187, 210, 26, 0, 313], 'y': [116, 86, 94, 150, 91, 80, 107, 22, 0, 44, 109, 69, 145, 226, 69, 22, 230, 0, 119, 0], 'w': [197, 27, 15, 73, 24, 53, 9, 37, 289, 46, 43, 30, 74, 28, 35, 116, 53, 107, 500, 55], 'h': [126, 25, 38, 128, 43, 50, 16, 44, 158, 73, 51, 52, 97, 15, 73, 252, 46, 147, 162, 77], 'attributes': [[], [], [], ['white'], [], [], [], [], [], [], [], [], [], [], [], ['white'], ['white'], ['large', 'black'], ['brick'], []], 'relations': [{'name': ['to the left of'], 'object': ['3260906']}, {'name': ['to the left of', 'to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the right of'], 'object': ['3070706', '2351018', '2125407', '2790660', '2037127', '3070702', '3288269']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the right of'], 'object': ['2351018', '3070705', '3070702', '2790660', '3063157']}, {'name': ['to the right of'], 'object': ['2735257']}, {'name': ['to the right of', 'to the right of', 'to the right of', 'to the right of', 'to the right of'], 'object': ['2351018', '2790660', '3070706', '3070705', '3063157']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the right of', 'to the right of', 'to the right of'], 
'object': ['3070705', '2351018', '3070702', '3070706', '3063157', '2125407', '2037127', '3288269']}, {'name': ['to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the right of', 'to the left of', 'to the right of'], 'object': ['2037127', '3070706', '3070702', '2912743', '3288269', '2790660', '2125407']}, {'name': ['to the left of', 'to the right of'], 'object': ['2863115', '2734378']}, {'name': ['to the left of', 'to the left of', 'to the left of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['3070705', '2351018', '3063157', '2125407', '2790660', '2863115']}, {'name': ['to the left of', 'to the right of', 'to the left of'], 'object': ['2125407', '2734378', '3288269']}, {'name': ['to the left of', 'on', 'to the left of'], 'object': ['2351018', '3288269', '3063157']}, {'name': ['to the left of', 'to the left of', 'to the right of', 'to the left of', 'to the right of', 'to the left of'], 'object': ['3063157', '2351018', '2037127', '3070705', '2392939', '2790660']}, {'name': ['to the left of', 'to the left of'], 'object': ['2416713', '3288269']}, {'name': ['to the right of'], 'object': ['3070704']}, {'name': ['to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the left of', 'to the right of', 'walking down'], 'object': ['2037127', '2790660', '2125407', '3070705', '3070706', '2912743', '3070702', '3288269', '3421201']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['2392939', '2734378', '2790660', '2735257', '3063157', '3070705', '2351018', '2863115']}, {'name': [], 'object': []}, {'name': ['of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['2037127', '2354960', '3288269', '2392939']}, {'name': [], 'object': []}, {'name': ['to the right of', 'to the right of', 'to the right of'], 'object': ['2037127', '3288269', '2354960']}]}}\r\n```\r\nExpected output of image_metadata would be : \r\n```\r\n{'height': 281,\r\n 'location': None,\r\n 'objects': [{'attributes': [],\r\n 'h': 126,\r\n 'name': 'bicycle',\r\n 'object_id': '3070704',\r\n 'relations': [{'name': 'to the left of', 'object': '3260906'}],\r\n 'w': 197,\r\n 'x': 137,\r\n 'y': 116},\r\n {'attributes': [],\r\n 'h': 25,\r\n 'name': 'bicycle',\r\n 'object_id': '3070705',\r\n 'relations': [{'name': 'to the left of', 'object': '3070706'},\r\n {'name': 'to the right of', 'object': '2351018'},\r\n {'name': 'to the right of', 'object': '2125407'},\r\n {'name': 'to the left of', 'object': '2790660'},\r\n {'name': 'to the right of', 'object': '2037127'},\r\n {'name': 'to the left of', 'object': '3070702'},\r\n {'name': 'to the right of', 'object': '3288269'}],\r\n 'w': 27,\r\n 'x': 371,\r\n 'y': 86},\r\n {'attributes': ['white'],\r\n 'h': 252,\r\n 'name': 'cat',\r\n 'object_id': '3288269',\r\n 'relations': [{'name': 'to the right of', 'object': '2392939'},\r\n {'name': 'to the right of', 'object': '2734378'},\r\n {'name': 'to the left of', 'object': '2790660'},\r\n {'name': 'to the right of', 'object': '2735257'},\r\n {'name': 'to the left of', 'object': '3063157'},\r\n {'name': 'to the left of', 'object': '3070705'},\r\n {'name': 'to the left of', 'object': '2351018'},\r\n {'name': 'to the left of', 'object': '2863115'}],\r\n 'w': 116,\r\n 'x': 187,\r\n 'y': 22},\r\n {'attributes': ['white'],\r\n 'h': 46,\r\n 'name': 'boot',\r\n 'object_id': '3699852',\r\n 'relations': [],\r\n 'w': 53,\r\n 'x': 
210,\r\n 'y': 230},\r\n .\r\n .\r\n .\r\n {'attributes': ['large', 'black'],\r\n 'h': 147,\r\n 'name': 'door',\r\n 'object_id': '2734378',\r\n 'relations': [{'name': 'of', 'object': '2037127'},\r\n {'name': 'to the left of', 'object': '2354960'},\r\n {'name': 'to the left of', 'object': '3288269'},\r\n {'name': 'to the left of', 'object': '2392939'}],\r\n 'w': 107,\r\n 'x': 26,\r\n 'y': 0},\r\n {'attributes': ['brick'],\r\n 'h': 162,\r\n 'name': 'ground',\r\n 'object_id': '3421201',\r\n 'relations': [],\r\n 'w': 500,\r\n 'x': 0,\r\n 'y': 119},\r\n {'attributes': [],\r\n 'h': 77,\r\n 'name': 'building',\r\n 'object_id': '2863115',\r\n 'relations': [{'name': 'to the right of', 'object': '2037127'},\r\n {'name': 'to the right of', 'object': '3288269'},\r\n {'name': 'to the right of', 'object': '2354960'}],\r\n 'w': 55,\r\n 'x': 313,\r\n 'y': 0}],\r\n 'weather': None,\r\n 'width': 500}\r\n\r\n```\r\n\r\nMay I know how to get the list of dicts representation correctly ?\r\n\r\n---\r\nTo-Do : \r\n\r\n- [x] Generate dataset_infos.json file.\r\n- [x] Add “Dataset Usage” section in the cards and write about the config parameters. \r\n- [x] Add the constants that can be tuned as config params.\r\n",
"> Problem is that objects is not being shown as list of dicts. The output looks as follows :\r\n\r\nThat's expected. We convert a sequence of dictionaries to a dictionary of sequences to keep the formatting aligned with Tensorflow Datasets. You could disable this behavior by replacing `\"objects\": datasets.Sequence(object_fields_dict)` with `\"objects\": [object_fields_dict]`, but that's not what we usually do, so let's keep it like that. \r\n\r\nAlso, to limit the size of the dataset repo, please remove the pushed images and pass URLs to the images instead under the `src` attribute (and specify `alt` in case the URLs go down).\r\n\r\nI'll do a proper review again after you are finished with the dummy data.",
"> That's expected.\r\n\r\nOkay. Got it. Thanks. I thought I was doing something wrong.\r\n\r\n> Also, to limit the size of the dataset repo, please remove the pushed images and pass URLs to the images instead under the src attribute (and specify alt in case the URLs go down).\r\n\r\nSure. Where do we host these images ? Can I upload them to any free image hosting platform or is there any particular website you use ?\r\n\r\n> I'll do a proper review again after you are finished with the dummy data.\r\n\r\nSure. Thanks. I'm working on this part. Will update you.\r\n",
"Update : \r\n- I have generated the dataset_infos.json file.\r\n\r\n> I suggest you try to generate the dataset_infos.json file first, and then I can help with the dummy data.\r\n\r\nI am having issues creating the dummy data. I get the following which I use the command : \r\n\r\n`datasets-cli dummy_data datasets/metashift`\r\n\r\n```\r\nDataset metashift with config MetashiftConfig(name='metashift', version=1.0.0, data_dir=None, data_files=None, description=None) seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data/full-candidate-subsets.pkl.\r\nTraceback (most recent call last):\r\n File \"datasets-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"/datasets/commands/datasets_cli.py\", line 33, in main\r\n service.run()\r\n File \"/datasets/commands/dummy_data.py\", line 324, in run\r\n dataset_builder=dataset_builder, mock_dl_manager=mock_dl_manager\r\n File \"/datasets/commands/dummy_data.py\", line 407, in _print_dummy_data_instructions\r\n for split in generator_splits:\r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n```",
"> Feel free to host the images online (on imgur for example) :)\r\n\r\nSure. Will do that.\r\n\r\nThanks for the explanation regarding the dummy data zip files. I will try it out and let you know.",
"Instead of uploading the images to a hosting service, you can directly reference their GitHub URLs (open the image in the MetaShift repo -> click Download -> copy the image URL). For instance, this is the URL of one of the images:`https://raw.githubusercontent.com/Weixin-Liang/MetaShift/main/docs/figures/Cat-MetaGraph.jpg`. Also, feel free to replace `main` with the most recent commit hash in the copied URLs to make them more robust.",
"@mariosasko I've actually created metagraphs for all the default classes other than those present in the GitHub Repo and included all of them. :) The Repo has them only for two classes.\r\n\r\nIn case we want to limit the no.of meta graphs included, we can stick to the github URLs from the repo itself.\r\n",
"Update : \r\n- I could add the dummy data and get the dummy data test to work. Since we have a preprocessing step on the dataset, one of the .pkl file size is on the higher side. This was done for the tests to pass. I hope that is okay. The dummy.zip file size is about 273K.\r\n\r\nTo-Do :\r\n- [x] Update Dataset Structure in the data cards to include Data Instances when config parameters are used.\r\n\r\nPlease could you review when you get time. Thank you.",
"Thanks a lot for your suggestions, Mario. The thing I learnt from the review is that I need to make better sentence formations. I will keep this in mind. :) ",
"Thanks a lot for your support. @mariosasko and @lhoestq .\r\n\r\n> Super impressed by your work on this, congrats :)\r\n\r\nIts my first dataset contribution to the 🤗 Datasets library, I'm super excited. Thank you. :)\r\n\r\nAlso, I think we can close this request issue now, [#3813](https://github.com/huggingface/datasets/issues/3813)"
] | 1,647,074,658,000 | 1,648,832,388,000 | 1,648,826,190,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3900",
"html_url": "https://github.com/huggingface/datasets/pull/3900",
"diff_url": "https://github.com/huggingface/datasets/pull/3900.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3900.patch",
"merged_at": 1648826190000
} | This PR adds the MetaShift dataset.
Dataset Request : Add MetaShift dataset [#3813](https://github.com/huggingface/datasets/issues/3813)
@lhoestq As discussed,
- I have copied the preprocessing script and modified it as required to not create new directories and folders and instead yield the images.
- I do the preprocessing in _split_generators to get the required data, which is then passed to _generate_examples (a rough sketch of this split is shown right after this list).
- Beyond the generated MetaShift dataset, the original preprocess script also generates the meta-graphs for each class, I have currently not included this part. [ Ref : [Link](https://github.com/Weixin-Liang/MetaShift#generate-full-metashift) ]
- There is a Bonus section, the authors share. I have currently not included this part. [ Ref : [Link](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) ]
- I had a basic test script which downloaded the dataset and tested the basic functionality. Things seem fine.
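As referenced above, a rough sketch of this split, not the actual PR code; `_URL`, `_preprocess_candidates`, and the feature types are placeholder names used only for illustration:
```python
# Sketch only, with placeholder names: the heavy candidate-subset preprocessing runs once
# in _split_generators, and images are yielded lazily in _generate_examples.
import datasets

_URL = "https://example.com/metashift-source.zip"  # placeholder, not the real source URL

def _preprocess_candidates(data_dir):
    # Placeholder for the preprocessing step; should return (image_path, label, context) tuples.
    return []

class Metashift(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "image": datasets.Image(),
                    "label": datasets.Value("string"),
                    "context": datasets.Value("string"),
                }
            )
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract(_URL)
        candidates = _preprocess_candidates(data_dir)  # expensive work happens once, here
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"candidates": candidates},
            )
        ]

    def _generate_examples(self, candidates):
        # Images are yielded one by one instead of being copied into new directories.
        for idx, (image_path, label, context) in enumerate(candidates):
            yield idx, {"image": image_path, "label": label, "context": context}
```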
For real data, I performed the following test :
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_metashift
============================================== test session starts ===============================================
platform linux -- Python 3.7.11, pytest-7.0.1, pluggy-1.0.0
rootdir: ./datasets
plugins: hydra-core-1.1.1, datadir-1.3.1, forked-1.4.0, xdist-2.5.0
collected 1 item
tests/test_dataset_common.py . [100%]
========================================= 1 passed in 4821.25s (1:20:21) =========================================
```
- I couldn't get the dummy dataset. Need some inputs here.
Error as follows :
```
Using custom data configuration default
Dataset metashift with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data/full-candidate-subsets.pkl.
for split in generator_splits:
UnboundLocalError: local variable 'generator_splits' referenced before assignment
```
To-Do :
- [x] Currently I am using the default _SELECTED_CLASSES. I need to use config option here as suggested
- [x] Complete fields in the Dataset Card.
- [x] Tagging the dataset using the Datasets Tagging app.
Need your help and suggestions for improvement. Thank you
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3900/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3899/comments | https://api.github.com/repos/huggingface/datasets/issues/3899/events | https://github.com/huggingface/datasets/pull/3899 | 1,166,931,812 | PR_kwDODunzps40UzR3 | 3,899 | Add exact match metric | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,037,300,000 | 1,647,879,003,000 | 1,647,878,735,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3899",
"html_url": "https://github.com/huggingface/datasets/pull/3899",
"diff_url": "https://github.com/huggingface/datasets/pull/3899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3899.patch",
"merged_at": 1647878734000
} | Adding the exact match metric and its metric card.
Note: Some of the tests have failed, but I wanted to make a PR anyway so that the rest of the code can be reviewed if anyone has time. I'll look into + work on fixing the failed tests when I'm back online after the weekend | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3899/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3898/comments | https://api.github.com/repos/huggingface/datasets/issues/3898/events | https://github.com/huggingface/datasets/pull/3898 | 1,166,778,250 | PR_kwDODunzps40UWG4 | 3,898 | Create README.md for WER metric | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3898). All of your documentation changes will be reflected on that endpoint.",
"For ASR you can probably ping @patrickvonplaten ",
"Ah only noticed now that ` # Values from popular papers` is from a template. @lhoestq @sashavor - not really sure if this section is useful in general really. \r\n\r\nIMO, it's more confusing/misleading than it helps. E.g. a value of 0.03 WER on a fake read-out audio dataset is not better than a WER of 0.3 on a real-world noisy, conversational audio dataset. I think the same holds true for other metrics no? I can think of very little metrics where a metric value is not dataset dependent. E.g. perplexity is super dataset dependent, summarization metrics like ROUGE as well, ...\r\n\r\nAlso, I don't really see what this section tries to achieve - is the idea here to give the reader some papers that use this metric to better understand in which context it is used? Should we maybe rename the section to `Popular papers making use of this metric` or something? \r\n\r\n",
"I put \"Values from popular papers\" as a subsection of \"Output values\" -- I hope that's a compromise that works for everyone :hugs: "
] | 1,647,026,949,000 | 1,647,363,900,000 | 1,647,363,899,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3898",
"html_url": "https://github.com/huggingface/datasets/pull/3898",
"diff_url": "https://github.com/huggingface/datasets/pull/3898.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3898.patch",
"merged_at": 1647363899000
} | Proposing a draft WER metric card. @lhoestq, I'm not very certain about "Values from popular papers" -- I don't know ASR very well; what do you think of the examples I found? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3898/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3897/comments | https://api.github.com/repos/huggingface/datasets/issues/3897/events | https://github.com/huggingface/datasets/pull/3897 | 1,166,715,104 | PR_kwDODunzps40UJH4 | 3,897 | Align tqdm control/cache control with Transformers | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3897). All of your documentation changes will be reflected on that endpoint."
] | 1,647,022,342,000 | 1,647,270,070,000 | 1,647,270,068,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3897",
"html_url": "https://github.com/huggingface/datasets/pull/3897",
"diff_url": "https://github.com/huggingface/datasets/pull/3897.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3897.patch",
"merged_at": 1647270068000
} | This PR:
* aligns the `tqdm` logic with Transformers (follows https://github.com/huggingface/transformers/pull/15167) by moving the code to `utils/logging.py`, adding `enable_progress_bar`/`disable_progress_bar` and removing `set_progress_bar_enabled` (a note for @lhoestq: I'm not adding `logging.tqdm` to the public namespace in this PR to avoid the situation where `from datasets import *; tqdm` would overshadow the standard `tqdm`); a short usage sketch of the new toggles follows this list
* aligns the cache control with the new `tqdm` logic by adding `enable_caching`/`disable_caching` to the public namespace and deprecating `set_caching_enabled` (not fully removing it because it's used more often than `set_progress_bar_enabled` and has a dedicated example in the old docs)
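For illustration, a minimal usage sketch based on the function names described above (treat it as a sketch of the intended behavior rather than the final API surface):
```python
# Sketch only: the toggles below are the ones named in this PR description.
import datasets

datasets.disable_progress_bar()   # hide tqdm progress bars globally
datasets.enable_progress_bar()    # show them again

datasets.disable_caching()        # stop writing map/filter results to the cache
datasets.enable_caching()         # restore the default caching behaviour
```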
Fix #3586 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3897/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3896 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3896/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3896/comments | https://api.github.com/repos/huggingface/datasets/issues/3896/events | https://github.com/huggingface/datasets/issues/3896 | 1,166,628,270 | I_kwDODunzps5FiVWu | 3,896 | Missing google file for `multi_news` dataset | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"reported by @abidlabs ",
"related to https://github.com/huggingface/datasets/pull/3843?",
"`datasets` 1.18.4 fixes the issue when you load the dataset with `load_dataset`.\r\n\r\nWhen loading in streaming mode, the fix is indeed on https://github.com/huggingface/datasets/pull/3843 which will be merged soon :)",
"That is. The PR #3843 was just opened a bit later we had made our 1.18.4 patch release...\r\nOnce merged, that will fix this issue. ",
"OK. Should fix the viewer for 50 datasets\r\n\r\n<img width=\"148\" alt=\"Capture d’écran 2022-03-14 à 11 51 02\" src=\"https://user-images.githubusercontent.com/1676121/158157853-6c544a47-2d6d-4ac4-964a-6f10951ec36b.png\">\r\n"
] | 1,647,016,690,000 | 1,647,347,423,000 | 1,647,347,423,000 | CONTRIBUTOR | null | null | null | ## Dataset viewer issue for '*multi_news*'
**Link:** https://huggingface.co/datasets/multi_news
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src
```
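For reference, a minimal check (not part of the original report) to see whether the files behind the Google Drive URL resolve outside the viewer:
```python
# Hedged check: try both regular and streaming loading of multi_news.
from datasets import load_dataset

ds = load_dataset("multi_news", split="train")                       # regular download
stream = load_dataset("multi_news", split="train", streaming=True)   # streaming resolves files lazily
print(next(iter(stream)))
```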
Am I the one who added this dataset ? No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3896/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3896/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3895/comments | https://api.github.com/repos/huggingface/datasets/issues/3895/events | https://github.com/huggingface/datasets/pull/3895 | 1,166,619,182 | PR_kwDODunzps40T1C8 | 3,895 | Fix code examples indentation | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3895). All of your documentation changes will be reflected on that endpoint.",
"Still not rendered properly: https://moon-ci-docs.huggingface.co/docs/datasets/pr_3895/en/package_reference/main_classes#datasets.Dataset.align_labels_with_mapping",
"My last commit should have fixed it, I don't know why the dev doc build is not showing my last changes",
"Let me merge this and we can see on `master` how it renders, until the dev doc build is fixed"
] | 1,647,016,144,000 | 1,647,020,070,000 | 1,647,020,069,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3895",
"html_url": "https://github.com/huggingface/datasets/pull/3895",
"diff_url": "https://github.com/huggingface/datasets/pull/3895.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3895.patch",
"merged_at": 1647020069000
} | Some code examples are currently not rendered correctly. I think this is because they are over-indented
cc @mariosasko | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3895/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3894/comments | https://api.github.com/repos/huggingface/datasets/issues/3894/events | https://github.com/huggingface/datasets/pull/3894 | 1,166,611,270 | PR_kwDODunzps40TzXW | 3,894 | [docs] make dummy data creation optional | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3894). All of your documentation changes will be reflected on that endpoint.",
"The dev doc build rendering doesn't seem to be updated with my last commit for some reason",
"Merging it anyway since I'd like to share this page with users 🙃 "
] | 1,647,015,694,000 | 1,647,019,676,000 | 1,647,019,675,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3894",
"html_url": "https://github.com/huggingface/datasets/pull/3894",
"diff_url": "https://github.com/huggingface/datasets/pull/3894.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3894.patch",
"merged_at": 1647019675000
} | Related to #3507 : dummy data for datasets created on the Hugging Face Hub are optional.
We can discuss later whether to make them optional for datasets in this repository as well. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3894/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3893/comments | https://api.github.com/repos/huggingface/datasets/issues/3893/events | https://github.com/huggingface/datasets/pull/3893 | 1,166,551,684 | PR_kwDODunzps40TmxB | 3,893 | Add default branch for doc building | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3893). All of your documentation changes will be reflected on that endpoint.",
"Yes! And when we discovered on the Transformers side that this check fails on the GitHub actions, we added a config attribute to have a default. Setting in Transformers fixed the issue of the doc being deployed to main, so porting the fix here too :-)"
] | 1,647,012,267,000 | 1,647,012,875,000 | 1,647,012,874,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3893",
"html_url": "https://github.com/huggingface/datasets/pull/3893",
"diff_url": "https://github.com/huggingface/datasets/pull/3893.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3893.patch",
"merged_at": 1647012874000
} | Since other libraries use `main` as their default branch and it's now the standard default, you have to specify a different name in the doc config if you're using `master` like datasets (`doc-builder` tries to guess it, but in the job, we have weird checkout of merge commits so it doesn't always manage to get it right).
This PR makes sure it will always use master for the dev doc (until you decide to switch to main). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3893/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3892/comments | https://api.github.com/repos/huggingface/datasets/issues/3892/events | https://github.com/huggingface/datasets/pull/3892 | 1,166,227,003 | PR_kwDODunzps40ShYB | 3,892 | Fix CLI test checksums | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3892). All of your documentation changes will be reflected on that endpoint.",
"Feel free to merge if it's good for you :)",
"I've added a test @lhoestq. Once all green, I'll merge. ",
"Last failing tests do not have nothing to do with this PR."
] | 1,646,993,044,000 | 1,647,347,304,000 | 1,647,347,303,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3892",
"html_url": "https://github.com/huggingface/datasets/pull/3892",
"diff_url": "https://github.com/huggingface/datasets/pull/3892.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3892.patch",
"merged_at": 1647347303000
} | Previous PR:
- #3796
introduced a side effect: `datasets-cli test` generates `dataset_infos.json` with `None` checksum values.
See:
- #3805
This PR introduces a way for `datasets-cli test` to force recording the infos, even if `verify_infos=False`.
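For context, a typical invocation (the dataset path below is a placeholder) looks roughly like:
```
# --save_infos writes dataset_infos.json (including checksums) even when verification is off;
# --all_configs runs the test for every configuration.
datasets-cli test ./datasets/<dataset_name> --save_infos --all_configs
```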
Close #3848.
CC: @craffel | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3892/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3892/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3891/comments | https://api.github.com/repos/huggingface/datasets/issues/3891/events | https://github.com/huggingface/datasets/pull/3891 | 1,165,503,732 | PR_kwDODunzps40QKIG | 3,891 | Fix race condition in doc build | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3891). All of your documentation changes will be reflected on that endpoint."
] | 1,646,932,630,000 | 1,646,932,980,000 | 1,646,932,650,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3891",
"html_url": "https://github.com/huggingface/datasets/pull/3891",
"diff_url": "https://github.com/huggingface/datasets/pull/3891.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3891.patch",
"merged_at": 1646932650000
} | Following https://github.com/huggingface/datasets/runs/5499386744 it seems that race conditions create issues when updating the doc. I took the same approach as in `transformers` to fix them. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3891/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3890 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3890/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3890/comments | https://api.github.com/repos/huggingface/datasets/issues/3890/events | https://github.com/huggingface/datasets/pull/3890 | 1,165,502,838 | PR_kwDODunzps40QJ8V | 3,890 | Update beans download urls | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3890). All of your documentation changes will be reflected on that endpoint.",
"@albertvillanova Thanks for investigating and fixing that issue. I regenerated the `dataset_infos.json` file."
] | 1,646,932,576,000 | 1,647,362,850,000 | 1,647,358,008,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3890",
"html_url": "https://github.com/huggingface/datasets/pull/3890",
"diff_url": "https://github.com/huggingface/datasets/pull/3890.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3890.patch",
"merged_at": 1647358007000
} | Replace the old URLs with the Hub [URLs](https://huggingface.co/datasets/beans/tree/main/data).
Also reported by @stevhliu.
Fix #3889 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3890/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3890/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3889 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3889/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3889/comments | https://api.github.com/repos/huggingface/datasets/issues/3889/events | https://github.com/huggingface/datasets/issues/3889 | 1,165,456,083 | I_kwDODunzps5Fd3LT | 3,889 | Cannot load beans dataset (Couldn't reach the dataset) | {
"login": "ivsanro1",
"id": 30293331,
"node_id": "MDQ6VXNlcjMwMjkzMzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/30293331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivsanro1",
"html_url": "https://github.com/ivsanro1",
"followers_url": "https://api.github.com/users/ivsanro1/followers",
"following_url": "https://api.github.com/users/ivsanro1/following{/other_user}",
"gists_url": "https://api.github.com/users/ivsanro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivsanro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivsanro1/subscriptions",
"organizations_url": "https://api.github.com/users/ivsanro1/orgs",
"repos_url": "https://api.github.com/users/ivsanro1/repos",
"events_url": "https://api.github.com/users/ivsanro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivsanro1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi ! A pull request is open to fix the dataset, we'll release a patch soon with a new release of `datasets` :)"
] | 1,646,930,048,000 | 1,647,358,007,000 | 1,647,358,007,000 | NONE | null | null | null | ## Describe the bug
The beans dataset is unavailable to download.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('beans')
```
## Expected results
The dataset would be downloaded with no issue.
## Actual results
```
ConnectionError: Couldn't reach https://storage.googleapis.com/ibeans/train.zip (error 403)
```
[It looks like the billing of this project has been disabled because it is associated with a delinquent account.](https://storage.googleapis.com/ibeans/train.zip)
## Environment info
Google Colab
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3889/timeline | null | false |