url (stringlengths 61-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 75-75) | comments_url (stringlengths 70-70) | events_url (stringlengths 68-68) | html_url (stringlengths 49-51) | id (int64 1.14B-1.87B) | node_id (stringlengths 18-19) | number (int64 3.74k-6.19k) | title (stringlengths 1-290) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (stringclasses 3 values) | active_lock_reason (null) | body (stringlengths 2-33.9k ⌀) | reactions (dict) | timeline_url (stringlengths 70-70) | performed_via_github_app (null) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6193/comments | https://api.github.com/repos/huggingface/datasets/issues/6193/events | https://github.com/huggingface/datasets/issues/6193 | 1,872,285,153 | I_kwDODunzps5vmM3h | 6,193 | Dataset loading script method does not work with .pyc file | {
"login": "riteshkumarumassedu",
"id": 43389071,
"node_id": "MDQ6VXNlcjQzMzg5MDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/43389071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riteshkumarumassedu",
"html_url": "https://github.com/riteshkumarumassedu",
"followers_url": "https://api.github.com/users/riteshkumarumassedu/followers",
"following_url": "https://api.github.com/users/riteshkumarumassedu/following{/other_user}",
"gists_url": "https://api.github.com/users/riteshkumarumassedu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riteshkumarumassedu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riteshkumarumassedu/subscriptions",
"organizations_url": "https://api.github.com/users/riteshkumarumassedu/orgs",
"repos_url": "https://api.github.com/users/riteshkumarumassedu/repos",
"events_url": "https://api.github.com/users/riteshkumarumassedu/events{/privacy}",
"received_events_url": "https://api.github.com/users/riteshkumarumassedu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2023-08-29T19:35:06 | 2023-08-29T19:35:06 | null | NONE | null | ### Describe the bug
The Hugging Face `datasets` library specifically looks for a `.py` file when loading a dataset via the loading-script approach, and it does not work with a `.pyc` file.
This becomes an issue when deploying in production, where we are restricted to using only `.pyc` files. Is there any workaround for this?
### Steps to reproduce the bug
1. Create a dataset loading script to read the custom data.
2. Compile the code to make sure that a `.pyc` file is created.
3. Delete the loading script and re-run the code (see the sketch below). Normally, Python would fall back to the compiled `.pyc` file; in this case, however, the library errors out saying it is unable to find the loading script.
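For reference, a minimal sketch of the reproduction described above (the `my_dataset` paths and name are placeholders):
```
import os
import py_compile

from datasets import load_dataset

# Step 2: compile the loading script into a .pyc next to it.
py_compile.compile("my_dataset/my_dataset.py", cfile="my_dataset/my_dataset.pyc")

# Step 3: delete the .py source and reload; this raises FileNotFoundError
# because the library resolves the script by its .py filename.
os.remove("my_dataset/my_dataset.py")
ds = load_dataset("my_dataset")
```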
### Expected behavior
The code should make use of the `.pyc` file and run without any error.
### Environment info
NA | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6193/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6192/comments | https://api.github.com/repos/huggingface/datasets/issues/6192/events | https://github.com/huggingface/datasets/pull/6192 | 1,871,911,640 | PR_kwDODunzps5ZDGnI | 6,192 | Set minimal fsspec version requirement to 2023.1.0 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005972 / 0.011353 (-0.005381) | 0.003636 / 0.011008 (-0.007372) | 0.080254 / 0.038508 (0.041746) | 0.059564 / 0.023109 (0.036455) | 0.310615 / 0.275898 (0.034717) | 0.359307 / 0.323480 (0.035827) | 0.003408 / 0.007986 (-0.004578) | 0.002941 / 0.004328 (-0.001388) | 0.063699 / 0.004250 (0.059449) | 0.046072 / 0.037052 (0.009020) | 0.318670 / 0.258489 (0.060181) | 0.369677 / 0.293841 (0.075836) | 0.026995 / 0.128546 (-0.101552) | 0.007954 / 0.075646 (-0.067693) | 0.261667 / 0.419271 (-0.157604) | 0.045167 / 0.043533 (0.001634) | 0.314276 / 0.255139 (0.059137) | 0.348871 / 0.283200 (0.065672) | 0.021748 / 0.141683 (-0.119935) | 1.438598 / 1.452155 (-0.013557) | 1.530119 / 1.492716 (0.037403) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196894 / 0.018006 (0.178888) | 0.445757 / 0.000490 (0.445267) | 0.002842 / 0.000200 (0.002642) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024923 / 0.037411 (-0.012488) | 0.075186 / 0.014526 (0.060661) | 0.087193 / 0.176557 (-0.089364) | 0.147496 / 0.737135 (-0.589639) | 0.087083 / 0.296338 (-0.209255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423545 / 0.215209 (0.208336) | 4.187927 / 2.077655 (2.110273) | 2.008656 / 1.504120 (0.504536) | 1.791313 / 1.541195 (0.250119) | 1.849836 / 1.468490 
(0.381346) | 0.499458 / 4.584777 (-4.085318) | 2.983206 / 3.745712 (-0.762506) | 2.801005 / 5.269862 (-2.468856) | 1.886207 / 4.565676 (-2.679469) | 0.057343 / 0.424275 (-0.366932) | 0.006666 / 0.007607 (-0.000941) | 0.483948 / 0.226044 (0.257904) | 4.874818 / 2.268929 (2.605890) | 2.439393 / 55.444624 (-53.005231) | 2.049861 / 6.876477 (-4.826616) | 2.217050 / 2.142072 (0.074977) | 0.589760 / 4.805227 (-4.215467) | 0.125298 / 6.500664 (-6.375366) | 0.061123 / 0.075469 (-0.014347) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.234721 / 1.841788 (-0.607067) | 18.193756 / 8.074308 (10.119448) | 13.682835 / 10.191392 (3.491443) | 0.129345 / 0.680424 (-0.551078) | 0.016589 / 0.534201 (-0.517612) | 0.332355 / 0.579283 (-0.246928) | 0.358408 / 0.434364 (-0.075955) | 0.382044 / 0.540337 (-0.158293) | 0.535403 / 1.386936 (-0.851533) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006193 / 0.011353 (-0.005160) | 0.003674 / 0.011008 (-0.007335) | 0.062481 / 0.038508 (0.023973) | 0.062096 / 0.023109 (0.038987) | 0.449592 / 0.275898 (0.173694) | 0.479245 / 0.323480 (0.155765) | 0.004793 / 0.007986 (-0.003193) | 0.002896 / 0.004328 (-0.001433) | 0.062887 / 0.004250 (0.058636) | 0.050049 / 0.037052 (0.012997) | 0.454940 / 0.258489 (0.196451) | 0.486115 / 0.293841 (0.192274) | 0.028585 / 0.128546 (-0.099961) | 0.007954 / 0.075646 (-0.067692) | 0.067744 / 0.419271 (-0.351528) | 0.040473 / 0.043533 (-0.003060) | 0.448408 / 0.255139 (0.193269) | 0.472423 / 0.283200 (0.189223) | 0.020549 / 0.141683 (-0.121133) | 1.563618 / 1.452155 (0.111463) | 1.520149 / 1.492716 (0.027432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226604 / 0.018006 (0.208598) | 0.417615 / 0.000490 (0.417126) | 0.003386 / 0.000200 (0.003186) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027264 / 0.037411 (-0.010147) | 0.081709 / 0.014526 (0.067184) | 0.091793 / 0.176557 (-0.084763) | 0.145559 / 0.737135 (-0.591576) | 0.091869 / 0.296338 (-0.204469) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462917 / 0.215209 (0.247708) | 4.629512 / 2.077655 (2.551857) | 2.555715 / 1.504120 (1.051595) | 2.388064 / 1.541195 (0.846870) | 2.458320 / 1.468490 (0.989830) | 0.511615 / 4.584777 (-4.073162) | 3.124566 / 3.745712 (-0.621146) | 2.839190 / 5.269862 (-2.430672) | 1.894551 / 4.565676 (-2.671126) | 0.059565 / 0.424275 (-0.364710) | 0.006481 / 0.007607 (-0.001126) | 0.532023 / 0.226044 (0.305979) | 5.361507 / 2.268929 (3.092579) | 2.982594 / 55.444624 (-52.462031) | 2.644870 / 6.876477 (-4.231606) | 2.831476 / 2.142072 (0.689404) | 0.607381 / 4.805227 (-4.197846) | 0.126067 / 6.500664 (-6.374597) | 0.062130 / 0.075469 (-0.013339) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350442 / 1.841788 (-0.491345) | 18.829553 / 8.074308 (10.755245) | 14.796701 / 10.191392 (4.605309) | 0.145393 / 0.680424 (-0.535031) | 0.018218 / 0.534201 (-0.515983) | 0.335500 / 0.579283 (-0.243783) | 0.359190 / 0.434364 (-0.075174) | 0.388377 / 0.540337 (-0.151960) | 0.534994 / 1.386936 (-0.851942) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ff7629eb72f499d841d64aa03f97e0b1707d1cc7 \"CML watermark\")\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6192). All of your documentation changes will be reflected on that endpoint."
] | 2023-08-29T15:23:41 | 2023-08-29T15:31:58 | null | CONTRIBUTOR | null | Fix https://github.com/huggingface/datasets/issues/6141
Colab installs 2023.6.0, so we should be good 🙂
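For anyone checking their own environment, a quick way to confirm the installed version clears the new floor:
```
import fsspec

# Colab currently reports 2023.6.0, above the new 2023.1.0 minimum.
print(fsspec.__version__)
```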
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6192/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6192",
"html_url": "https://github.com/huggingface/datasets/pull/6192",
"diff_url": "https://github.com/huggingface/datasets/pull/6192.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6192.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6191/comments | https://api.github.com/repos/huggingface/datasets/issues/6191/events | https://github.com/huggingface/datasets/pull/6191 | 1,871,634,840 | PR_kwDODunzps5ZCKmv | 6,191 | Add missing `revision` argument | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6191). All of your documentation changes will be reflected on that endpoint."
] | 2023-08-29T13:05:04 | 2023-08-29T13:30:30 | null | CONTRIBUTOR | null | I've noticed that when you're not working on the main branch, there are sometimes errors in the files returned. After some investigation, I realized that the revision was not properly passed everywhere. This PR proposes a fix. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6191/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6191",
"html_url": "https://github.com/huggingface/datasets/pull/6191",
"diff_url": "https://github.com/huggingface/datasets/pull/6191.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6191.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6190/comments | https://api.github.com/repos/huggingface/datasets/issues/6190/events | https://github.com/huggingface/datasets/issues/6190 | 1,871,582,175 | I_kwDODunzps5vjhPf | 6,190 | `Invalid user token` even when correct user token is passed! | {
"login": "Vaibhavs10",
"id": 18682411,
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vaibhavs10",
"html_url": "https://github.com/Vaibhavs10",
"followers_url": "https://api.github.com/users/Vaibhavs10/followers",
"following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}",
"gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions",
"organizations_url": "https://api.github.com/users/Vaibhavs10/orgs",
"repos_url": "https://api.github.com/users/Vaibhavs10/repos",
"events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vaibhavs10/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This is because `download_config.use_auth_token` is deprecated - you should use `download_config.token` instead",
"Works! Thanks for the quick fix! <3"
] | 2023-08-29T12:37:03 | 2023-08-29T13:01:10 | 2023-08-29T13:01:09 | MEMBER | null | ### Describe the bug
I'm working on a dataset that comprises other datasets on the Hub.
URL: https://huggingface.co/datasets/open-asr-leaderboard/datasets-test-only
Note: some of the sub-datasets in this meta-dataset require explicit access.
All the other datasets work fine, except `common_voice`.
### Steps to reproduce the bug
https://github.com/Vaibhavs10/scratchpad/blob/main/cv_datasets_bug_repro.ipynb
### Expected behavior
It should work if the provided access token is valid (as it does for all the other datasets).
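For reference, a minimal sketch of the fix suggested in the comments above (the token value is a placeholder):
```
from datasets import DownloadConfig, load_dataset

# Use the non-deprecated `token` field instead of `use_auth_token`.
config = DownloadConfig(token="hf_...")  # placeholder: your HF access token
ds = load_dataset("open-asr-leaderboard/datasets-test-only", download_config=config)
```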
### Environment info
datasets version -> 2.14.4 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6190/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6189/comments | https://api.github.com/repos/huggingface/datasets/issues/6189/events | https://github.com/huggingface/datasets/pull/6189 | 1,871,569,855 | PR_kwDODunzps5ZB8Z9 | 6,189 | Don't alter input in Features.from_dict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006166 / 0.011353 (-0.005187) | 0.003643 / 0.011008 (-0.007365) | 0.080966 / 0.038508 (0.042458) | 0.060538 / 0.023109 (0.037429) | 0.309205 / 0.275898 (0.033307) | 0.351007 / 0.323480 (0.027527) | 0.003592 / 0.007986 (-0.004393) | 0.002880 / 0.004328 (-0.001448) | 0.062957 / 0.004250 (0.058707) | 0.049015 / 0.037052 (0.011963) | 0.309436 / 0.258489 (0.050947) | 0.362695 / 0.293841 (0.068854) | 0.027818 / 0.128546 (-0.100728) | 0.008030 / 0.075646 (-0.067616) | 0.262678 / 0.419271 (-0.156594) | 0.046024 / 0.043533 (0.002491) | 0.316246 / 0.255139 (0.061107) | 0.337454 / 0.283200 (0.054254) | 0.022529 / 0.141683 (-0.119154) | 1.432492 / 1.452155 (-0.019662) | 1.499646 / 1.492716 (0.006929) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190931 / 0.018006 (0.172925) | 0.428053 / 0.000490 (0.427564) | 0.002839 / 0.000200 (0.002639) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024042 / 0.037411 (-0.013370) | 0.073952 / 0.014526 (0.059426) | 0.905973 / 0.176557 (0.729417) | 0.177767 / 0.737135 (-0.559368) | 0.125779 / 0.296338 (-0.170559) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398997 / 0.215209 (0.183788) | 3.959575 / 2.077655 (1.881920) | 1.907038 / 1.504120 (0.402918) | 1.732908 / 1.541195 (0.191713) | 1.757038 / 1.468490 
(0.288548) | 0.495917 / 4.584777 (-4.088860) | 3.021437 / 3.745712 (-0.724275) | 2.793960 / 5.269862 (-2.475901) | 1.827753 / 4.565676 (-2.737923) | 0.057143 / 0.424275 (-0.367132) | 0.006583 / 0.007607 (-0.001024) | 0.469402 / 0.226044 (0.243357) | 4.685623 / 2.268929 (2.416695) | 2.325200 / 55.444624 (-53.119424) | 1.985559 / 6.876477 (-4.890918) | 2.151208 / 2.142072 (0.009136) | 0.589498 / 4.805227 (-4.215730) | 0.125433 / 6.500664 (-6.375231) | 0.060834 / 0.075469 (-0.014636) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228217 / 1.841788 (-0.613571) | 18.076089 / 8.074308 (10.001780) | 13.814460 / 10.191392 (3.623068) | 0.144674 / 0.680424 (-0.535750) | 0.016749 / 0.534201 (-0.517452) | 0.332839 / 0.579283 (-0.246444) | 0.357211 / 0.434364 (-0.077153) | 0.380367 / 0.540337 (-0.159971) | 0.531177 / 1.386936 (-0.855759) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006006 / 0.011353 (-0.005347) | 0.003552 / 0.011008 (-0.007456) | 0.061822 / 0.038508 (0.023313) | 0.057724 / 0.023109 (0.034615) | 0.462326 / 0.275898 (0.186428) | 0.492842 / 0.323480 (0.169362) | 0.004833 / 0.007986 (-0.003152) | 0.002847 / 0.004328 (-0.001481) | 0.062278 / 0.004250 (0.058028) | 0.046754 / 0.037052 (0.009702) | 0.464185 / 0.258489 (0.205696) | 0.496416 / 0.293841 (0.202576) | 0.028949 / 0.128546 (-0.099597) | 0.008038 / 0.075646 (-0.067608) | 0.067572 / 0.419271 (-0.351700) | 0.041176 / 0.043533 (-0.002356) | 0.460047 / 0.255139 (0.204908) | 0.482728 / 0.283200 (0.199528) | 0.020047 / 0.141683 (-0.121635) | 1.455958 / 1.452155 (0.003804) | 1.525730 / 1.492716 (0.033014) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283643 / 0.018006 (0.265637) | 0.443046 / 0.000490 (0.442556) | 0.041019 / 0.000200 (0.040819) | 0.000340 / 0.000054 (0.000286) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026229 / 0.037411 (-0.011182) | 0.081498 / 0.014526 (0.066972) | 0.091412 / 0.176557 (-0.085145) | 0.146621 / 0.737135 (-0.590514) | 0.092113 / 0.296338 (-0.204225) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463525 / 0.215209 (0.248315) | 4.629852 / 2.077655 (2.552198) | 2.564831 / 1.504120 (1.060711) | 2.386976 / 1.541195 (0.845781) | 2.457757 / 1.468490 (0.989266) | 0.507317 / 4.584777 (-4.077460) | 3.142418 / 3.745712 (-0.603294) | 2.851642 / 5.269862 (-2.418219) | 1.894444 / 4.565676 (-2.671233) | 0.058495 / 0.424275 (-0.365780) | 0.006453 / 0.007607 (-0.001154) | 0.545363 / 0.226044 (0.319319) | 5.448092 / 2.268929 (3.179164) | 2.996328 / 55.444624 (-52.448296) | 2.664666 / 6.876477 (-4.211811) | 2.832247 / 2.142072 (0.690174) | 0.597631 / 4.805227 (-4.207596) | 0.126101 / 6.500664 (-6.374563) | 0.062573 / 0.075469 (-0.012896) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.366502 / 1.841788 (-0.475286) | 18.872990 / 8.074308 (10.798682) | 14.892114 / 10.191392 (4.700722) | 0.146668 / 0.680424 (-0.533756) | 0.017876 / 0.534201 (-0.516325) | 0.338490 / 0.579283 (-0.240793) | 0.357471 / 0.434364 (-0.076893) | 0.398730 / 0.540337 (-0.141608) | 0.542464 / 1.386936 (-0.844472) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a6ff3e846d86814fa6962326e9346a4f1f1e8a80 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009132 / 0.011353 (-0.002221) | 0.005796 / 0.011008 (-0.005212) | 0.119495 / 0.038508 (0.080987) | 0.081708 / 0.023109 (0.058599) | 0.432940 / 0.275898 (0.157042) | 0.466793 / 0.323480 (0.143313) | 0.006464 / 0.007986 (-0.001521) | 0.004308 / 0.004328 (-0.000021) | 0.086344 / 0.004250 (0.082093) | 0.065987 / 0.037052 (0.028935) | 0.445213 / 0.258489 (0.186724) | 0.482405 / 0.293841 (0.188564) | 0.053553 / 0.128546 (-0.074993) | 0.015320 / 0.075646 (-0.060326) | 0.455669 / 0.419271 (0.036397) | 0.071619 / 0.043533 (0.028086) | 0.434843 / 0.255139 (0.179704) | 0.503224 / 0.283200 (0.220025) | 0.038280 / 0.141683 (-0.103403) | 1.901877 / 1.452155 (0.449722) | 2.040406 / 1.492716 (0.547690) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268275 / 0.018006 (0.250269) | 0.622795 / 0.000490 (0.622305) | 0.004572 / 0.000200 (0.004372) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032514 / 0.037411 (-0.004898) | 0.100619 / 0.014526 (0.086093) | 0.118407 / 0.176557 (-0.058149) | 0.190311 / 0.737135 (-0.546824) | 0.117160 / 0.296338 (-0.179178) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629836 / 0.215209 (0.414627) | 6.236124 / 2.077655 (4.158470) | 2.750775 / 1.504120 (1.246655) | 2.380111 / 1.541195 (0.838916) | 2.487279 / 1.468490 
(1.018789) | 0.849568 / 4.584777 (-3.735209) | 5.571308 / 3.745712 (1.825596) | 4.934114 / 5.269862 (-0.335747) | 3.205478 / 4.565676 (-1.360198) | 0.104804 / 0.424275 (-0.319471) | 0.009856 / 0.007607 (0.002248) | 0.753352 / 0.226044 (0.527308) | 7.523482 / 2.268929 (5.254554) | 3.660088 / 55.444624 (-51.784537) | 2.726493 / 6.876477 (-4.149984) | 3.011344 / 2.142072 (0.869271) | 1.093410 / 4.805227 (-3.711817) | 0.229758 / 6.500664 (-6.270906) | 0.081516 / 0.075469 (0.006047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.700199 / 1.841788 (-0.141588) | 25.238736 / 8.074308 (17.164428) | 23.188131 / 10.191392 (12.996739) | 0.257862 / 0.680424 (-0.422562) | 0.028885 / 0.534201 (-0.505316) | 0.510693 / 0.579283 (-0.068590) | 0.648474 / 0.434364 (0.214110) | 0.576314 / 0.540337 (0.035976) | 0.800606 / 1.386936 (-0.586330) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009426 / 0.011353 (-0.001927) | 0.006205 / 0.011008 (-0.004803) | 0.083947 / 0.038508 (0.045438) | 0.089164 / 0.023109 (0.066055) | 0.540500 / 0.275898 (0.264602) | 0.578825 / 0.323480 (0.255345) | 0.006792 / 0.007986 (-0.001194) | 0.005125 / 0.004328 (0.000797) | 0.083284 / 0.004250 (0.079034) | 0.067539 / 0.037052 (0.030487) | 0.544330 / 0.258489 (0.285841) | 0.593836 / 0.293841 (0.299995) | 0.050647 / 0.128546 (-0.077899) | 0.014688 / 0.075646 (-0.060959) | 0.095977 / 0.419271 (-0.323295) | 0.062326 / 0.043533 (0.018793) | 0.536096 / 0.255139 (0.280957) | 0.578691 / 0.283200 (0.295492) | 0.035488 / 0.141683 (-0.106194) | 1.911145 / 1.452155 (0.458990) | 1.977647 / 1.492716 (0.484931) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.368365 / 0.018006 (0.350359) | 0.609836 / 0.000490 (0.609346) | 0.054720 / 0.000200 (0.054520) | 0.000465 / 0.000054 (0.000411) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036057 / 0.037411 (-0.001355) | 0.126434 / 0.014526 (0.111908) | 0.124740 / 0.176557 (-0.051817) | 0.198907 / 0.737135 (-0.538228) | 0.138201 / 0.296338 (-0.158137) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.684814 / 0.215209 (0.469605) | 6.738182 / 2.077655 (4.660527) | 3.231054 / 1.504120 (1.726934) | 2.889550 / 1.541195 (1.348355) | 2.933985 / 1.468490 (1.465495) | 0.867176 / 4.584777 (-3.717601) | 5.465475 / 3.745712 (1.719763) | 4.928370 / 5.269862 (-0.341492) | 3.126382 / 4.565676 (-1.439294) | 0.129673 / 0.424275 (-0.294603) | 0.009755 / 0.007607 (0.002148) | 0.797860 / 0.226044 (0.571816) | 8.003178 / 2.268929 (5.734250) | 4.081658 / 55.444624 (-51.362966) | 3.303837 / 6.876477 (-3.572640) | 3.574577 / 2.142072 (1.432505) | 1.064674 / 4.805227 (-3.740554) | 0.232894 / 6.500664 (-6.267770) | 0.082298 / 0.075469 (0.006829) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.858701 / 1.841788 (0.016913) | 25.839794 / 8.074308 (17.765485) | 24.291425 / 10.191392 (14.100033) | 0.250181 / 0.680424 (-0.430243) | 0.034479 / 0.534201 (-0.499722) | 0.540754 / 0.579283 (-0.038529) | 0.615996 / 0.434364 (0.181632) | 0.631499 / 0.540337 (0.091161) | 0.838719 / 1.386936 (-0.548217) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0b6bb2f0e7a460d4ed04855eafe1184a7ce7c09c \"CML watermark\")\n"
] | 2023-08-29T12:29:47 | 2023-08-29T13:04:59 | 2023-08-29T12:52:48 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6189/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6189",
"html_url": "https://github.com/huggingface/datasets/pull/6189",
"diff_url": "https://github.com/huggingface/datasets/pull/6189.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6189.patch",
"merged_at": "2023-08-29T12:52:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6188/comments | https://api.github.com/repos/huggingface/datasets/issues/6188/events | https://github.com/huggingface/datasets/issues/6188 | 1,870,987,640 | I_kwDODunzps5vhQF4 | 6,188 | [Feature Request] Check the length of batch before writing so that empty batch is allowed | {
"login": "namespace-Pt",
"id": 61188463,
"node_id": "MDQ6VXNlcjYxMTg4NDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/61188463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/namespace-Pt",
"html_url": "https://github.com/namespace-Pt",
"followers_url": "https://api.github.com/users/namespace-Pt/followers",
"following_url": "https://api.github.com/users/namespace-Pt/following{/other_user}",
"gists_url": "https://api.github.com/users/namespace-Pt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/namespace-Pt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/namespace-Pt/subscriptions",
"organizations_url": "https://api.github.com/users/namespace-Pt/orgs",
"repos_url": "https://api.github.com/users/namespace-Pt/repos",
"events_url": "https://api.github.com/users/namespace-Pt/events{/privacy}",
"received_events_url": "https://api.github.com/users/namespace-Pt/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2023-08-29T06:37:34 | 2023-08-29T06:37:34 | null | NONE | null | ### Use Case
I use `dataset.map(process_fn, batched=True)` to process the dataset, with data **augmentations or filtering**. However, when all examples within a batch are filtered out, i.e. **an empty batch is returned**, the following error is thrown:
```
ValueError: Schema and number of arrays unequal
```
This is because the empty batch does not comply with the schema of the other batches. I think an empty batch should be allowed to facilitate coding (one does not need to manually assign an empty list for every key).
A simple fix is to check the length of `batch` before writing:
```
if len(batch):
writer.write_batch(batch)
```
instead of
https://github.com/huggingface/datasets/blob/74d60213dcbd7c99484c62ce1d3dfd90a1df0770/src/datasets/arrow_dataset.py#L3493
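For illustration, a toy reproduction of the failure mode as I read it (the data and column names are made up):
```
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"], "label": [0, 1, 0]})

# Every example is "filtered out" and no keys are assigned, so the empty
# dict that reaches the writer reportedly triggers
# "ValueError: Schema and number of arrays unequal".
ds = ds.map(lambda batch: {}, batched=True, remove_columns=ds.column_names)
```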
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6188/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6187/comments | https://api.github.com/repos/huggingface/datasets/issues/6187/events | https://github.com/huggingface/datasets/issues/6187 | 1,870,936,143 | I_kwDODunzps5vhDhP | 6,187 | Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi! You can load this dataset with:\r\n```python\r\ndata_files = {\r\n \"train\": \"/content/PUBHEALTH/train.tsv\",\r\n \"validation\": \"/content/PUBHEALTH/dev.tsv\",\r\n \"test\": \"/content/PUBHEALTH/test.tsv\",\r\n}\r\n\r\ntsv_datasets_reloaded = load_dataset(\"csv\", data_files=data_files, sep=\"\\t\")\r\n```\r\n\r\nTo support your `load_dataset` call, defining aliases for the packaged builders, as suggested in https://github.com/huggingface/datasets/issues/5625, must be implemented. We can consider adding this feature if more people request it.\r\n \r\n(Also answered on the Discord [here](https://discord.com/channels/879548962464493619/1145956791134470224/1146071491260186744))"
] | 2023-08-29T05:49:56 | 2023-08-29T16:21:45 | null | NONE | null | ### Describe the bug
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-48-6a7b3e847019>](https://localhost:8080/#) in <cell line: 7>()
5 }
6
----> 7 csv_datasets_reloaded = load_dataset("tsv", data_files=data_files)
8 csv_datasets_reloaded
2 frames
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1489 raise e1 from None
1490 if isinstance(e1, FileNotFoundError):
-> 1491 raise FileNotFoundError(
1492 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1493 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Dataset 'tsv' doesn't exist on the Hub
```
### Steps to reproduce the bug
```
data_files = {
"train": "/content/PUBHEALTH/train.tsv",
"validation": "/content/PUBHEALTH/dev.tsv",
"test": "/content/PUBHEALTH/test.tsv",
}
tsv_datasets_reloaded = load_dataset("tsv", data_files=data_files)
tsv_datasets_reloaded
```
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-48-6a7b3e847019> in <cell line: 7>()
5 }
6
----> 7 csv_datasets_reloaded = load_dataset("tsv", data_files=data_files)
8 csv_datasets_reloaded
2 frames
/usr/local/lib/python3.10/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1489 raise e1 from None
1490 if isinstance(e1, FileNotFoundError):
-> 1491 raise FileNotFoundError(
1492 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1493 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Dataset 'tsv' doesn't exist on the Hub
```
### Expected behavior
Load the data and push it to the Hub; see the sketch below.
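A sketch of the full intended flow, reusing the `csv`-builder workaround from the comments above (the repo id is a placeholder):
```
from datasets import load_dataset

data_files = {
    "train": "/content/PUBHEALTH/train.tsv",
    "validation": "/content/PUBHEALTH/dev.tsv",
    "test": "/content/PUBHEALTH/test.tsv",
}
ds = load_dataset("csv", data_files=data_files, sep="\t")
ds.push_to_hub("your-username/pubhealth")  # placeholder repo id
```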
### Environment info
Jupyter notebook, RTX 3090 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6187/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6186/comments | https://api.github.com/repos/huggingface/datasets/issues/6186/events | https://github.com/huggingface/datasets/issues/6186 | 1,869,431,457 | I_kwDODunzps5vbUKh | 6,186 | Feature request: add code example of multi-GPU processing | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"That'd be a great idea! @mariosasko or @lhoestq, would it be possible to fix the code snippet or do you have another suggested way for doing this?"
] | 2023-08-28T10:00:59 | 2023-08-29T17:39:03 | null | CONTRIBUTOR | null | ### Feature request
Would be great to add a code example of how to do multi-GPU processing with 🤗 Datasets in the documentation. cc @stevhliu
Currently the docs have a small [section](https://huggingface.co/docs/datasets/v2.3.2/en/process#map) on this, saying "your big GPU call goes here"; however, it didn't work for me out of the box.
Let's say you have a PyTorch model that can do translation, and you have multiple GPUs. In that case, you'd like to duplicate the model on each GPU, each processing (translating) a chunk of the data in parallel.
Here's how I tried to do that:
```
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from multiprocess import set_start_method
import torch
import os
dataset = load_dataset("mlfoundations/datacomp_small")
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
# put model on each available GPU
# also, should I do it like this or use nn.DataParallel?
model.to("cuda:0")
model.to("cuda:1")
set_start_method("spawn")
def translate_captions(batch, rank):
os.environ["CUDA_VISIBLE_DEVICES"] = str(rank % torch.cuda.device_count())
texts = batch["text"]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(model.device)
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"], max_length=30
)
translated_texts = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
batch["translated_text"] = translated_texts
return batch
updated_dataset = dataset.map(translate_captions, with_rank=True, num_proc=2, batched=True, batch_size=256)
```
I've personally tried running this script on a machine with 2 A100 GPUs.
## Error 1
Running the code snippet above from the terminal (`python script.py`) resulted in the following error:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 125, in _main
prepare(preparation_data)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 289, in run_path
return _run_module_code(code, init_globals, run_name,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 96, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/niels/python_projects/datacomp/datasets_multi_gpu.py", line 16, in <module>
set_start_method("spawn")
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 247, in set_start_method
raise RuntimeError('context has already been set')
RuntimeError: context has already been set
```
## Error 2
Then, based on [this Stackoverflow answer](https://stackoverflow.com/a/71616344/7762882), I wrapped the `set_start_method("spawn")` call in a try/except block.
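Roughly like this (a sketch; `set_start_method` raises `RuntimeError` when a start method has already been set):
```
try:
    set_start_method("spawn")
except RuntimeError:
    pass
```
This resulted in the following error: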
```
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/dataset_dict.py", line 817, in <dictcomp>
k: dataset.map(
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2926, in map
with Pool(nb_of_missing_shards, initargs=initargs, initializer=initializer) as pool:
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 215, in __init__
self._repopulate_pool()
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 329, in _repopulate_pool_static
w.start()
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/process.py", line 121, in start
self._popen = self._Popen(self)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 288, in _Popen
return Popen(process_obj)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 154, in get_preparation_data
_check_not_importing_main()
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 134, in _check_not_importing_main
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
```
So then I put the last line under an `if __name__ == '__main__':` block. The code snippet then seemed to work, but it appeared to leverage only a single GPU (based on monitoring `nvidia-smi`):
```
Mon Aug 28 12:19:24 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01 Driver Version: 515.65.01 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100-SXM... On | 00000000:01:00.0 Off | 0 |
| N/A 55C P0 76W / 275W | 8747MiB / 81920MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA A100-SXM... On | 00000000:47:00.0 Off | 0 |
| N/A 67C P0 274W / 275W | 59835MiB / 81920MiB | 100% Default |
| | | Disabled |
```
Both GPUs should have roughly equal usage, but I've consistently noticed that the last GPU is used far more than the others. This made me think that `os.environ["CUDA_VISIBLE_DEVICES"] = str(rank % torch.cuda.device_count())` might not work inside a Python script, especially when done after importing PyTorch. A hedged sketch of an alternative pattern follows below.
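For completeness, here is a minimal sketch of what the intended pattern could look like; this is not a verified fix, and the checkpoint name, data file, and `"text"` column are assumptions for illustration. Instead of mutating `CUDA_VISIBLE_DEVICES` after torch is imported, each worker moves the model to the GPU matching its rank, and the pool is only created under the `__main__` guard:
```python
# Sketch only - checkpoint, data file, and column names are placeholders.
from multiprocess import set_start_method
import torch
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "facebook/nllb-200-distilled-600M"  # assumed NLLB checkpoint (lang_code_to_id is used above)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def translate_captions(batch, rank):
    # `with_rank=True` passes the worker rank; use it to pick a device
    device = f"cuda:{rank % torch.cuda.device_count()}"
    model.to(device)  # moved once per batch here for simplicity
    inputs = tokenizer(batch["text"], padding=True, truncation=True, return_tensors="pt").to(device)
    tokens = model.generate(
        **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"], max_length=30
    )
    batch["translated_text"] = tokenizer.batch_decode(tokens, skip_special_tokens=True)
    return batch

if __name__ == "__main__":
    dataset = load_dataset("json", data_files="captions.json", split="train")  # placeholder data
    set_start_method("spawn")  # set exactly once, before any pool is created
    updated_dataset = dataset.map(
        translate_captions, with_rank=True, num_proc=torch.cuda.device_count(),
        batched=True, batch_size=256,
    )
```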
### Motivation
Would be great to clarify how to do multi-GPU data processing.
### Your contribution
If my code snippet can be fixed, I can contribute it to the docs :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6186/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6185/comments | https://api.github.com/repos/huggingface/datasets/issues/6185/events | https://github.com/huggingface/datasets/issues/6185 | 1,868,077,748 | I_kwDODunzps5vWJq0 | 6,185 | Error in saving the PIL image into *.arrow files using datasets.arrow_writer | {
"login": "HaozheZhao",
"id": 14247682,
"node_id": "MDQ6VXNlcjE0MjQ3Njgy",
"avatar_url": "https://avatars.githubusercontent.com/u/14247682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaozheZhao",
"html_url": "https://github.com/HaozheZhao",
"followers_url": "https://api.github.com/users/HaozheZhao/followers",
"following_url": "https://api.github.com/users/HaozheZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/HaozheZhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HaozheZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HaozheZhao/subscriptions",
"organizations_url": "https://api.github.com/users/HaozheZhao/orgs",
"repos_url": "https://api.github.com/users/HaozheZhao/repos",
"events_url": "https://api.github.com/users/HaozheZhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/HaozheZhao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"You can cast the `input_image` column to the `Image` type to fix the issue:\r\n```python\r\nds.cast_column(\"input_image\", datasets.Image())\r\n```"
] | 2023-08-26T12:15:57 | 2023-08-29T14:49:58 | null | NONE | null | ### Describe the bug
I am using the `ArrowWriter` from `datasets.arrow_writer` to save JSON-style data as Arrow files. The dictionary contains a feature called "image", which is a list of PIL.Image objects.
I am saving the JSON using the following script:
```
from datasets.arrow_writer import ArrowWriter

def save_to_arrow(path, temp):
    with ArrowWriter(path=path, writer_batch_size=20) as writer:
        writer.write_batch(temp)
        writer.finalize()
```
However, when I attempt to restore the dataset by loading the Arrow file with the ```Dataset.from_file(path)``` function, there is an issue with the PIL.Image objects in the dataset. The list of PIL.Images appears as follows rather than as normal PIL.Image objects:
![1693051705440](https://github.com/huggingface/datasets/assets/14247682/03b204c2-d0fa-4d19-beff-6f4d7b83c848)
### Steps to reproduce the bug
1. Store the JSON data as Arrow files:
```
from datasets.arrow_writer import ArrowWriter

def save_to_arrow(path, temp):
    with ArrowWriter(path=path, writer_batch_size=20) as writer:
        writer.write_batch(temp)
        writer.finalize()

save_to_arrow(path, json_file)
```
2. Try to load the Arrow file into a `Dataset` object using ```Dataset.from_file(path)```.
### Expected behavior
I expect the contained "image" feature to be saved as a list of PIL.Image objects in the Arrow file, so that I can restore the dataset from the file. A sketch of a possible fix is below.
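A hedged sketch of one possible fix (the `"text"` field and schema are hypothetical): declaring explicit features, including `datasets.Image()`, when constructing the writer should let the images be encoded and round-trip, instead of being stored as object reprs:
```python
# Sketch under the assumption that the batch dict has an "image" key
# (PIL images) and a "text" key; the explicit Image() feature tells the
# writer how to encode the images.
from datasets import Features, Image, Value
from datasets.arrow_writer import ArrowWriter

features = Features({"image": Image(), "text": Value("string")})  # hypothetical schema

def save_to_arrow(path, batch):
    with ArrowWriter(path=path, features=features, writer_batch_size=20) as writer:
        writer.write_batch(batch)
        writer.finalize()
```
The comments below also suggest casting the column after loading with `ds.cast_column(..., datasets.Image())`.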
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.17
- Python version: 3.8.17
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.4.4 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6185/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6184/comments | https://api.github.com/repos/huggingface/datasets/issues/6184/events | https://github.com/huggingface/datasets/issues/6184 | 1,867,766,143 | I_kwDODunzps5vU9l_ | 6,184 | Map cache does not detect function changes in another module | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | null | [] | null | [
"This issue is a duplicate of https://github.com/huggingface/datasets/issues/3297. This is a limitation of `dill`, a package we use for caching (non-`__main__` module objects are serialized by reference). You can find more info about it here: https://github.com/uqfoundation/dill/issues/424.\r\n\r\nIn your case, moving \r\n```\r\ndata = datasets.load_dataset('json', data_files=['/tmp/test.json'], split='train')\r\ndata = data.map(transform)\r\n``` \r\nto `test.py` and setting `transform.__module__ = None` at the end of `dataset.py` should fix the issue.",
"I understand this may be a limitation of an upstream tool, but for a user for datasets this is very annoying, as when you have dozens of different datasets with different preprocessing functions you can't really move them all into the same file. It may be worth seeing if there is a way to specialize the dependency (eg. subclass it) and enforce behaviors that makes sense for your product.\r\n\r\nI was able to work around this for now by setting `__module__ = None`. If such workarounds are required for now it may be better to document it somewhere than a single obscure issue from a long time ago.\r\n\r\nAs this is a duplicate issue I'm closing it.\r\n\r\nI have another issue with the cache https://github.com/huggingface/datasets/issues/6179 can you take a look?"
] | 2023-08-25T22:59:14 | 2023-08-29T20:57:07 | 2023-08-29T20:56:49 | NONE | null | ```python
# dataset.py
import os
import datasets
if not os.path.exists('/tmp/test.json'):
    with open('/tmp/test.json', 'w') as file:
        file.write('[{"text": "hello"}]')

def transform(example):
    text = example['text']
    # text += ' world'
    return {'text': text}
data = datasets.load_dataset('json', data_files=['/tmp/test.json'], split='train')
data = data.map(transform)
```
```python
# test.py
import dataset
print(next(iter(dataset.data)))
```
Initialize cache
```
python3 test.py
# {'text': 'hello'}
```
Edit dataset.py, uncomment the commented line, and run again
```
python3 test.py
# {'text': 'hello'}
# expected: {'text': 'hello world'}
```
Clear cache and run again
```
rm -rf ~/.cache/huggingface/datasets/*
python3 test.py
# {'text': 'hello world'}
```
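A hedged workaround sketch, based on the maintainers' reply in the comments: `dill` serializes non-`__main__` functions by reference, so clearing the function's `__module__` forces serialization by value and makes edits show up in the cache fingerprint:
```python
# dataset.py (workaround sketch, per the maintainers' suggestion)
def transform(example):
    text = example['text']
    return {'text': text}

transform.__module__ = None  # force dill to serialize by value
```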
If the two files are combined instead, changes to the function are detected correctly (and the workaround sketch above forces the same behavior across modules). But in any realistic codebase, code will be modularized into separate files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6184/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6183/comments | https://api.github.com/repos/huggingface/datasets/issues/6183/events | https://github.com/huggingface/datasets/issues/6183 | 1,867,743,276 | I_kwDODunzps5vU4As | 6,183 | Load dataset with non-existent file | {
"login": "freQuensy23-coder",
"id": 64750224,
"node_id": "MDQ6VXNlcjY0NzUwMjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/64750224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freQuensy23-coder",
"html_url": "https://github.com/freQuensy23-coder",
"followers_url": "https://api.github.com/users/freQuensy23-coder/followers",
"following_url": "https://api.github.com/users/freQuensy23-coder/following{/other_user}",
"gists_url": "https://api.github.com/users/freQuensy23-coder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/freQuensy23-coder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freQuensy23-coder/subscriptions",
"organizations_url": "https://api.github.com/users/freQuensy23-coder/orgs",
"repos_url": "https://api.github.com/users/freQuensy23-coder/repos",
"events_url": "https://api.github.com/users/freQuensy23-coder/events{/privacy}",
"received_events_url": "https://api.github.com/users/freQuensy23-coder/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Same problem",
"This was fixed in https://github.com/huggingface/datasets/pull/6155, which will be included in the next release (or you can install `datasets` from source to use it immediately)."
] | 2023-08-25T22:21:22 | 2023-08-29T13:26:22 | 2023-08-29T13:26:22 | NONE | null | ### Describe the bug
When loading a dataset from `datasets` and passing a wrong path to the JSON data file, the error message does not mention a "wrong path" or "file does not exist" -
```SchemaInferenceError: Please pass `features` or at least one example when writing data```
### Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('json', data_files='/home/alexey/unreal_file.json')
```
### Expected behavior
Raise an OS `FileNotFoundError` or a custom error with an informative message.
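For illustration, a minimal sketch of the expected fail-fast behavior (same path as in the repro above; this is a user-side pre-check, not the library's implementation):
```python
import os
from datasets import load_dataset

path = '/home/alexey/unreal_file.json'
if not os.path.exists(path):
    raise FileNotFoundError(f"Data file not found: {path}")
data = load_dataset('json', data_files=path)
```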
### Environment info
```
# packages in environment at /home/alexey/.conda/envs/alex_LoRA:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
accelerate 0.21.0 pypi_0 pypi
aiohttp 3.8.5 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
antlr4-python3-runtime 4.9.3 pypi_0 pypi
appdirs 1.4.4 pypi_0 pypi
asttokens 2.0.5 pyhd3eb1b0_0
async-timeout 4.0.3 pypi_0 pypi
attrs 23.1.0 pypi_0 pypi
backcall 0.2.0 pyhd3eb1b0_0
bitsandbytes 0.41.1 pypi_0 pypi
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.05.30 h06a4308_0
certifi 2023.7.22 pypi_0 pypi
charset-normalizer 3.2.0 pypi_0 pypi
click 8.1.6 pypi_0 pypi
cmake 3.27.2 pypi_0 pypi
comm 0.1.2 py310h06a4308_0
contourpy 1.1.0 pypi_0 pypi
cycler 0.11.0 pypi_0 pypi
datasets 2.14.4 pypi_0 pypi
debugpy 1.6.7 py310h6a678d5_0
decorator 5.1.1 pyhd3eb1b0_0
dill 0.3.7 pypi_0 pypi
docker-pycreds 0.4.0 pypi_0 pypi
executing 0.8.3 pyhd3eb1b0_0
filelock 3.12.2 pypi_0 pypi
fire 0.5.0 pypi_0 pypi
fonttools 4.42.0 pypi_0 pypi
frozenlist 1.4.0 pypi_0 pypi
fsspec 2023.6.0 pypi_0 pypi
gitdb 4.0.10 pypi_0 pypi
gitpython 3.1.32 pypi_0 pypi
huggingface-hub 0.16.4 pypi_0 pypi
idna 3.4 pypi_0 pypi
ipykernel 6.25.0 py310h2f386ee_0
ipython 8.12.2 py310h06a4308_0
ipython-genutils 0.2.0 pypi_0 pypi
ipywidgets 8.0.4 py310h06a4308_0
jedi 0.18.1 py310h06a4308_1
jinja2 3.1.2 pypi_0 pypi
jsonschema 4.19.0 pypi_0 pypi
jsonschema-specifications 2023.7.1 pypi_0 pypi
jupyter_client 8.1.0 py310h06a4308_0
jupyter_core 5.3.0 py310h06a4308_0
jupyterlab_widgets 3.0.5 py310h06a4308_0
kiwisolver 1.4.4 pypi_0 pypi
ld_impl_linux-64 2.38 h1181459_1
libffi 3.3 he6710b0_2
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libsodium 1.0.18 h7b6447c_0
libstdcxx-ng 11.2.0 h1234567_1
libuuid 1.41.5 h5eee18b_0
lightning-utilities 0.9.0 pypi_0 pypi
lit 16.0.6 pypi_0 pypi
markupsafe 2.1.3 pypi_0 pypi
matplotlib 3.7.2 pypi_0 pypi
matplotlib-inline 0.1.6 py310h06a4308_0
mpmath 1.3.0 pypi_0 pypi
multidict 6.0.4 pypi_0 pypi
multiprocess 0.70.15 pypi_0 pypi
nbformat 4.2.0 pypi_0 pypi
ncurses 6.4 h6a678d5_0
nest-asyncio 1.5.6 py310h06a4308_0
networkx 3.1 pypi_0 pypi
numpy 1.25.2 pypi_0 pypi
nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi
nvidia-cuda-cupti-cu11 11.7.101 pypi_0 pypi
nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi
nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi
nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi
nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
nvidia-curand-cu11 10.2.10.91 pypi_0 pypi
nvidia-cusolver-cu11 11.4.0.1 pypi_0 pypi
nvidia-cusparse-cu11 11.7.4.91 pypi_0 pypi
nvidia-nccl-cu11 2.14.3 pypi_0 pypi
nvidia-nvtx-cu11 11.7.91 pypi_0 pypi
omegaconf 2.3.0 pypi_0 pypi
openssl 1.1.1v h7f8727e_0
packaging 23.0 py310h06a4308_0
pandas 2.0.3 pypi_0 pypi
parso 0.8.3 pyhd3eb1b0_0
pathtools 0.1.2 pypi_0 pypi
peft 0.4.0 pypi_0 pypi
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 10.0.0 pypi_0 pypi
pip 23.2.1 py310h06a4308_0
platformdirs 2.5.2 py310h06a4308_0
plotly 5.16.1 pypi_0 pypi
prompt-toolkit 3.0.36 py310h06a4308_0
protobuf 4.24.0 pypi_0 pypi
psutil 5.9.0 py310h5eee18b_0
ptyprocess 0.7.0 pyhd3eb1b0_2
pure_eval 0.2.2 pyhd3eb1b0_0
pyarrow 12.0.1 pypi_0 pypi
pygments 2.15.1 py310h06a4308_1
pyparsing 3.0.9 pypi_0 pypi
python 3.10.0 h12debd9_5
python-dateutil 2.8.2 pyhd3eb1b0_0
pytorch-lightning 2.0.6 pypi_0 pypi
pytz 2023.3 pypi_0 pypi
pyyaml 6.0.1 pypi_0 pypi
pyzmq 25.1.0 py310h6a678d5_0
readline 8.2 h5eee18b_0
referencing 0.30.2 pypi_0 pypi
regex 2023.8.8 pypi_0 pypi
requests 2.31.0 pypi_0 pypi
rpds-py 0.9.2 pypi_0 pypi
safetensors 0.3.2 pypi_0 pypi
scipy 1.11.1 pypi_0 pypi
sentencepiece 0.1.99 pypi_0 pypi
sentry-sdk 1.29.2 pypi_0 pypi
setproctitle 1.3.2 pypi_0 pypi
setuptools 68.0.0 py310h06a4308_0
six 1.16.0 pyhd3eb1b0_1
smmap 5.0.0 pypi_0 pypi
sqlite 3.41.2 h5eee18b_0
stack_data 0.2.0 pyhd3eb1b0_0
sympy 1.12 pypi_0 pypi
tenacity 8.2.3 pypi_0 pypi
termcolor 2.3.0 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
tokenizers 0.13.3 pypi_0 pypi
torch 2.0.1 pypi_0 pypi
torchmetrics 1.0.3 pypi_0 pypi
tornado 6.3.2 py310h5eee18b_0
tqdm 4.66.1 pypi_0 pypi
traitlets 5.7.1 py310h06a4308_0
transformers 4.31.0 pypi_0 pypi
triton 2.0.0 pypi_0 pypi
typing-extensions 4.7.1 pypi_0 pypi
tzdata 2023.3 pypi_0 pypi
urllib3 2.0.4 pypi_0 pypi
wandb 0.15.8 pypi_0 pypi
wcwidth 0.2.5 pyhd3eb1b0_0
wheel 0.38.4 py310h06a4308_0
widgetsnbextension 4.0.5 py310h06a4308_0
xxhash 3.3.0 pypi_0 pypi
xz 5.4.2 h5eee18b_0
yarl 1.9.2 pypi_0 pypi
zeromq 4.3.4 h2531618_0
zlib 1.2.13 h5eee18b_0
active environment : None
user config file : /home/alexey/.condarc
populated config files :
conda version : 23.1.0
conda-build version : 3.22.0
python version : 3.9.13.final.0
virtual packages : __archspec=1=x86_64
__cuda=12.0=0
__glibc=2.35=0
__linux=5.19.0=0
__unix=0=0
base environment : /opt/anaconda/anaconda3 (read only)
conda av data dir : /opt/anaconda/anaconda3/etc/conda
conda av metadata url : None
channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /opt/anaconda/anaconda3/pkgs
/home/alexey/.conda/pkgs
envs directories : /home/alexey/.conda/envs
/opt/anaconda/anaconda3/envs
platform : linux-64
user-agent : conda/23.1.0 requests/2.31.0 CPython/3.9.13 Linux/5.19.0-46-generic ubuntu/22.04.2 glibc/2.35
UID:GID : 1009:1009
netrc file : /home/alexey/.netrc
offline mode : False
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6183/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6183/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6182/comments | https://api.github.com/repos/huggingface/datasets/issues/6182/events | https://github.com/huggingface/datasets/issues/6182 | 1,867,203,131 | I_kwDODunzps5vS0I7 | 6,182 | Loading Meteor metric in HF evaluate module crashes due to datasets import issue | {
"login": "dsashulya",
"id": 42322648,
"node_id": "MDQ6VXNlcjQyMzIyNjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/42322648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsashulya",
"html_url": "https://github.com/dsashulya",
"followers_url": "https://api.github.com/users/dsashulya/followers",
"following_url": "https://api.github.com/users/dsashulya/following{/other_user}",
"gists_url": "https://api.github.com/users/dsashulya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsashulya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsashulya/subscriptions",
"organizations_url": "https://api.github.com/users/dsashulya/orgs",
"repos_url": "https://api.github.com/users/dsashulya/repos",
"events_url": "https://api.github.com/users/dsashulya/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsashulya/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Our minimal Python version requirement is 3.8, so we dropped `importlib_metadata`. \r\n\r\nFeel free to open a PR in the `evaluate` repo to replace the problematic import with\r\n```python\r\nif PY_VERSION < version.parse(\"3.8\"):\r\n import importlib_metadata\r\nelse:\r\n import importlib.metadata as importlib_metadata\r\n```"
] | 2023-08-25T14:54:06 | 2023-08-25T17:36:33 | null | NONE | null | ### Describe the bug
When using Python 3.9, loading the Meteor metric with the ```evaluate``` module crashes on a non-existent import from ```datasets.config``` in ```datasets v2.14```.
### Steps to reproduce the bug
```
from evaluate import load
meteor = load("meteor")
```
produces the following error:
```
from datasets.config import importlib_metadata, version
ImportError: cannot import name 'importlib_metadata' from 'datasets.config' (<path_to_project>/venv/lib/python3.9/site-packages/datasets/config.py)
```
### Expected behavior
```datasets``` v2.10 has the following workaround in ```config.py```:
```
if PY_VERSION < version.parse("3.8"):
    import importlib_metadata
else:
    import importlib.metadata as importlib_metadata
```
However, it's absent in v2.14, which might be the cause of the issue.
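A hedged sketch of a local patch on the `evaluate` side (the import location is assumed from the traceback): fall back to the stdlib module when `datasets.config` no longer re-exports it:
```python
# Fallback sketch: works whether or not datasets.config
# still re-exports importlib_metadata.
try:
    from datasets.config import importlib_metadata
except ImportError:
    import importlib.metadata as importlib_metadata
```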
### Environment info
- `datasets` version: 2.14.4
- Platform: macOS-13.5-arm64-arm-64bit
- Python version: 3.9.6
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- Evaluate version: 0.4.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6182/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6181/comments | https://api.github.com/repos/huggingface/datasets/issues/6181/events | https://github.com/huggingface/datasets/pull/6181 | 1,867,035,522 | PR_kwDODunzps5Yy2VO | 6,181 | Fix import in `image_load` doc | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009072 / 0.011353 (-0.002281) | 0.006088 / 0.011008 (-0.004920) | 0.134520 / 0.038508 (0.096011) | 0.074935 / 0.023109 (0.051826) | 0.480364 / 0.275898 (0.204466) | 0.568943 / 0.323480 (0.245464) | 0.006821 / 0.007986 (-0.001164) | 0.004941 / 0.004328 (0.000612) | 0.083274 / 0.004250 (0.079023) | 0.061080 / 0.037052 (0.024028) | 0.478960 / 0.258489 (0.220471) | 0.542720 / 0.293841 (0.248879) | 0.058023 / 0.128546 (-0.070524) | 0.020120 / 0.075646 (-0.055526) | 0.492680 / 0.419271 (0.073409) | 0.079118 / 0.043533 (0.035585) | 0.425087 / 0.255139 (0.169948) | 0.603228 / 0.283200 (0.320028) | 0.044102 / 0.141683 (-0.097581) | 2.138848 / 1.452155 (0.686693) | 2.454418 / 1.492716 (0.961702) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255745 / 0.018006 (0.237738) | 0.587559 / 0.000490 (0.587069) | 0.006872 / 0.000200 (0.006672) | 0.000111 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038480 / 0.037411 (0.001069) | 0.115479 / 0.014526 (0.100953) | 0.138395 / 0.176557 (-0.038161) | 0.218007 / 0.737135 (-0.519129) | 0.128866 / 0.296338 (-0.167472) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.756089 / 0.215209 (0.540880) | 7.754631 / 2.077655 (5.676976) | 3.615716 / 1.504120 (2.111596) | 2.994327 / 1.541195 (1.453132) | 3.196169 / 1.468490 
(1.727679) | 1.066937 / 4.584777 (-3.517840) | 6.079595 / 3.745712 (2.333883) | 5.455523 / 5.269862 (0.185661) | 3.559036 / 4.565676 (-1.006640) | 0.113044 / 0.424275 (-0.311231) | 0.011401 / 0.007607 (0.003794) | 0.961475 / 0.226044 (0.735430) | 8.664226 / 2.268929 (6.395298) | 4.203804 / 55.444624 (-51.240821) | 3.122437 / 6.876477 (-3.754039) | 3.549168 / 2.142072 (1.407095) | 1.213035 / 4.805227 (-3.592193) | 0.274725 / 6.500664 (-6.225939) | 0.094499 / 0.075469 (0.019030) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.770299 / 1.841788 (-0.071489) | 27.644591 / 8.074308 (19.570283) | 23.239529 / 10.191392 (13.048137) | 0.270185 / 0.680424 (-0.410238) | 0.033563 / 0.534201 (-0.500638) | 0.588301 / 0.579283 (0.009018) | 0.658746 / 0.434364 (0.224382) | 0.644476 / 0.540337 (0.104139) | 0.834314 / 1.386936 (-0.552622) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011021 / 0.011353 (-0.000332) | 0.006719 / 0.011008 (-0.004289) | 0.087669 / 0.038508 (0.049161) | 0.088905 / 0.023109 (0.065796) | 0.594230 / 0.275898 (0.318332) | 0.620929 / 0.323480 (0.297449) | 0.006776 / 0.007986 (-0.001210) | 0.004725 / 0.004328 (0.000396) | 0.082006 / 0.004250 (0.077756) | 0.072164 / 0.037052 (0.035111) | 0.604489 / 0.258489 (0.346000) | 0.598520 / 0.293841 (0.304679) | 0.057534 / 0.128546 (-0.071013) | 0.016799 / 0.075646 (-0.058847) | 0.115029 / 0.419271 (-0.304243) | 0.070013 / 0.043533 (0.026481) | 0.561773 / 0.255139 (0.306634) | 0.624097 / 0.283200 (0.340897) | 0.043518 / 0.141683 (-0.098164) | 2.017089 / 1.452155 (0.564934) | 2.188159 / 1.492716 (0.695443) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.386476 / 0.018006 (0.368469) | 0.633195 / 0.000490 (0.632705) | 0.028469 / 0.000200 (0.028269) | 0.000159 / 0.000054 (0.000104) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.040020 / 0.037411 (0.002609) | 0.112927 / 0.014526 (0.098402) | 0.143663 / 0.176557 (-0.032894) | 0.205931 / 0.737135 (-0.531204) | 0.177814 / 0.296338 (-0.118524) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.711542 / 0.215209 (0.496333) | 7.518535 / 2.077655 (5.440880) | 3.714930 / 1.504120 (2.210810) | 3.031999 / 1.541195 (1.490804) | 3.328497 / 1.468490 (1.860006) | 0.858912 / 4.584777 (-3.725865) | 6.108384 / 3.745712 (2.362672) | 5.184329 / 5.269862 (-0.085532) | 3.622589 / 4.565676 (-0.943087) | 0.096933 / 0.424275 (-0.327342) | 0.008727 / 0.007607 (0.001120) | 0.830102 / 0.226044 (0.604057) | 8.331959 / 2.268929 (6.063030) | 4.165106 / 55.444624 (-51.279519) | 3.477003 / 6.876477 (-3.399474) | 3.794225 / 2.142072 (1.652153) | 1.237667 / 4.805227 (-3.567561) | 0.233731 / 6.500664 (-6.266933) | 0.076682 / 0.075469 (0.001213) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.944813 / 1.841788 (0.103026) | 27.666997 / 8.074308 (19.592689) | 24.562677 / 10.191392 (14.371285) | 0.279320 / 0.680424 (-0.401104) | 0.037802 / 0.534201 (-0.496399) | 0.553579 / 0.579283 (-0.025704) | 0.718229 / 0.434364 (0.283865) | 0.623456 / 0.540337 (0.083118) | 0.856777 / 1.386936 (-0.530159) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4c2a9d31d5e720e85976af8b457d45755a7e6911 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007716 / 0.011353 (-0.003637) | 0.004624 / 0.011008 (-0.006384) | 0.099987 / 0.038508 (0.061479) | 0.082651 / 0.023109 (0.059542) | 0.376277 / 0.275898 (0.100379) | 0.401210 / 0.323480 (0.077730) | 0.004528 / 0.007986 (-0.003458) | 0.003763 / 0.004328 (-0.000566) | 0.076274 / 0.004250 (0.072024) | 0.062933 / 0.037052 (0.025881) | 0.393881 / 0.258489 (0.135392) | 0.431695 / 0.293841 (0.137854) | 0.036795 / 0.128546 (-0.091752) | 0.009935 / 0.075646 (-0.065712) | 0.343638 / 0.419271 (-0.075634) | 0.061456 / 0.043533 (0.017923) | 0.372235 / 0.255139 (0.117096) | 0.412994 / 0.283200 (0.129794) | 0.027993 / 0.141683 (-0.113690) | 1.798018 / 1.452155 (0.345863) | 1.898502 / 1.492716 (0.405786) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237330 / 0.018006 (0.219324) | 0.494956 / 0.000490 (0.494467) | 0.003543 / 0.000200 (0.003343) | 0.000113 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034084 / 0.037411 (-0.003327) | 0.093407 / 0.014526 (0.078881) | 0.108378 / 0.176557 (-0.068179) | 0.177016 / 0.737135 (-0.560119) | 0.108622 / 0.296338 (-0.187716) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456449 / 0.215209 (0.241240) | 4.522405 / 2.077655 (2.444750) | 2.206564 / 1.504120 (0.702444) | 1.994185 / 1.541195 (0.452990) | 2.083785 / 1.468490 
(0.615295) | 0.563352 / 4.584777 (-4.021425) | 4.207295 / 3.745712 (0.461583) | 3.783061 / 5.269862 (-1.486800) | 2.372874 / 4.565676 (-2.192802) | 0.066907 / 0.424275 (-0.357368) | 0.009013 / 0.007607 (0.001406) | 0.537852 / 0.226044 (0.311808) | 5.349928 / 2.268929 (3.081000) | 2.759409 / 55.444624 (-52.685215) | 2.345972 / 6.876477 (-4.530505) | 2.630559 / 2.142072 (0.488486) | 0.681134 / 4.805227 (-4.124093) | 0.157898 / 6.500664 (-6.342766) | 0.071638 / 0.075469 (-0.003831) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.470730 / 1.841788 (-0.371058) | 22.479252 / 8.074308 (14.404944) | 16.543080 / 10.191392 (6.351688) | 0.191943 / 0.680424 (-0.488481) | 0.021641 / 0.534201 (-0.512560) | 0.467571 / 0.579283 (-0.111712) | 0.486728 / 0.434364 (0.052364) | 0.543359 / 0.540337 (0.003021) | 0.733968 / 1.386936 (-0.652968) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008135 / 0.011353 (-0.003218) | 0.004662 / 0.011008 (-0.006347) | 0.077218 / 0.038508 (0.038710) | 0.092220 / 0.023109 (0.069111) | 0.481219 / 0.275898 (0.205321) | 0.530373 / 0.323480 (0.206893) | 0.006418 / 0.007986 (-0.001568) | 0.003924 / 0.004328 (-0.000404) | 0.076681 / 0.004250 (0.072431) | 0.068693 / 0.037052 (0.031641) | 0.491938 / 0.258489 (0.233449) | 0.540501 / 0.293841 (0.246660) | 0.038106 / 0.128546 (-0.090441) | 0.010035 / 0.075646 (-0.065611) | 0.084502 / 0.419271 (-0.334769) | 0.057234 / 0.043533 (0.013701) | 0.483239 / 0.255139 (0.228100) | 0.510026 / 0.283200 (0.226826) | 0.028770 / 0.141683 (-0.112913) | 1.854937 / 1.452155 (0.402783) | 1.948268 / 1.492716 (0.455552) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.380192 / 0.018006 (0.362186) | 0.523318 / 0.000490 (0.522828) | 0.051153 / 0.000200 (0.050953) | 0.000691 / 0.000054 (0.000637) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036838 / 0.037411 (-0.000573) | 0.109202 / 0.014526 (0.094676) | 0.124110 / 0.176557 (-0.052446) | 0.186717 / 0.737135 (-0.550419) | 0.124088 / 0.296338 (-0.172250) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506411 / 0.215209 (0.291202) | 5.045421 / 2.077655 (2.967766) | 2.711911 / 1.504120 (1.207791) | 2.531668 / 1.541195 (0.990474) | 2.635680 / 1.468490 (1.167190) | 0.578395 / 4.584777 (-4.006382) | 4.206891 / 3.745712 (0.461178) | 3.851063 / 5.269862 (-1.418799) | 2.388327 / 4.565676 (-2.177350) | 0.068041 / 0.424275 (-0.356234) | 0.008769 / 0.007607 (0.001162) | 0.594170 / 0.226044 (0.368125) | 5.953138 / 2.268929 (3.684210) | 3.290586 / 55.444624 (-52.154038) | 2.877086 / 6.876477 (-3.999390) | 3.138600 / 2.142072 (0.996528) | 0.686393 / 4.805227 (-4.118834) | 0.156541 / 6.500664 (-6.344123) | 0.071514 / 0.075469 (-0.003955) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.613514 / 1.841788 (-0.228274) | 23.593185 / 8.074308 (15.518877) | 17.146647 / 10.191392 (6.955255) | 0.177230 / 0.680424 (-0.503193) | 0.023661 / 0.534201 (-0.510540) | 0.472367 / 0.579283 (-0.106916) | 0.484614 / 0.434364 (0.050250) | 0.547150 / 0.540337 (0.006813) | 0.843726 / 1.386936 (-0.543210) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#dba64cd381bfe384cb64ab9826f6054a0f1df1ff \"CML watermark\")\n"
] | 2023-08-25T13:12:19 | 2023-08-25T16:12:46 | 2023-08-25T16:02:24 | CONTRIBUTOR | null | Reported on [Discord](https://discord.com/channels/879548962464493619/1144295822209581168/1144295822209581168) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6181/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6181",
"html_url": "https://github.com/huggingface/datasets/pull/6181",
"diff_url": "https://github.com/huggingface/datasets/pull/6181.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6181.patch",
"merged_at": "2023-08-25T16:02:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6180/comments | https://api.github.com/repos/huggingface/datasets/issues/6180/events | https://github.com/huggingface/datasets/pull/6180 | 1,867,032,578 | PR_kwDODunzps5Yy1r- | 6,180 | Use `hf-internal-testing` repos for hosting test dataset repos | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006505 / 0.011353 (-0.004847) | 0.003950 / 0.011008 (-0.007058) | 0.084554 / 0.038508 (0.046046) | 0.074376 / 0.023109 (0.051267) | 0.350184 / 0.275898 (0.074286) | 0.380704 / 0.323480 (0.057224) | 0.004011 / 0.007986 (-0.003975) | 0.003890 / 0.004328 (-0.000438) | 0.065483 / 0.004250 (0.061232) | 0.054912 / 0.037052 (0.017860) | 0.359586 / 0.258489 (0.101097) | 0.403360 / 0.293841 (0.109519) | 0.030614 / 0.128546 (-0.097932) | 0.008530 / 0.075646 (-0.067117) | 0.288220 / 0.419271 (-0.131052) | 0.052270 / 0.043533 (0.008737) | 0.352557 / 0.255139 (0.097418) | 0.380509 / 0.283200 (0.097309) | 0.025513 / 0.141683 (-0.116170) | 1.488469 / 1.452155 (0.036315) | 1.559182 / 1.492716 (0.066466) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266163 / 0.018006 (0.248157) | 0.596345 / 0.000490 (0.595855) | 0.004368 / 0.000200 (0.004168) | 0.000211 / 0.000054 (0.000156) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027137 / 0.037411 (-0.010274) | 0.082251 / 0.014526 (0.067725) | 0.094745 / 0.176557 (-0.081812) | 0.148756 / 0.737135 (-0.588379) | 0.094580 / 0.296338 (-0.201758) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383506 / 0.215209 (0.168297) | 3.823147 / 2.077655 (1.745493) | 1.859627 / 1.504120 (0.355507) | 1.687969 / 1.541195 (0.146775) | 1.720786 / 1.468490 
(0.252296) | 0.476552 / 4.584777 (-4.108225) | 3.539558 / 3.745712 (-0.206154) | 3.209032 / 5.269862 (-2.060830) | 1.999643 / 4.565676 (-2.566034) | 0.056484 / 0.424275 (-0.367791) | 0.007443 / 0.007607 (-0.000164) | 0.456089 / 0.226044 (0.230044) | 4.562522 / 2.268929 (2.293593) | 2.348286 / 55.444624 (-53.096338) | 1.984323 / 6.876477 (-4.892154) | 2.148988 / 2.142072 (0.006915) | 0.570761 / 4.805227 (-4.234466) | 0.131439 / 6.500664 (-6.369225) | 0.059752 / 0.075469 (-0.015717) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276803 / 1.841788 (-0.564985) | 19.406812 / 8.074308 (11.332504) | 13.979088 / 10.191392 (3.787696) | 0.157418 / 0.680424 (-0.523006) | 0.018051 / 0.534201 (-0.516150) | 0.392307 / 0.579283 (-0.186976) | 0.406603 / 0.434364 (-0.027760) | 0.458450 / 0.540337 (-0.081888) | 0.622569 / 1.386936 (-0.764367) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006552 / 0.011353 (-0.004800) | 0.004060 / 0.011008 (-0.006948) | 0.063522 / 0.038508 (0.025014) | 0.072537 / 0.023109 (0.049428) | 0.398452 / 0.275898 (0.122554) | 0.422059 / 0.323480 (0.098579) | 0.005577 / 0.007986 (-0.002409) | 0.003413 / 0.004328 (-0.000916) | 0.064095 / 0.004250 (0.059845) | 0.056883 / 0.037052 (0.019831) | 0.407119 / 0.258489 (0.148630) | 0.435889 / 0.293841 (0.142048) | 0.031549 / 0.128546 (-0.096998) | 0.008418 / 0.075646 (-0.067228) | 0.070315 / 0.419271 (-0.348957) | 0.047828 / 0.043533 (0.004295) | 0.398705 / 0.255139 (0.143566) | 0.416986 / 0.283200 (0.133786) | 0.022304 / 0.141683 (-0.119379) | 1.512597 / 1.452155 (0.060442) | 1.570588 / 1.492716 (0.077871) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.295100 / 0.018006 (0.277094) | 0.541883 / 0.000490 (0.541393) | 0.007375 / 0.000200 (0.007175) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030877 / 0.037411 (-0.006534) | 0.090807 / 0.014526 (0.076281) | 0.106155 / 0.176557 (-0.070402) | 0.155546 / 0.737135 (-0.581589) | 0.103847 / 0.296338 (-0.192492) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441176 / 0.215209 (0.225967) | 4.401025 / 2.077655 (2.323371) | 2.394764 / 1.504120 (0.890644) | 2.226434 / 1.541195 (0.685239) | 2.247248 / 1.468490 (0.778758) | 0.489149 / 4.584777 (-4.095628) | 3.642468 / 3.745712 (-0.103244) | 3.235597 / 5.269862 (-2.034265) | 1.992660 / 4.565676 (-2.573016) | 0.057457 / 0.424275 (-0.366818) | 0.007192 / 0.007607 (-0.000415) | 0.515978 / 0.226044 (0.289934) | 5.147728 / 2.268929 (2.878800) | 2.837394 / 55.444624 (-52.607230) | 2.505753 / 6.876477 (-4.370723) | 2.653090 / 2.142072 (0.511018) | 0.583274 / 4.805227 (-4.221954) | 0.132116 / 6.500664 (-6.368548) | 0.058794 / 0.075469 (-0.016675) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.331630 / 1.841788 (-0.510158) | 20.056890 / 8.074308 (11.982582) | 14.950561 / 10.191392 (4.759169) | 0.165449 / 0.680424 (-0.514975) | 0.020161 / 0.534201 (-0.514040) | 0.395791 / 0.579283 (-0.183492) | 0.415631 / 0.434364 (-0.018733) | 0.474440 / 0.540337 (-0.065898) | 0.643060 / 1.386936 (-0.743876) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#712185ed5e9cb3ff6d6528b4528882d51935f334 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007440 / 0.011353 (-0.003913) | 0.004456 / 0.011008 (-0.006552) | 0.099498 / 0.038508 (0.060990) | 0.077579 / 0.023109 (0.054470) | 0.374934 / 0.275898 (0.099036) | 0.409590 / 0.323480 (0.086110) | 0.005876 / 0.007986 (-0.002110) | 0.003642 / 0.004328 (-0.000687) | 0.076781 / 0.004250 (0.072531) | 0.060185 / 0.037052 (0.023133) | 0.374762 / 0.258489 (0.116273) | 0.445608 / 0.293841 (0.151767) | 0.036557 / 0.128546 (-0.091990) | 0.009941 / 0.075646 (-0.065706) | 0.345214 / 0.419271 (-0.074058) | 0.061912 / 0.043533 (0.018379) | 0.378346 / 0.255139 (0.123207) | 0.415275 / 0.283200 (0.132076) | 0.027396 / 0.141683 (-0.114287) | 1.776602 / 1.452155 (0.324447) | 1.827683 / 1.492716 (0.334967) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235227 / 0.018006 (0.217221) | 0.491846 / 0.000490 (0.491356) | 0.004987 / 0.000200 (0.004787) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032517 / 0.037411 (-0.004894) | 0.099217 / 0.014526 (0.084691) | 0.109749 / 0.176557 (-0.066807) | 0.176190 / 0.737135 (-0.560946) | 0.109868 / 0.296338 (-0.186471) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455188 / 0.215209 (0.239979) | 4.560143 / 2.077655 (2.482489) | 2.249928 / 1.504120 (0.745809) | 2.032808 / 1.541195 (0.491614) | 2.090096 / 1.468490 
(0.621605) | 0.567813 / 4.584777 (-4.016964) | 4.338299 / 3.745712 (0.592587) | 3.701589 / 5.269862 (-1.568273) | 2.404805 / 4.565676 (-2.160871) | 0.067931 / 0.424275 (-0.356344) | 0.009011 / 0.007607 (0.001404) | 0.542565 / 0.226044 (0.316521) | 5.406578 / 2.268929 (3.137650) | 2.773508 / 55.444624 (-52.671116) | 2.402926 / 6.876477 (-4.473550) | 2.679318 / 2.142072 (0.537246) | 0.683781 / 4.805227 (-4.121446) | 0.155970 / 6.500664 (-6.344694) | 0.070108 / 0.075469 (-0.005361) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541583 / 1.841788 (-0.300205) | 21.592562 / 8.074308 (13.518254) | 16.426868 / 10.191392 (6.235476) | 0.168618 / 0.680424 (-0.511806) | 0.021560 / 0.534201 (-0.512641) | 0.467062 / 0.579283 (-0.112221) | 0.479968 / 0.434364 (0.045604) | 0.540747 / 0.540337 (0.000410) | 0.775502 / 1.386936 (-0.611434) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008632 / 0.011353 (-0.002721) | 0.004523 / 0.011008 (-0.006485) | 0.075814 / 0.038508 (0.037306) | 0.087096 / 0.023109 (0.063987) | 0.482136 / 0.275898 (0.206238) | 0.529969 / 0.323480 (0.206489) | 0.006882 / 0.007986 (-0.001103) | 0.003720 / 0.004328 (-0.000609) | 0.076232 / 0.004250 (0.071981) | 0.069307 / 0.037052 (0.032254) | 0.491554 / 0.258489 (0.233065) | 0.528989 / 0.293841 (0.235148) | 0.042219 / 0.128546 (-0.086327) | 0.009717 / 0.075646 (-0.065929) | 0.103006 / 0.419271 (-0.316266) | 0.060377 / 0.043533 (0.016844) | 0.484454 / 0.255139 (0.229315) | 0.536072 / 0.283200 (0.252872) | 0.027482 / 0.141683 (-0.114201) | 1.844677 / 1.452155 (0.392522) | 2.001800 / 1.492716 (0.509083) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252367 / 0.018006 (0.234361) | 0.483601 / 0.000490 (0.483111) | 0.007445 / 0.000200 (0.007245) | 0.000163 / 0.000054 (0.000108) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036463 / 0.037411 (-0.000948) | 0.108837 / 0.014526 (0.094311) | 0.122256 / 0.176557 (-0.054300) | 0.186455 / 0.737135 (-0.550681) | 0.122270 / 0.296338 (-0.174069) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506291 / 0.215209 (0.291082) | 5.038044 / 2.077655 (2.960389) | 2.751017 / 1.504120 (1.246897) | 2.553655 / 1.541195 (1.012460) | 2.612724 / 1.468490 (1.144234) | 0.581755 / 4.584777 (-4.003022) | 4.376012 / 3.745712 (0.630300) | 3.749755 / 5.269862 (-1.520107) | 2.394059 / 4.565676 (-2.171618) | 0.068727 / 0.424275 (-0.355548) | 0.008714 / 0.007607 (0.001107) | 0.607371 / 0.226044 (0.381326) | 6.062053 / 2.268929 (3.793125) | 3.278378 / 55.444624 (-52.166247) | 2.866417 / 6.876477 (-4.010060) | 3.056150 / 2.142072 (0.914077) | 0.695090 / 4.805227 (-4.110137) | 0.155274 / 6.500664 (-6.345390) | 0.071106 / 0.075469 (-0.004363) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.584552 / 1.841788 (-0.257236) | 23.092569 / 8.074308 (15.018260) | 17.381905 / 10.191392 (7.190513) | 0.206535 / 0.680424 (-0.473888) | 0.025401 / 0.534201 (-0.508800) | 0.514297 / 0.579283 (-0.064986) | 0.507487 / 0.434364 (0.073123) | 0.566883 / 0.540337 (0.026545) | 0.811074 / 1.386936 (-0.575862) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5fb01295bff860f09a4c466e745f3840f851efdc \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008400 / 0.011353 (-0.002953) | 0.004872 / 0.011008 (-0.006136) | 0.104434 / 0.038508 (0.065926) | 0.074411 / 0.023109 (0.051302) | 0.395970 / 0.275898 (0.120072) | 0.431661 / 0.323480 (0.108181) | 0.005365 / 0.007986 (-0.002621) | 0.005495 / 0.004328 (0.001167) | 0.081255 / 0.004250 (0.077004) | 0.057141 / 0.037052 (0.020089) | 0.397242 / 0.258489 (0.138753) | 0.456052 / 0.293841 (0.162211) | 0.048362 / 0.128546 (-0.080184) | 0.014077 / 0.075646 (-0.061569) | 0.351128 / 0.419271 (-0.068143) | 0.067842 / 0.043533 (0.024309) | 0.372820 / 0.255139 (0.117681) | 0.407917 / 0.283200 (0.124717) | 0.037707 / 0.141683 (-0.103976) | 1.677136 / 1.452155 (0.224981) | 1.764614 / 1.492716 (0.271897) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269850 / 0.018006 (0.251844) | 0.601458 / 0.000490 (0.600969) | 0.006500 / 0.000200 (0.006300) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030340 / 0.037411 (-0.007072) | 0.098041 / 0.014526 (0.083515) | 0.107270 / 0.176557 (-0.069287) | 0.173502 / 0.737135 (-0.563633) | 0.113296 / 0.296338 (-0.183043) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.575788 / 0.215209 (0.360579) | 5.723878 / 2.077655 (3.646223) | 2.326339 / 1.504120 (0.822219) | 2.130667 / 1.541195 (0.589472) | 2.080885 / 1.468490 
(0.612395) | 0.800936 / 4.584777 (-3.783841) | 5.227888 / 3.745712 (1.482176) | 4.592647 / 5.269862 (-0.677214) | 2.935765 / 4.565676 (-1.629911) | 0.095909 / 0.424275 (-0.328367) | 0.008763 / 0.007607 (0.001156) | 0.697362 / 0.226044 (0.471318) | 6.968105 / 2.268929 (4.699176) | 3.129070 / 55.444624 (-52.315554) | 2.554818 / 6.876477 (-4.321658) | 2.776005 / 2.142072 (0.633933) | 1.017064 / 4.805227 (-3.788163) | 0.211552 / 6.500664 (-6.289112) | 0.072132 / 0.075469 (-0.003338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517072 / 1.841788 (-0.324716) | 23.737742 / 8.074308 (15.663433) | 22.236447 / 10.191392 (12.045055) | 0.235408 / 0.680424 (-0.445016) | 0.031889 / 0.534201 (-0.502312) | 0.458997 / 0.579283 (-0.120286) | 0.610513 / 0.434364 (0.176149) | 0.536508 / 0.540337 (-0.003830) | 0.750137 / 1.386936 (-0.636799) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008696 / 0.011353 (-0.002657) | 0.005374 / 0.011008 (-0.005634) | 0.077974 / 0.038508 (0.039466) | 0.083471 / 0.023109 (0.060362) | 0.498890 / 0.275898 (0.222992) | 0.517570 / 0.323480 (0.194090) | 0.006523 / 0.007986 (-0.001462) | 0.004315 / 0.004328 (-0.000013) | 0.082262 / 0.004250 (0.078012) | 0.064828 / 0.037052 (0.027776) | 0.473101 / 0.258489 (0.214612) | 0.534172 / 0.293841 (0.240331) | 0.051884 / 0.128546 (-0.076662) | 0.015191 / 0.075646 (-0.060455) | 0.084307 / 0.419271 (-0.334965) | 0.066050 / 0.043533 (0.022517) | 0.518007 / 0.255139 (0.262868) | 0.511145 / 0.283200 (0.227946) | 0.045302 / 0.141683 (-0.096381) | 1.670973 / 1.452155 (0.218818) | 1.829225 / 1.492716 (0.336509) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.436537 / 0.018006 (0.418531) | 0.608380 / 0.000490 (0.607890) | 0.075211 / 0.000200 (0.075011) | 0.000733 / 0.000054 (0.000679) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039117 / 0.037411 (0.001706) | 0.103525 / 0.014526 (0.088999) | 0.124413 / 0.176557 (-0.052144) | 0.192352 / 0.737135 (-0.544783) | 0.120379 / 0.296338 (-0.175959) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.673338 / 0.215209 (0.458129) | 6.799435 / 2.077655 (4.721780) | 3.600913 / 1.504120 (2.096793) | 2.881008 / 1.541195 (1.339814) | 2.667154 / 1.468490 (1.198664) | 0.868775 / 4.584777 (-3.716002) | 5.517063 / 3.745712 (1.771351) | 4.646706 / 5.269862 (-0.623156) | 2.914825 / 4.565676 (-1.650852) | 0.098784 / 0.424275 (-0.325491) | 0.011504 / 0.007607 (0.003897) | 0.724233 / 0.226044 (0.498188) | 7.311045 / 2.268929 (5.042117) | 3.685490 / 55.444624 (-51.759135) | 2.892360 / 6.876477 (-3.984117) | 3.253189 / 2.142072 (1.111117) | 0.983065 / 4.805227 (-3.822162) | 0.201097 / 6.500664 (-6.299567) | 0.068020 / 0.075469 (-0.007450) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.793904 / 1.841788 (-0.047884) | 24.451356 / 8.074308 (16.377048) | 21.697191 / 10.191392 (11.505799) | 0.228545 / 0.680424 (-0.451879) | 0.034600 / 0.534201 (-0.499601) | 0.483253 / 0.579283 (-0.096030) | 0.615103 / 0.434364 (0.180739) | 0.564600 / 0.540337 (0.024262) | 0.799688 / 1.386936 (-0.587248) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#74d60213dcbd7c99484c62ce1d3dfd90a1df0770 \"CML watermark\")\n"
] | 2023-08-25T13:10:26 | 2023-08-25T16:58:02 | 2023-08-25T16:46:22 | CONTRIBUTOR | null | Use `hf-internal-testing` for hosting instead of the maintainers' dataset repos. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6180/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6180",
"html_url": "https://github.com/huggingface/datasets/pull/6180",
"diff_url": "https://github.com/huggingface/datasets/pull/6180.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6180.patch",
"merged_at": "2023-08-25T16:46:22"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6179/comments | https://api.github.com/repos/huggingface/datasets/issues/6179/events | https://github.com/huggingface/datasets/issues/6179 | 1,867,009,016 | I_kwDODunzps5vSEv4 | 6,179 | Map cache with tokenizer | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"https://github.com/huggingface/datasets/issues/5147 may be a solution, by passing in the tokenizer in a fn_kwargs and ignoring it in the fingerprint calculations",
"I have a similar issue. I was using a Jupyter Notebook and every time I call the map function it performs tokenization from scratch again although the cache files of last run still exists. \r\n\r\nI ran with 20 processes and now in the cache folder there are two groups of cached results of tokenized dataset:\r\n\r\n```\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:56:46 2023 cache-1982fea76aa54a13_00001_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 13:02:08 2023 cache-1982fea76aa54a13_00004_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:56:40 2023 cache-1982fea76aa54a13_00005_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Sat Aug 26 12:50:59 2023 cache-1982fea76aa54a13_00006_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:57:37 2023 cache-1982fea76aa54a13_00007_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:57:31 2023 cache-1982fea76aa54a13_00008_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:59:47 2023 cache-1982fea76aa54a13_00010_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Sat Aug 26 12:59:44 2023 cache-1982fea76aa54a13_00011_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Sat Aug 26 12:55:24 2023 cache-1982fea76aa54a13_00012_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Sat Aug 26 12:56:21 2023 cache-1982fea76aa54a13_00013_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:57:24 2023 cache-1982fea76aa54a13_00014_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 13:00:48 2023 cache-1982fea76aa54a13_00015_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:56:56 2023 cache-1982fea76aa54a13_00017_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:56:54 2023 cache-1982fea76aa54a13_00018_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:57:27 2023 cache-1982fea76aa54a13_00019_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:15:40 2023 cache-454431f643cdc5e8_00000_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:16:46 2023 cache-454431f643cdc5e8_00001_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:14:53 2023 cache-454431f643cdc5e8_00002_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:13:10 2023 cache-454431f643cdc5e8_00003_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:13:04 2023 cache-454431f643cdc5e8_00004_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:16:42 2023 cache-454431f643cdc5e8_00005_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Wed Aug 23 19:01:29 2023 cache-454431f643cdc5e8_00006_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:16:41 2023 cache-454431f643cdc5e8_00007_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:14:04 2023 cache-454431f643cdc5e8_00008_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:17:41 2023 cache-454431f643cdc5e8_00009_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:17:06 2023 cache-454431f643cdc5e8_00010_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Wed Aug 23 19:17:16 2023 cache-454431f643cdc5e8_00011_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Wed Aug 23 19:15:13 2023 
cache-454431f643cdc5e8_00012_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Wed Aug 23 19:16:01 2023 cache-454431f643cdc5e8_00013_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:16:35 2023 cache-454431f643cdc5e8_00014_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:16:20 2023 cache-454431f643cdc5e8_00015_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:14:48 2023 cache-454431f643cdc5e8_00016_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 18:59:32 2023 cache-454431f643cdc5e8_00017_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:17:58 2023 cache-454431f643cdc5e8_00018_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:15:25 2023 cache-454431f643cdc5e8_00019_of_00020.arrow\r\n```\r\n\r\ncan we specify the cache file for map so that it won't redo everything again?",
"@Luosuu [map](https://huggingface.co/docs/datasets/v2.14.4/en/package_reference/main_classes#datasets.Dataset.map) has cache_file_name parameter\r\n\r\nIn my case, I do want the cache to detect when the map function changes, so I can't pass a constant cache file name."
] | 2023-08-25T12:55:18 | 2023-08-26T22:08:07 | null | NONE | null | Similar issue to https://github.com/huggingface/datasets/issues/5985, but across different sessions rather than two calls in the same session.
Unlike that issue, explicitly calling `tokenizer(my_args)` before `map()` doesn't help, because the tokenizer is created with a different hash in each session to begin with.
Setup:
```
from transformers import AutoTokenizer
AutoTokenizer.from_pretrained('bert-base-uncased').save_pretrained("tok")
```
This prints a different value each time:
```
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps # Huggingface datasets
print(hash(dumps(AutoTokenizer.from_pretrained("tok"))))
```
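One possible workaround (a minimal sketch, not an official fix) is to pin `map`'s fingerprint yourself via its `new_fingerprint` parameter, deriving the value from things that should invalidate the cache, such as the map function's source and the tokenizer configuration, rather than from the tokenizer object's unstable hash. The sketch below assumes `tokenizer.init_kwargs` captures the tokenizer settings you care about, and a hypothetical `"text"` column:
```
import hashlib
import inspect

from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tok")

def tokenize(batch):
    # Hypothetical "text" column; adjust to your dataset's schema.
    return tokenizer(batch["text"], truncation=True)

# Stable across sessions, but changes when the function source or the
# tokenizer configuration changes (assumes init_kwargs reflects the
# settings that matter for your use case).
stable_fingerprint = hashlib.sha256(
    (inspect.getsource(tokenize) + repr(sorted(tokenizer.init_kwargs.items()))).encode()
).hexdigest()[:32]

ds = Dataset.from_dict({"text": ["hello world", "foo bar"]})
ds = ds.map(tokenize, batched=True, new_fingerprint=stable_fingerprint)
```
This trades the library's automatic change detection for an explicit cache key, so the cache is reused across sessions as long as the function source and tokenizer configuration are unchanged. | {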
"url": "https://api.github.com/repos/huggingface/datasets/issues/6179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6179/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6178/comments | https://api.github.com/repos/huggingface/datasets/issues/6178/events | https://github.com/huggingface/datasets/issues/6178 | 1,866,610,102 | I_kwDODunzps5vQjW2 | 6,178 | 'import datasets' throws "invalid syntax error" | {
"login": "elia-ashraf",
"id": 128580829,
"node_id": "U_kgDOB6n83Q",
"avatar_url": "https://avatars.githubusercontent.com/u/128580829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elia-ashraf",
"html_url": "https://github.com/elia-ashraf",
"followers_url": "https://api.github.com/users/elia-ashraf/followers",
"following_url": "https://api.github.com/users/elia-ashraf/following{/other_user}",
"gists_url": "https://api.github.com/users/elia-ashraf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elia-ashraf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elia-ashraf/subscriptions",
"organizations_url": "https://api.github.com/users/elia-ashraf/orgs",
"repos_url": "https://api.github.com/users/elia-ashraf/repos",
"events_url": "https://api.github.com/users/elia-ashraf/events{/privacy}",
"received_events_url": "https://api.github.com/users/elia-ashraf/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"This seems to be related to your environment and not the `datasets` code (e.g., this could happen when exposing the Python 3.9 site packages to a lower Python version (interpreter))"
] | 2023-08-25T08:35:14 | 2023-08-29T14:57:17 | null | NONE | null | ### Describe the bug
Hi,
I have been trying to import the `datasets` library, but I keep getting this error.
```
Traceback (most recent call last):
File /opt/local/jupyterhub/lib64/python3.9/site-packages/IPython/core/interactiveshell.py:3508 in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
Cell In[2], line 1
import datasets
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/__init__.py:22
from .arrow_dataset import Dataset
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/arrow_dataset.py:67
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/arrow_writer.py:27
from .features import Features, Image, Value
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/features/__init__.py:17
from .audio import Audio
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/features/audio.py:11
from ..download.streaming_download_manager import xopen, xsplitext
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/download/__init__.py:10
from .streaming_download_manager import StreamingDownloadManager
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/download/streaming_download_manager.py:18
from aiohttp.client_exceptions import ClientError
File /opt/local/jupyterhub/lib64/python3.9/site-packages/aiohttp/__init__.py:7
from .connector import * # noqa
File /opt/local/jupyterhub/lib64/python3.9/site-packages/aiohttp/connector.py:12
from .client import ClientRequest
File /opt/local/jupyterhub/lib64/python3.9/site-packages/aiohttp/client.py:144
yield from asyncio.async(resp.release(), loop=loop)
^
SyntaxError: invalid syntax
```
I have simply used these commands:
`import datasets`
and
`from datasets import load_dataset`
### Environment info
The library is installed on a virtual machine running JupyterHub. Although I have used this library multiple times before (on the same VM) to train/test ASR and other ML models, I had never encountered this error.
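For context, the failing line uses `asyncio.async`, which is invalid syntax on Python 3.7+ (where `async` is a reserved keyword) and only appears in very old `aiohttp` releases. A quick diagnostic sketch (a check under those assumptions, not a guaranteed fix) is to compare the running interpreter with the site-packages it imports from and the installed `aiohttp` version:
```
import importlib.metadata
import sys

# Which interpreter is actually running this notebook kernel?
print(sys.version)

# Which site-packages directories does it import from?
print([p for p in sys.path if "site-packages" in p])

# Very old aiohttp releases still contain `yield from asyncio.async(...)`,
# which is a SyntaxError on Python >= 3.7 (`async` is a reserved keyword).
print(importlib.metadata.version("aiohttp"))
```
If the reported `aiohttp` version is very old, upgrading it with `pip install -U aiohttp` (or recreating the environment) should let `import datasets` succeed. | {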
"url": "https://api.github.com/repos/huggingface/datasets/issues/6178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6178/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6177/comments | https://api.github.com/repos/huggingface/datasets/issues/6177/events | https://github.com/huggingface/datasets/pull/6177 | 1,865,490,962 | PR_kwDODunzps5Ytky- | 6,177 | Use object detection images from `huggingface/documentation-images` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005847 / 0.011353 (-0.005506) | 0.003488 / 0.011008 (-0.007521) | 0.079545 / 0.038508 (0.041037) | 0.055114 / 0.023109 (0.032005) | 0.312694 / 0.275898 (0.036796) | 0.338808 / 0.323480 (0.015329) | 0.004573 / 0.007986 (-0.003413) | 0.002818 / 0.004328 (-0.001510) | 0.062102 / 0.004250 (0.057852) | 0.044072 / 0.037052 (0.007019) | 0.317682 / 0.258489 (0.059192) | 0.354139 / 0.293841 (0.060298) | 0.026905 / 0.128546 (-0.101641) | 0.007990 / 0.075646 (-0.067656) | 0.260071 / 0.419271 (-0.159201) | 0.043658 / 0.043533 (0.000125) | 0.313828 / 0.255139 (0.058689) | 0.339678 / 0.283200 (0.056478) | 0.020076 / 0.141683 (-0.121607) | 1.446321 / 1.452155 (-0.005834) | 1.527046 / 1.492716 (0.034330) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197801 / 0.018006 (0.179795) | 0.432874 / 0.000490 (0.432385) | 0.004093 / 0.000200 (0.003893) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023505 / 0.037411 (-0.013906) | 0.072377 / 0.014526 (0.057852) | 0.081058 / 0.176557 (-0.095498) | 0.141628 / 0.737135 (-0.595507) | 0.081622 / 0.296338 (-0.214716) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395005 / 0.215209 (0.179795) | 3.949006 / 2.077655 (1.871352) | 1.934028 / 1.504120 (0.429908) | 1.756065 / 1.541195 (0.214871) | 1.778719 / 1.468490 
(0.310229) | 0.501279 / 4.584777 (-4.083498) | 3.032120 / 3.745712 (-0.713592) | 2.859751 / 5.269862 (-2.410110) | 1.885924 / 4.565676 (-2.679753) | 0.057236 / 0.424275 (-0.367039) | 0.006704 / 0.007607 (-0.000903) | 0.465794 / 0.226044 (0.239750) | 4.648622 / 2.268929 (2.379694) | 2.345649 / 55.444624 (-53.098975) | 1.981122 / 6.876477 (-4.895355) | 2.148235 / 2.142072 (0.006163) | 0.591466 / 4.805227 (-4.213761) | 0.125262 / 6.500664 (-6.375402) | 0.061305 / 0.075469 (-0.014164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243932 / 1.841788 (-0.597856) | 17.912110 / 8.074308 (9.837802) | 13.662097 / 10.191392 (3.470705) | 0.148051 / 0.680424 (-0.532373) | 0.016778 / 0.534201 (-0.517423) | 0.340342 / 0.579283 (-0.238941) | 0.351720 / 0.434364 (-0.082644) | 0.377837 / 0.540337 (-0.162501) | 0.521163 / 1.386936 (-0.865774) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006011 / 0.011353 (-0.005342) | 0.003549 / 0.011008 (-0.007459) | 0.063579 / 0.038508 (0.025071) | 0.056196 / 0.023109 (0.033087) | 0.448879 / 0.275898 (0.172981) | 0.491542 / 0.323480 (0.168062) | 0.004597 / 0.007986 (-0.003389) | 0.002790 / 0.004328 (-0.001539) | 0.063257 / 0.004250 (0.059006) | 0.045653 / 0.037052 (0.008600) | 0.459714 / 0.258489 (0.201225) | 0.491371 / 0.293841 (0.197530) | 0.028124 / 0.128546 (-0.100422) | 0.008016 / 0.075646 (-0.067630) | 0.069418 / 0.419271 (-0.349853) | 0.040393 / 0.043533 (-0.003140) | 0.450978 / 0.255139 (0.195839) | 0.472075 / 0.283200 (0.188875) | 0.020006 / 0.141683 (-0.121677) | 1.451946 / 1.452155 (-0.000209) | 1.513557 / 1.492716 (0.020840) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225416 / 0.018006 (0.207410) | 0.412287 / 0.000490 (0.411797) | 0.004075 / 0.000200 (0.003875) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025949 / 0.037411 (-0.011463) | 0.080633 / 0.014526 (0.066108) | 0.089960 / 0.176557 (-0.086597) | 0.144530 / 0.737135 (-0.592606) | 0.091427 / 0.296338 (-0.204911) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462311 / 0.215209 (0.247102) | 4.605063 / 2.077655 (2.527408) | 2.541083 / 1.504120 (1.036963) | 2.356341 / 1.541195 (0.815147) | 2.389824 / 1.468490 (0.921334) | 0.507397 / 4.584777 (-4.077380) | 3.079023 / 3.745712 (-0.666689) | 2.792025 / 5.269862 (-2.477837) | 1.846931 / 4.565676 (-2.718746) | 0.058422 / 0.424275 (-0.365853) | 0.006409 / 0.007607 (-0.001199) | 0.530648 / 0.226044 (0.304604) | 5.321030 / 2.268929 (3.052101) | 2.978335 / 55.444624 (-52.466289) | 2.641188 / 6.876477 (-4.235288) | 2.780450 / 2.142072 (0.638378) | 0.593864 / 4.805227 (-4.211363) | 0.125394 / 6.500664 (-6.375270) | 0.061432 / 0.075469 (-0.014037) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.337142 / 1.841788 (-0.504646) | 18.841575 / 8.074308 (10.767267) | 14.678622 / 10.191392 (4.487230) | 0.144491 / 0.680424 (-0.535933) | 0.018145 / 0.534201 (-0.516056) | 0.339376 / 0.579283 (-0.239907) | 0.339129 / 0.434364 (-0.095235) | 0.394842 / 0.540337 (-0.145495) | 0.547924 / 1.386936 (-0.839012) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#57af0ab30796df59d28bf933e756ffbe5f34db1e \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006478 / 0.011353 (-0.004875) | 0.003845 / 0.011008 (-0.007163) | 0.084179 / 0.038508 (0.045671) | 0.071327 / 0.023109 (0.048217) | 0.315206 / 0.275898 (0.039308) | 0.353477 / 0.323480 (0.029997) | 0.005267 / 0.007986 (-0.002719) | 0.003282 / 0.004328 (-0.001046) | 0.064062 / 0.004250 (0.059811) | 0.051940 / 0.037052 (0.014888) | 0.332004 / 0.258489 (0.073515) | 0.363199 / 0.293841 (0.069358) | 0.030546 / 0.128546 (-0.098000) | 0.008453 / 0.075646 (-0.067193) | 0.287636 / 0.419271 (-0.131636) | 0.051999 / 0.043533 (0.008466) | 0.325220 / 0.255139 (0.070081) | 0.355324 / 0.283200 (0.072125) | 0.023417 / 0.141683 (-0.118266) | 1.473370 / 1.452155 (0.021215) | 1.596903 / 1.492716 (0.104186) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212645 / 0.018006 (0.194638) | 0.463766 / 0.000490 (0.463276) | 0.002834 / 0.000200 (0.002634) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028424 / 0.037411 (-0.008987) | 0.082188 / 0.014526 (0.067662) | 0.777186 / 0.176557 (0.600629) | 0.218290 / 0.737135 (-0.518845) | 0.099098 / 0.296338 (-0.197240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.387138 / 0.215209 (0.171929) | 3.845655 / 2.077655 (1.768000) | 1.929812 / 1.504120 (0.425692) | 1.718263 / 1.541195 (0.177069) | 1.760933 / 1.468490 
(0.292443) | 0.475171 / 4.584777 (-4.109606) | 3.523366 / 3.745712 (-0.222346) | 3.167322 / 5.269862 (-2.102540) | 1.975164 / 4.565676 (-2.590513) | 0.056106 / 0.424275 (-0.368169) | 0.007448 / 0.007607 (-0.000159) | 0.459824 / 0.226044 (0.233779) | 4.590566 / 2.268929 (2.321638) | 2.377968 / 55.444624 (-53.066656) | 2.034052 / 6.876477 (-4.842425) | 2.224976 / 2.142072 (0.082904) | 0.575901 / 4.805227 (-4.229326) | 0.131546 / 6.500664 (-6.369118) | 0.059266 / 0.075469 (-0.016203) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254783 / 1.841788 (-0.587005) | 19.497795 / 8.074308 (11.423487) | 13.937672 / 10.191392 (3.746280) | 0.164092 / 0.680424 (-0.516332) | 0.017915 / 0.534201 (-0.516286) | 0.391430 / 0.579283 (-0.187853) | 0.403681 / 0.434364 (-0.030683) | 0.457711 / 0.540337 (-0.082626) | 0.620395 / 1.386936 (-0.766541) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006793 / 0.011353 (-0.004560) | 0.004101 / 0.011008 (-0.006907) | 0.064780 / 0.038508 (0.026272) | 0.071087 / 0.023109 (0.047977) | 0.401963 / 0.275898 (0.126065) | 0.433085 / 0.323480 (0.109605) | 0.005348 / 0.007986 (-0.002638) | 0.003289 / 0.004328 (-0.001039) | 0.065209 / 0.004250 (0.060958) | 0.054202 / 0.037052 (0.017150) | 0.405629 / 0.258489 (0.147140) | 0.440326 / 0.293841 (0.146485) | 0.032283 / 0.128546 (-0.096263) | 0.008510 / 0.075646 (-0.067137) | 0.071144 / 0.419271 (-0.348127) | 0.047414 / 0.043533 (0.003881) | 0.402065 / 0.255139 (0.146926) | 0.421217 / 0.283200 (0.138017) | 0.021924 / 0.141683 (-0.119759) | 1.490067 / 1.452155 (0.037913) | 1.539134 / 1.492716 (0.046417) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280072 / 0.018006 (0.262066) | 0.456130 / 0.000490 (0.455641) | 0.020926 / 0.000200 (0.020726) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032040 / 0.037411 (-0.005371) | 0.092343 / 0.014526 (0.077817) | 0.104866 / 0.176557 (-0.071690) | 0.156631 / 0.737135 (-0.580505) | 0.107203 / 0.296338 (-0.189136) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426268 / 0.215209 (0.211059) | 4.255539 / 2.077655 (2.177884) | 2.285077 / 1.504120 (0.780957) | 2.114277 / 1.541195 (0.573083) | 2.159242 / 1.468490 (0.690752) | 0.489421 / 4.584777 (-4.095356) | 3.630797 / 3.745712 (-0.114915) | 3.205238 / 5.269862 (-2.064624) | 1.985846 / 4.565676 (-2.579830) | 0.057436 / 0.424275 (-0.366839) | 0.007154 / 0.007607 (-0.000454) | 0.507294 / 0.226044 (0.281250) | 5.050105 / 2.268929 (2.781176) | 2.750474 / 55.444624 (-52.694151) | 2.404116 / 6.876477 (-4.472360) | 2.576483 / 2.142072 (0.434411) | 0.584909 / 4.805227 (-4.220318) | 0.130695 / 6.500664 (-6.369969) | 0.059743 / 0.075469 (-0.015726) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352702 / 1.841788 (-0.489086) | 19.687944 / 8.074308 (11.613636) | 14.991847 / 10.191392 (4.800455) | 0.185164 / 0.680424 (-0.495260) | 0.020314 / 0.534201 (-0.513887) | 0.395162 / 0.579283 (-0.184121) | 0.408917 / 0.434364 (-0.025447) | 0.467049 / 0.540337 (-0.073288) | 0.649209 / 1.386936 (-0.737727) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#885518608ceab83b7ed8ceba7a0b72bc68096026 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006142 / 0.011353 (-0.005211) | 0.003621 / 0.011008 (-0.007387) | 0.079880 / 0.038508 (0.041372) | 0.059283 / 0.023109 (0.036173) | 0.310971 / 0.275898 (0.035072) | 0.351620 / 0.323480 (0.028140) | 0.003453 / 0.007986 (-0.004532) | 0.003785 / 0.004328 (-0.000543) | 0.062395 / 0.004250 (0.058145) | 0.047614 / 0.037052 (0.010562) | 0.312688 / 0.258489 (0.054199) | 0.363762 / 0.293841 (0.069921) | 0.027051 / 0.128546 (-0.101495) | 0.007920 / 0.075646 (-0.067726) | 0.261080 / 0.419271 (-0.158192) | 0.044476 / 0.043533 (0.000943) | 0.312615 / 0.255139 (0.057476) | 0.343672 / 0.283200 (0.060472) | 0.022723 / 0.141683 (-0.118960) | 1.441449 / 1.452155 (-0.010706) | 1.509253 / 1.492716 (0.016536) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193171 / 0.018006 (0.175165) | 0.434771 / 0.000490 (0.434281) | 0.003114 / 0.000200 (0.002914) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024209 / 0.037411 (-0.013203) | 0.073891 / 0.014526 (0.059365) | 0.083497 / 0.176557 (-0.093060) | 0.144962 / 0.737135 (-0.592173) | 0.084594 / 0.296338 (-0.211745) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392512 / 0.215209 (0.177303) | 3.912692 / 2.077655 (1.835037) | 1.914010 / 1.504120 (0.409890) | 1.743827 / 1.541195 (0.202632) | 1.829244 / 1.468490 
(0.360753) | 0.497740 / 4.584777 (-4.087037) | 2.979222 / 3.745712 (-0.766490) | 2.849786 / 5.269862 (-2.420076) | 1.874411 / 4.565676 (-2.691265) | 0.057270 / 0.424275 (-0.367005) | 0.006673 / 0.007607 (-0.000934) | 0.460724 / 0.226044 (0.234679) | 4.600617 / 2.268929 (2.331689) | 2.333178 / 55.444624 (-53.111446) | 1.999902 / 6.876477 (-4.876575) | 2.170600 / 2.142072 (0.028528) | 0.587716 / 4.805227 (-4.217511) | 0.126374 / 6.500664 (-6.374290) | 0.061926 / 0.075469 (-0.013543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.229767 / 1.841788 (-0.612021) | 18.494462 / 8.074308 (10.420154) | 13.799801 / 10.191392 (3.608409) | 0.137952 / 0.680424 (-0.542472) | 0.017037 / 0.534201 (-0.517164) | 0.333252 / 0.579283 (-0.246031) | 0.357276 / 0.434364 (-0.077088) | 0.380069 / 0.540337 (-0.160268) | 0.526968 / 1.386936 (-0.859968) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006185 / 0.011353 (-0.005168) | 0.003595 / 0.011008 (-0.007413) | 0.063371 / 0.038508 (0.024863) | 0.060461 / 0.023109 (0.037351) | 0.455016 / 0.275898 (0.179118) | 0.490505 / 0.323480 (0.167026) | 0.004738 / 0.007986 (-0.003247) | 0.002852 / 0.004328 (-0.001477) | 0.064161 / 0.004250 (0.059910) | 0.047411 / 0.037052 (0.010359) | 0.453815 / 0.258489 (0.195326) | 0.485354 / 0.293841 (0.191513) | 0.028358 / 0.128546 (-0.100188) | 0.008101 / 0.075646 (-0.067545) | 0.068399 / 0.419271 (-0.350873) | 0.040928 / 0.043533 (-0.002605) | 0.462263 / 0.255139 (0.207124) | 0.478773 / 0.283200 (0.195574) | 0.019787 / 0.141683 (-0.121896) | 1.475798 / 1.452155 (0.023643) | 1.563890 / 1.492716 (0.071174) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239701 / 0.018006 (0.221695) | 0.417442 / 0.000490 (0.416953) | 0.005895 / 0.000200 (0.005695) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026155 / 0.037411 (-0.011256) | 0.081264 / 0.014526 (0.066738) | 0.089734 / 0.176557 (-0.086822) | 0.143965 / 0.737135 (-0.593171) | 0.092156 / 0.296338 (-0.204182) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456420 / 0.215209 (0.241211) | 4.545675 / 2.077655 (2.468020) | 2.477141 / 1.504120 (0.973022) | 2.295142 / 1.541195 (0.753947) | 2.349525 / 1.468490 (0.881035) | 0.502485 / 4.584777 (-4.082292) | 3.072347 / 3.745712 (-0.673365) | 2.798565 / 5.269862 (-2.471296) | 1.849030 / 4.565676 (-2.716647) | 0.057789 / 0.424275 (-0.366487) | 0.006436 / 0.007607 (-0.001172) | 0.529648 / 0.226044 (0.303604) | 5.285670 / 2.268929 (3.016741) | 2.954964 / 55.444624 (-52.489660) | 2.593161 / 6.876477 (-4.283316) | 2.735254 / 2.142072 (0.593181) | 0.587635 / 4.805227 (-4.217592) | 0.124732 / 6.500664 (-6.375932) | 0.060999 / 0.075469 (-0.014470) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354957 / 1.841788 (-0.486831) | 18.803998 / 8.074308 (10.729690) | 14.902712 / 10.191392 (4.711320) | 0.146729 / 0.680424 (-0.533695) | 0.017989 / 0.534201 (-0.516212) | 0.333633 / 0.579283 (-0.245650) | 0.347685 / 0.434364 (-0.086679) | 0.386497 / 0.540337 (-0.153840) | 0.590885 / 1.386936 (-0.796051) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#392d8a46f4da066408785281d9b87760f7273254 \"CML watermark\")\n"
] | 2023-08-24T16:16:09 | 2023-08-25T16:30:00 | 2023-08-25T16:21:17 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6177/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6177",
"html_url": "https://github.com/huggingface/datasets/pull/6177",
"diff_url": "https://github.com/huggingface/datasets/pull/6177.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6177.patch",
"merged_at": "2023-08-25T16:21:17"
} | true |