url (string, 58-61 chars) | repository_url (string, 1 distinct value) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 46-51 chars) | id (int64, 599M-1.83B) | node_id (string, 18-32 chars) | number (int64, 1-6.09k) | title (string, 1-290 chars) | labels (list) | state (string, 2 distinct values) | locked (bool, 1 distinct value) | milestone (dict) | comments (int64, 0-54) | created_at (string, 20 chars) | updated_at (string, 20 chars) | closed_at (string, 20 chars, nullable) | active_lock_reason (null) | body (string, 0-228k chars, nullable) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (null) | state_reason (string, 3 distinct values) | draft (bool, 2 distinct values) | pull_request (dict) | is_pull_request (bool, 2 distinct values) | comments_text (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4008
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4008/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4008/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4008/events
|
https://github.com/huggingface/datasets/pull/4008
| 1,179,591,068 |
PR_kwDODunzps409Ixp
| 4,008 |
Support streaming daily_dialog dataset
|
[] |
closed
| false | null | 1 |
2022-03-24T14:23:23Z
|
2022-03-24T15:29:01Z
|
2022-03-24T14:46:58Z
| null | null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4008/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4008/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/4008.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4008",
"merged_at": "2022-03-24T14:46:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4008.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4008"
}
| true |
[
"Yay! I love this dataset!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4176
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4176/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4176/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4176/events
|
https://github.com/huggingface/datasets/issues/4176
| 1,206,515,563 |
I_kwDODunzps5H6fdr
| 4,176 |
Very slow between two operations
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false | null | 0 |
2022-04-17T23:52:29Z
|
2022-04-18T00:03:00Z
|
2022-04-18T00:03:00Z
| null |
Hello, in the processing stage I use two operations. The first one, map + filter, is very fast and uses all cores, while the second step is very slow and does not use all cores.
Also, there is a significant lag between them. Am I missing something?
```
raw_datasets = raw_datasets.map(
    split_func,
    batched=False,
    num_proc=args.preprocessing_num_workers,
    load_from_cache_file=not args.overwrite_cache,
    desc="running split para ==>",
).filter(
    lambda example: example['text1'] != '' and example['text2'] != '',
    num_proc=args.preprocessing_num_workers,
    desc="filtering ==>",
)

processed_datasets = raw_datasets.map(
    preprocess_function,
    batched=True,
    num_proc=args.preprocessing_num_workers,
    remove_columns=column_names,
    load_from_cache_file=not args.overwrite_cache,
    desc="Running tokenizer on dataset===>",
)
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4176/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4176/timeline
| null |
completed
| null | null | false |
[] |
https://api.github.com/repos/huggingface/datasets/issues/1275
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1275/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1275/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1275/events
|
https://github.com/huggingface/datasets/pull/1275
| 758,958,066 |
MDExOlB1bGxSZXF1ZXN0NTM0MDM2NjIw
| 1,275 |
Yoruba GV NER added
|
[] |
closed
| false | null | 1 |
2020-12-08T00:31:38Z
|
2020-12-08T23:25:28Z
|
2020-12-08T23:25:28Z
| null |
I just added the Yoruba GV NER dataset from this paper: https://www.aclweb.org/anthology/2020.lrec-1.335/
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1275/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1275/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/1275.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1275",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1275.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1275"
}
| true |
[
"Thank you. Okay, I will add the dataset card."
] |
https://api.github.com/repos/huggingface/datasets/issues/5802
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5802/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5802/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5802/events
|
https://github.com/huggingface/datasets/pull/5802
| 1,686,509,799 |
PR_kwDODunzps5PR199
| 5,802 |
Validate non-empty data_files
|
[] |
closed
| false | null | 2 |
2023-04-27T09:51:36Z
|
2023-04-27T14:59:47Z
|
2023-04-27T14:51:40Z
| null |
This PR adds validation of `data_files`, so that they are non-empty (str, list, or dict) or `None` (default).
See: https://github.com/huggingface/datasets/pull/5787#discussion_r1178862327
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5802/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5802/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/5802.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5802",
"merged_at": "2023-04-27T14:51:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5802.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5802"
}
| true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007818 / 0.011353 (-0.003535) | 0.005456 / 0.011008 (-0.005552) | 0.114685 / 0.038508 (0.076177) | 0.038398 / 0.023109 (0.015289) | 0.351289 / 0.275898 (0.075391) | 0.389170 / 0.323480 (0.065690) | 0.006213 / 0.007986 (-0.001773) | 0.005796 / 0.004328 (0.001467) | 0.085315 / 0.004250 (0.081065) | 0.049251 / 0.037052 (0.012198) | 0.368119 / 0.258489 (0.109630) | 0.394725 / 0.293841 (0.100884) | 0.040390 / 0.128546 (-0.088157) | 0.014076 / 0.075646 (-0.061570) | 0.393771 / 0.419271 (-0.025500) | 0.058929 / 0.043533 (0.015397) | 0.349526 / 0.255139 (0.094387) | 0.378409 / 0.283200 (0.095210) | 0.114354 / 0.141683 (-0.027329) | 1.749244 / 1.452155 (0.297089) | 1.847946 / 1.492716 (0.355229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241648 / 0.018006 (0.223641) | 0.468419 / 0.000490 (0.467929) | 0.004311 / 0.000200 (0.004111) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029978 / 0.037411 (-0.007433) | 0.121832 / 0.014526 (0.107306) | 0.133516 / 0.176557 (-0.043041) | 0.199174 / 0.737135 (-0.537961) | 0.138181 / 0.296338 (-0.158158) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478346 / 0.215209 (0.263137) | 4.723967 / 2.077655 (2.646312) | 2.107724 / 1.504120 (0.603604) | 1.874810 / 1.541195 (0.333615) | 1.911568 / 1.468490 
(0.443078) | 0.800966 / 4.584777 (-3.783811) | 4.399032 / 3.745712 (0.653320) | 2.346160 / 5.269862 (-2.923702) | 1.506673 / 4.565676 (-3.059004) | 0.099119 / 0.424275 (-0.325156) | 0.014055 / 0.007607 (0.006448) | 0.582419 / 0.226044 (0.356375) | 5.789147 / 2.268929 (3.520218) | 2.632443 / 55.444624 (-52.812182) | 2.217630 / 6.876477 (-4.658846) | 2.337709 / 2.142072 (0.195637) | 0.995345 / 4.805227 (-3.809882) | 0.200040 / 6.500664 (-6.300624) | 0.076855 / 0.075469 (0.001386) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.386104 / 1.841788 (-0.455683) | 17.109772 / 8.074308 (9.035464) | 16.147612 / 10.191392 (5.956220) | 0.162846 / 0.680424 (-0.517577) | 0.020692 / 0.534201 (-0.513509) | 0.495752 / 0.579283 (-0.083531) | 0.475715 / 0.434364 (0.041351) | 0.619826 / 0.540337 (0.079488) | 0.720745 / 1.386936 (-0.666191) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008255 / 0.011353 (-0.003098) | 0.006118 / 0.011008 (-0.004890) | 0.088004 / 0.038508 (0.049496) | 0.039225 / 0.023109 (0.016116) | 0.399290 / 0.275898 (0.123392) | 0.432272 / 0.323480 (0.108792) | 0.007382 / 0.007986 (-0.000603) | 0.004576 / 0.004328 (0.000248) | 0.086511 / 0.004250 (0.082260) | 0.050472 / 0.037052 (0.013420) | 0.404160 / 0.258489 (0.145671) | 0.445356 / 0.293841 (0.151515) | 0.041549 / 0.128546 (-0.086997) | 0.014148 / 0.075646 (-0.061498) | 0.101697 / 0.419271 (-0.317574) | 0.057474 / 0.043533 (0.013941) | 0.395093 / 0.255139 (0.139954) | 0.418613 / 0.283200 (0.135414) | 0.123217 / 0.141683 (-0.018466) | 1.726146 / 1.452155 (0.273991) | 1.852746 / 1.492716 (0.360029) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256876 / 0.018006 (0.238870) | 0.476336 / 0.000490 (0.475846) | 0.000465 / 0.000200 (0.000265) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034304 / 0.037411 (-0.003107) | 0.132617 / 0.014526 (0.118091) | 0.141712 / 0.176557 (-0.034845) | 0.198101 / 0.737135 (-0.539034) | 0.150877 / 0.296338 (-0.145461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504717 / 0.215209 (0.289508) | 5.035060 / 2.077655 (2.957405) | 2.494812 / 1.504120 (0.990692) | 2.306601 / 1.541195 (0.765406) | 2.481860 / 1.468490 (1.013370) | 0.826041 / 4.584777 (-3.758736) | 4.414748 / 3.745712 (0.669036) | 2.417899 / 5.269862 (-2.851963) | 1.574548 / 4.565676 (-2.991128) | 0.101712 / 0.424275 (-0.322563) | 0.014388 / 0.007607 (0.006781) | 0.616674 / 0.226044 (0.390630) | 6.180382 / 2.268929 (3.911453) | 2.969110 / 55.444624 (-52.475514) | 2.574383 / 6.876477 (-4.302094) | 2.711008 / 2.142072 (0.568935) | 0.997679 / 4.805227 (-3.807548) | 0.201241 / 6.500664 (-6.299423) | 0.076132 / 0.075469 (0.000663) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.542704 / 1.841788 (-0.299084) | 17.610700 / 8.074308 (9.536392) | 16.152973 / 10.191392 (5.961581) | 0.166040 / 0.680424 (-0.514384) | 0.020286 / 0.534201 (-0.513915) | 0.506724 / 0.579283 (-0.072559) | 0.484348 / 0.434364 (0.049984) | 0.606524 / 0.540337 (0.066187) | 0.734997 / 1.386936 (-0.651939) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3445
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3445/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3445/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3445/events
|
https://github.com/huggingface/datasets/issues/3445
| 1,082,370,968 |
I_kwDODunzps5Ag6uY
| 3,445 |
question
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false | null | 1 |
2021-12-16T15:57:00Z
|
2022-01-03T10:09:00Z
|
2022-01-03T10:09:00Z
| null |
## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3445/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3445/timeline
| null |
completed
| null | null | false |
[
"Hi ! What's your question ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/5462
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5462/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5462/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5462/events
|
https://github.com/huggingface/datasets/pull/5462
| 1,556,572,144 |
PR_kwDODunzps5Iglqu
| 5,462 |
Concatenate on axis=1 with misaligned blocks
|
[] |
closed
| false | null | 4 |
2023-01-25T12:33:22Z
|
2023-01-26T09:37:00Z
|
2023-01-26T09:27:19Z
| null |
Allow concatenating on axis 1 two tables made of misaligned blocks.
For example, the first table may have 2 row blocks of 3 rows each, and the second table 3 row blocks of 2 rows each.
To do that, I slice the row blocks to re-align them.
Fix https://github.com/huggingface/datasets/issues/5413
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5462/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5462/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/5462.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5462",
"merged_at": "2023-01-26T09:27:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5462.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5462"
}
| true |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008860 / 0.011353 (-0.002493) | 0.004564 / 0.011008 (-0.006444) | 0.101556 / 0.038508 (0.063048) | 0.030000 / 0.023109 (0.006891) | 0.304404 / 0.275898 (0.028506) | 0.366247 / 0.323480 (0.042767) | 0.007182 / 0.007986 (-0.000804) | 0.003583 / 0.004328 (-0.000746) | 0.079665 / 0.004250 (0.075415) | 0.036529 / 0.037052 (-0.000523) | 0.310998 / 0.258489 (0.052509) | 0.346954 / 0.293841 (0.053113) | 0.034098 / 0.128546 (-0.094448) | 0.011576 / 0.075646 (-0.064070) | 0.320448 / 0.419271 (-0.098824) | 0.043328 / 0.043533 (-0.000205) | 0.307317 / 0.255139 (0.052178) | 0.325071 / 0.283200 (0.041871) | 0.096406 / 0.141683 (-0.045277) | 1.540331 / 1.452155 (0.088176) | 1.589533 / 1.492716 (0.096817) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011034 / 0.018006 (-0.006972) | 0.422066 / 0.000490 (0.421577) | 0.002409 / 0.000200 (0.002209) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023703 / 0.037411 (-0.013708) | 0.099935 / 0.014526 (0.085409) | 0.105966 / 0.176557 (-0.070591) | 0.142259 / 0.737135 (-0.594876) | 0.109327 / 0.296338 (-0.187011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418381 / 0.215209 (0.203172) | 4.177564 / 2.077655 (2.099909) | 1.880196 / 1.504120 (0.376076) | 1.669169 / 1.541195 (0.127974) | 1.725989 / 1.468490 
(0.257499) | 0.689384 / 4.584777 (-3.895393) | 3.380963 / 3.745712 (-0.364749) | 1.884192 / 5.269862 (-3.385670) | 1.162409 / 4.565676 (-3.403268) | 0.082045 / 0.424275 (-0.342230) | 0.012575 / 0.007607 (0.004968) | 0.525824 / 0.226044 (0.299779) | 5.272574 / 2.268929 (3.003646) | 2.283492 / 55.444624 (-53.161132) | 1.947390 / 6.876477 (-4.929087) | 2.013790 / 2.142072 (-0.128283) | 0.806280 / 4.805227 (-3.998948) | 0.149267 / 6.500664 (-6.351397) | 0.066967 / 0.075469 (-0.008502) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.216511 / 1.841788 (-0.625277) | 13.869829 / 8.074308 (5.795521) | 14.189967 / 10.191392 (3.998575) | 0.148716 / 0.680424 (-0.531708) | 0.028324 / 0.534201 (-0.505877) | 0.390856 / 0.579283 (-0.188427) | 0.404389 / 0.434364 (-0.029975) | 0.456050 / 0.540337 (-0.084287) | 0.544139 / 1.386936 (-0.842797) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006727 / 0.011353 (-0.004626) | 0.004515 / 0.011008 (-0.006494) | 0.098791 / 0.038508 (0.060283) | 0.027596 / 0.023109 (0.004487) | 0.439066 / 0.275898 (0.163168) | 0.480555 / 0.323480 (0.157076) | 0.005066 / 0.007986 (-0.002920) | 0.004669 / 0.004328 (0.000341) | 0.075334 / 0.004250 (0.071084) | 0.039779 / 0.037052 (0.002726) | 0.439860 / 0.258489 (0.181371) | 0.480787 / 0.293841 (0.186946) | 0.031550 / 0.128546 (-0.096996) | 0.011668 / 0.075646 (-0.063978) | 0.317348 / 0.419271 (-0.101923) | 0.041312 / 0.043533 (-0.002220) | 0.442934 / 0.255139 (0.187795) | 0.463677 / 0.283200 (0.180478) | 0.090066 / 0.141683 (-0.051617) | 1.544152 / 1.452155 (0.091998) | 1.584455 / 1.492716 (0.091738) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224284 / 0.018006 (0.206278) | 0.406982 / 0.000490 (0.406492) | 0.000427 / 0.000200 (0.000227) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024914 / 0.037411 (-0.012497) | 0.102608 / 0.014526 (0.088082) | 0.106931 / 0.176557 (-0.069626) | 0.140828 / 0.737135 (-0.596308) | 0.112015 / 0.296338 (-0.184324) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471078 / 0.215209 (0.255869) | 4.705742 / 2.077655 (2.628088) | 2.437442 / 1.504120 (0.933322) | 2.242768 / 1.541195 (0.701573) | 2.302158 / 1.468490 (0.833668) | 0.697314 / 4.584777 (-3.887462) | 3.357730 / 3.745712 (-0.387982) | 1.913306 / 5.269862 (-3.356556) | 1.173879 / 4.565676 (-3.391798) | 0.083257 / 0.424275 (-0.341018) | 0.012480 / 0.007607 (0.004873) | 0.573407 / 0.226044 (0.347362) | 5.728650 / 2.268929 (3.459721) | 2.868863 / 55.444624 (-52.575761) | 2.548640 / 6.876477 (-4.327837) | 2.596622 / 2.142072 (0.454549) | 0.805563 / 4.805227 (-3.999664) | 0.150860 / 6.500664 (-6.349804) | 0.068344 / 0.075469 (-0.007125) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.300368 / 1.841788 (-0.541420) | 13.920451 / 8.074308 (5.846143) | 14.222430 / 10.191392 (4.031038) | 0.152497 / 0.680424 (-0.527927) | 0.017415 / 0.534201 (-0.516786) | 0.378827 / 0.579283 (-0.200456) | 0.384165 / 0.434364 (-0.050199) | 0.439364 / 0.540337 (-0.100973) | 0.525710 / 1.386936 (-0.861226) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008482 / 0.011353 (-0.002871) | 0.004405 / 0.011008 (-0.006604) | 0.099662 / 0.038508 (0.061154) | 0.029062 / 0.023109 (0.005953) | 0.298329 / 0.275898 (0.022431) | 0.332837 / 0.323480 (0.009357) | 0.006760 / 0.007986 (-0.001225) | 0.003290 / 0.004328 (-0.001039) | 0.077659 / 0.004250 (0.073409) | 0.034745 / 0.037052 (-0.002307) | 0.303134 / 0.258489 (0.044644) | 0.346402 / 0.293841 (0.052561) | 0.033511 / 0.128546 (-0.095035) | 0.011464 / 0.075646 (-0.064183) | 0.322932 / 0.419271 (-0.096340) | 0.040697 / 0.043533 (-0.002836) | 0.301951 / 0.255139 (0.046812) | 0.328961 / 0.283200 (0.045761) | 0.084802 / 0.141683 (-0.056881) | 1.506247 / 1.452155 (0.054092) | 1.547631 / 1.492716 (0.054915) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190370 / 0.018006 (0.172363) | 0.405786 / 0.000490 (0.405297) | 0.002196 / 0.000200 (0.001997) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022958 / 0.037411 (-0.014453) | 0.095736 / 0.014526 (0.081210) | 0.103684 / 0.176557 (-0.072872) | 0.138200 / 0.737135 (-0.598936) | 0.105618 / 0.296338 (-0.190721) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415239 / 0.215209 (0.200030) | 4.147223 / 2.077655 (2.069569) | 1.850322 / 1.504120 (0.346202) | 1.662815 / 1.541195 (0.121620) | 1.671563 / 1.468490 
(0.203073) | 0.693806 / 4.584777 (-3.890971) | 3.352938 / 3.745712 (-0.392774) | 1.849257 / 5.269862 (-3.420604) | 1.161603 / 4.565676 (-3.404074) | 0.081884 / 0.424275 (-0.342391) | 0.012726 / 0.007607 (0.005119) | 0.521105 / 0.226044 (0.295061) | 5.231910 / 2.268929 (2.962981) | 2.306073 / 55.444624 (-53.138551) | 1.950449 / 6.876477 (-4.926028) | 1.988433 / 2.142072 (-0.153640) | 0.811168 / 4.805227 (-3.994059) | 0.149960 / 6.500664 (-6.350704) | 0.064845 / 0.075469 (-0.010624) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221487 / 1.841788 (-0.620301) | 13.756534 / 8.074308 (5.682226) | 13.825369 / 10.191392 (3.633977) | 0.155641 / 0.680424 (-0.524783) | 0.028444 / 0.534201 (-0.505757) | 0.390364 / 0.579283 (-0.188919) | 0.397592 / 0.434364 (-0.036772) | 0.455905 / 0.540337 (-0.084433) | 0.534606 / 1.386936 (-0.852330) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006281 / 0.011353 (-0.005071) | 0.004533 / 0.011008 (-0.006475) | 0.098328 / 0.038508 (0.059820) | 0.026998 / 0.023109 (0.003889) | 0.424814 / 0.275898 (0.148915) | 0.457653 / 0.323480 (0.134173) | 0.004617 / 0.007986 (-0.003368) | 0.003320 / 0.004328 (-0.001009) | 0.075884 / 0.004250 (0.071634) | 0.035865 / 0.037052 (-0.001187) | 0.431674 / 0.258489 (0.173185) | 0.468286 / 0.293841 (0.174445) | 0.031915 / 0.128546 (-0.096631) | 0.011680 / 0.075646 (-0.063967) | 0.319575 / 0.419271 (-0.099696) | 0.047792 / 0.043533 (0.004259) | 0.428191 / 0.255139 (0.173052) | 0.445657 / 0.283200 (0.162458) | 0.090464 / 0.141683 (-0.051218) | 1.465480 / 1.452155 (0.013326) | 1.548985 / 1.492716 (0.056268) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185671 / 0.018006 (0.167664) | 0.399274 / 0.000490 (0.398784) | 0.002822 / 0.000200 (0.002622) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025934 / 0.037411 (-0.011477) | 0.099480 / 0.014526 (0.084954) | 0.110264 / 0.176557 (-0.066293) | 0.140558 / 0.737135 (-0.596577) | 0.110832 / 0.296338 (-0.185507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.473491 / 0.215209 (0.258282) | 4.722507 / 2.077655 (2.644852) | 2.456242 / 1.504120 (0.952122) | 2.255999 / 1.541195 (0.714804) | 2.300816 / 1.468490 (0.832326) | 0.698226 / 4.584777 (-3.886551) | 3.397296 / 3.745712 (-0.348416) | 2.741674 / 5.269862 (-2.528187) | 1.462103 / 4.565676 (-3.103573) | 0.082736 / 0.424275 (-0.341539) | 0.012183 / 0.007607 (0.004576) | 0.580144 / 0.226044 (0.354099) | 5.794351 / 2.268929 (3.525422) | 2.881201 / 55.444624 (-52.563423) | 2.544384 / 6.876477 (-4.332093) | 2.555227 / 2.142072 (0.413154) | 0.805849 / 4.805227 (-3.999378) | 0.151822 / 6.500664 (-6.348842) | 0.067477 / 0.075469 (-0.007992) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.300224 / 1.841788 (-0.541564) | 13.595361 / 8.074308 (5.521053) | 13.967622 / 10.191392 (3.776230) | 0.129222 / 0.680424 (-0.551202) | 0.016939 / 0.534201 (-0.517262) | 0.375190 / 0.579283 (-0.204094) | 0.383511 / 0.434364 (-0.050853) | 0.437179 / 0.540337 (-0.103158) | 0.525674 / 1.386936 (-0.861262) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012364 / 0.011353 (0.001011) | 0.006098 / 0.011008 (-0.004911) | 0.158908 / 0.038508 (0.120400) | 0.039798 / 0.023109 (0.016689) | 0.383786 / 0.275898 (0.107888) | 0.533961 / 0.323480 (0.210481) | 0.012079 / 0.007986 (0.004094) | 0.006483 / 0.004328 (0.002155) | 0.109660 / 0.004250 (0.105410) | 0.048391 / 0.037052 (0.011339) | 0.447426 / 0.258489 (0.188937) | 0.477292 / 0.293841 (0.183451) | 0.066492 / 0.128546 (-0.062054) | 0.021155 / 0.075646 (-0.054492) | 0.474473 / 0.419271 (0.055202) | 0.063520 / 0.043533 (0.019987) | 0.444941 / 0.255139 (0.189802) | 0.450675 / 0.283200 (0.167475) | 0.129236 / 0.141683 (-0.012447) | 2.009362 / 1.452155 (0.557207) | 1.912067 / 1.492716 (0.419350) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260384 / 0.018006 (0.242378) | 0.577654 / 0.000490 (0.577165) | 0.004977 / 0.000200 (0.004777) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028101 / 0.037411 (-0.009310) | 0.161680 / 0.014526 (0.147154) | 0.146107 / 0.176557 (-0.030450) | 0.173878 / 0.737135 (-0.563257) | 0.186149 / 0.296338 (-0.110190) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.689835 / 0.215209 (0.474626) | 6.775888 / 2.077655 (4.698234) | 2.885499 / 1.504120 (1.381379) | 2.486855 / 1.541195 (0.945660) | 2.540831 / 1.468490 
(1.072341) | 1.328135 / 4.584777 (-3.256642) | 5.964983 / 3.745712 (2.219271) | 3.400713 / 5.269862 (-1.869149) | 2.423257 / 4.565676 (-2.142419) | 0.129767 / 0.424275 (-0.294508) | 0.017936 / 0.007607 (0.010328) | 0.909284 / 0.226044 (0.683239) | 8.778791 / 2.268929 (6.509863) | 3.890757 / 55.444624 (-51.553867) | 3.072116 / 6.876477 (-3.804360) | 3.085390 / 2.142072 (0.943318) | 1.571710 / 4.805227 (-3.233517) | 0.279290 / 6.500664 (-6.221374) | 0.087775 / 0.075469 (0.012306) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.751223 / 1.841788 (-0.090564) | 20.313135 / 8.074308 (12.238827) | 22.793800 / 10.191392 (12.602408) | 0.296052 / 0.680424 (-0.384372) | 0.053420 / 0.534201 (-0.480781) | 0.600626 / 0.579283 (0.021343) | 0.634505 / 0.434364 (0.200142) | 0.724000 / 0.540337 (0.183663) | 0.869283 / 1.386936 (-0.517653) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.014876 / 0.011353 (0.003523) | 0.008113 / 0.011008 (-0.002895) | 0.177038 / 0.038508 (0.138530) | 0.050825 / 0.023109 (0.027716) | 0.473989 / 0.275898 (0.198091) | 0.601058 / 0.323480 (0.277578) | 0.007536 / 0.007986 (-0.000450) | 0.006761 / 0.004328 (0.002432) | 0.105260 / 0.004250 (0.101010) | 0.073960 / 0.037052 (0.036908) | 0.447711 / 0.258489 (0.189222) | 0.609998 / 0.293841 (0.316157) | 0.061280 / 0.128546 (-0.067267) | 0.019370 / 0.075646 (-0.056276) | 0.510466 / 0.419271 (0.091194) | 0.062695 / 0.043533 (0.019162) | 0.436778 / 0.255139 (0.181639) | 0.489916 / 0.283200 (0.206717) | 0.137305 / 0.141683 (-0.004378) | 1.801554 / 1.452155 (0.349399) | 2.082409 / 1.492716 (0.589692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291304 / 0.018006 (0.273298) | 0.599041 / 0.000490 (0.598551) | 0.008017 / 0.000200 (0.007817) | 0.000127 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031243 / 0.037411 (-0.006169) | 0.139689 / 0.014526 (0.125163) | 0.138678 / 0.176557 (-0.037878) | 0.180458 / 0.737135 (-0.556677) | 0.149753 / 0.296338 (-0.146585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.699692 / 0.215209 (0.484482) | 7.273327 / 2.077655 (5.195672) | 3.222650 / 1.504120 (1.718530) | 2.679424 / 1.541195 (1.138229) | 2.842378 / 1.468490 (1.373888) | 1.394633 / 4.584777 (-3.190143) | 6.379970 / 3.745712 (2.634258) | 5.944663 / 5.269862 (0.674801) | 3.105214 / 4.565676 (-1.460462) | 0.138790 / 0.424275 (-0.285485) | 0.014211 / 0.007607 (0.006604) | 0.815275 / 0.226044 (0.589230) | 8.549334 / 2.268929 (6.280405) | 3.754795 / 55.444624 (-51.689829) | 3.125222 / 6.876477 (-3.751255) | 3.269639 / 2.142072 (1.127566) | 1.464187 / 4.805227 (-3.341040) | 0.314557 / 6.500664 (-6.186107) | 0.107354 / 0.075469 (0.031885) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.480793 / 1.841788 (-0.360995) | 16.770328 / 8.074308 (8.696019) | 18.054861 / 10.191392 (7.863469) | 0.198257 / 0.680424 (-0.482167) | 0.026493 / 0.534201 (-0.507708) | 0.489701 / 0.579283 (-0.089582) | 0.540890 / 0.434364 (0.106526) | 0.566675 / 0.540337 (0.026337) | 0.661918 / 1.386936 (-0.725018) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3986
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3986/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3986/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3986/events
|
https://github.com/huggingface/datasets/issues/3986
| 1,176,429,565 |
I_kwDODunzps5GHuP9
| 3,986 |
Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface)
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false | null | 5 |
2022-03-22T08:23:21Z
|
2023-03-06T16:55:04Z
| null | null |
## Describe the bug
Dataset loads indefinitely after modifying cache path (~/.cache/huggingface)
If none of the environment variables are set, this custom dataset loads fine (a JSON-based dataset with a custom dataset loading script).
**Update:** Transformers modules face the same issue during loading.
## A clear and concise description of what the bug is.
Issue:
- Dataset loading stalls / freezes indefinitely when HF_HOME is changed to a custom directory
- No error code, had to terminate the process
- There are some files created in the cache directory:
```
custom_cache_dir
| -- modules
| -- __init__.py
| -- datasets_modules
| -- __init__.py
| -- datasets
| -- __init__.py
| -- script.py (Dataset loading script)
| -- script.lock
```
There's no error nor any logs thrown, so I'm out of ideas on how to debug this. The custom dataset works fine if the default ~/.cache dir is used, but unfortunately it's out of space and we do not have permissions to modify the disk.
## Steps to reproduce the bug
What I've tried:
- Modifying HF_HOME (https://github.com/huggingface/transformers/issues/8703)
- Modifying HF_DATASETS_CACHE (https://huggingface.co/docs/datasets/v1.12.0/cache.html)
- Modifying cache_dir param during runtime
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset('test_dataset', cache_dir='/path/to/new/cache')
```
- Disabling dataset cache
```python
>>> from datasets import set_caching_enabled
>>> set_caching_enabled(False)
```
## Expected results
Datasets should load / cache as usual, with the only exception that the cache directory is different
## Actual results
All of the actions above to change the cache directory result in the dataset loading indefinitely without terminating.
## Environment info
- `transformers` version: 4.18.0.dev0
- Platform: Linux-4.15.0-54-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3986/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3986/timeline
| null | null | null | null | false |
[
"Hi ! I didn't managed to reproduce the issue. When you kill the process, is there any stacktrace that shows at what point in the code python is hanging ?",
"Hi @lhoestq , I've traced the issue back to file locking. It's similar to this thread, using Lustre filesystem as well. https://github.com/huggingface/datasets/issues/329 . In this case the user was able to modify and add -o flock option while mounting and it solved the problem. \r\nHowever in other cases such as mine, we do not have the permissions to modify the commands while mounting. I'm still trying to figure out a workaround. Any ideas how can we use a mounted Lustre filesystem with no flock option?\r\n",
"Hi @kelvinAI , I've had this issue on our institution's system which uses Lustre (in addition to our compute nodes being siloed off from external network access). The workaround I made for downloading/loading datasets was to set the `$HFHOME` environment variable to a location on the node's local storage (SSD), effectively a location that gets cleared regularly and sometimes gets used for temporary or cached files which is pretty common, e.g. \"scratch\" storage. Maybe your sysadmins, if you have them, could point you to subdirectories on a node that aren't linked to the Lustre filesystem. After downloading to scratch I found that the transformers, modules, and metrics cached folders were fine to move to my user drives on the Lustre filesystem but cached datasets that had fingerprints still had some issues with filelock, so it would help to use the function `my_dataset.save_to_disk('path/on/lustre_fs')` and static class function `Dataset.load_from_disk('path/on/lustre_fs')`. In rough steps:\r\n\r\n1. Initially download to scratch storage with `ds = datasets.load_dataset(dataset_name)`\r\n2. Call `ds.save_to_disk(my_path_on_lustre)` with a path in your user space on the Lustre filesystem\r\n3. Load datasets with `from datasets import Dataset; new_ds = Dataset.load_from_disk(my_path_on_lustre)`\r\n\r\nObviously this hinges on there existing scratch storage on the nodes you're using. Fingers crossed.",
"Hi @jpmcd , thanks for sharing your experience. For my case, the Lustre filesystem (with more storage space) is the scratch storage like the one you've mentioned. We have a local storage for each user but unfortunately there's not enough space in it to 'cache' huge datasets, hence that is why I tried changing HF_HOME to point to the scratch disk with more space and encountered the flock issue. Unfortunately I'm not aware of any viable solution to this for now so I simply fall back to using torch dataset. ",
"@jpmcd your comment saved me from pulling my hair out in frustration. Setting `HF_HOME` to a directory that's not on Lustre works like a charm. ✨ "
] |
https://api.github.com/repos/huggingface/datasets/issues/3335
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3335/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3335/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3335/events
|
https://github.com/huggingface/datasets/pull/3335
| 1,066,064,126 |
PR_kwDODunzps4vISGy
| 3,335 |
add Speech commands dataset
|
[] |
closed
| false | null | 11 |
2021-11-29T13:52:47Z
|
2021-12-10T10:37:21Z
|
2021-12-10T10:30:15Z
| null |
closes #3283
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3335/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3335/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/3335.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3335",
"merged_at": "2021-12-10T10:30:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3335.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3335"
}
| true |
[
"@anton-l ping",
"@lhoestq \r\nHi Quentin! Thank you for your feedback and suggestions! 🤗\r\n\r\nYes, that was actually what I wanted to do next - I mean the steaming stuff :)\r\nAlso, I need to make some changes to the readme (to account for the updated features set).\r\n\r\nHopefully, I will be done by tomorrow afternoon if that's ok. \r\n",
"@lhoestq Hi Quentin!\r\n\r\nI've implemented (hopefully, correctly) the streaming compatibility but the problem with the current approach is that we first need to iterate over the full archive anyway to get the list of filenames for train and validation sets (see [this](https://github.com/huggingface/datasets/pull/3335/files#diff-aeea540d136025e30a842856779e9c6485a5dc6fc9eb7fd6d3be2acd2f49b8e3R186), the same approach is implemented in TFDS version). Only after that, we can generate examples, so we cannot stream the dataset before the first iteration ends and it takes some time. It's probably not the most effective way. \r\n\r\nIf the streaming mode is turned off, this approach (with two iterations) is actually slower than the previous implementation (with archive extraction). \r\n\r\nMy suggestion is to host separate archives for each split prepared in advance. That way there would be no need for iterating over the common archive to collect train and validation filenames. @anton-l suggested to make AWS mirrors for them. I've prepared these archives, for now you can take a look at them [here](https://drive.google.com/drive/folders/1oMrZHzPgHAKprKJuvih91CM8KMSzh_pL?usp=sharing). I simplified their structure a bit so if we switch to using them, the code then should be changed (and simplified) a bit too.\r\n",
"Hi ! Thanks for the changes :)\r\n\r\n> My suggestion is to host separate archives for each split prepared in advance. That way there would be no need for iterating over the common archive to collect train and validation filenames. @anton-l suggested to make AWS mirrors for them. I've prepared these archives, for now you can take a look at them here. I simplified their structure a bit so if we switch to using them, the code then should be changed (and simplified) a bit too.\r\n\r\nI agree, I just uploaded them on AWS\r\n\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.01/v0.01_test.tar.gz\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.01/v0.01_train.tar.gz\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.01/v0.01_validation.tar.gz\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.02/v0.02_test.tar.gz\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.02/v0.02_validation.tar.gz\r\n\r\nNote that in the future we can move those files to actual repositories on the Hugging Face Hub, since we are migrating the datasets from this repository to the Hugging Face Hub (as mirrors), to make them more accessible to the community.",
"@lhoestq Thank you! Gonna look at this tomorrow :)",
"@lhoestq I've modified the code to fit new data format, now it works for v0.01 but doesn't work for v0.02 as the training archive is missing. Could you please create a mirror for that one too? You can find it [here](https://drive.google.com/file/d/1mPjnVMYb-VhPprGlOX8v9TBT1GT-rtcp/view?usp=sharing)\r\n\r\nAnd when it's done I'll need to regenerate all the meta / dummy stuff, and this version will be ready for a review :)",
"Here you go :)\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/SpeechCommands/v0.02/v0.02_train.tar.gz",
"FYI I juste merged a fix for the Windows CI error on `master`, feel free to merge `master` again into your branch",
"All green ! I had to fix some minor stuff in the CI but it's good now\r\n\r\nNext step is to mark it as ready for review, and I think it's all good so we can merge 🚀 ",
"@lhoestq 🤗",
":tada: "
] |
https://api.github.com/repos/huggingface/datasets/issues/2347
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2347/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2347/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2347/events
|
https://github.com/huggingface/datasets/issues/2347
| 887,404,868 |
MDU6SXNzdWU4ODc0MDQ4Njg=
| 2,347 |
Add an API to access the language and pretty name of a dataset
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false | null | 6 |
2021-05-11T14:10:08Z
|
2022-10-05T17:16:54Z
|
2022-10-05T17:16:53Z
| null |
It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts.
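A hedged sketch of what this could look like with the `huggingface_hub` client that eventually resolved this request (see the closing comment below); the attribute holding the card metadata is an assumption and differs across `huggingface_hub` versions:
```python
from huggingface_hub import HfApi

info = HfApi().dataset_info("squad")
print(info.id)  # the dataset id stays the source of truth
# Card metadata (pretty_name, languages, ...); the attribute name is version-dependent.
print(getattr(info, "card_data", None) or getattr(info, "cardData", None))
```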
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2347/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2347/timeline
| null |
completed
| null | null | false |
[
"Hi ! With @bhavitvyamalik we discussed about having something like\r\n```python\r\nfrom datasets import load_dataset_card\r\n\r\ndataset_card = load_dataset_card(\"squad\")\r\nprint(dataset_card.metadata.pretty_name)\r\n# Stanford Question Answering Dataset (SQuAD)\r\nprint(dataset_card.metadata.languages)\r\n# [\"en\"]\r\n\r\n```\r\nWhat do you think ?\r\n\r\nI don't know if you already have a way to load the model tags in `transformers` but we can agree on the API to have something consistent.\r\n\r\nAlso note that the pretty name would only be used to show users something prettier than a dataset id, but in the end the source of truth will stay the dataset id (here `squad`).",
"That works for me!",
"maybe use the hub-backed dataset_info method? (so there's only one parser of README.md metadata)?",
"What dataset_info method are you talking about @julien-c ? In `huggingface_hub` I can only see `model_info`.",
"hmm the equivalent method in `datasets` (which could go into `huggingface_hub` at some point)",
"Indeed, this info can now be fetched with `huggingface_hub.dataset_info`, so I think we can close this issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/5585
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5585/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5585/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5585/events
|
https://github.com/huggingface/datasets/issues/5585
| 1,602,190,030 |
I_kwDODunzps5ff3rO
| 5,585 |
Cache is not transportable
|
[] |
closed
| false | null | 2 |
2023-02-28T00:53:06Z
|
2023-02-28T21:26:52Z
|
2023-02-28T21:26:52Z
| null |
### Describe the bug
I would like to share the cache between two machines (a Windows host machine and a WSL instance).
I run most of my code in WSL and have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move the cache to the host Windows machine, thereby sharing the downloads.
I'm hoping that I can just copy/paste the cache files, but I notice that a lot of the file names start with the path name, e.g. `_home_davidg_.cache_huggingface_datasets_conll2003_default-451...98.lock` where `home/davidg` is where the cache is in WSL.
This seems to suggest that the cache is not portable and cannot be centralised or shared. Is this the case, or are the files that start with path names not integral to the caching mechanism? Copying the cache files _seems_ to work, but I'm not confident that nothing will break.
A related issue, when trying to load a dataset that should come from cache (running in WSL, pointing to cache on the Windows host) it seemed to work fine, but it still uses a WSL directory for `.cache\huggingface\modules\datasets_modules`. I see nothing in the docs about this, or how to point it to a different place.
I have asked a related question on the forum: https://discuss.huggingface.co/t/is-datasets-cache-operating-system-agnostic/32656
### Steps to reproduce the bug
View the cache directory in WSL/Windows.
### Expected behavior
Cache can be shared between (virtual) machines and be transportable.
It would be nice to have a simple way to say "Dear Hugging Face packages, please put ALL your cache in `blah/de/blah`" and have all the Hugging Face packages respect that single location.
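A minimal sketch of how this can be approximated today, assuming the standard cache environment variables are respected by the libraries in use (the shared path below is hypothetical):
```python
# Set the cache location *before* importing any Hugging Face library.
import os

os.environ["HF_HOME"] = "/mnt/shared/hf-cache"                    # hypothetical shared path
os.environ["HF_DATASETS_CACHE"] = "/mnt/shared/hf-cache/datasets" # datasets-specific cache

from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # downloaded files land under /mnt/shared/hf-cache
```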
### Environment info
```
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5585/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5585/timeline
| null |
completed
| null | null | false |
[
"Hi ! No the cache is not transportable in general. It will work on a shared filesystem if you use the same python environment, but not across machines/os/environments.\r\n\r\nIn particular, reloading cached datasets does work, but reloading cached processed datasets (e.g. from `map`) may not work. This is because some hashes used by caching are based on pickle dumps of the function you pass to `map`.\r\n\r\nFinally you may copy the cache to another machine, but all the `cached-*.arrow` files are unlikely to be reloaded.",
"OK good to know. Thanks @lhoestq !"
] |
https://api.github.com/repos/huggingface/datasets/issues/5181
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5181/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5181/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5181/events
|
https://github.com/huggingface/datasets/issues/5181
| 1,431,027,102 |
I_kwDODunzps5VS72e
| 5,181 |
Add a guide for semantic segmentation
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false | null | 2 |
2022-11-01T07:54:50Z
|
2022-11-04T18:23:36Z
|
2022-11-04T18:23:36Z
| null |
Currently, we have these guides for object detection and image classification:
* https://huggingface.co/docs/datasets/object_detection
* https://huggingface.co/docs/datasets/image_classification
I am proposing adding a similar guide for semantic segmentation.
I am happy to contribute a PR for it.
Cc: @osanseviero @nateraw
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5181/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5181/timeline
| null |
completed
| null | null | false |
[
"Sure this sounds great! Would this be pure torchvision, albumentations, or something else?",
"I am considering `torchvision` and `albumentations`. Also [works with TensorFlow](https://github.com/deep-diver/segformer-tf-transformers/blob/main/notebooks/TFSegFormer_Finetune.ipynb). \r\n\r\nI am assigning the issue to myself then. "
] |
https://api.github.com/repos/huggingface/datasets/issues/2762
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2762/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2762/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2762/events
|
https://github.com/huggingface/datasets/issues/2762
| 961,652,046 |
MDU6SXNzdWU5NjE2NTIwNDY=
| 2,762 |
Add RVL-CDIP dataset
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] |
closed
| false | null | 3 |
2021-08-05T09:57:05Z
|
2022-04-21T17:15:41Z
|
2022-04-21T17:15:41Z
| null |
## Adding a Dataset
- **Name:** RVL-CDIP
- **Description:** The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
- **Paper:** https://www.cs.cmu.edu/~aharley/icdar15/
- **Data:** https://www.cs.cmu.edu/~aharley/rvl-cdip/
- **Motivation:** I'm currently adding LayoutLMv2 and LayoutXLM to HuggingFace Transformers. LayoutLM (v1) already exists in the library. This dataset has a large value for document image classification (i.e. classifying scanned documents). LayoutLM models obtain SOTA on this dataset, so would be great to directly use it in notebooks.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2762/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2762/timeline
| null |
completed
| null | null | false |
[
"cc @nateraw ",
"#self-assign",
"[labels_only.tar.gz](https://docs.google.com/uc?authuser=0&id=0B0NKIRwUL9KYcXo3bV9LU0t3SGs&export=download) on the RVL-CDIP website does not work for me.\r\n\r\n> 404. That’s an error. The requested URL was not found on this server.\r\n\r\nI contacted the author ( Adam Harley) regarding this, and he told me that the link works for him. Not sure what the issue is. But Adam shared the file (labels_only.tar.gz) with me as an attachment.\r\n\r\nAre we allowed to host this file(labels_only.tar.gz) elsewhere and use that link instead ?\r\n\r\nThank you.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3557
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3557/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3557/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3557/events
|
https://github.com/huggingface/datasets/pull/3557
| 1,097,946,034 |
PR_kwDODunzps4wvIHl
| 3,557 |
Fix bug in `ImageClassification` task template
|
[] |
closed
| false | null | 3 |
2022-01-10T14:09:59Z
|
2022-01-11T15:47:52Z
|
2022-01-11T15:47:52Z
| null |
Fixes a bug in the `ImageClassification` task template which requires specifying class labels twice in dataset scripts. Additionally, this PR refactors the API around the classification task templates for cleaner `labels` handling.
CC: @lewtun @nateraw
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3557/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3557/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/3557.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3557",
"merged_at": "2022-01-11T15:47:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3557.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3557"
}
| true |
[
"The CI failures are unrelated to the changes in this PR.",
"> The CI failures are unrelated to the changes in this PR.\r\n\r\nIt seems that some of the failures are due to the tests on the dataset cards (e.g. CIFAR, MNIST, FASHION_MNIST). Perhaps it's worth addressing those in this PR to avoid confusing downstream developers who branch off `master` and suddenly have a failing CI?",
"@lewtun We only run these tests against the modified datasets on the PR branch, so this will not lead to errors after merging."
] |
https://api.github.com/repos/huggingface/datasets/issues/4125
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4125/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4125/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4125/events
|
https://github.com/huggingface/datasets/pull/4125
| 1,196,633,936 |
PR_kwDODunzps411qeR
| 4,125 |
BIG-bench
|
[] |
closed
| false | null | 21 |
2022-04-07T22:33:30Z
|
2022-06-08T17:57:48Z
|
2022-06-08T17:32:32Z
| null |
This PR adds all BIG-bench json tasks to huggingface/datasets.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4125/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4125/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/4125.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4125",
"merged_at": "2022-06-08T17:32:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4125.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4125"
}
| true |
[
"> It looks like the CI is failing on windows because our windows CI is unable to clone the bigbench repository (maybe it has to do with filenames that are longer than 256 characters, which windows don't like). Could the smaller installation of bigbench via pip solve this issue ?\r\n> Otherwise we can see how to remove this limitation in our windows CI.\r\n\r\nI'm not sure.\r\nIf it's git's fault that it can't handle the long filenames, it will possibly be resolved by the pip install. If it's an issue with windows not liking long filenames after it's installed, then it will not be resolved.\r\nI don't have a windows computer to try it on, but I might be able to tweek this PR and do an experiment to find out. \r\nWe're waiting for a quota increase for the pip install (https://github.com/pypa/pypi-support/issues/1782). It's been pending for 2-3 weeks, and I don't have an estimate for when it will be resolved. \r\n\r\n>Regarding the dummy data zip files, I think we can just keep datasets/bigbench/dummy/abstract_narrative_understanding/1.0.0/dummy_data.zip and remove all the other ones. We just require to have at least one dummy_data.zip file.\r\n\r\nSounds great. I will trim that down. ",
"Do you know what are the other tests dependencies that have conflicts with bigbench ? I can try to split the CI to end up with a compatible list of test dependencies",
"Hi @lhoestq,\r\n\r\nI haven't played with eliminating requirements form the test dependencies, and I've been trying to resolve this by modifying the bigbench repo to become compatible. \r\nIn the original bigbench repo, the version requirements were strict, and specifically it had a datasets==1.17.0 requirement which was causing trouble. \r\nI'm working on PR https://github.com/google/BIG-bench/pull/766 to get some more flexible versions that might be compatible with the test dependencies in HF/datasets.\r\nWe're somewhat flexible in modifying these version numbers if we can figure out what the exact conflict is. \r\n\r\nI've spent some time experimenting with different versions, but I don't have a very efficient way of doing this debugging on my work computer (which for some reason doesn't produce the same sets of errors running python 3.9 instead of 3.6 or 3.7 in the tests). \r\nIt currently fails at \r\n> The conflict is caused by:\r\n> bert-score 0.3.6 depends on matplotlib\r\n> big-bench 0.0.1 depends on matplotlib<4.0 and >=3.5.1\r\n\r\nwhich doesn't seem like it can be the real issue. \r\n\r\nIf you have any advice for how to resolve these conflicts, that would be greatly appreciated!",
"Hi again @lhoestq, \r\nAfter some more or less random guessing of conflicting packages, I've managed to find a configuration that seems to be compatible with HF/datasets. \r\n\r\nThe errors went away after removing version limits on matplotlib and scipy, and loosening numpy from 1.19 -> 1.17 in the bigbench requirements. \r\n\r\nI might do some more tweaking to see if it lets me set some minimal limits on matplotlib and scipy, but I think we at least can move forward.\r\n\r\nThe WIN tests are still failing, now because of \r\n\r\n> Did not find path entry C:\\tools\\miniconda3\\bin\r\n>C:\\tools\\miniconda3\\envs\\py37\\python.exe: No module named pytest\r\n\r\nI have no way of debugging this locally, and unless there's some way to get more verbose logs, I don't know why it's not finding pytest. Would you be able to take a quick look? \r\n\r\nUpdate: Actually, I see it's still failing because of the long filenames. So perhaps the pytest error is just because the previous steps failed. ",
"One more update on the WIN errors. \r\nI think all the long filenames are in files in the github repo that does not need to be included. \r\nWe will try to remove them .",
"Hi ! The remaining error seems to be a `UnicodeDecodeError` from `setup.py`. I think you can fix your setup.py:\r\n```diff\r\n- with open(os.path.join(os.path.dirname(__file__), fname)) as f:\r\n+ with open(os.path.join(os.path.dirname(__file__), fname), encoding=\"utf-8\") as f:\r\n```\r\nIndeed on windows, when you `open` a file it doesn't always use \"utf-8\" by default",
"Hi @lhoestq, \r\nThe dependency issues seems to now be resolved 🎉 \r\n\r\nNow, the WIN tests are failing at\r\n> ERROR tests/test_arrow_dataset.py::test_dummy_dataset_serialize_s3 - botocore...\r\n> ERROR tests/test_dataset_dict.py::test_dummy_dataset_serialize_s3 - botocore...\r\n\r\nIs this testing the dummy dataset that's added in bigbench? If so, I might need some help getting the right format in.\r\n\r\nThe error message I'm seeing is \r\n> raise EndpointConnectionError(endpoint_url=request.url, error=e)\r\n> E botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: \"http://127.0.0.1:5555/test\"\r\n\r\nWhich seems unrelated, but perhaps the real issue is somewhere I'm not seeing? ",
"Woohoo awesome !\r\n\r\nLet me check the CI error",
"Can you try to re-run the CI, just in case CircleCI messed up ?",
"Hi @lhoestq, \r\nRerunning did not seem to solve the problem. \r\nThe `test_dummy_dataset_serialize_s3` error still seems to remain.",
"Hi again @lhoestq, \r\nI'm not sure if this is informative or not in terms of debugging, but I deleted the dummy data and the errors for windows still fail and the others still pass. \r\nDo you have any idea what could be causing this error on windows?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Now the last question: let's have the dataset under`google/bigbench` @andersjohanandreassen ?\r\n\r\nI think it would be nicer, this way you and anyone in your team can update the dataset card whevener you want without going through a github PR. You just need to join the https://huggingface.co/google page using your google email :)",
"Hi @lhoestq, \r\n\r\nThank you so much for the help! I really appreciate it!!!\r\n\r\nAfter some discussion with the other bigbench organizers, I think there is a slight preference for bigbench to not be under google/bigbench since this is a collaboration with researchers from many different institutions/organizations beyond Google. \r\n\r\nI see the drawback with the updates to the dataset card having to go through a PR, but hopefully that won't be very frequent. \r\n\r\nWe're finalizing putting the bigbench api on pip, so once that's finalized I just need to update the setup.py with the correct dependency and I think we are ready to merge. ",
"Ok perfect, thank you !",
"I noticed that in the latest windows CI run it takes forever to install the dependencies, was there any change in the bigbench dependencies recently ?",
"oh, sorry! I just did a double check on the dependencies, and it seems like there is at least one left that should have been removed. There's also one new one added. \r\nLet me get those removed again. Will ping you here when it's updated. ",
"It looks like there is a circular dependency in `bigbench` at https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz\r\n\r\n```python\r\n>>> import bigbench.api.util as bb_utils\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/bigbench/api/util.py\", line 29, in <module>\r\n import bigbench.models.query_logging_model as query_logging_model\r\n File \"/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/bigbench/models/query_logging_model.py\", line 23, in <module>\r\n import bigbench.api.util as util\r\nAttributeError: module 'bigbench.api' has no attribute 'util'\r\n```",
"Hi @lhoestq , \r\nI think we are ready to merge! \r\n\r\nI have one minor question that I haven't been able to figure out: \r\nIs there a way to bypass the `verify_infos` from triggering? I have `max_examples` as an argument to allow for selecting a fixed subset of the datasets (some of the tasks have *very* many examples). But this is a variable that's not specified by the configs, so it raises an `NonMatchingSplitsSizesError`.\r\nI wasn't able to work my way around this, but perhaps there is a way to bypass this that I'm not seeing?\r\nIf this cannot be done, I'm happy to ignore this for now.\r\n\r\nRegarding pypi, we are working on a release there, but I'm told there is some issue that there is a problem regarding the upload, and we are not sure when it will be resolved, and it's not in my control. \r\nI think merging this PR with the GCS is a great idea, and I will open a new PR when the pypi version is ready. ",
"Cool ! Merging then :D\r\n\r\n> Is there a way to bypass the verify_infos from triggering? I have max_examples as an argument to allow for selecting a fixed subset of the datasets (some of the tasks have very many examples). But this is a variable that's not specified by the configs, so it raises an NonMatchingSplitsSizesError.\r\n\r\nThis is a bug, I opened an issue [here](https://github.com/huggingface/datasets/issues/4462). It should be easy to fix :)",
"The bigbench page is available here ! https://huggingface.co/datasets/bigbench\r\n\r\nI think we can update the dataset viewer to install bigbench on it, but since this is production code I'd rather use the version on pypi for bigbench when it comes out"
] |
https://api.github.com/repos/huggingface/datasets/issues/2926
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2926/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2926/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2926/events
|
https://github.com/huggingface/datasets/issues/2926
| 997,463,277 |
I_kwDODunzps47dBTt
| 2,926 |
Error when downloading datasets to non-traditional cache directories
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false | null | 1 |
2021-09-15T19:59:46Z
|
2021-11-24T21:42:31Z
| null | null |
## Describe the bug
When the cache directory is linked (soft link) to a directory on a NetApp device, the download fails.
## Steps to reproduce the bug
```bash
ln -s /path/to/netapp/.cache ~/.cache
```
```python
load_dataset("imdb")
```
## Expected results
Successfully loading IMDB dataset
## Actual results
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33432835,
num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0,
dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'),
'recorded': SplitInfo(name='test', num_bytes=659932, num_examples=503, dataset_name='imdb')}, {'expected':
SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded':
SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.1.2
- Platform: Ubuntu
- Python version: 3.8
## Extra notes
Stranger yet, while trying to debug the phenomenon, I found that the results vary a lot, with no clear pattern:
- With `cache_dir="/path/to/netapp/.cache"` the same thing happens.
- However, when linking `~/netapp/` to `/path/to/netapp` *and* setting `cache_dir="~/netapp/.cache/huggingface/datasets"` - it does work
- On the other hand, when linking `~/.cache` to `~/netapp/.cache` without using `cache_dir`, it no longer works.
I could only test this on a NetApp device, but it might affect any other mounted filesystem as well.
Thanks :)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2926/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2926/timeline
| null | null | null | null | false |
[
"Same here !"
] |
https://api.github.com/repos/huggingface/datasets/issues/822
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/822/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/822/comments
|
https://api.github.com/repos/huggingface/datasets/issues/822/events
|
https://github.com/huggingface/datasets/issues/822
| 739,579,314 |
MDU6SXNzdWU3Mzk1NzkzMTQ=
| 822 |
datasets freezes
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false | null | 2 |
2020-11-10T05:10:19Z
|
2023-07-20T16:08:14Z
|
2023-07-20T16:08:13Z
| null |
Hi, I want to load these two datasets and convert them to torch format, but the code freezes for me. Could you have a look please? Thanks.
```python
dataset1 = load_dataset("squad", split="train[:10]")
dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question'])
dataset2 = load_dataset("imdb", split="train[:10]")
dataset2 = dataset2.set_format(type="torch", columns=["text", "label"])
print(len(dataset1))
```
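A hedged sketch of the workaround described in the comments below: only apply the torch format to columns that can actually become tensors, and note that `set_format()` works in place (it returns `None`), so the dataset should not be reassigned:
```python
from datasets import load_dataset

dataset2 = load_dataset("imdb", split="train[:10]")
# Format only the integer label column; keep the raw text accessible unformatted.
dataset2.set_format(type="torch", columns=["label"], output_all_columns=True)
print(dataset2[0]["label"])       # a torch tensor
print(type(dataset2[0]["text"]))  # <class 'str'> -- text columns are not converted
```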
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/822/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/822/timeline
| null |
completed
| null | null | false |
[
"Pytorch is unable to convert strings to tensors unfortunately.\r\nYou can use `set_format(type=\"torch\")` on columns that can be converted to tensors, such as token ids.\r\n\r\nThis makes me think that we should probably raise an error or at least a warning when one tries to create pytorch tensors out of text columns",
"Ultimately, we decided to return a list instead of an error when formatting a string column with the format type `\"torch\"`.\r\n\r\nIf you think an error would be more appropriate, please open a new issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/4910
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4910/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4910/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4910/events
|
https://github.com/huggingface/datasets/issues/4910
| 1,354,374,328 |
I_kwDODunzps5Quhy4
| 4,910 |
Identical keywords in build_kwargs and config_kwargs lead to TypeError in load_dataset_builder()
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
open
| false | null | 7 |
2022-08-29T14:11:48Z
|
2022-09-13T11:58:46Z
| null | null |
## Describe the bug
In `load_dataset_builder()`, `build_kwargs` and `config_kwargs` can contain the same keywords, leading to a `TypeError: type object got multiple values for keyword argument 'xyz'`.
I ran into this problem with the keyword: `base_path`. It might happen with other kwargs as well. I think a quickfix would be
```python
builder_cls = import_main_class(dataset_module.module_path)
builder_kwargs = dataset_module.builder_kwargs
data_files = builder_kwargs.pop("data_files", data_files)
config_name = builder_kwargs.pop("config_name", name)
hash = builder_kwargs.pop("hash")
base_path = builder_kwargs.pop("base_path")
```
and then pass base_path into `builder_cls`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("rotten_tomatoes", base_path="./sample_data")
```
## Expected results
The docs state: `**config_kwargs` — Keyword arguments to be passed to the [BuilderConfig](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.BuilderConfig) and used in the [DatasetBuilder](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.DatasetBuilder).
So I would expect to be able to pass the base_path into `load_dataset()`.
## Actual results
TypeError("type object got multiple values for keyword argument "base_path").
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.8.9
- PyArrow version: 9.0.0
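A minimal, self-contained sketch (hypothetical values, not the `datasets` code) of the collision described above and of the dict-merge fix later suggested in the comments, where the user-supplied `config_kwargs` take precedence instead of being passed a second time:
```python
builder_kwargs = {"base_path": "/default/cache/path", "hash": "abc123"}  # hypothetical values
config_kwargs = {"base_path": "./sample_data"}

def builder_cls_stub(**kwargs):  # stand-in for the real builder class
    return kwargs

# builder_cls_stub(**builder_kwargs, **config_kwargs)
#   -> TypeError: builder_cls_stub() got multiple values for keyword argument 'base_path'
builder = builder_cls_stub(**{**builder_kwargs, **config_kwargs})
print(builder["base_path"])  # './sample_data' -- the user-provided value wins
```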
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4910/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4910/timeline
| null | null | null | null | false |
[
"I am getting similar error - `TypeError: type object got multiple values for keyword argument 'name'` while following this [tutorial](https://huggingface.co/docs/datasets/dataset_script#create-a-dataset-loading-script). I am getting this error with the `dataset-cli test` command.\r\n\r\n`datasets` version: 2.4.0",
"In my case, this was happening because I defined multiple `BuilderConfig` for multiple types, but didn't had all the data files that are requierd by those configs. \r\n\r\nI think this is different than the original issue by @bablf .",
"Hi ! I think this can be fixed by letting the config_kwargs take over the builder kwargs here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/load.py#L1533-L1534\r\n\r\nmaybe something like this ?\r\n```python\r\n **{**builder_kwargs, **config_kwargs}\r\n```\r\n\r\nLet me know if you'd like to contribute and fix this bug, so I can assign you :)\r\n\r\n> In my case, this was happening because I defined multiple BuilderConfig for multiple types, but didn't had all the data files that are requierd by those configs.\r\n> \r\n> I think this is different than the original issue by @bablf .\r\n\r\nFeel free to to open an new issue, I'd be happy to help\r\n",
"@lhoestq Yeah, I want to, please assign.",
"Cool thank you ! Let me know if you have questions or if I can help",
"@lhoestq On second thoughts, I think this might be expected behavior; although a better error message might help.\r\n\r\nReasoning: Given n configs, if no data file is provided for any config, then it should be an error. Then why it should not be the case if out of n configs, for some data files are provided but not for others. Also, I was using `--all_configs` flag with `dataset-cli test`.",
"Ok I see - maybe we should check the values of builder_kwargs raise an error if any key in config_kwargs tries to overwrite it ? The builder kwargs are determined from the builder's type and location (in some cases it forces the base_path, data_files and config name for example)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4355
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4355/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4355/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4355/events
|
https://github.com/huggingface/datasets/pull/4355
| 1,236,797,490 |
PR_kwDODunzps433EgP
| 4,355 |
Fix warning in upload_file
|
[] |
closed
| false | null | 1 |
2022-05-16T08:21:31Z
|
2022-05-16T11:28:02Z
|
2022-05-16T11:19:57Z
| null |
Fix warning:
```
FutureWarning: Pass path_or_fileobj='...' as keyword args. From version 0.7 passing these as positional arguments will result in an error
```
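For context, a hedged sketch of the keyword-argument call style the warning asks for; the repo id and file paths are placeholders, not values from this PR:
```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="dataset_infos.json",  # placeholder local file
    path_in_repo="dataset_infos.json",
    repo_id="username/my-dataset",         # placeholder repo
    repo_type="dataset",
)
```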
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4355/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4355/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/4355.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4355",
"merged_at": "2022-05-16T11:19:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4355.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4355"
}
| true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3373
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3373/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3373/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3373/events
|
https://github.com/huggingface/datasets/issues/3373
| 1,070,406,391 |
I_kwDODunzps4_zRr3
| 3,373 |
Support streaming zipped CSV dataset repo by passing only repo name
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false | null | 0 |
2021-12-03T09:48:24Z
|
2021-12-16T18:03:31Z
|
2021-12-16T18:03:31Z
| null |
Given a community 🤗 dataset repository containing only a zipped CSV file (only raw data, no loading script), I would like to load it in streaming mode without passing `data_files`:
```
ds_name = "bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab"
ds = load_dataset(ds_name, split="train", streaming=True, use_auth_token=True)
item = next(iter(ds))
```
Currently, it gives a `FileNotFoundError` because there is no glob in the passed URL (no "*" after "zip://", i.e. "zip://*"):
```
'zip://::https://huggingface.co/datasets/bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab/resolve/e5d45f1bd9a8a798cc14f0a45ebc1ce91907c792/poems_dataset.zip'
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3373/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3373/timeline
| null |
completed
| null | null | false |
[] |
https://api.github.com/repos/huggingface/datasets/issues/5551
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5551/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5551/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5551/events
|
https://github.com/huggingface/datasets/pull/5551
| 1,592,140,836 |
PR_kwDODunzps5KXCof
| 5,551 |
Suggest scikit-learn instead of sklearn
|
[] |
closed
| false | null | 4 |
2023-02-20T16:16:57Z
|
2023-02-21T13:27:57Z
|
2023-02-21T13:21:07Z
| null |
This is a rather unimportant fix, but the suggested `pip install sklearn` does not work.
The current error message if sklearn is not installed:
```
ImportError: To be able to use [dataset name], you need to install the following dependency: sklearn.
Please install it using 'pip install sklearn' for instance.
```
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5551/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5551/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/5551.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5551",
"merged_at": "2023-02-21T13:21:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5551.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5551"
}
| true |
[
"good catch!",
"_The documentation is not available anymore as the PR was closed or merged._",
"The test fail is unrelated to this PR and fixed on `main` - merging :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008942 / 0.011353 (-0.002411) | 0.004617 / 0.011008 (-0.006391) | 0.101310 / 0.038508 (0.062802) | 0.030997 / 0.023109 (0.007888) | 0.306292 / 0.275898 (0.030394) | 0.370533 / 0.323480 (0.047053) | 0.007318 / 0.007986 (-0.000667) | 0.003473 / 0.004328 (-0.000856) | 0.078557 / 0.004250 (0.074307) | 0.036312 / 0.037052 (-0.000740) | 0.308993 / 0.258489 (0.050504) | 0.344411 / 0.293841 (0.050570) | 0.034384 / 0.128546 (-0.094162) | 0.011631 / 0.075646 (-0.064016) | 0.323948 / 0.419271 (-0.095324) | 0.041176 / 0.043533 (-0.002357) | 0.302512 / 0.255139 (0.047373) | 0.322439 / 0.283200 (0.039239) | 0.088955 / 0.141683 (-0.052728) | 1.534918 / 1.452155 (0.082763) | 1.555803 / 1.492716 (0.063087) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195639 / 0.018006 (0.177633) | 0.423068 / 0.000490 (0.422579) | 0.004101 / 0.000200 (0.003901) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023691 / 0.037411 (-0.013721) | 0.100536 / 0.014526 (0.086011) | 0.108399 / 0.176557 (-0.068157) | 0.143515 / 0.737135 (-0.593620) | 0.111886 / 0.296338 (-0.184452) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417519 / 0.215209 (0.202310) | 4.180463 / 2.077655 (2.102808) | 1.862511 / 1.504120 (0.358391) | 1.658724 / 1.541195 (0.117529) | 1.735847 / 1.468490 
(0.267357) | 0.688257 / 4.584777 (-3.896520) | 3.447976 / 3.745712 (-0.297737) | 1.877939 / 5.269862 (-3.391922) | 1.157385 / 4.565676 (-3.408292) | 0.081418 / 0.424275 (-0.342857) | 0.012395 / 0.007607 (0.004788) | 0.518935 / 0.226044 (0.292891) | 5.220355 / 2.268929 (2.951427) | 2.308355 / 55.444624 (-53.136269) | 1.960026 / 6.876477 (-4.916450) | 2.013179 / 2.142072 (-0.128893) | 0.802850 / 4.805227 (-4.002377) | 0.146941 / 6.500664 (-6.353723) | 0.064080 / 0.075469 (-0.011389) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284443 / 1.841788 (-0.557344) | 13.903755 / 8.074308 (5.829447) | 14.467101 / 10.191392 (4.275709) | 0.156813 / 0.680424 (-0.523611) | 0.028583 / 0.534201 (-0.505618) | 0.406349 / 0.579283 (-0.172934) | 0.413178 / 0.434364 (-0.021186) | 0.491283 / 0.540337 (-0.049055) | 0.571171 / 1.386936 (-0.815765) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006868 / 0.011353 (-0.004484) | 0.004593 / 0.011008 (-0.006416) | 0.077574 / 0.038508 (0.039066) | 0.027703 / 0.023109 (0.004593) | 0.342096 / 0.275898 (0.066198) | 0.378500 / 0.323480 (0.055020) | 0.005785 / 0.007986 (-0.002201) | 0.003342 / 0.004328 (-0.000986) | 0.076105 / 0.004250 (0.071855) | 0.040369 / 0.037052 (0.003317) | 0.343611 / 0.258489 (0.085122) | 0.391859 / 0.293841 (0.098018) | 0.032675 / 0.128546 (-0.095871) | 0.011623 / 0.075646 (-0.064023) | 0.086623 / 0.419271 (-0.332648) | 0.051955 / 0.043533 (0.008423) | 0.343425 / 0.255139 (0.088286) | 0.368887 / 0.283200 (0.085688) | 0.097117 / 0.141683 (-0.044566) | 1.499546 / 1.452155 (0.047391) | 1.593100 / 1.492716 (0.100383) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193568 / 0.018006 (0.175562) | 0.409211 / 0.000490 (0.408722) | 0.003797 / 0.000200 (0.003597) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024982 / 0.037411 (-0.012430) | 0.101367 / 0.014526 (0.086841) | 0.108546 / 0.176557 (-0.068010) | 0.144402 / 0.737135 (-0.592733) | 0.112233 / 0.296338 (-0.184105) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432820 / 0.215209 (0.217611) | 4.341045 / 2.077655 (2.263391) | 2.058326 / 1.504120 (0.554207) | 1.853913 / 1.541195 (0.312718) | 1.942436 / 1.468490 (0.473946) | 0.699130 / 4.584777 (-3.885647) | 3.392879 / 3.745712 (-0.352833) | 1.908277 / 5.269862 (-3.361585) | 1.177998 / 4.565676 (-3.387678) | 0.082700 / 0.424275 (-0.341576) | 0.012505 / 0.007607 (0.004898) | 0.526286 / 0.226044 (0.300242) | 5.279599 / 2.268929 (3.010670) | 2.505771 / 55.444624 (-52.938854) | 2.158460 / 6.876477 (-4.718016) | 2.211437 / 2.142072 (0.069365) | 0.802065 / 4.805227 (-4.003163) | 0.150766 / 6.500664 (-6.349898) | 0.067639 / 0.075469 (-0.007830) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286595 / 1.841788 (-0.555192) | 13.961894 / 8.074308 (5.887586) | 14.021865 / 10.191392 (3.830473) | 0.164590 / 0.680424 (-0.515834) | 0.016909 / 0.534201 (-0.517292) | 0.392215 / 0.579283 (-0.187069) | 0.408080 / 0.434364 (-0.026284) | 0.488247 / 0.540337 (-0.052090) | 0.575524 / 1.386936 (-0.811412) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/805
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/805/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/805/comments
|
https://api.github.com/repos/huggingface/datasets/issues/805/events
|
https://github.com/huggingface/datasets/issues/805
| 737,019,360 |
MDU6SXNzdWU3MzcwMTkzNjA=
| 805 |
On loading a metric from datasets, I get the following error
|
[] |
closed
| false | null | 1 |
2020-11-05T15:14:38Z
|
2022-02-14T15:32:59Z
|
2022-02-14T15:32:59Z
| null |
```python
from datasets import load_metric
metric = load_metric('bleurt')
```
Traceback (excerpt):
```
210 class _ArrayXDExtensionType(pa.PyExtensionType):
211
212     ndims: int = None
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'
```
Any help will be appreciated. Thank you.
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/805/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/805/timeline
| null |
completed
| null | null | false |
[
"Hi ! We support only pyarrow > 0.17.1 so that we have access to the `PyExtensionType` object.\r\nCould you update pyarrow and try again ?\r\n```\r\npip install --upgrade pyarrow\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/2755
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2755/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2755/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2755/events
|
https://github.com/huggingface/datasets/pull/2755
| 959,115,888 |
MDExOlB1bGxSZXF1ZXN0NzAyMjgwMjI4
| 2,755 |
Fix metadata JSON for turkish_movie_sentiment dataset
|
[] |
closed
| false | null | 0 |
2021-08-03T13:25:44Z
|
2021-08-04T09:06:54Z
|
2021-08-04T09:06:53Z
| null |
Related to #2743.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2755/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2755/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/2755.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2755",
"merged_at": "2021-08-04T09:06:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2755.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2755"
}
| true |
[] |
https://api.github.com/repos/huggingface/datasets/issues/1834
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1834/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1834/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1834/events
|
https://github.com/huggingface/datasets/pull/1834
| 803,517,094 |
MDExOlB1bGxSZXF1ZXN0NTY5NDMzNDA4
| 1,834 |
Fixes base_url of limit dataset
|
[] |
closed
| false | null | 1 |
2021-02-08T12:26:35Z
|
2021-02-08T12:42:50Z
|
2021-02-08T12:42:50Z
| null |
`test.json` is not available in the master branch of the repository anymore. Linking to a specific commit.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1834/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1834/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/1834.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1834",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1834.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1834"
}
| true |
[
"OK, apparently it is a lot more complicated than simply changing the URL? Going to make an issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/5976
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5976/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5976/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5976/events
|
https://github.com/huggingface/datasets/pull/5976
| 1,768,503,913 |
PR_kwDODunzps5TmAFp
| 5,976 |
Avoid stuck map operation when subprocesses crashes
|
[] |
closed
| false | null | 11 |
2023-06-21T21:18:31Z
|
2023-07-10T09:58:39Z
|
2023-07-10T09:50:07Z
| null |
I've been using Dataset.map() with `num_proc=os.cpu_count()` to leverage multicore processing for my datasets, but from time to time I get stuck processes waiting forever. Apparently, when one of the subprocesses is abruptly killed (OOM killer, segfault, SIGKILL, etc), the main process keeps waiting for the async task sent to that child process to finish.
It seems to be easy to reproduce the issue with the following script:
```
import os
from datasets import Dataset, Features, Value
def do_stuck(item):
os.kill(os.getpid(), 9)
data = {
"col1": list(range(5)),
"col2": list(range(5)),
}
ds = Dataset.from_dict(
data,
features=Features({
"col1": Value("int64"),
"col2": Value("int64"),
}),
)
print(ds.map(do_stuck, num_proc=4))
```
This is an old behavior in Python, which apparently was fixed a few years ago in `concurrent.futures.ProcessPoolExecutor` ([ref](https://bugs.python.org/issue9205)), but not in `multiprocessing.pool.Pool` / `multiprocess.pool.Pool`, which is used by `Dataset.map` ([ref](https://bugs.python.org/issue22393)).
This PR is an attempt to detect when a child process gets killed and to raise a `RuntimeError` warning the `Dataset.map()` caller.
EDIT: Related proposal for future improvement: https://github.com/huggingface/datasets/discussions/5977
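A minimal sketch (not the `datasets` implementation) of the behaviour this PR emulates: `concurrent.futures` surfaces a crashed worker as `BrokenProcessPool` instead of hanging, unlike `multiprocess.pool.Pool`:
```python
import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

def crash(_):
    os.kill(os.getpid(), 9)  # simulate an abrupt worker death (OOM kill, segfault, ...)

if __name__ == "__main__":
    try:
        with ProcessPoolExecutor(max_workers=2) as executor:
            list(executor.map(crash, range(4)))
    except BrokenProcessPool as err:
        print(f"Worker crash detected: {err}")
```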
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5976/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5976/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/5976.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5976",
"merged_at": "2023-07-10T09:50:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5976.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5976"
}
| true |
[
"Hi ! Do you think this can be fixed at the Pool level ? Ideally it should be the Pool responsibility to handle this, not the `map` code. We could even subclass Pool if needed (at least the one from `multiprocess`)",
"@lhoestq it makes sense to me. Just pushed a refactoring creating a `class ProcessPool(multiprocess.pool.Pool)` to keep track of the PID changes.",
"_The documentation is not available anymore as the PR was closed or merged._",
"I managed to raise an error without subclassing Pool with two additions to `iflatmap_unordered`:\r\n\r\n1. at the beggining\r\n```python\r\noriginal_pool = list(pool._pool)\r\n```\r\n\r\n2. in the loop\r\n```python\r\nif any(async_result._pool != original_pool for async_result in async_results) and queue.empty():\r\n raise RuntimeError(\r\n \"One of the subprocesses has abruptly died during map operation.\"\r\n \"To debug the error, disable multiprocessing.\"\r\n )\r\n```\r\n\r\nIt's still a fix that only works for `iflatmap_unordered` (so not for map, imap etc) but is maybe simpler that subclassing. It also works for both multiprocessing.Pool and multiprocess.Pool",
"@lhoestq sorry for the delay. Busy weeks here. \r\n\r\nI just pushed the change you requested. It looks closer to the original proposal, actually.\r\n\r\nIt seems that `map` actually uses `iflatmap_unordered` ([here](https://github.com/huggingface/datasets/blob/819bb4346434912eb405ce3f3e9f21dc25a2fe85/src/datasets/arrow_dataset.py#L1509)). I think this solution works fine for the `map` method (which is the one being tested by the new `tests/test_arrow_dataset.py::BaseDatasetTest::test_map_crash_subprocess`, right?).",
"Yes fixing iflatmap_unordered does fix Dataset.map, but it won't fix any Pool.map that we may use elsewhere so we'll have to keep this in mind.",
"It looks all good to me, feel free to fix code formatting by running `make style` and we can merge :)",
"> Yes fixing iflatmap_unordered does fix Dataset.map, but it won't fix any Pool.map that we may use elsewhere so we'll have to keep this in mind.\r\n\r\nRight, I agree. The best way moving forward is probably not using the buggy `multiprocess.Pool` anymore, and replace it with `concurrent.futures.ProcessPoolExecutor` as much as possible.\r\n\r\nAnyway, I've run `make style` now. Thanks for the support!",
"It looks like checking the async_result._pool doesn't always work - sorry about that. We might just go back to your original solution then. Would also be cool to open an issue in `multiprocess` to ask if they have a solution or if they plan to fix this.",
"@lhoestq no problem! Reverted to the previous version.\r\n\r\nTBH, given the discussions [in this python issue](https://github.com/python/cpython/issues/66587), I don't think the error in `multiprocess` will be merged upstream any time soon...",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006060 / 0.011353 (-0.005293) | 0.003695 / 0.011008 (-0.007313) | 0.080484 / 0.038508 (0.041976) | 0.061894 / 0.023109 (0.038785) | 0.312510 / 0.275898 (0.036612) | 0.352398 / 0.323480 (0.028918) | 0.004638 / 0.007986 (-0.003348) | 0.002918 / 0.004328 (-0.001410) | 0.062932 / 0.004250 (0.058681) | 0.050859 / 0.037052 (0.013807) | 0.316812 / 0.258489 (0.058323) | 0.357684 / 0.293841 (0.063843) | 0.027622 / 0.128546 (-0.100924) | 0.008012 / 0.075646 (-0.067634) | 0.260970 / 0.419271 (-0.158302) | 0.045807 / 0.043533 (0.002275) | 0.321235 / 0.255139 (0.066096) | 0.343162 / 0.283200 (0.059962) | 0.021136 / 0.141683 (-0.120547) | 1.465886 / 1.452155 (0.013731) | 1.500216 / 1.492716 (0.007500) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187286 / 0.018006 (0.169279) | 0.428724 / 0.000490 (0.428235) | 0.003029 / 0.000200 (0.002829) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022703 / 0.037411 (-0.014708) | 0.072740 / 0.014526 (0.058215) | 0.083436 / 0.176557 (-0.093120) | 0.144559 / 0.737135 (-0.592577) | 0.083958 / 0.296338 (-0.212380) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435729 / 0.215209 (0.220520) | 4.351146 / 2.077655 (2.273491) | 2.316627 / 1.504120 (0.812508) | 2.144587 / 1.541195 (0.603393) | 2.209182 / 1.468490 
(0.740692) | 0.501131 / 4.584777 (-4.083646) | 3.077085 / 3.745712 (-0.668627) | 4.353706 / 5.269862 (-0.916156) | 2.621523 / 4.565676 (-1.944154) | 0.058976 / 0.424275 (-0.365299) | 0.006467 / 0.007607 (-0.001141) | 0.506690 / 0.226044 (0.280646) | 5.085787 / 2.268929 (2.816858) | 2.731336 / 55.444624 (-52.713289) | 2.419451 / 6.876477 (-4.457025) | 2.583649 / 2.142072 (0.441577) | 0.589869 / 4.805227 (-4.215359) | 0.131040 / 6.500664 (-6.369624) | 0.061332 / 0.075469 (-0.014137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.220542 / 1.841788 (-0.621245) | 18.169643 / 8.074308 (10.095335) | 13.251704 / 10.191392 (3.060312) | 0.142952 / 0.680424 (-0.537472) | 0.016639 / 0.534201 (-0.517562) | 0.334851 / 0.579283 (-0.244432) | 0.361865 / 0.434364 (-0.072499) | 0.380933 / 0.540337 (-0.159404) | 0.527374 / 1.386936 (-0.859562) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006319 / 0.011353 (-0.005034) | 0.003778 / 0.011008 (-0.007231) | 0.062388 / 0.038508 (0.023880) | 0.062228 / 0.023109 (0.039119) | 0.373727 / 0.275898 (0.097829) | 0.399442 / 0.323480 (0.075962) | 0.005434 / 0.007986 (-0.002551) | 0.003020 / 0.004328 (-0.001308) | 0.062774 / 0.004250 (0.058524) | 0.052784 / 0.037052 (0.015732) | 0.376428 / 0.258489 (0.117939) | 0.405039 / 0.293841 (0.111198) | 0.027884 / 0.128546 (-0.100662) | 0.008086 / 0.075646 (-0.067561) | 0.067078 / 0.419271 (-0.352194) | 0.042927 / 0.043533 (-0.000606) | 0.372142 / 0.255139 (0.117003) | 0.389604 / 0.283200 (0.106405) | 0.021582 / 0.141683 (-0.120101) | 1.473332 / 1.452155 (0.021177) | 1.536018 / 1.492716 (0.043302) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184729 / 0.018006 (0.166723) | 0.421065 / 0.000490 (0.420575) | 0.002681 / 0.000200 (0.002481) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026067 / 0.037411 (-0.011344) | 0.077138 / 0.014526 (0.062612) | 0.085178 / 0.176557 (-0.091379) | 0.139681 / 0.737135 (-0.597454) | 0.087528 / 0.296338 (-0.208810) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444899 / 0.215209 (0.229690) | 4.459168 / 2.077655 (2.381513) | 2.408792 / 1.504120 (0.904672) | 2.237243 / 1.541195 (0.696048) | 2.296298 / 1.468490 (0.827808) | 0.498508 / 4.584777 (-4.086269) | 3.067064 / 3.745712 (-0.678648) | 4.470577 / 5.269862 (-0.799284) | 2.701972 / 4.565676 (-1.863705) | 0.057711 / 0.424275 (-0.366564) | 0.006443 / 0.007607 (-0.001164) | 0.524046 / 0.226044 (0.298002) | 5.229928 / 2.268929 (2.961000) | 2.862101 / 55.444624 (-52.582523) | 2.545972 / 6.876477 (-4.330504) | 2.606459 / 2.142072 (0.464387) | 0.593285 / 4.805227 (-4.211942) | 0.124913 / 6.500664 (-6.375751) | 0.061942 / 0.075469 (-0.013527) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.322162 / 1.841788 (-0.519625) | 18.745796 / 8.074308 (10.671488) | 13.955443 / 10.191392 (3.764051) | 0.145610 / 0.680424 (-0.534814) | 0.016817 / 0.534201 (-0.517384) | 0.331180 / 0.579283 (-0.248103) | 0.343019 / 0.434364 (-0.091345) | 0.379459 / 0.540337 (-0.160878) | 0.526403 / 1.386936 (-0.860533) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4279
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4279/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4279/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4279/events
|
https://github.com/huggingface/datasets/pull/4279
| 1,225,300,273 |
PR_kwDODunzps43SXw5
| 4,279 |
Update minimal PyArrow version warning
|
[] |
closed
| false | null | 1 |
2022-05-04T12:26:09Z
|
2022-05-05T08:50:58Z
|
2022-05-05T08:43:47Z
| null |
Update the minimal PyArrow version warning (should've been part of #4250).
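For context, the kind of check being updated looks roughly like the sketch below; the version string and warning message here are placeholders, not the actual values changed in this PR:
```python
import warnings

import pyarrow
from packaging import version

MIN_PYARROW_VERSION = "6.0.0"  # placeholder, not the real minimum required by this PR

if version.parse(pyarrow.__version__) < version.parse(MIN_PYARROW_VERSION):
    warnings.warn(
        f"datasets requires pyarrow>={MIN_PYARROW_VERSION}, but version "
        f"{pyarrow.__version__} is installed. Please run `pip install -U pyarrow`."
    )
```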
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4279/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4279/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/4279.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4279",
"merged_at": "2022-05-05T08:43:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4279.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4279"
}
| true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2222
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2222/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2222/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2222/events
|
https://github.com/huggingface/datasets/pull/2222
| 857,847,231 |
MDExOlB1bGxSZXF1ZXN0NjE1MTk5MTM5
| 2,222 |
Fix too long WindowsFileLock name
|
[
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] |
closed
| false | null | 3 |
2021-04-14T12:26:52Z
|
2021-04-14T15:00:25Z
|
2021-04-14T14:46:19Z
| null |
Fix WindowsFileLock names longer than the allowed MAX_PATH by shortening the basename.
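For illustration only (not the code in this PR), one deterministic way to shorten an over-long lock file name is to hash the basename, so that every process computes the same lock path; see the discussion below about why the shortening must not be random:
```python
import hashlib
import os

MAX_PATH = 255  # illustrative limit; Windows caps full paths at 260 characters by default


def shorten_lock_path(lock_path: str, max_len: int = MAX_PATH) -> str:
    # If the lock file path exceeds the Windows limit, replace the basename with
    # a deterministic hash so all processes agree on the same lock file.
    if len(lock_path) <= max_len:
        return lock_path
    directory, basename = os.path.split(lock_path)
    digest = hashlib.sha256(basename.encode("utf-8")).hexdigest()[:16]
    return os.path.join(directory, f"{digest}.lock")
```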
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2222/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2222/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/2222.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2222",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2222.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2222"
}
| true |
[
"Windows users should disable the max path length limit. It's a nightmare to handle it.\r\nAlso the lock path must not be changed in a random way. Otherwise from another process the lock path might not be the same and the locking mechanism won't work.",
"Do you agree with handling the case where MAX_PATH is not disabled? If not, we can close this PR.\r\n\r\nIf so, would it work a deterministic lock path instead of random?",
"I'd rather not handle this at all, since there will be other places in the code where the limit will break things"
] |
https://api.github.com/repos/huggingface/datasets/issues/6017
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6017/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6017/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6017/events
|
https://github.com/huggingface/datasets/issues/6017
| 1,799,309,132 |
I_kwDODunzps5rP0dM
| 6,017 |
Switch to huggingface_hub's HfFileSystem
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false | null | 0 |
2023-07-11T16:24:40Z
|
2023-07-17T17:01:01Z
|
2023-07-17T17:01:01Z
| null |
Instead of the current `datasets.filesystems.hffilesystem.HfFileSystem`, which can be slow in some cases.
related to https://github.com/huggingface/datasets/issues/5846 and https://github.com/huggingface/datasets/pull/5919
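As a rough usage sketch of the proposed replacement (the repo id and file paths below are placeholders):
```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem()  # fsspec-compatible filesystem backed by the Hugging Face Hub

# List the files of a dataset repository.
files = fs.ls("datasets/username/my_dataset", detail=False)

# Stream a file directly from the Hub.
with fs.open("datasets/username/my_dataset/data/train.csv", "r") as f:
    first_line = f.readline()
```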
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6017/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6017/timeline
| null |
completed
| null | null | false |
[] |
https://api.github.com/repos/huggingface/datasets/issues/1570
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1570/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1570/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1570/events
|
https://github.com/huggingface/datasets/pull/1570
| 766,830,545 |
MDExOlB1bGxSZXF1ZXN0NTM5NzM1MDY2
| 1,570 |
Documentation for loading CSV datasets misleads the user
|
[] |
closed
| false | null | 0 |
2020-12-14T19:04:37Z
|
2020-12-22T19:30:12Z
|
2020-12-21T13:47:09Z
| null |
Documentation for loading CSV datasets misleads the user into thinking that setting `quote_char` to False will disable quoting.
There are two problems here:
i) `quote_char` is misspelled; it must be `quotechar`
ii) the documentation should also mention `quoting`
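To make the intended usage concrete, a small example (with a hypothetical file path): `quotechar` sets the quote character, while `quoting` (a `csv` module constant) is what actually controls or disables quoting.
```python
import csv

from datasets import load_dataset

dataset = load_dataset(
    "csv",
    data_files="my_file.csv",  # placeholder path
    quotechar='"',             # note: quotechar, not quote_char
    quoting=csv.QUOTE_NONE,    # this is what disables quoting
)
```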
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1570/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1570/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/1570.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1570",
"merged_at": "2020-12-21T13:47:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1570.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1570"
}
| true |
[] |
https://api.github.com/repos/huggingface/datasets/issues/2007
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2007/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2007/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2007/events
|
https://github.com/huggingface/datasets/issues/2007
| 824,518,158 |
MDU6SXNzdWU4MjQ1MTgxNTg=
| 2,007 |
How to not load huggingface datasets into memory
|
[] |
closed
| false | null | 2 |
2021-03-08T12:35:26Z
|
2021-08-04T18:02:25Z
|
2021-08-04T18:02:25Z
| null |
Hi
I am running this example from transformers library version 4.3.3:
(Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box)
USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --task translation_en_to_ro --output_dir /test/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --max_source_length 128 --max_target_length 128 --sortish_sampler --per_device_train_batch_size 8 --val_max_target_length 128 --deepspeed ds_config.json --num_train_epochs 1 --eval_steps 25000 --warmup_steps 500 --overwrite_output_dir
(Here please find the script: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py)
If you do not pass max_train_samples in the above command (i.e., if you load the full dataset), I get a memory issue on a GPU with 24 GB of memory.
I need to train a large-scale mt5 model on large-scale Wikipedia datasets (several of them concatenated, or other datasets in multiple languages like OPUS). Could you help me avoid loading the full data into memory, so that the script does not depend on the dataset size?
In the above example, I was hoping the script could work without relying on the dataset size, so that I can still train the model without subsampling the training set.
Thank you so much in advance for your great help, @lhoestq
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2007/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2007/timeline
| null |
completed
| null | null | false |
[
"So maybe a summary here: \r\nIf I could fit a large model with batch_size = X into memory, is there a way I could train this model for huge datasets with keeping setting the same? thanks ",
"The `datastets` library doesn't load datasets into memory. Therefore you can load a dataset that is terabytes big without filling up your RAM.\r\n\r\nThe only thing that's loaded into memory during training is the batch used in the training step.\r\nSo as long as your model works with batch_size = X, then you can load an even bigger dataset and it will work as well with the same batch_size.\r\n\r\nNote that you still have to take into account that some batches take more memory than others, depending on the texts lengths. If it works for a batch with batch_size = X and with texts of maximum length, then it will work for all batches.\r\n\r\nIn your case I guess that there are a few long sentences in the dataset. For those long sentences you get a memory error on your GPU because they're too long. By passing `max_train_samples` you may have taken a subset of the dataset that only contain short sentences. That's probably why in your case it worked only when you set `max_train_samples`.\r\nI'd suggest you to reduce the batch size so that the batches with long sentences can be loaded on the GPU.\r\n\r\nLet me know if that helps or if you have other questions"
] |
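To make the answer above concrete, a small sketch (the dataset name is just an example) of the memory-mapped access pattern being described: loading does not copy the Arrow data into RAM, and only the rows you index are materialized.
```python
from datasets import load_dataset

wiki = load_dataset("wikipedia", "20200501.en", split="train")  # memory-mapped from disk

batch_size = 8
for start in range(0, len(wiki), batch_size):
    batch = wiki[start : start + batch_size]  # only these rows are loaded into RAM
    # ... tokenize / run the training step on this batch only
```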
https://api.github.com/repos/huggingface/datasets/issues/2591
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2591/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2591/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2591/events
|
https://github.com/huggingface/datasets/issues/2591
| 936,957,975 |
MDU6SXNzdWU5MzY5NTc5NzU=
| 2,591 |
Cached dataset overflowing disk space
|
[] |
closed
| false | null | 4 |
2021-07-05T10:43:19Z
|
2021-07-19T09:08:19Z
|
2021-07-19T09:08:19Z
| null |
I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues because the Hugging Face cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 GB).
The cache folder is 500 GB (and now my disk space is full).
Is there a way to toggle caching, or to store the cache on a different device? (I have another drive with 4 TB that could hold the cache files.)
This might not technically be a bug, but I was unsure, and the bug category felt like the closest fit.
```
Traceback (most recent call last):
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 186, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/fingerprint.py", line 397, in wrapper
    out = func(self, *args, **kwargs)
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1983, in _map_single
    writer.finalize()
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_writer.py", line 418, in finalize
    self.pa_writer.close()
  File "pyarrow/ipc.pxi", line 402, in pyarrow.lib._CRecordBatchWriter.close
  File "pyarrow/error.pxi", line 97, in pyarrow.lib.check_status
OSError: [Errno 28] Error writing bytes to file. Detail: [errno 28] No space left on device
"""
```
The above exception was the direct cause of the following exception:
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2591/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2591/timeline
| null |
completed
| null | null | false |
[
"Hi! I'm transferring this issue over to `datasets`",
"I'm using the datasets concatenate dataset to combine the datasets and then train.\r\ntrain_dataset = concatenate_datasets([dataset1, dataset2, common_voice_train])\r\n\r\n",
"Hi @BirgerMoell.\r\n\r\nYou have several options:\r\n- to set caching to be stored on a different path location, other than the default one (`~/.cache/huggingface/datasets`):\r\n - either setting the environment variable `HF_DATASETS_CACHE` with the path to the new cache location\r\n - or by passing it with the parameter `cache_dir` when loading each of the datasets: `dataset = load_dataset(..., cache_dir=your_new_location)`\r\n\r\n You can get all the information in the docs: https://huggingface.co/docs/datasets/loading_datasets.html#cache-directory\r\n- I wouldn't recommend disabling caching, because current implementation generates cache files anyway, although in a temporary directory and they are deleted when the session closes. See details here: https://huggingface.co/docs/datasets/processing.html#enable-or-disable-caching\r\n- You could alternatively load the datasets in streaming mode. This is a new feature which allows loading the datasets without downloading the entire files. More information here: https://huggingface.co/docs/datasets/dataset_streaming.html",
"Hi @BirgerMoell,\r\n\r\nWe are planning to add a new feature to datasets, which could be interesting in your case: Add the option to delete temporary files (decompressed files) from the cache directory (see: #2481, #2604).\r\n\r\nWe will ping you once this feature is implemented, so that the size of your cache directory will be considerably reduced."
] |
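To make the cache-relocation advice above concrete (the paths and dataset name are examples):
```python
import os

# Option 1: move the whole datasets cache to the larger drive.
# Set this before importing `datasets` (or export it in the shell).
os.environ["HF_DATASETS_CACHE"] = "/mnt/bigdrive/hf_datasets_cache"

from datasets import load_dataset

# Option 2: set the cache directory per call.
common_voice_train = load_dataset(
    "common_voice", "sv-SE", split="train",
    cache_dir="/mnt/bigdrive/hf_datasets_cache",
)
```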
https://api.github.com/repos/huggingface/datasets/issues/1711
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1711/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1711/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1711/events
|
https://github.com/huggingface/datasets/pull/1711
| 782,129,083 |
MDExOlB1bGxSZXF1ZXN0NTUxNzQxODA2
| 1,711 |
Fix windows path scheme in cached path
|
[] |
closed
| false | null | 0 |
2021-01-08T13:45:56Z
|
2021-01-11T09:23:20Z
|
2021-01-11T09:23:19Z
| null |
As noticed in #807, there's currently an issue with `cached_path` not raising `FileNotFoundError` on Windows for absolute paths. This is due to the way we check whether a path is local or not: the check on the scheme using urlparse was incomplete.
I fixed this and added tests.
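For readers who want the gist without reading the diff, a hypothetical sketch of the problem and the kind of check involved (not the exact code in this PR): `urlparse` can report a Windows drive letter such as `C:` as a URL scheme, so a naive scheme check misclassifies absolute Windows paths as remote URLs.
```python
import re
from urllib.parse import urlparse


def is_remote_url(url_or_filename: str) -> bool:
    # Windows absolute paths like "C:\\Users\\me\\data.txt" can parse with
    # scheme == "c", so treat drive-letter "schemes" as local paths.
    if re.match(r"^[a-zA-Z]:[/\\]", url_or_filename):
        return False
    return urlparse(url_or_filename).scheme in ("http", "https", "s3", "gs", "hdfs", "ftp")
```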
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1711/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1711/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/1711.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1711",
"merged_at": "2021-01-11T09:23:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1711.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1711"
}
| true |
[] |
https://api.github.com/repos/huggingface/datasets/issues/620
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/620/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/620/comments
|
https://api.github.com/repos/huggingface/datasets/issues/620/events
|
https://github.com/huggingface/datasets/issues/620
| 699,815,135 |
MDU6SXNzdWU2OTk4MTUxMzU=
| 620 |
map/filter multiprocessing raises errors and corrupts datasets
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false | null | 22 |
2020-09-11T22:30:06Z
|
2020-10-08T16:31:47Z
|
2020-10-08T16:31:46Z
| null |
After upgrading to 1.0, I started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
rel_ds_dict["validation"] = rel_ds_dict["test"]
return ner_ds_dict, rel_ds_dict
```
The first train_test_split, `ner_ds`/`ner_ds_dict`, returns a `train` and `test` split that are iterable.
The second, `rel_ds`/`rel_ds_dict` in this case, returns a Dataset dict that has rows but, if selected from or sliced into, returns an empty dictionary, e.g. `rel_ds_dict['train'][0] == {}` and `rel_ds_dict['train'][0:100] == {}`.
Ok, I think I know the problem -- the rel_ds was mapped through a mapper with `num_proc=12`. If I remove `num_proc`, the dataset loads.
I also see errors with other map and filter functions when `num_proc` is set.
```
Done writing 67 indices in 536 bytes .
Done writing 67 indices in 536 bytes .
Fatal Python error: PyCOND_WAIT(gil_cond) failed
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/620/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/620/timeline
| null |
completed
| null | null | false |
[
"It seems that I ran into the same problem\r\n```\r\ndef tokenize(cols, example):\r\n for in_col, out_col in cols.items():\r\n example[out_col] = hf_tokenizer.convert_tokens_to_ids(hf_tokenizer.tokenize(example[in_col]))\r\n return example\r\ncola = datasets.load_dataset('glue', 'cola')\r\ntokenized_cola = cola.map(partial(tokenize, {'sentence': 'text_idxs'}),\r\n num_proc=2,)\r\n```\r\nand it outpus (exceprts)\r\n```\r\nConcatenating 2 shards from multiprocessing\r\nSet __getitem__(key) output type to python objects for ['idx', 'label', 'sentence', 'text_idxs'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nTesting the mapped function outputs\r\nTesting finished, running the mapping function on the dataset\r\nDone writing 532 indices in 4256 bytes .\r\nDone writing 531 indices in 4248 bytes .\r\nProcess #0 will write at /home/yisiang/.cache/huggingface/datasets/glue/cola/1.0.0/930e9d141872db65102cabb9fa8ac01c11ffc8a1b72c2e364d8cdda4610df542/tokenized_test_00000_of_00002.arrow\r\nProcess #1 will write at /home/yisiang/.cache/huggingface/datasets/glue/cola/1.0.0/930e9d141872db65102cabb9fa8ac01c11ffc8a1b72c2e364d8cdda4610df542/tokenized_test_00001_of_00002.arrow\r\nSpawning 2 processes\r\n```\r\nand then the program never stop.",
"same problem.\r\n`encoded_dataset = core_data.map(lambda examples: tokenizer(examples[\"query\"], examples[\"document\"], padding=True, truncation='longest_first', return_tensors=\"pt\", max_length=384), num_proc=16, keep_in_memory=True)`\r\nit outputs:\r\n```\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787499 indices in 25568385696 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nSpawning 16 processes\r\n```",
"Thanks for reporting.\r\n\r\n\r\nWhich tokenizers are you using ? What platform are you on ? Can you tell me which version of datasets and pyarrow you're using ? @timothyjlaurent @richarddwang @HuangLianzhe \r\n\r\nAlso if you're able to reproduce the issue on google colab that would be very helpful.\r\n\r\nI tried to run your code @richarddwang with the bert tokenizer and I wasn't able to reproduce",
"Hi, Sorry that I forgot to see what my version was.\r\nBut after updating datasets to master (editable install), and latest pyarrow. \r\nIt works now ~",
"Sorry, I just noticed this.\r\nI'm running this on MACOS the version of datasets I'm was 1.0.0 but I've also tried it on 1.0.2. `pyarrow==1.0.1`, Python 3.6\r\n\r\nConsider this code:\r\n```python\r\n\r\n loader_path = str(Path(__file__).parent / \"prodigy_dataset_builder.py\")\r\n ds = load_dataset(\r\n loader_path, name=\"prodigy-ds\", data_files=list(file_paths), cache_dir=cache_dir\r\n )[\"train\"]\r\n valid_relations = set(vocabulary.relation_types.keys())\r\n\r\n ds = ds.filter(filter_good_rows, fn_kwargs=dict(valid_rel_labels=valid_relations))\r\n\r\n ds = ds.map(map_bpe_encodings, batched=True, fn_kwargs=dict(tokenizer=vocabulary.tokenizer), num_proc=10)\r\n\r\n # add all feature data\r\n ner_ds: Dataset = ds.map(\r\n add_bio_tags,\r\n fn_kwargs=dict(ner_label_map=vocabulary.ner_labels, tokenizer=vocabulary.tokenizer),\r\n )\r\n rel_ds: Dataset = ner_ds.map(\r\n relation_ds_factory,\r\n batched=True,\r\n writer_batch_size=100,\r\n fn_kwargs=dict(tokenizer=vocabulary.tokenizer, vocabulary=vocabulary),\r\n )\r\n```\r\nThe loader is essentially a jsonloader with some extra error handling. The data is a jsonlines format with text field and a list of span objects and relation objects. \r\n\r\nIn the `ner_ds` a field, `ner_labels` is added, this is used in the downstream `relation_ds_factory`. It all runs fine in a single process but I get a KeyError error if run with num_proc set\r\n:\r\n\r\n```\r\n File \"/Users/timothy.laurent/src/inv-text2struct/text2struct/model/dataset.py\", line 348, in relation_ds_factory\r\n ner_labels = example[\"ner_labels\"]\r\nKeyError: 'ner_labels'\r\n``` \r\n\r\nThis is just one example of what goes wrong. I've started just saving the dataset as arrow at the end because it takes a long time to map/filter/shuffle and the caching isn't working (tracked it down to byte differences in the pickled functions). \r\n\r\n^^ Interestingly if I heed the warning from Tokenizers and set the environment variable, `TOKENIZERS_PARALLELISM=true` the map just hangs:\r\n\r\n```\r\n[I 200921 21:43:18 filelock:318] Lock 5694118768 released on /Users/timothy.laurent/.cache/huggingface/datasets/_Users_timothy.laurent_.cache_huggingface_datasets_prodigy_dataset_builder_prodigy-ds-5f34378723c4e83f_0.0.0_e67d9b43d5cd82c50b1eae8f2097daf95b601a04dc03ddd504f2b234a5fa247a.lock\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.34ba/s]\r\n#0: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#2: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#3: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#4: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#5: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#6: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#7: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#8: 0%| | 0/1 [00:00<?, ?ba/s]\r\n```",
"Thank you, I was able to reproduce :)\r\nI'm on it",
"#659 should fix the `KeyError` issue. It was due to the formatting not getting updated the right way",
"Also maybe @n1t0 knows why setting `TOKENIZERS_PARALLELISM=true` creates deadlock issues when calling `map` with multiprocessing ?",
"@lhoestq \r\n\r\nThanks for taking a look. I pulled the master but I still see the key error.\r\n\r\n```\r\nTo disable this warning, you can either:\r\n - Avoid using `tokenizers` before the fork if possible\r\n - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\n#0: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 21.56ba/s]\r\n#1: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 17.71ba/s]\r\n#2: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 20.45ba/s]\r\n#3: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 26.05ba/s]\r\n#4: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 26.83ba/s]\r\n#5: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 27.00ba/s]\r\n#6: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 27.40ba/s]\r\n#7: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 25.91ba/s]\r\n#8: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 22.46ba/s]\r\n#9: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 20.15ba/s]\r\n#10: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 26.81ba/s]\r\n#11: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 27.45ba/s]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 322/322 [00:00<00:00, 1462.85ex/s]\r\nTraceback (most recent call last): | 0/1 [00:00<?, ?ba/s]\r\n File \"text2struct/run_model.py\", line 372, in <module>\r\n main()\r\n File \"text2struct/run_model.py\", line 368, in main | 0/1 [00:00<?, ?ba/s]\r\n run_model(auto_envvar_prefix=\"GFB_CIES\") # pragma: no cover\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs) | 0/1 [00:00<?, ?ba/s]\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/core.py\", line 1236, in invoke\r\n return Command.invoke(self, ctx)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/decorators.py\", line 21, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"text2struct/run_model.py\", 
line 136, in run_model\r\n ctx.invoke(ctx.command.commands[config_dict[\"mode\"]])\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/click/decorators.py\", line 21, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"text2struct/run_model.py\", line 187, in train\r\n run_train_model(_parse_subcommand(ctx))\r\n File \"text2struct/run_model.py\", line 241, in run_train_model\r\n checkpoint_steps=config.train.checkpoint_steps,\r\n File \"/Users/timothy.laurent/src/inv-text2struct/text2struct/model/train.py\", line 153, in alternate_training\r\n max_len=config.model.dim.max_len,\r\n File \"/Users/timothy.laurent/src/inv-text2struct/text2struct/model/dataset.py\", line 466, in load_prodigy_tf_datasets\r\n folder, file_patterns, vocabulary, cache_dir=cache_dir, test_pct=test_pct\r\n File \"/Users/timothy.laurent/src/inv-text2struct/text2struct/model/dataset.py\", line 447, in load_prodigy_arrow_datasets\r\n fn_kwargs=dict(tokenizer=vocabulary.tokenizer, vocabulary=vocabulary),\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1224, in map\r\n update_data = does_function_return_dict(test_inputs, test_indices)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1195, in does_function_return_dict\r\n function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"/Users/timothy.laurent/src/inv-text2struct/text2struct/model/dataset.py\", line 348, in relation_ds_factory\r\n ner_labels = example[\"ner_labels\"]\r\nKeyError: 'ner_labels'\r\n\r\n```",
"The parallelism is automatically disabled on `tokenizers` when the process gets forked, while we already used the parallelism capabilities of a tokenizer. We have to do it in order to avoid having the process hang, because we cannot safely fork a multithreaded process (cf https://github.com/huggingface/tokenizers/issues/187).\r\nSo if possible, the tokenizers shouldn't be used before the fork, so that each process can then make use of the parallelism. Otherwise using `TOKENIZERS_PARALLELISM=false` is the way to go.",
"> Thanks for taking a look. I pulled the master but I still see the key error.\r\n\r\nI am no longer able to get the error since #659 was merged. Not sure why you still have it @timothyjlaurent \r\nMaybe it is a cache issue ? Could you try to use `load_from_cache_file=False` in your `.map()` calls ?",
"> The parallelism is automatically disabled on `tokenizers` when the process gets forked, while we already used the parallelism capabilities of a tokenizer. We have to do it in order to avoid having the process hang, because we cannot safely fork a multithreaded process (cf [huggingface/tokenizers#187](https://github.com/huggingface/tokenizers/issues/187)).\r\n> So if possible, the tokenizers shouldn't be used before the fork, so that each process can then make use of the parallelism. Otherwise using `TOKENIZERS_PARALLELISM=false` is the way to go.\r\n\r\nOk thanks :)\r\n\r\nIs there something we should do on the `datasets` side to avoid that that the program hangs ?\r\n\r\nAlso when doing `.map` with a tokenizer, the tokenizer is called once on the first examples of the dataset to check the function output before spawning the processes. Is that compatible with how tokenizers are supposed to be used with multiprocessing ?",
"#659 fixes the empty dict issue\r\n#688 fixes the hang issue",
"Hmmm I pulled the latest commit, `b93c5517f70a480533a44e0c42638392fd53d90`, and I'm still seeing both the hanging and the key error. ",
"Hi @timothyjlaurent \r\n\r\nThe hanging fix just got merged, that why you still had it.\r\n\r\nFor the key error it's possible that the code you ran reused cached datasets from where the KeyError bug was still there.\r\nCould you try to clear your cache or make sure that it doesn't reuse cached data with `.map(..., load_from_cache=False)` ?\r\nLet me know if it it helps",
"Hi @lhoestq , \r\n\r\nThanks for letting me know about the update.\r\n\r\nSo I don't think it's the caching - because hashing mechanism isn't stable for me -- but that's a different issue. In any case I `rm -rf ~/.cache/huggingface` to make a clean slate.\r\n\r\nI synced with master and I see the key error has gone away, I tried with and without the `TOKENIZERS_PARALLELISM` variable set and see the log line for setting the value false before the map.\r\n\r\nNow I'm seeing an issue with `.train_test_split()` on datasets that are the product of a multiprocess map.\r\n\r\nHere is the stack trace\r\n\r\n```\r\n File \"/Users/timothy.laurent/src/inv-text2struct/text2struct/model/dataset.py\", line 451, in load_prodigy_arrow_datasets\r\n ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/src/datasets/src/datasets/arrow_dataset.py\", line 168, in wrapper\r\n dataset.set_format(**new_format)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/src/datasets/src/datasets/fingerprint.py\", line 163, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/Users/timothy.laurent/.virtualenvs/inv-text2struct/src/datasets/src/datasets/arrow_dataset.py\", line 794, in set_format\r\n list(filter(lambda col: col not in self._data.column_names, columns)), self._data.column_names\r\nValueError: Columns ['train', 'test'] not in the dataset. Current columns in the dataset: ['_input_hash', '_task_hash', '_view_id', 'answer', 'encoding__ids', 'encoding__offsets', 'encoding__overflowing', 'encoding__tokens', 'encoding__words', 'ner_ids', 'ner_labels', 'relations', 'spans', 'text', 'tokens']\r\n```\r\n\r\n\r\n",
"Thanks for reporting.\r\nI'm going to fix that and add a test case so that it doesn't happen again :) \r\nI'll let you know when it's done\r\n\r\nIn the meantime if you could make a google colab that reproduces the issue it would be helpful ! @timothyjlaurent ",
"Sure thing, @lhoestq.\r\n\r\nhttps://colab.research.google.com/drive/1lg4fbyrUO6m8ssQ2dNdVFaUqMUfA2zZ3?usp=sharing",
"Thanks @timothyjlaurent ! I just merged a fix on master. I also checked your notebook and it looks like it's working now.\r\nI added some tests to make sure it works as expected now :)",
"Great, @lhoestq . I'm trying to verify in the colab:\r\nchanged\r\n```\r\n!pip install datasets\r\n```\r\nto \r\n\r\n```\r\n!pip install git+https://github.com/huggingface/datasets@master\r\n```\r\n\r\nBut I'm still seeing the error - I wonder why?",
"It works on my side @timothyjlaurent on google colab.\r\nDid you try to uninstall datasets first, before updating it to master's version ?",
"I didn't -- it was a new sessions --- buuut - look like it's working today -- woot! I'll close this issue. Thanks @lhoestq "
] |
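To summarize the workaround discussed in this thread in runnable form (a sketch, not the library-side fixes that were merged):
```python
import os

# Disable the Rust tokenizer's internal parallelism before the processes fork,
# otherwise the forked map workers can deadlock (see the discussion above).
os.environ["TOKENIZERS_PARALLELISM"] = "false"

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("glue", "cola", split="train")

ds = ds.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True),
    batched=True,
    num_proc=4,
    load_from_cache_file=False,  # avoid reusing results cached by an earlier buggy run
)
```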
https://api.github.com/repos/huggingface/datasets/issues/154
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/154/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/154/comments
|
https://api.github.com/repos/huggingface/datasets/issues/154/events
|
https://github.com/huggingface/datasets/pull/154
| 620,059,066 |
MDExOlB1bGxSZXF1ZXN0NDE5Mzc4Mzgw
| 154 |
add Ubuntu Dialogs Corpus datasets
|
[] |
closed
| false | null | 0 |
2020-05-18T09:34:48Z
|
2020-05-18T10:12:28Z
|
2020-05-18T10:12:27Z
| null |
This PR adds the Ubuntu Dialog Corpus datasets version 2.0.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/154/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/154/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/154.diff",
"html_url": "https://github.com/huggingface/datasets/pull/154",
"merged_at": "2020-05-18T10:12:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/154.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/154"
}
| true |
[] |
https://api.github.com/repos/huggingface/datasets/issues/4755
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4755/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4755/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4755/events
|
https://github.com/huggingface/datasets/issues/4755
| 1,319,687,044 |
I_kwDODunzps5OqNOE
| 4,755 |
Datasets.map causes incorrect overflow_to_sample_mapping when used with tokenizers and small batch size
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false | null | 2 |
2022-07-27T14:54:11Z
|
2022-07-27T17:57:28Z
| null | null |
## Describe the bug
When using a `tokenizer`, we can retrieve the field `overflow_to_sample_mapping`, since long samples will overflow into multiple token sequences.
However, when tokenizing is done via `Dataset.map`, with `n_proc > 1`, the `overflow_to_sample_mapping` field is wrong. This seems to be because each tokenizer only looks at its share of the samples, and maps to the index _within its share_, but then `Dataset.map` collates them together.
## Steps to reproduce the bug
1. Make a dataset of 3 strings.
2. Tokenize via Dataset.map with n_proc = 8
3. Inspect the `overflow_to_sample_mapping` field
## Expected results
`[0, 1, 2]`
## Actual results
`[0, 0, 0]`
Notes:
1. I have not yet extracted a minimal example, but the above works reliably
2. If the dataset is large, I've yet to determine whether this bug happens a. not at all, b. always, or c. only on the small leftover batch at the end.
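A possible workaround, shown as an untested sketch (it assumes `with_indices=True` yields absolute dataset indices, which is worth double-checking when `num_proc > 1`): remap the batch-local `overflow_to_sample_mapping` to global dataset indices yourself.
```python
import datasets
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("deepset/tinyroberta-squad2")
ds = datasets.Dataset.from_dict({
    "question": ["Can you tell me why?", "What time is it?"],
    "context": ["This is context zero", "Another paragraph goes here"],
})

def tokenize_with_global_mapping(batch, indices):
    enc = tokenizer(
        text=batch["question"],
        text_pair=batch["context"],
        truncation="only_second",
        return_overflowing_tokens=True,
    )
    # The tokenizer's mapping is relative to the current batch; remap it to
    # absolute dataset indices so small batches (or sharding) cannot skew it.
    enc["overflow_to_sample_mapping"] = [indices[i] for i in enc["overflow_to_sample_mapping"]]
    return enc

tokens = ds.map(
    tokenize_with_global_mapping,
    batched=True,
    batch_size=1,
    with_indices=True,
    remove_columns=ds.column_names,  # overflow can change the number of rows
)
```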
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4755/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4755/timeline
| null | null | null | null | false |
[
"I've built a minimal example that shows this bug without `n_proc`. It seems like it's a problem any way of using **tokenizers, `overflow_to_sample_mapping`, and Dataset.map, with a small batch size**:\r\n\r\n```\r\nimport datasets\r\nimport transformers\r\npretrained = 'deepset/tinyroberta-squad2'\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(pretrained)\r\n\r\nquestions = ['Can you tell me why?', 'What time is it?']\r\ncontexts = ['This is context zero', 'Another paragraph goes here'] \r\n\r\ndef tok(questions, contexts):\r\n return tokenizer(text=questions,\r\n text_pair=contexts,\r\n truncation='only_second',\r\n return_overflowing_tokens=True,\r\n )\r\nprint(tok(questions, contexts)['overflow_to_sample_mapping'])\r\nassert tok(questions, contexts)['overflow_to_sample_mapping'] == [0, 1] # PASSES\r\n\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\nds = datasets.Dataset.from_dict({'question': questions, 'context': contexts})\r\ntokens = ds.map(tok2, batched=True, batch_size=1)\r\nprint(tokens['overflow_to_sample_mapping'])\r\nassert tokens['overflow_to_sample_mapping'] == [0, 1] # FAILS produces [0,0]\r\n```\r\n\r\nNote that even if the batch size would be larger, there will be instances where we will not have a lot of data, and end up using small batches. This can occur e.g. if `n_proc` causes batches to be underfill. I imagine it can also occur in other ways, e.g. the final leftover batch at the end.",
"A larger batch size does _not_ have this behavior:\r\n\r\n```\r\ndef tok2(d):\r\n return tok(d['question'], d['context'])\r\n\r\nds = datasets.Dataset.from_dict({'question': questions, 'context': contexts})\r\ntokens = ds.map(tok2, batched=True, batch_size=2)\r\nprint(tokens['overflow_to_sample_mapping'])\r\nassert tokens['overflow_to_sample_mapping'] == [0, 1] # PASSES\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/504
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/504/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/504/comments
|
https://api.github.com/repos/huggingface/datasets/issues/504/events
|
https://github.com/huggingface/datasets/pull/504
| 678,756,211 |
MDExOlB1bGxSZXF1ZXN0NDY3NjUxOTA5
| 504 |
Added downloading to Hyperpartisan news detection
|
[] |
closed
| false | null | 2 |
2020-08-13T21:53:46Z
|
2020-08-27T08:18:41Z
|
2020-08-27T08:18:41Z
| null |
Following the discussion on Slack and #349, I've updated the hyperpartisan dataset to pull directly from Zenodo rather than requiring a manual install, which should make this dataset much more accessible. Many thanks to @johanneskiesel !
Currently doesn't pass `test_load_real_dataset` - I'm using `self.config.name` which is `default` in this test. Might be related to #474
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/504/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/504/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/504.diff",
"html_url": "https://github.com/huggingface/datasets/pull/504",
"merged_at": "2020-08-27T08:18:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/504.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/504"
}
| true |
[
"Thank you @ghomasHudson for making our dataset available! This is great!",
"The test passes since #527 :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4213
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4213/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4213/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4213/events
|
https://github.com/huggingface/datasets/pull/4213
| 1,214,510,010 |
PR_kwDODunzps42uft_
| 4,213 |
ETT time series dataset
|
[] |
closed
| false | null | 2 |
2022-04-25T13:26:18Z
|
2022-05-05T12:19:21Z
|
2022-05-05T12:10:35Z
| null |
Ready for review.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4213/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4213/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/4213.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4213",
"merged_at": "2022-05-05T12:10:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4213.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4213"
}
| true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"thank you!\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3255
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3255/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3255/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3255/events
|
https://github.com/huggingface/datasets/issues/3255
| 1,051,783,129 |
I_kwDODunzps4-sO_Z
| 3,255 |
SciELO dataset ConnectionError
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false | null | 0 |
2021-11-12T09:57:14Z
|
2021-11-16T17:55:22Z
|
2021-11-16T17:55:22Z
| null |
## Describe the bug
I get `ConnectionError` when I am trying to load the SciELO dataset.
When I try the URL with `requests` I get:
```
>>> requests.head("https://ndownloader.figstatic.com/files/14019287")
<Response [302]>
```
And, as far as I understand, redirections are not supported for downloads in `datasets`.
https://github.com/huggingface/datasets/blob/807341d0db0728073ab605c812c67f927d148f38/datasets/scielo/scielo.py#L45
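For reference, one can check where the 302 redirect points by letting `requests` follow it:
```python
import requests

response = requests.head(
    "https://ndownloader.figstatic.com/files/14019287", allow_redirects=True
)
print(response.status_code, response.url)  # shows the final, redirected location
```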
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("scielo", "en-es")
```
## Expected results
Download SciELO dataset and load Dataset object
## Actual results
```
Downloading and preparing dataset scielo/en-es (download: 21.90 MiB, generated: 68.45 MiB, post-processed: Unknown size, total: 90.35 MiB) to /Users/test/.cache/huggingface/datasets/scielo/en-es/1.0.0/7e05d55a20257efeb9925ff5de65bd4884fc6ddb6d765f1ea3e8860449d90e0e...
Traceback (most recent call last):
File "scielo.py", line 3, in <module>
dataset = load_dataset("scielo", "en-es")
File "../lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "../lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "../lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/test/.cache/huggingface/modules/datasets_modules/datasets/scielo/7e05d55a20257efeb9925ff5de65bd4884fc6ddb6d765f1ea3e8860449d90e0e/scielo.py", line 77, in _split_generators
data_dir = dl_manager.download_and_extract(_URLS[self.config.name])
File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "../lib/python3.8/site-packages/datasets/utils/py_utils.py", line 206, in map_nested
return function(data_struct)
File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "../lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path
output_path = get_from_cache(
File "../lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://ndownloader.figstatic.com/files/14019287
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.12
- PyArrow version: 6.0.0
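For what it's worth, the redirect itself is easy to follow outside of `datasets`; a small manual-download sketch (untested, and the output file name is just an example):
```python
import requests

# requests follows the 302 redirect automatically for GET requests
url = "https://ndownloader.figstatic.com/files/14019287"
with requests.get(url, stream=True) as resp:
    resp.raise_for_status()
    print(resp.status_code, resp.url)  # 200 and the final URL after the redirect
    with open("scielo_en_es_archive.bin", "wb") as f:  # file name is illustrative
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)
```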
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3255/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3255/timeline
| null |
completed
| null | null | false |
[] |
https://api.github.com/repos/huggingface/datasets/issues/4650
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4650/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4650/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4650/events
|
https://github.com/huggingface/datasets/issues/4650
| 1,296,680,037 |
I_kwDODunzps5NScRl
| 4,650 |
Add SPECTER dataset
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
open
| false | null | 1 |
2022-07-07T01:41:32Z
|
2022-07-14T02:07:49Z
| null | null |
## Adding a Dataset
- **Name:** *SPECTER*
- **Description:** *SPECTER: Document-level Representation Learning using Citation-informed Transformers*
- **Paper:** *https://doi.org/10.18653/v1/2020.acl-main.207*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/specter_train_triples.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4650/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4650/timeline
| null | null | null | null | false |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/SPECTER)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3206
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3206/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3206/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3206/events
|
https://github.com/huggingface/datasets/pull/3206
| 1,044,216,270 |
PR_kwDODunzps4uEZJe
| 3,206 |
[WIP] Allow user-defined hash functions via a registry
|
[] |
closed
| false | null | 13 |
2021-11-03T23:25:42Z
|
2021-11-05T12:38:11Z
|
2021-11-05T12:38:04Z
| null |
Inspired by the discussion on hashing in https://github.com/huggingface/datasets/issues/3178#issuecomment-959016329, @lhoestq suggested that it would be neat to allow users more control over the hashing process. Specifically, it would be great if users can specify specific hashing functions depending on the **class** of the object.
As an example, we found in the linked topic that loaded spaCy models (`Language` objects) have different hashes when `dump`'d, but their byte representation with `Language.to_bytes()` _is_ deterministic. It would therefore be great if we could specify that for `Language` objects, the hasher should hash the object's `to_bytes()` return value instead of the object itself.
This PR adds a new, but tiny, dependency to manage the registry, namely [`catalogue`](https://github.com/explosion/catalogue).
Two files have been changed (apart from the added dependency in `setup.py`) and one file has been added.
**utils.registry** (added)
This file defines our custom Registry and builds a registry called "hashers". A Registry is basically a dictionary from names (str) to functions. A function can be added to the registry by a decorator, e.g.
```python
@hashers.register(spacy.Language)
def hash_spacy_language(nlp):
return Hasher.hash(nlp.to_bytes())
```
You'll notice that `spacy.Language` is not a string, even though the registry holds a str->func mapping. To accomplish this with classes in a dynamic way, catalogue.Registry needed to be subclassed and modified as `DatasetsRegistry`. All methods that use a name as an input are now modified so that classes are deterministically converted into strings in such a way that we can later retrieve the actual class from the string (below).
**utils.py_utils** (modified)
Added two functions to deal with classes and their qualified names, that is, their full descriptive name including the module. On the one hand it allows us to retrieve a string from a given class, e.g. given the `Module` class, return the `torch.nn.Module` string. Conversely, a function is added to convert such a fully qualified name into a class. For instance, given the string `torch.nn.Module`, return the `Module` class. These straightforward methods allow us to interchangeably use classes and strings without any needed user interaction - they can just register a class, and behind the scenes `DatasetsRegistry` converts these to deterministic strings.
**fingerprint** (modified)
Updated Hasher.hash so that if the object to hash is an instance of a class in the registry, the registered function is used to hash the object instead of the default behavior. To do so we iterate over the registry `hashers` and convert its keys (strings) into classes, and then we can use `isinstance`.
```python
# Check if the current object is an instance that is
# applicable to the user-defined hashers. If so, hash
# with the user-defined function
for full_module_name, func in hashers.get_all().items():
registered_cls = get_cls_from_qualname(full_module_name)
if isinstance(value, registered_cls):
return func(value)
```
**Putting it all together**
To test this, you can try the following example with spaCy. First install spaCy from source and checkout a specific commit.
```shell
git clone https://github.com/explosion/spaCy.git
cd spaCy/
git checkout cab9209c3dfcd1b75dfe5657f10e52c4d847a3cf
cd ..
git clone https://github.com/BramVanroy/datasets.git
cd datasets
git checkout registry
pip install -e .
pip install ../spaCy
spacy download en_core_web_sm
```
Now you can run the following script. By default it will use the custom hasher function for the Language object. You can enable the default behavior by commenting out `@hashers.register...`.
```python
import spacy
from datasets.fingerprint import Hasher
from datasets.utils.registry import hashers
# Register a function so that when the Hasher encounters a spacy.Language object
# it uses this custom function to hash instead of the default
@hashers.register(spacy.Language)
def hash_spacy_language(nlp):
return Hasher.hash(nlp.to_bytes())
def main():
print(hashers.get_all())
nlp = spacy.load("en_core_web_sm")
dump1 = Hasher.hash(nlp)
nlp = spacy.load("en_core_web_sm")
dump2 = Hasher.hash(nlp)
print(dump1)
# succeeds when using the registered custom function
# fails if using the default
assert dump1 == dump2
if __name__ == '__main__':
main()
```
To do
====
- The above is just a proof-of-concept. I am open to changes/suggestions
- Tests still need to be written
- We should consider whether we can make `DatasetsRegistry` very restrictive and ONLY allowing classes. That would make testing easier - otherwise we also need to test for other sorts of objects.
- Maybe the `hashers` definition is better suited in `fingerprint`?
- Documentation/examples need to be updated
- Not sure why the logger is not working in `hash()`
- `get_cls_from_qualname` might need a fail-safe: is it possible for a full_qualname to not have a module, and if so how do we deal with that?
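For anyone unfamiliar with `catalogue`, a tiny standalone sketch of the plain registry behaviour it provides (namespace and names are illustrative; `DatasetsRegistry` adds the class<->string conversion on top of this):
```python
import catalogue

# A registry is created per namespace and maps string names to functions
hashers = catalogue.create("datasets", "hashers")

@hashers.register("spacy.Language")
def hash_spacy_language(nlp):
    return nlp.to_bytes()

# Registered functions can be looked up by name, or listed all at once
func = hashers.get("spacy.Language")
print(hashers.get_all())  # {'spacy.Language': <function hash_spacy_language ...>}
```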
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3206/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3206/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/3206.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3206",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3206.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3206"
}
| true |
[
"Hi @BramVanroy, thanks for your PR.\r\n\r\nThere was a bug in TensorFlow/Keras. We have made a temporary fix in master branch. Please, merge master into your PR branch, so that the CI tests pass.\r\n\r\n```\r\ngit checkout registry\r\ngit fetch upstream master\r\ngit merge upstream/master\r\n```",
"@albertvillanova Done. Although new tests will need to be added. I am looking for some feedback on my initial proposal in this PR. Reviews and ideas welcome!",
"Hi ! Thanks for diving into this :)\r\n\r\nWith this approach you get the right hash when doing `Hasher.hash(nlp)` but if you try to hash an object that has `nlp` as one of its attributes for example you will get different hashes every time.\r\n\r\nThis is because `Hasher.hash` is not recursive itself. Indeed what happens when you try to hash an object is that:\r\n1. it is dumped with our custom `dill` pickler (which is recursive)\r\n2. the bytes of the dump are hashed\r\n\r\nTo fix this we must integrate the custom hashing as a custom pickler dumping instead.\r\n\r\nNote that we're only using the `pickler.dumps` method and not `pickler.loads` since we only use it to get hashes, so it doesn't matter if `loads` doesn't reconstruct the object exactly. What's important it only to capture all the necessary information that defines how the object transforms the data (here `nlp.to_bytes()` determines how the spacy pipeline transforms the text).\r\n\r\nOur pickler already has a registry and you can register new dump functions with:\r\n```python\r\nimport dill\r\nimport spacy\r\nfrom datasets.utils.py_utils import pklregister\r\n\r\n@pklregister(spacy.Language)\r\ndef _save_spacy_language(pickler, nlp):\r\n pickler.save_reduce(...) # I think we can use nlp.to_bytes() here\r\n dill._dill.log.info(...)\r\n```\r\n\r\nYou can find some examples of custom dump functions in `py_utils.py`",
"Ah, darn it. Completely missed that register. Time wasted, unfortunately. \r\n\r\nTo better understand what you mean, I figured I'd try the basis of your snippet and I've noticed quite an annoying side-effect of how the pickle dispatch table seems to work. It explicitly uses an object's [`type()`](https://github.com/python/cpython/blob/87032cfa3dc975d7442fd57dea2c6a56d31c911a/Lib/pickle.py#L557-L558), which makes sense for pickling some (primitive) types it is not ideal for more complex ones, I think. `Hasher.hash` has the same issue as far as I can tell.\r\n\r\nhttps://github.com/huggingface/datasets/blob/d21ce54f2c2782f854f975eb1dc2be6f923b4314/src/datasets/fingerprint.py#L187-L191\r\n\r\nThis is very restrictive, and won't work for subclasses. In the case of spaCy, for instance, we register `Language`, but `nlp` is an instance of `English`, which is a _subclass_ of `Language`. These are different types, and so they will not match in the dispatch table. Maybe this is more general approach to cover such cases? Something like this is a start but too broad, but ideally a hierarchy is constructed and traversed of all classes in the table and the lowest class is selected to ensure that the most specific class function is dispatched.\r\n\r\n```python\r\n def hash(cls, value: Any) -> str:\r\n # Try to match the exact type\r\n if type(value) in cls.dispatch:\r\n return cls.dispatch[type(value)](cls, value)\r\n\r\n # Try to match instance (superclass)\r\n for type_cls, func in cls.dispatch.items():\r\n if isinstance(value, type_cls):\r\n return cls.dispatch[type_cls](cls, value)\r\n\r\n return cls.hash_default(value)\r\n```\r\n\r\nThis does not solve the problem for pickling, though. That is quite unfortunate IMO because that implies that users always have to specify the most specific class, which is not always obvious. (For instance, `spacy.load`'s signature returns `Language`, but as said before a subclass might be returned.)\r\n\r\nSecond, I am trying to understand `save_reduce` but I can find very little documentation about it, only the source code which is quite cryptic. Can you explain it a bit? The required arguments are not very clear to me and there is no docstring.\r\n\r\n```python\r\n def save_reduce(self, func, args, state=None, listitems=None, dictitems=None, obj=None):\r\n```",
"Here is an example illustrating the problem with sub-classes.\r\n\r\n```python\r\nimport spacy\r\n\r\nfrom spacy import Language\r\nfrom spacy.lang.en import English\r\n\r\nfrom datasets.utils.py_utils import Pickler, pklregister\r\n\r\n# Only useful in the registry (matching with `nlp`)\r\n# if you swap it out for very specific `English`\r\n@pklregister(English)\r\ndef hash_spacy_language(pickler, nlp):\r\n pass\r\n\r\n\r\ndef main():\r\n print(Pickler.dispatch)\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n print(f\"NLP type {type(nlp)} in dispatch table? \", type(nlp) in Pickler.dispatch)\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```",
"Indeed that's not ideal.\r\nMaybe we could integrate all the subclasses directly in `datasets`. That's simple to do but the catch is that if users have new subclasses of `Language` it won't work.\r\n\r\nOtherwise we can see how to make the API simpler for users by allowing subclasses\r\n```python\r\n# if you swap it out for very specific `English`\r\n@pklregister(Language, allow_subclasses=True)\r\ndef hash_spacy_language(pickler, nlp):\r\n pass\r\n```\r\n\r\nHere is an idea how to make this work, let me know what you think:\r\n\r\nWhen `Pickler.dumps` is called, it uses `Pickler.save_global` which is a method that is going to be called recursively on all the objects. We can customize this part, and make it work as we want when it encounters a subclass of `Language`.\r\n\r\nFor example when it encounters a subclass of `Language`, we can dynamically register the hashing function for the subclass (`English` for example) in `Pickler.save_global`, right before calling the actual `dill.Pickler.save_global(self, obj, name=name)`:\r\n```python\r\npklregister(type(obj))(hash_function_registered_for_parent_class)\r\ndill.Pickler.save_global(self, obj, name=name)\r\n```\r\n\r\nIn practice that means we can have an additional dispatch dictionary (similar to `Pickler.dispatch`) to store the hashing functions when `allow_subclasses=True`, and use this dictionary in `Pickler.save_global` to check if we need to use a hashing function registered with `allow_subclasses=True` and get `hash_function_registered_for_parent_class`.",
"If I understood you correctly, I do not think that that is enough because you are only doing this for a type and its direct parent class. You could do this for all superclasses (so traverse all ancestors and find the registered function for the first that is encountered). I can work on that, if you agree. The one thing that I am not sure about is how you want to create the secondary dispatch table. An empty dict as class variable in Pickler? (It doesn't have to be a true dispatcher, I think.)\r\n\r\nI do not think that dynamic registration is the ideal situation (it feels a bit hacky). An alternative would be to subclass Pickle and Dill to make sure that instead of just type() checking in the dispatch table also superclasses are considered. But that is probably overkill.",
"> You could do this for all superclasses (so traverse all ancestors and find the registered function for the first that is encountered)\r\n\r\nThat makes sense indeed !\r\n\r\n> The one thing that I am not sure about is how you want to create the secondary dispatch table. An empty dict as class variable in Pickler? (It doesn't have to be a true dispatcher, I think.)\r\n\r\nSure, let's try to not use too complicated stuff\r\n\r\n> I do not think that dynamic registration is the ideal situation (it feels a bit hacky). An alternative would be to subclass Pickle and Dill to make sure that instead of just type() checking in the dispatch table also superclasses are considered. But that is probably overkill.\r\n\r\nIndeed that would feel less hacky, but maybe it's too complex just for this. I feel like this part of the library is already hard to understand when you're not familiar with pickle. IMO having only a few changes that are simpler to understand is better than having a rewrite of `dill`'s core code.\r\n\r\nThanks a lot for your insights, it looks like we're going to have something that works well and that unlocks some nice flexibility for users :) Feel free to ping me anytime if I can help on this",
"Sure, thanks for brainstorming! I'll try to work on it this weekend. Will also revert the current changes in this PR and rename it. ",
"It seems like this is going in the right direction :). \r\n\r\n@BramVanroy Just one small suggestion for future contributions: instead of using `WIP` in the PR title, you can create a draft PR if you're still working on it.",
"Maybe I should just create a new (draft) PR then, seeing that I'll have to rename and revert the changes anyway? I'll link to this PR so that the discussion is at least referenced.",
"I can convert this PR to a draft PR. Let me know what would you prefer.",
"I think reverting my previous commits would make for a dirty (or confusing) commit history, so I'll just create a new one. Thanks."
] |
https://api.github.com/repos/huggingface/datasets/issues/5
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5/events
|
https://github.com/huggingface/datasets/issues/5
| 600,295,889 |
MDU6SXNzdWU2MDAyOTU4ODk=
| 5 |
ValueError when a split is empty
|
[] |
closed
| false | null | 3 |
2020-04-15T13:25:13Z
|
2020-04-29T09:23:05Z
|
2020-04-29T09:23:05Z
| null |
When a split (either TEST, VALIDATION or TRAIN) is empty, I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/load.py", line 295, in load
ds = dbuilder.as_dataset(**as_dataset_kwargs)
File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 587, in as_dataset
datasets = utils.map_nested(build_single_dataset, split, map_tuple=True)
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 158, in map_nested
for k, v in data_struct.items()
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 158, in <dictcomp>
for k, v in data_struct.items()
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 172, in map_nested
return function(data_struct)
File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 601, in _build_single_dataset
split=split,
File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 625, in _as_dataset
split_infos=self.info.splits.values(),
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 200, in read
return py_utils.map_nested(_read_instruction_to_ds, instructions)
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 172, in map_nested
return function(data_struct)
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 191, in _read_instruction_to_ds
file_instructions = make_file_instructions(name, split_infos, instruction)
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 104, in make_file_instructions
absolute_instructions=absolute_instructions,
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 122, in _make_file_instructions_from_absolutes
'Split empty. This might means that dataset hasn\'t been generated '
ValueError: Split empty. This might means that dataset hasn't been generated yet and info not restored from GCS, or that legacy dataset is used.
```
How to reproduce:
```python
import csv
import nlp
class Bbc(nlp.GeneratorBasedBuilder):
VERSION = nlp.Version("1.0.0")
def __init__(self, **config):
self.train = config.pop("train", None)
self.validation = config.pop("validation", None)
super(Bbc, self).__init__(**config)
def _info(self):
return nlp.DatasetInfo(builder=self, description="bla", features=nlp.features.FeaturesDict({"id": nlp.int32, "text": nlp.string, "label": nlp.string}))
def _split_generators(self, dl_manager):
return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": self.train}),
nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={"filepath": self.validation}),
nlp.SplitGenerator(name=nlp.Split.TEST, gen_kwargs={"filepath": None})]
def _generate_examples(self, filepath):
if not filepath:
return None, {}
with open(filepath) as f:
reader = csv.reader(f, delimiter=',', quotechar="\"")
lines = list(reader)[1:]
for idx, line in enumerate(lines):
yield idx, {"id": idx, "text": line[1], "label": line[0]}
```
```python
import nlp
dataset = nlp.load("bbc", builder_kwargs={"train": "bbc/data/train.csv", "validation": "bbc/data/test.csv"})
```
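A possible user-side workaround (just a sketch, reusing the names from the snippet above) is to only declare split generators for the files that were actually provided:
```python
def _split_generators(self, dl_manager):
    # Only declare generators for the splits that actually have a file,
    # so the reader never sees an empty split
    candidates = {
        nlp.Split.TRAIN: self.train,
        nlp.Split.VALIDATION: self.validation,
    }
    return [
        nlp.SplitGenerator(name=split, gen_kwargs={"filepath": path})
        for split, path in candidates.items()
        if path
    ]
```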
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5/timeline
| null |
completed
| null | null | false |
[
"To fix this I propose to modify only the file `arrow_reader.py` with few updates. First update, the following method:\r\n```python\r\ndef _make_file_instructions_from_absolutes(\r\n name,\r\n name2len,\r\n absolute_instructions,\r\n):\r\n \"\"\"Returns the files instructions from the absolute instructions list.\"\"\"\r\n # For each split, return the files instruction (skip/take)\r\n file_instructions = []\r\n num_examples = 0\r\n for abs_instr in absolute_instructions:\r\n length = name2len[abs_instr.splitname]\r\n if not length:\r\n raise ValueError(\r\n 'Split empty. This might means that dataset hasn\\'t been generated '\r\n 'yet and info not restored from GCS, or that legacy dataset is used.')\r\n filename = filename_for_dataset_split(\r\n dataset_name=name,\r\n split=abs_instr.splitname,\r\n filetype_suffix='arrow')\r\n from_ = 0 if abs_instr.from_ is None else abs_instr.from_\r\n to = length if abs_instr.to is None else abs_instr.to\r\n num_examples += to - from_\r\n single_file_instructions = [{\"filename\": filename, \"skip\": from_, \"take\": to - from_}]\r\n file_instructions.extend(single_file_instructions)\r\n return FileInstructions(\r\n num_examples=num_examples,\r\n file_instructions=file_instructions,\r\n )\r\n```\r\nBecomes:\r\n```python\r\ndef _make_file_instructions_from_absolutes(\r\n name,\r\n name2len,\r\n absolute_instructions,\r\n):\r\n \"\"\"Returns the files instructions from the absolute instructions list.\"\"\"\r\n # For each split, return the files instruction (skip/take)\r\n file_instructions = []\r\n num_examples = 0\r\n for abs_instr in absolute_instructions:\r\n length = name2len[abs_instr.splitname]\r\n ## Delete the if not length and the raise\r\n filename = filename_for_dataset_split(\r\n dataset_name=name,\r\n split=abs_instr.splitname,\r\n filetype_suffix='arrow')\r\n from_ = 0 if abs_instr.from_ is None else abs_instr.from_\r\n to = length if abs_instr.to is None else abs_instr.to\r\n num_examples += to - from_\r\n single_file_instructions = [{\"filename\": filename, \"skip\": from_, \"take\": to - from_}]\r\n file_instructions.extend(single_file_instructions)\r\n return FileInstructions(\r\n num_examples=num_examples,\r\n file_instructions=file_instructions,\r\n )\r\n```\r\n\r\nSecond update the following method:\r\n```python\r\ndef _read_files(files, info):\r\n \"\"\"Returns Dataset for given file instructions.\r\n\r\n Args:\r\n files: List[dict(filename, skip, take)], the files information.\r\n The filenames contain the absolute path, not relative.\r\n skip/take indicates which example read in the file: `ds.slice(skip, take)`\r\n \"\"\"\r\n pa_batches = []\r\n for f_dict in files:\r\n pa_table: pa.Table = _get_dataset_from_filename(f_dict)\r\n pa_batches.extend(pa_table.to_batches())\r\n pa_table = pa.Table.from_batches(pa_batches)\r\n ds = Dataset(arrow_table=pa_table, data_files=files, info=info)\r\n return ds\r\n```\r\nBecomes:\r\n```python\r\ndef _read_files(files, info):\r\n \"\"\"Returns Dataset for given file instructions.\r\n\r\n Args:\r\n files: List[dict(filename, skip, take)], the files information.\r\n The filenames contain the absolute path, not relative.\r\n skip/take indicates which example read in the file: `ds.slice(skip, take)`\r\n \"\"\"\r\n pa_batches = []\r\n for f_dict in files:\r\n pa_table: pa.Table = _get_dataset_from_filename(f_dict)\r\n pa_batches.extend(pa_table.to_batches())\r\n ## we modify the table only if there are some batches\r\n if pa_batches:\r\n pa_table = pa.Table.from_batches(pa_batches)\r\n ds = 
Dataset(arrow_table=pa_table, data_files=files, info=info)\r\n return ds\r\n```\r\n\r\nWith these two updates it works now. @thomwolf are you ok with this changes?",
"Yes sounds good to me!\r\nDo you want to make a PR? or I can do it as well",
"Fixed."
] |
https://api.github.com/repos/huggingface/datasets/issues/1723
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1723/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1723/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1723/events
|
https://github.com/huggingface/datasets/pull/1723
| 783,982,100 |
MDExOlB1bGxSZXF1ZXN0NTUzMjQ4MzU1
| 1,723 |
ADD S3 support for downloading and uploading processed datasets
|
[] |
closed
| false | null | 1 |
2021-01-12T07:17:34Z
|
2021-01-26T17:02:08Z
|
2021-01-26T17:02:08Z
| null |
# What does this PR do?
This PR adds the functionality to load and save `datasets` from and to s3.
You can save `datasets` with either `Dataset.save_to_disk()` or `DatasetDict.save_to_disk`.
You can load `datasets` with `load_from_disk`, `Dataset.load_from_disk()`, or `DatasetDict.load_from_disk()`.
Loading `csv` or `json` datasets from s3 is not implemented.
To save/load datasets to s3 you either need to provide an `aws_profile` that is set up on your machine (per default it uses the `default` profile), or you have to pass an `aws_access_key_id` and `aws_secret_access_key`.
The implementation was done with `fsspec` and `boto3`.
### Example `aws_profile` :
<details>
```python
dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
load_from_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
```
</details>
### Example `aws_access_key_id` and `aws_secret_access_key` :
<details>
```python
dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk",
aws_access_key_id="fake_access_key",
aws_secret_access_key="fake_secret_key"
)
load_from_disk("s3://moto-mock-s3-bucket/datasets/sdk",
aws_access_key_id="fake_access_key",
aws_secret_access_key="fake_secret_key"
)
```
</details>
If you want to load a dataset from a public s3 bucket you can pass `anon=True`
### Example `anon=True` :
<details>
```python
dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
load_from_disk("s3://moto-mock-s3-bucketdatasets/sdk",anon=True)
```
</details>
### Full Example
```python
import datasets
dataset = datasets.load_dataset("imdb")
print(f"DatasetDict contains {len(dataset)} datasets")
print(f"train Dataset has the size of: {len(dataset['train'])}")
dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
remote_dataset = datasets.load_from_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
print(f"DatasetDict contains {len(remote_dataset)} datasets")
print(f"train Dataset has the size of: {len(remote_dataset['train'])}")
```
Related to #878
I would also adjust the documentation after the code has been reviewed; until then I'll leave the PR in "draft" status. Something that we can consider is renaming the functions and maybe changing the `_disk` suffix to `_filesystem`.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1723/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1723/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/1723.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1723",
"merged_at": "2021-01-26T17:02:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1723.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1723"
}
| true |
[
"I created the documentation for `FileSystem Integration for cloud storage` with loading and saving datasets to/from a filesystem with an example of using `datasets.filesystem.S3Filesystem`. I added a note on the `Saving a processed dataset on disk and reload` saying that it is also possible to use other filesystems and cloud storages such as S3 with a link to the newly created documentation page from me. \r\nI Attach a screenshot of it here. \r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4275
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4275/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4275/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4275/events
|
https://github.com/huggingface/datasets/issues/4275
| 1,224,943,414 |
I_kwDODunzps5JAyc2
| 4,275 |
CommonSenseQA has missing and inconsistent field names
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
open
| false | null | 1 |
2022-05-04T05:38:59Z
|
2022-05-04T11:41:18Z
| null | null |
## Describe the bug
In short, the CommonSenseQA implementation is inconsistent with the original dataset.
More precisely, we need to:
1. Add the dataset's original "id" field. The current implementation instead regenerates a monotonically increasing id.
2. The [“question”][“stem”] field is flattened into "question". We should match the original dataset and unflatten it.
3. Add the missing "question_concept" field to the question tree node.
4. Anything else? Go over the data structure of the newly repaired CommonSenseQA and make sure it matches the original.
## Expected results
Every data item of the CommonSenseQA should structurally and data-wise match the original CommonSenseQA dataset.
## Actual results
TBD
## Environment info
- `datasets` version: 2.1.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
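For reference, a rough sketch of the unflattened structure described in points 1–3 (feature names follow the original CommonsenseQA JSON files; treat this as a starting point rather than a final schema):
```python
import datasets

features = datasets.Features(
    {
        "id": datasets.Value("string"),
        "question": {
            "question_concept": datasets.Value("string"),
            "stem": datasets.Value("string"),
            "choices": datasets.Sequence(
                {
                    "label": datasets.Value("string"),
                    "text": datasets.Value("string"),
                }
            ),
        },
        "answerKey": datasets.Value("string"),
    }
)
```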
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4275/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4275/timeline
| null | null | null | null | false |
[
"Thanks for reporting, @vblagoje.\r\n\r\nI'm opening a PR to address this. "
] |
https://api.github.com/repos/huggingface/datasets/issues/3747
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3747/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3747/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3747/events
|
https://github.com/huggingface/datasets/issues/3747
| 1,141,688,854 |
I_kwDODunzps5EDMoW
| 3,747 |
Passing invalid subset should throw an error
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false | null | 0 |
2022-02-17T18:16:11Z
|
2022-02-17T18:16:11Z
| null | null |
## Describe the bug
Only some datasets have a subset (as in `load_dataset(name, subset)`). If you pass an invalid subset, an error should be thrown.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('rotten_tomatoes', 'asdfasdfa')
```
## Expected results
This should break, since `'asdfasdfa'` isn't a subset of the `rotten_tomatoes` dataset.
## Actual results
This API call silently succeeds.
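Until this is fixed in the library, a possible user-side guard (a sketch, assuming a `datasets` version that exports `get_dataset_config_names`):
```python
from datasets import get_dataset_config_names, load_dataset

name, subset = "rotten_tomatoes", "asdfasdfa"
available = get_dataset_config_names(name)
if subset not in available:
    raise ValueError(f"Unknown subset {subset!r} for {name!r}; available subsets: {available}")
dataset = load_dataset(name, subset)
```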
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3747/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3747/timeline
| null | null | null | null | false |
[] |
https://api.github.com/repos/huggingface/datasets/issues/781
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/781/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/781/comments
|
https://api.github.com/repos/huggingface/datasets/issues/781/events
|
https://github.com/huggingface/datasets/pull/781
| 733,168,609 |
MDExOlB1bGxSZXF1ZXN0NTEyOTkyMzQw
| 781 |
Add XNLI train set
|
[] |
closed
| false | null | 5 |
2020-10-30T13:21:53Z
|
2022-06-09T23:26:46Z
|
2020-11-09T18:22:49Z
| null |
I added the train set that was built using the translated MNLI.
Now you can load the dataset specifying one language:
```python
from datasets import load_dataset
xnli_en = load_dataset("xnli", "en")
print(xnli_en["train"][0])
# {'hypothesis': 'Product and geography are what make cream skimming work .', 'label': 1, 'premise': 'Conceptually cream skimming has two basic dimensions - product and geography .'}
print(xnli_en["test"][0])
# {'hypothesis': 'I havent spoken to him again.', 'label': 2, 'premise': "Well, I wasn't even thinking about that, but I was so frustrated, and, I ended up talking to him again."}
```
Cc @sgugger
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/781/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/781/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/781.diff",
"html_url": "https://github.com/huggingface/datasets/pull/781",
"merged_at": "2020-11-09T18:22:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/781.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/781"
}
| true |
[
"Hi! Thanks for adding the translated MNLI! Do you know what translations system / model you used when you created the datasets in the other languages?",
"According to the [paper](https://arxiv.org/pdf/1809.05053.pdf) it's the result of the work of professional translators ;)",
"Thanks for getting back to me.\n\nThe training data is not from translators. And it appears to be machine\ntranslation for all languages. If we can know what system was used to\ncreate the training data that would be great!\n\nYifan.\n\n\nOn Thu, Jun 9, 2022, 05:51 Quentin Lhoest ***@***.***> wrote:\n\n> According to the paper <https://arxiv.org/pdf/1809.05053.pdf> it's the\n> result of the work of professional translators ;)\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/781#issuecomment-1150914429>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAKLKWDAPTMGB6BE5GJ4GULVOG5BLANCNFSM4TE67NMQ>\n> .\n> You are receiving this because you commented.Message ID:\n> ***@***.***>\n>\n",
"> The training data is not from translators.\r\n\r\nWhat makes you think that ? The paper litteraly says\r\n\r\n> we hire translators to translate the resulting sentences into 15 languages using the One Hour Translation platform.",
"However the annotators only did test and validation sets, as this was what\nin the paper: “we construct an evaluation set for XLU by extending the\ndevelopment and test sets of the Multi-Genre Natural Language Inference\nCorpus (MultiNLI) to 15 languages\".\n\nOn Thu, Jun 9, 2022 at 10:35 AM Quentin Lhoest ***@***.***>\nwrote:\n\n> The training data is not from translators.\n>\n> What makes you think that ? The paper litteraly says\n>\n> we hire translators to translate the resulting sentences into 15 languages\n> using the One Hour Translation platform.\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/781#issuecomment-1151202195>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAKLKWFZOQPLK4WSKFRLW6DVOH6LLANCNFSM4TE67NMQ>\n> .\n> You are receiving this because you commented.Message ID:\n> ***@***.***>\n>\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3618
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3618/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3618/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3618/events
|
https://github.com/huggingface/datasets/issues/3618
| 1,112,123,365 |
I_kwDODunzps5CSafl
| 3,618 |
TIMIT Dataset not working with GPU
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false | null | 3 |
2022-01-24T03:26:03Z
|
2023-07-25T15:20:20Z
|
2023-07-25T15:20:20Z
| null |
## Describe the bug
I am trying to use the TIMIT dataset in order to fine-tune a Wav2Vec2 model, and I am unable to load the "audio" column from the dataset when working with a GPU.
I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimized) environment, with a single ml.g4dn.xlarge instance (corresponds to a Tesla T4 GPU).
I don't know if the issue is GPU related or Python environment related because everything works when I work off of the CPU Optimized environment with a non-GPU instance. My code also works on Google Colab with a GPU instance.
This issue is blocking because I cannot get the 'audio' column in any way due to this error, which means that I can't pass it to any functions. I later use the dataset.map function and that is where I originally noticed this error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
timit_train = load_dataset('timit_asr', split='train')
print(timit_train['audio'])
```
## Expected results
I expected to see the contents of the 'audio' column, which contains a nested 'array' field with the array data I actually need.
## Actual results
Traceback
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-ceeac555e921> in <module>
----> 1 timit_train['audio']
/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1917 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1918 return self._getitem(
-> 1919 key,
1920 )
1921
/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1902 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1903 formatted_output = format_table(
-> 1904 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1905 )
1906 return formatted_output
/opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
529 python_formatter = PythonFormatter(features=None)
530 if format_columns is None:
--> 531 return formatter(pa_table, query_type=query_type)
532 elif query_type == "column":
533 if key in format_columns:
/opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
280 return self.format_row(pa_table)
281 elif query_type == "column":
--> 282 return self.format_column(pa_table)
283 elif query_type == "batch":
284 return self.format_batch(pa_table)
/opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_column(self, pa_table)
315 column = self.python_arrow_extractor().extract_column(pa_table)
316 if self.decoded:
--> 317 column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
318 return column
319
/opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_column(self, column, column_name)
221
222 def decode_column(self, column: list, column_name: str) -> list:
--> 223 return self.features.decode_column(column, column_name) if self.features else column
224
225 def decode_batch(self, batch: dict) -> dict:
/opt/conda/lib/python3.6/site-packages/datasets/features/features.py in decode_column(self, column, column_name)
1337 return (
1338 [self[column_name].decode_example(value) if value is not None else None for value in column]
-> 1339 if self._column_requires_decoding[column_name]
1340 else column
1341 )
/opt/conda/lib/python3.6/site-packages/datasets/features/features.py in <listcomp>(.0)
1336 """
1337 return (
-> 1338 [self[column_name].decode_example(value) if value is not None else None for value in column]
1339 if self._column_requires_decoding[column_name]
1340 else column
/opt/conda/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
85 dict
86 """
---> 87 path, file = (value["path"], BytesIO(value["bytes"])) if value["bytes"] is not None else (value["path"], None)
88 if path is None and file is None:
89 raise ValueError(f"An audio sample should have one of 'path' or 'bytes' but both are None in {value}.")
TypeError: string indices must be integers
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.0
- Platform: Linux-4.14.256-197.484.amzn2.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.6.13
- PyArrow version: 6.0.1
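A possible sanity check (just a sketch): regenerate the cached Arrow files in case they were written by an older `datasets` version, and read a small slice instead of materializing the whole decoded column:
```python
from datasets import load_dataset

# Force the dataset to be re-downloaded and re-prepared
timit_train = load_dataset("timit_asr", split="train", download_mode="force_redownload")

# Access a small slice instead of loading the entire decoded 'audio' column in memory
batch = timit_train[:4]["audio"]
print(batch[0]["path"], batch[0]["sampling_rate"], batch[0]["array"].shape)
```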
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3618/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3618/timeline
| null |
completed
| null | null | false |
[
"Hi ! I think you should avoid calling `timit_train['audio']`. Indeed by doing so you're **loading all the audio column in memory**. This is problematic in your case because the TIMIT dataset is huge.\r\n\r\nIf you want to access the audio data of some samples, you should do this instead `timit_train[:10][\"train\"]` for example.\r\n\r\nOther than that, I'm not sure why you get a `TypeError: string indices must be integers`, do you have a code snippet that reproduces the issue that you can share here ?",
"I get the same error when I try to do `timit_train[0]` or really any indexing into the whole thing. \r\n\r\nReally, that IS the code snippet that reproduces the issue. If you index into other fields like 'file' or whatever, it works. As soon as one of the fields you're looking into is 'audio', you get that issue. It's a weird issue and I suspect it's Sagemaker/environment related, maybe the mix of libraries and dependencies are not good. \r\n\r\n\r\nExample code snippet with issue. \r\n```python\r\nfrom datasets import load_dataset\r\n\r\ntimit_train = load_dataset('timit_asr', split='train')\r\nprint(timit_train[0])\r\n```",
"Ok I see ! From the error you got, it looks like the `value` encoded in the arrow file of the TIMIT dataset you loaded is a string instead of a dictionary with keys \"path\" and \"bytes\" but we don't support this since 1.18\r\n\r\nCan you try regenerating the dataset with `load_dataset('timit_asr', download_mode=\"force_redownload\")` please ? I think it should fix the issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/5945
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5945/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5945/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5945/events
|
https://github.com/huggingface/datasets/issues/5945
| 1,754,084,577 |
I_kwDODunzps5ojTTh
| 5,945 |
Failing to upload dataset to the hub
|
[] |
closed
| false | null | 3 |
2023-06-13T05:46:46Z
|
2023-07-24T11:56:40Z
|
2023-07-24T11:56:40Z
| null |
### Describe the bug
Trying to upload a dataset of hundreds of thousands of audio samples (the total volume is not very large, 60 GB) to the hub with push_to_hub doesn't work.
From time to time one piece of the data (parquet) gets pushed and then I get RemoteDisconnected even though my internet is stable.
Please help.
I've been trying to upload the dataset for almost a week.
Thanks
### Steps to reproduce the bug
not relevant
### Expected behavior
Be able to upload the dataset
### Environment info
python: 3.9
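One mitigation that comes to mind (a sketch, assuming a `datasets` version where `push_to_hub` accepts `max_shard_size`; the loader and repo name are only examples):
```python
from datasets import load_dataset

# However the dataset was actually built; "audiofolder" is just an illustrative loader
dataset = load_dataset("audiofolder", data_dir="path/to/audio")

# Smaller shards mean less data is lost per dropped connection, and re-running
# push_to_hub resumes from the shards that were already uploaded
dataset.push_to_hub("my-username/my-audio-dataset", max_shard_size="500MB")
```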
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5945/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5945/timeline
| null |
completed
| null | null | false |
[
"Hi ! Feel free to re-run your code later, it will resume automatically where you left",
"Tried many times in the last 2 weeks, problem remains.",
"Alternatively you can save your dataset in parquet files locally and upload them to the hub manually\r\n\r\n```python\r\nfrom tqdm import tqdm\r\nnum_shards = 60\r\nfor index in tqdm(range(num_shards)):\r\n ds.shard(num_shards=num_shards, index=index, contiguous=True).to_parquet(f\"{index:05d}.parquet\")\r\n````"
] |
https://api.github.com/repos/huggingface/datasets/issues/2441
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2441/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2441/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2441/events
|
https://github.com/huggingface/datasets/issues/2441
| 908,554,713 |
MDU6SXNzdWU5MDg1NTQ3MTM=
| 2,441 |
DuplicatedKeysError on personal dataset
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false | null | 2 |
2021-06-01T17:59:41Z
|
2021-06-04T23:50:03Z
|
2021-06-04T23:50:03Z
| null |
## Describe the bug
Ever since today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script.
Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')`
Note that my script was working fine with earlier versions of the Datasets library. I cannot say with 100% certainty whether I have been doing something wrong with my dataset script this whole time or whether this is simply a bug with the new version of datasets.
## Steps to reproduce the bug
I cannot provide code to reproduce the error as I am working with my own dataset. I can however provide my script if requested.
## Expected results
For my data to be loaded.
## Actual results
**DuplicatedKeysError** exception is raised
```
Downloading and preparing dataset good_reads_practice_dataset/main_domain (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/good_reads_practice_dataset/main_domain/1.1.0/64ff7c3fee2693afdddea75002eb6887d4fedc3d812ae3622128c8504ab21655...
---------------------------------------------------------------------------
DuplicatedKeysError Traceback (most recent call last)
<ipython-input-6-c342ea0dae9d> in <module>()
----> 1 dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')
5 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs)
749 try_from_hf_gcs=try_from_hf_gcs,
750 base_path=base_path,
--> 751 use_auth_token=use_auth_token,
752 )
753
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
573 if not downloaded_from_gcs:
574 self._download_and_prepare(
--> 575 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
576 )
577 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
650 try:
651 # Prepare split will record examples associated to the split
--> 652 self._prepare_split(split_generator, **prepare_split_kwargs)
653 except OSError as e:
654 raise OSError(
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator)
990 writer.write(example, key)
991 finally:
--> 992 num_examples, num_bytes = writer.finalize()
993
994 split_generator.split_info.num_examples = num_examples
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in finalize(self, close_stream)
407 # In case current_examples < writer_batch_size, but user uses finalize()
408 if self._check_duplicates:
--> 409 self.check_duplicate_keys()
410 # Re-intializing to empty list for next batch
411 self.hkey_record = []
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
347 for hash, key in self.hkey_record:
348 if hash in tmp_record:
--> 349 raise DuplicatedKeysError(key)
350 else:
351 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 0
Keys should be unique and deterministic in nature
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.7.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- PyArrow version: 3.0.0
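For context, a minimal sketch of how `_generate_examples` is supposed to yield keys (identifiers are illustrative, not taken from my actual script); the key must stay unique across the whole split:
```python
def _generate_examples(self, filepaths):
    key = 0
    for filepath in filepaths:
        with open(filepath, encoding="utf-8") as f:
            for line in f:
                # The counter is never reset between files,
                # so every yielded key is unique within the split
                yield key, {"text": line.strip()}
                key += 1
```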
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2441/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2441/timeline
| null |
completed
| null | null | false |
[
"Hi ! In your dataset script you must be yielding examples like\r\n```python\r\nfor line in file:\r\n ...\r\n yield key, {...}\r\n```\r\n\r\nSince `datasets` 1.7.0 we enforce the keys to be unique.\r\nHowever it looks like your examples generator creates duplicate keys: at least two examples have key 0.\r\n\r\nYou can fix that by making sure that your keys are unique.\r\n\r\nFor example if you use a counter to define the key of each example, make sure that your counter is not reset to 0 in during examples generation (between two open files for examples).\r\n\r\nLet me know if you have other questions :)",
"Yup, I indeed was generating duplicate keys. Fixed it and now it's working."
] |
https://api.github.com/repos/huggingface/datasets/issues/4155
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4155/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4155/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4155/events
|
https://github.com/huggingface/datasets/pull/4155
| 1,202,183,608 |
PR_kwDODunzps42Hqam
| 4,155 |
Make HANS dataset streamable
|
[] |
closed
| false | null | 1 |
2022-04-12T17:34:13Z
|
2022-04-13T12:03:46Z
|
2022-04-13T11:57:35Z
| null |
Fix #4133
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4155/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4155/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/4155.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4155",
"merged_at": "2022-04-13T11:57:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4155.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4155"
}
| true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5865
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5865/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5865/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5865/events
|
https://github.com/huggingface/datasets/pull/5865
| 1,710,455,738 |
PR_kwDODunzps5QiHnw
| 5,865 |
Deprecate task api
|
[] |
closed
| false | null | 9 |
2023-05-15T16:48:24Z
|
2023-07-10T12:33:59Z
|
2023-07-10T12:24:01Z
| null |
The task API is not well adopted in the ecosystem, so this PR deprecates it. The `train_eval_index` is a newer, more flexible solution that should be used instead (I think?).
These are the projects that still use the task API:
* the image classification example in Transformers: [here](https://github.com/huggingface/transformers/blob/8f76dc8e5aaad58f2df7748b6d6970376f315a9a/examples/pytorch/image-classification/run_image_classification_no_trainer.py#L262) and [here](https://github.com/huggingface/transformers/blob/8f76dc8e5aaad58f2df7748b6d6970376f315a9a/examples/tensorflow/image-classification/run_image_classification.py#L277)
* autotrain: [here](https://github.com/huggingface/autotrain-backend/blob/455e274004b56f9377d64db4ab03671508fcc4cd/zeus/zeus/run/utils.py#L666)
* api-inference-community: [here](https://github.com/huggingface/api-inference-community/blob/fb8fb29d577a5bf01c82944db745489a6d6ed3d4/manage.py#L64) (but the rest of the code does not call the `resolve_dataset` function)
So we need to update these files after the merge.
cc @lewtun
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5865/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5865/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/5865.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5865",
"merged_at": "2023-07-10T12:24:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5865.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5865"
}
| true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"If it's easy to keep supporting it we can keep it no ? There are many datasets on the hub that implement the tasks templates in dataset scripts and it's maybe easier to keep task templates than opening PRs to those datasets.",
"do we know if people use the tasks api?\r\n\r\nedit: i mean, i'm fine with removing it if it's not used much, especially considering that it's not documented well.",
"@lhoestq \r\n\r\nLess than 80 public datasets (all canonical) implement `task_templates`, so updating them should be easy.\r\n\r\nPS: I skipped gated datasets when checking for the presence of `task_templates`, but it's safe to assume their contribution to the total count is insignificant.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006480 / 0.011353 (-0.004872) | 0.003904 / 0.011008 (-0.007104) | 0.084287 / 0.038508 (0.045779) | 0.071438 / 0.023109 (0.048329) | 0.309823 / 0.275898 (0.033925) | 0.341038 / 0.323480 (0.017558) | 0.005163 / 0.007986 (-0.002822) | 0.003291 / 0.004328 (-0.001037) | 0.064473 / 0.004250 (0.060222) | 0.053385 / 0.037052 (0.016332) | 0.323561 / 0.258489 (0.065072) | 0.346332 / 0.293841 (0.052491) | 0.030588 / 0.128546 (-0.097958) | 0.008342 / 0.075646 (-0.067305) | 0.287205 / 0.419271 (-0.132067) | 0.051953 / 0.043533 (0.008420) | 0.310925 / 0.255139 (0.055786) | 0.344443 / 0.283200 (0.061244) | 0.022754 / 0.141683 (-0.118928) | 1.459648 / 1.452155 (0.007494) | 1.528413 / 1.492716 (0.035697) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206404 / 0.018006 (0.188398) | 0.461864 / 0.000490 (0.461374) | 0.004501 / 0.000200 (0.004302) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026891 / 0.037411 (-0.010520) | 0.081206 / 0.014526 (0.066680) | 0.093648 / 0.176557 (-0.082908) | 0.148491 / 0.737135 (-0.588645) | 0.093874 / 0.296338 (-0.202464) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401715 / 0.215209 (0.186506) | 4.018597 / 2.077655 (1.940943) | 2.029735 / 1.504120 (0.525615) | 1.860069 / 1.541195 (0.318875) | 1.935712 / 1.468490 
(0.467222) | 0.485896 / 4.584777 (-4.098881) | 3.638177 / 3.745712 (-0.107535) | 5.124058 / 5.269862 (-0.145804) | 3.099666 / 4.565676 (-1.466011) | 0.057173 / 0.424275 (-0.367102) | 0.007240 / 0.007607 (-0.000367) | 0.478758 / 0.226044 (0.252713) | 4.798471 / 2.268929 (2.529543) | 2.502980 / 55.444624 (-52.941645) | 2.170650 / 6.876477 (-4.705827) | 2.381394 / 2.142072 (0.239321) | 0.578766 / 4.805227 (-4.226462) | 0.132342 / 6.500664 (-6.368322) | 0.059759 / 0.075469 (-0.015710) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249238 / 1.841788 (-0.592549) | 19.224673 / 8.074308 (11.150365) | 13.786894 / 10.191392 (3.595502) | 0.164633 / 0.680424 (-0.515791) | 0.018065 / 0.534201 (-0.516136) | 0.390589 / 0.579283 (-0.188694) | 0.408993 / 0.434364 (-0.025370) | 0.457001 / 0.540337 (-0.083336) | 0.625327 / 1.386936 (-0.761609) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006827 / 0.011353 (-0.004526) | 0.004007 / 0.011008 (-0.007001) | 0.065239 / 0.038508 (0.026731) | 0.079829 / 0.023109 (0.056719) | 0.400323 / 0.275898 (0.124425) | 0.434158 / 0.323480 (0.110678) | 0.005314 / 0.007986 (-0.002671) | 0.003354 / 0.004328 (-0.000974) | 0.065044 / 0.004250 (0.060794) | 0.060315 / 0.037052 (0.023262) | 0.401513 / 0.258489 (0.143024) | 0.441119 / 0.293841 (0.147278) | 0.031783 / 0.128546 (-0.096763) | 0.008608 / 0.075646 (-0.067038) | 0.071755 / 0.419271 (-0.347517) | 0.048816 / 0.043533 (0.005283) | 0.393896 / 0.255139 (0.138757) | 0.412156 / 0.283200 (0.128956) | 0.024410 / 0.141683 (-0.117272) | 1.515159 / 1.452155 (0.063005) | 1.562217 / 1.492716 (0.069501) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229993 / 0.018006 (0.211987) | 0.449898 / 0.000490 (0.449409) | 0.000376 / 0.000200 (0.000176) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030297 / 0.037411 (-0.007115) | 0.086737 / 0.014526 (0.072212) | 0.098312 / 0.176557 (-0.078244) | 0.152890 / 0.737135 (-0.584246) | 0.099335 / 0.296338 (-0.197003) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415786 / 0.215209 (0.200577) | 4.137606 / 2.077655 (2.059952) | 2.120082 / 1.504120 (0.615963) | 1.943984 / 1.541195 (0.402789) | 2.040821 / 1.468490 (0.572331) | 0.479273 / 4.584777 (-4.105504) | 3.563854 / 3.745712 (-0.181858) | 3.396071 / 5.269862 (-1.873790) | 2.011302 / 4.565676 (-2.554374) | 0.057202 / 0.424275 (-0.367073) | 0.007338 / 0.007607 (-0.000269) | 0.488378 / 0.226044 (0.262333) | 4.881615 / 2.268929 (2.612686) | 2.669685 / 55.444624 (-52.774939) | 2.258236 / 6.876477 (-4.618241) | 2.343303 / 2.142072 (0.201230) | 0.606762 / 4.805227 (-4.198466) | 0.133190 / 6.500664 (-6.367475) | 0.062971 / 0.075469 (-0.012498) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345215 / 1.841788 (-0.496573) | 20.023713 / 8.074308 (11.949405) | 14.555777 / 10.191392 (4.364385) | 0.162388 / 0.680424 (-0.518036) | 0.018528 / 0.534201 (-0.515673) | 0.393055 / 0.579283 (-0.186229) | 0.411820 / 0.434364 (-0.022544) | 0.461705 / 0.540337 (-0.078633) | 0.629395 / 1.386936 (-0.757541) |\n\n</details>\n</details>\n\n\n",
"Ok ! I also know https://huggingface.co/datasets/hf-internal-testing/cats_vs_dogs_sample/blob/main/cats_vs_dogs_sample.py that needs to be updated as well",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009100 / 0.011353 (-0.002253) | 0.005158 / 0.011008 (-0.005850) | 0.109291 / 0.038508 (0.070782) | 0.086053 / 0.023109 (0.062943) | 0.469859 / 0.275898 (0.193961) | 0.476142 / 0.323480 (0.152662) | 0.006739 / 0.007986 (-0.001247) | 0.005077 / 0.004328 (0.000748) | 0.078193 / 0.004250 (0.073943) | 0.065956 / 0.037052 (0.028904) | 0.490323 / 0.258489 (0.231834) | 0.497418 / 0.293841 (0.203577) | 0.060562 / 0.128546 (-0.067984) | 0.016321 / 0.075646 (-0.059325) | 0.379703 / 0.419271 (-0.039568) | 0.087335 / 0.043533 (0.043802) | 0.488240 / 0.255139 (0.233101) | 0.497391 / 0.283200 (0.214191) | 0.040699 / 0.141683 (-0.100984) | 1.778925 / 1.452155 (0.326770) | 1.856436 / 1.492716 (0.363720) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236428 / 0.018006 (0.218422) | 0.551950 / 0.000490 (0.551460) | 0.007400 / 0.000200 (0.007201) | 0.000120 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028461 / 0.037411 (-0.008950) | 0.093441 / 0.014526 (0.078915) | 0.103868 / 0.176557 (-0.072688) | 0.176269 / 0.737135 (-0.560867) | 0.107760 / 0.296338 (-0.188578) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.593382 / 0.215209 (0.378173) | 5.863711 / 2.077655 (3.786057) | 2.493777 / 1.504120 (0.989657) | 2.088547 / 1.541195 (0.547352) | 2.173147 / 1.468490 
(0.704656) | 0.875661 / 4.584777 (-3.709116) | 5.209023 / 3.745712 (1.463310) | 4.483261 / 5.269862 (-0.786600) | 2.843288 / 4.565676 (-1.722388) | 0.098488 / 0.424275 (-0.325787) | 0.008371 / 0.007607 (0.000764) | 0.668413 / 0.226044 (0.442368) | 6.709802 / 2.268929 (4.440873) | 3.132453 / 55.444624 (-52.312172) | 2.428736 / 6.876477 (-4.447741) | 2.560867 / 2.142072 (0.418794) | 0.983550 / 4.805227 (-3.821677) | 0.207072 / 6.500664 (-6.293592) | 0.073786 / 0.075469 (-0.001683) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.625871 / 1.841788 (-0.215917) | 23.481015 / 8.074308 (15.406707) | 20.556677 / 10.191392 (10.365285) | 0.238147 / 0.680424 (-0.442277) | 0.029453 / 0.534201 (-0.504748) | 0.464589 / 0.579283 (-0.114695) | 0.599129 / 0.434364 (0.164765) | 0.550146 / 0.540337 (0.009808) | 0.794646 / 1.386936 (-0.592290) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008613 / 0.011353 (-0.002739) | 0.004979 / 0.011008 (-0.006030) | 0.078095 / 0.038508 (0.039587) | 0.080285 / 0.023109 (0.057176) | 0.482881 / 0.275898 (0.206983) | 0.520442 / 0.323480 (0.196962) | 0.006241 / 0.007986 (-0.001744) | 0.003964 / 0.004328 (-0.000364) | 0.080027 / 0.004250 (0.075777) | 0.065209 / 0.037052 (0.028157) | 0.476113 / 0.258489 (0.217623) | 0.535383 / 0.293841 (0.241542) | 0.053084 / 0.128546 (-0.075462) | 0.014284 / 0.075646 (-0.061362) | 0.083859 / 0.419271 (-0.335413) | 0.061024 / 0.043533 (0.017492) | 0.477810 / 0.255139 (0.222671) | 0.508718 / 0.283200 (0.225518) | 0.036602 / 0.141683 (-0.105081) | 1.810422 / 1.452155 (0.358267) | 1.832833 / 1.492716 (0.340117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.281443 / 0.018006 (0.263437) | 0.568249 / 0.000490 (0.567760) | 0.000493 / 0.000200 (0.000293) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033302 / 0.037411 (-0.004110) | 0.100433 / 0.014526 (0.085907) | 0.105465 / 0.176557 (-0.071091) | 0.161986 / 0.737135 (-0.575149) | 0.115736 / 0.296338 (-0.180603) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.622892 / 0.215209 (0.407683) | 6.144361 / 2.077655 (4.066706) | 2.849443 / 1.504120 (1.345323) | 2.544097 / 1.541195 (1.002902) | 2.579859 / 1.468490 (1.111369) | 0.826078 / 4.584777 (-3.758699) | 5.021808 / 3.745712 (1.276096) | 4.694784 / 5.269862 (-0.575077) | 2.796263 / 4.565676 (-1.769413) | 0.090983 / 0.424275 (-0.333292) | 0.008445 / 0.007607 (0.000838) | 0.744675 / 0.226044 (0.518631) | 7.662989 / 2.268929 (5.394060) | 3.665611 / 55.444624 (-51.779013) | 2.942836 / 6.876477 (-3.933641) | 2.874402 / 2.142072 (0.732329) | 1.010097 / 4.805227 (-3.795130) | 0.218008 / 6.500664 (-6.282656) | 0.087359 / 0.075469 (0.011890) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.655631 / 1.841788 (-0.186157) | 23.539596 / 8.074308 (15.465288) | 20.909512 / 10.191392 (10.718120) | 0.202092 / 0.680424 (-0.478332) | 0.029807 / 0.534201 (-0.504394) | 0.487591 / 0.579283 (-0.091692) | 0.573719 / 0.434364 (0.139355) | 0.531168 / 0.540337 (-0.009170) | 0.742375 / 1.386936 (-0.644561) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006247 / 0.011353 (-0.005106) | 0.003650 / 0.011008 (-0.007358) | 0.079655 / 0.038508 (0.041147) | 0.060279 / 0.023109 (0.037170) | 0.309033 / 0.275898 (0.033135) | 0.338479 / 0.323480 (0.014999) | 0.004651 / 0.007986 (-0.003335) | 0.002849 / 0.004328 (-0.001480) | 0.062852 / 0.004250 (0.058602) | 0.049230 / 0.037052 (0.012178) | 0.312502 / 0.258489 (0.054012) | 0.354558 / 0.293841 (0.060717) | 0.027497 / 0.128546 (-0.101049) | 0.007885 / 0.075646 (-0.067762) | 0.260232 / 0.419271 (-0.159040) | 0.045459 / 0.043533 (0.001926) | 0.311629 / 0.255139 (0.056490) | 0.367806 / 0.283200 (0.084606) | 0.020875 / 0.141683 (-0.120808) | 1.423802 / 1.452155 (-0.028352) | 1.497729 / 1.492716 (0.005013) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185629 / 0.018006 (0.167623) | 0.441421 / 0.000490 (0.440931) | 0.004847 / 0.000200 (0.004647) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022428 / 0.037411 (-0.014984) | 0.073375 / 0.014526 (0.058849) | 0.083194 / 0.176557 (-0.093363) | 0.143984 / 0.737135 (-0.593151) | 0.084128 / 0.296338 (-0.212211) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397220 / 0.215209 (0.182010) | 3.954394 / 2.077655 (1.876740) | 1.920638 / 1.504120 (0.416518) | 1.744284 / 1.541195 (0.203089) | 1.802623 / 1.468490 
(0.334133) | 0.501988 / 4.584777 (-4.082789) | 3.096071 / 3.745712 (-0.649642) | 4.648267 / 5.269862 (-0.621595) | 2.770995 / 4.565676 (-1.794682) | 0.057513 / 0.424275 (-0.366762) | 0.006315 / 0.007607 (-0.001292) | 0.467683 / 0.226044 (0.241639) | 4.683959 / 2.268929 (2.415031) | 2.384980 / 55.444624 (-53.059645) | 2.030894 / 6.876477 (-4.845583) | 2.148374 / 2.142072 (0.006302) | 0.585142 / 4.805227 (-4.220085) | 0.123173 / 6.500664 (-6.377491) | 0.059140 / 0.075469 (-0.016329) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244707 / 1.841788 (-0.597080) | 18.176043 / 8.074308 (10.101735) | 13.742770 / 10.191392 (3.551378) | 0.149692 / 0.680424 (-0.530732) | 0.016591 / 0.534201 (-0.517610) | 0.342138 / 0.579283 (-0.237145) | 0.353931 / 0.434364 (-0.080433) | 0.392317 / 0.540337 (-0.148020) | 0.524011 / 1.386936 (-0.862925) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005937 / 0.011353 (-0.005416) | 0.003609 / 0.011008 (-0.007399) | 0.061729 / 0.038508 (0.023221) | 0.057844 / 0.023109 (0.034735) | 0.418051 / 0.275898 (0.142153) | 0.453014 / 0.323480 (0.129534) | 0.004530 / 0.007986 (-0.003456) | 0.002861 / 0.004328 (-0.001468) | 0.062236 / 0.004250 (0.057986) | 0.048612 / 0.037052 (0.011560) | 0.418487 / 0.258489 (0.159998) | 0.455114 / 0.293841 (0.161273) | 0.027419 / 0.128546 (-0.101127) | 0.007919 / 0.075646 (-0.067728) | 0.066940 / 0.419271 (-0.352331) | 0.041816 / 0.043533 (-0.001717) | 0.419788 / 0.255139 (0.164649) | 0.439682 / 0.283200 (0.156483) | 0.020902 / 0.141683 (-0.120781) | 1.473993 / 1.452155 (0.021838) | 1.532438 / 1.492716 (0.039722) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228766 / 0.018006 (0.210760) | 0.412189 / 0.000490 (0.411699) | 0.000371 / 0.000200 (0.000171) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026139 / 0.037411 (-0.011272) | 0.076626 / 0.014526 (0.062100) | 0.088262 / 0.176557 (-0.088295) | 0.143096 / 0.737135 (-0.594039) | 0.089642 / 0.296338 (-0.206696) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423030 / 0.215209 (0.207821) | 4.218333 / 2.077655 (2.140679) | 2.280943 / 1.504120 (0.776823) | 2.051746 / 1.541195 (0.510551) | 2.101085 / 1.468490 (0.632595) | 0.495860 / 4.584777 (-4.088917) | 3.108065 / 3.745712 (-0.637647) | 2.944188 / 5.269862 (-2.325673) | 1.833693 / 4.565676 (-2.731984) | 0.057509 / 0.424275 (-0.366766) | 0.006406 / 0.007607 (-0.001201) | 0.497208 / 0.226044 (0.271164) | 4.974972 / 2.268929 (2.706044) | 2.786639 / 55.444624 (-52.657985) | 2.423815 / 6.876477 (-4.452662) | 2.446377 / 2.142072 (0.304305) | 0.584521 / 4.805227 (-4.220706) | 0.124129 / 6.500664 (-6.376535) | 0.061373 / 0.075469 (-0.014096) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307076 / 1.841788 (-0.534711) | 18.443873 / 8.074308 (10.369565) | 13.835730 / 10.191392 (3.644338) | 0.159795 / 0.680424 (-0.520629) | 0.016643 / 0.534201 (-0.517558) | 0.334300 / 0.579283 (-0.244983) | 0.347136 / 0.434364 (-0.087228) | 0.394633 / 0.540337 (-0.145704) | 0.552445 / 1.386936 (-0.834491) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007273 / 0.011353 (-0.004080) | 0.004704 / 0.011008 (-0.006304) | 0.105857 / 0.038508 (0.067349) | 0.062493 / 0.023109 (0.039384) | 0.325704 / 0.275898 (0.049806) | 0.355795 / 0.323480 (0.032315) | 0.005552 / 0.007986 (-0.002433) | 0.003543 / 0.004328 (-0.000785) | 0.068098 / 0.004250 (0.063848) | 0.049563 / 0.037052 (0.012511) | 0.362956 / 0.258489 (0.104467) | 0.376047 / 0.293841 (0.082206) | 0.039272 / 0.128546 (-0.089275) | 0.011521 / 0.075646 (-0.064125) | 0.291899 / 0.419271 (-0.127373) | 0.056916 / 0.043533 (0.013383) | 0.365352 / 0.255139 (0.110213) | 0.357251 / 0.283200 (0.074051) | 0.031670 / 0.141683 (-0.110013) | 1.533294 / 1.452155 (0.081140) | 1.566580 / 1.492716 (0.073864) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219812 / 0.018006 (0.201805) | 0.499808 / 0.000490 (0.499318) | 0.000343 / 0.000200 (0.000143) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024011 / 0.037411 (-0.013400) | 0.079686 / 0.014526 (0.065161) | 0.087925 / 0.176557 (-0.088631) | 0.149065 / 0.737135 (-0.588071) | 0.088514 / 0.296338 (-0.207824) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.495003 / 0.215209 (0.279794) | 5.106371 / 2.077655 (3.028717) | 2.285497 / 1.504120 (0.781377) | 2.056052 / 1.541195 (0.514858) | 2.024913 / 1.468490 
(0.556423) | 0.726048 / 4.584777 (-3.858729) | 4.873945 / 3.745712 (1.128233) | 7.488671 / 5.269862 (2.218809) | 4.361208 / 4.565676 (-0.204469) | 0.089014 / 0.424275 (-0.335261) | 0.007178 / 0.007607 (-0.000429) | 0.633669 / 0.226044 (0.407625) | 6.328154 / 2.268929 (4.059226) | 3.071598 / 55.444624 (-52.373026) | 2.416077 / 6.876477 (-4.460399) | 2.431033 / 2.142072 (0.288961) | 0.918167 / 4.805227 (-3.887060) | 0.193829 / 6.500664 (-6.306836) | 0.073446 / 0.075469 (-0.002023) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.344994 / 1.841788 (-0.496793) | 19.911699 / 8.074308 (11.837391) | 17.182697 / 10.191392 (6.991305) | 0.216932 / 0.680424 (-0.463492) | 0.025415 / 0.534201 (-0.508786) | 0.416806 / 0.579283 (-0.162477) | 0.524934 / 0.434364 (0.090570) | 0.510783 / 0.540337 (-0.029554) | 0.687856 / 1.386936 (-0.699081) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008469 / 0.011353 (-0.002884) | 0.003797 / 0.011008 (-0.007211) | 0.067276 / 0.038508 (0.028768) | 0.066825 / 0.023109 (0.043716) | 0.394976 / 0.275898 (0.119078) | 0.432563 / 0.323480 (0.109083) | 0.006003 / 0.007986 (-0.001982) | 0.003399 / 0.004328 (-0.000930) | 0.070899 / 0.004250 (0.066649) | 0.050940 / 0.037052 (0.013887) | 0.378291 / 0.258489 (0.119802) | 0.429889 / 0.293841 (0.136048) | 0.043245 / 0.128546 (-0.085302) | 0.012182 / 0.075646 (-0.063465) | 0.074560 / 0.419271 (-0.344711) | 0.065290 / 0.043533 (0.021757) | 0.371209 / 0.255139 (0.116070) | 0.389731 / 0.283200 (0.106532) | 0.045729 / 0.141683 (-0.095954) | 1.451785 / 1.452155 (-0.000370) | 1.598539 / 1.492716 (0.105822) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261357 / 0.018006 (0.243351) | 0.520142 / 0.000490 (0.519653) | 0.008305 / 0.000200 (0.008105) | 0.000089 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026492 / 0.037411 (-0.010919) | 0.082430 / 0.014526 (0.067904) | 0.095979 / 0.176557 (-0.080578) | 0.151752 / 0.737135 (-0.585383) | 0.090086 / 0.296338 (-0.206252) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.535967 / 0.215209 (0.320758) | 5.228605 / 2.077655 (3.150950) | 2.395078 / 1.504120 (0.890959) | 2.185500 / 1.541195 (0.644306) | 2.219456 / 1.468490 (0.750966) | 0.764794 / 4.584777 (-3.819983) | 4.796617 / 3.745712 (1.050905) | 4.143450 / 5.269862 (-1.126411) | 2.527391 / 4.565676 (-2.038286) | 0.081418 / 0.424275 (-0.342857) | 0.007170 / 0.007607 (-0.000437) | 0.706071 / 0.226044 (0.480026) | 6.501060 / 2.268929 (4.232131) | 3.176315 / 55.444624 (-52.268309) | 2.443245 / 6.876477 (-4.433232) | 2.517832 / 2.142072 (0.375759) | 0.916254 / 4.805227 (-3.888973) | 0.184282 / 6.500664 (-6.316382) | 0.062613 / 0.075469 (-0.012857) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.444283 / 1.841788 (-0.397504) | 20.227311 / 8.074308 (12.153003) | 17.512856 / 10.191392 (7.321464) | 0.219556 / 0.680424 (-0.460868) | 0.024705 / 0.534201 (-0.509496) | 0.423215 / 0.579283 (-0.156068) | 0.513103 / 0.434364 (0.078739) | 0.473853 / 0.540337 (-0.066485) | 0.738165 / 1.386936 (-0.648771) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/570
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/570/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/570/comments
|
https://api.github.com/repos/huggingface/datasets/issues/570/events
|
https://github.com/huggingface/datasets/pull/570
| 691,846,397 |
MDExOlB1bGxSZXF1ZXN0NDc4NTI3OTQz
| 570 |
add reuters21578 dataset
|
[] |
closed
| false | null | 0 |
2020-09-03T10:25:47Z
|
2020-09-03T10:46:52Z
|
2020-09-03T10:46:51Z
| null |
Reopening a PR for the merge.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/570/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/570/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/570.diff",
"html_url": "https://github.com/huggingface/datasets/pull/570",
"merged_at": "2020-09-03T10:46:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/570.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/570"
}
| true |
[] |
https://api.github.com/repos/huggingface/datasets/issues/5253
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5253/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5253/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5253/events
|
https://github.com/huggingface/datasets/pull/5253
| 1,452,588,206 |
PR_kwDODunzps5DE2io
| 5,253 |
typo
|
[] |
closed
| false | null | 0 |
2022-11-17T02:22:58Z
|
2022-11-18T10:53:11Z
|
2022-11-18T10:53:10Z
| null | null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5253/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5253/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/5253.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5253",
"merged_at": "2022-11-18T10:53:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5253.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5253"
}
| true |
[] |
https://api.github.com/repos/huggingface/datasets/issues/1366
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1366/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1366/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1366/events
|
https://github.com/huggingface/datasets/pull/1366
| 760,205,506 |
MDExOlB1bGxSZXF1ZXN0NTM1MDc1ODU2
| 1,366 |
Adding Hope EDI dataset
|
[] |
closed
| false | null | 1 |
2020-12-09T10:30:23Z
|
2020-12-14T14:27:57Z
|
2020-12-14T14:27:57Z
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1366/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1366/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/1366.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1366",
"merged_at": "2020-12-14T14:27:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1366.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1366"
}
| true |
[
"@lhoestq Have addressed your comments. Please review. Thanks."
] |
|
https://api.github.com/repos/huggingface/datasets/issues/1945
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1945/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1945/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1945/events
|
https://github.com/huggingface/datasets/issues/1945
| 816,421,966 |
MDU6SXNzdWU4MTY0MjE5NjY=
| 1,945 |
AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets'
|
[] |
closed
| false | null | 1 |
2021-02-25T13:09:45Z
|
2021-02-25T13:20:35Z
|
2021-02-25T13:20:26Z
| null |
Hi
I am trying to concatenate a list of Hugging Face datasets as:
`train_dataset = datasets.concatenate_datasets(train_datasets)`
Here is `train_datasets` when I print it:
```
[Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 120361
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2670
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 6944
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 38140
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 173711
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 1655
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 4274
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2019
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2109
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 11963
})]
```
I am getting the following error:
`AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets'`
I was wondering if you could help me with this issue. Thanks a lot!
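
For reference, a minimal sketch of the intended call, assuming plain `Dataset` objects and that the module name `datasets` is not shadowed by a local `DatasetDict` variable (the toy columns below are illustrative only, not the real tokenized splits):
```
from datasets import Dataset, concatenate_datasets

# Toy stand-ins for the tokenized splits listed above (illustrative only).
train_datasets = [
    Dataset.from_dict({"input_ids": [[1, 2], [3, 4]], "label": [0, 1]}),
    Dataset.from_dict({"input_ids": [[5, 6], [7, 8]], "label": [1, 0]}),
]

# `concatenate_datasets` is a module-level function; calling it on the module
# (or importing it directly) avoids the AttributeError raised when a
# DatasetDict variable shadows the `datasets` module name.
train_dataset = concatenate_datasets(train_datasets)
print(train_dataset.num_rows)  # 4
```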
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1945/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1945/timeline
| null |
completed
| null | null | false |
[
"sorry my mistake, datasets were overwritten closing now, thanks a lot"
] |
https://api.github.com/repos/huggingface/datasets/issues/3846
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3846/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3846/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3846/events
|
https://github.com/huggingface/datasets/pull/3846
| 1,161,810,226 |
PR_kwDODunzps40D-uh
| 3,846 |
Update faiss device docstring
|
[] |
closed
| false | null | 1 |
2022-03-07T19:06:59Z
|
2022-03-07T19:21:23Z
|
2022-03-07T19:21:22Z
| null |
Following https://github.com/huggingface/datasets/pull/3721, I updated the docstring of the `device` argument of the FAISS-related methods of `Dataset`.
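
For context, a minimal sketch of how the `device` argument is typically passed, assuming `faiss-cpu` is installed; the column name, vector size, and query below are illustrative only and not taken from the PR:
```
import numpy as np
from datasets import Dataset

# Tiny dataset with a float32 embedding column (illustrative only).
ds = Dataset.from_dict(
    {"embeddings": np.random.rand(100, 8).astype("float32").tolist()}
)

# device=None keeps the index on CPU; an int selects a single GPU,
# and (after #3721) a list of ints spreads the index over several GPUs.
ds.add_faiss_index(column="embeddings", device=None)

query = np.random.rand(8).astype("float32")
scores, examples = ds.get_nearest_examples("embeddings", query, k=5)
print(len(scores))  # 5
```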
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3846/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3846/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/3846.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3846",
"merged_at": "2022-03-07T19:21:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3846.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3846"
}
| true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3846). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/2201
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2201/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2201/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2201/events
|
https://github.com/huggingface/datasets/pull/2201
| 854,499,563 |
MDExOlB1bGxSZXF1ZXN0NjEyNDM1NTE3
| 2,201 |
Fix ArrowWriter overwriting features in ArrowBasedBuilder
|
[] |
closed
| false | null | 0 |
2021-04-09T12:56:19Z
|
2021-04-12T13:32:17Z
|
2021-04-12T13:32:16Z
| null |
This should fix the issues with CSV loading experienced in #2153 and #2200.
The CSV builder is an ArrowBasedBuilder whose ArrowWriter writes the Arrow file from the CSV data.
The writer wasn't initialized with the features passed by the user, so it inferred the features from the Arrow data and discarded the user-provided ones.
I fixed that and updated the tests.
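
As a usage note, a minimal sketch of the pattern this fixes, with the CSV written inline so the snippet is self-contained; the file name, column names, and label names are illustrative only:
```
import csv
from datasets import load_dataset, Features, Value, ClassLabel

# Write a tiny illustrative CSV (assumed layout: text + integer label columns).
with open("data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text", "label"])
    writer.writerows([["great movie", 1], ["boring plot", 0]])

# Explicit features passed by the user; with this fix the ArrowWriter keeps
# them instead of overwriting them with features inferred from the CSV data.
features = Features(
    {"text": Value("string"), "label": ClassLabel(names=["negative", "positive"])}
)
ds = load_dataset("csv", data_files="data.csv", features=features, split="train")
print(ds.features["label"])  # the user-provided ClassLabel, not an inferred int64
```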
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2201/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2201/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/2201.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2201",
"merged_at": "2021-04-12T13:32:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2201.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2201"
}
| true |
[] |
https://api.github.com/repos/huggingface/datasets/issues/4767
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4767/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4767/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4767/events
|
https://github.com/huggingface/datasets/pull/4767
| 1,321,843,538 |
PR_kwDODunzps48TCpI
| 4,767 |
Add 2.4.0 version added to docstrings
|
[] |
closed
| false | null | 1 |
2022-07-29T07:01:56Z
|
2022-07-29T11:16:49Z
|
2022-07-29T11:03:58Z
| null | null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4767/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4767/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/4767.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4767",
"merged_at": "2022-07-29T11:03:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4767.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4767"
}
| true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/6063
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6063/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6063/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6063/events
|
https://github.com/huggingface/datasets/pull/6063
| 1,818,679,485 |
PR_kwDODunzps5WPtxi
| 6,063 |
Release: 2.14.0
|
[] |
closed
| false | null | 4 |
2023-07-24T15:41:19Z
|
2023-07-24T16:05:16Z
|
2023-07-24T15:47:51Z
| null | null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6063/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6063/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/6063.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6063",
"merged_at": "2023-07-24T15:47:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6063.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6063"
}
| true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007703 / 0.011353 (-0.003650) | 0.004699 / 0.011008 (-0.006309) | 0.090195 / 0.038508 (0.051687) | 0.119165 / 0.023109 (0.096056) | 0.361435 / 0.275898 (0.085537) | 0.404429 / 0.323480 (0.080949) | 0.006172 / 0.007986 (-0.001814) | 0.003932 / 0.004328 (-0.000397) | 0.068384 / 0.004250 (0.064133) | 0.066730 / 0.037052 (0.029678) | 0.360978 / 0.258489 (0.102489) | 0.401301 / 0.293841 (0.107460) | 0.032836 / 0.128546 (-0.095710) | 0.010821 / 0.075646 (-0.064825) | 0.294526 / 0.419271 (-0.124745) | 0.068751 / 0.043533 (0.025218) | 0.368427 / 0.255139 (0.113288) | 0.376969 / 0.283200 (0.093770) | 0.040538 / 0.141683 (-0.101145) | 1.509966 / 1.452155 (0.057811) | 1.564885 / 1.492716 (0.072169) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292243 / 0.018006 (0.274237) | 0.662067 / 0.000490 (0.661577) | 0.004966 / 0.000200 (0.004766) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029050 / 0.037411 (-0.008361) | 0.099880 / 0.014526 (0.085354) | 0.109277 / 0.176557 (-0.067280) | 0.167877 / 0.737135 (-0.569258) | 0.110770 / 0.296338 (-0.185569) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395742 / 0.215209 (0.180533) | 3.944152 / 2.077655 (1.866498) | 1.875295 / 1.504120 (0.371175) | 1.705088 / 1.541195 (0.163893) | 1.884443 / 1.468490 
(0.415953) | 0.497243 / 4.584777 (-4.087534) | 3.749287 / 3.745712 (0.003575) | 4.418826 / 5.269862 (-0.851035) | 2.481149 / 4.565676 (-2.084528) | 0.058260 / 0.424275 (-0.366015) | 0.007744 / 0.007607 (0.000137) | 0.472531 / 0.226044 (0.246486) | 4.716022 / 2.268929 (2.447094) | 2.480446 / 55.444624 (-52.964179) | 2.163098 / 6.876477 (-4.713379) | 2.217555 / 2.142072 (0.075482) | 0.601965 / 4.805227 (-4.203262) | 0.139364 / 6.500664 (-6.361301) | 0.067097 / 0.075469 (-0.008372) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330537 / 1.841788 (-0.511251) | 22.176270 / 8.074308 (14.101962) | 16.224981 / 10.191392 (6.033589) | 0.173708 / 0.680424 (-0.506715) | 0.019402 / 0.534201 (-0.514799) | 0.401994 / 0.579283 (-0.177289) | 0.432597 / 0.434364 (-0.001767) | 0.489933 / 0.540337 (-0.050404) | 0.672334 / 1.386936 (-0.714602) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008622 / 0.011353 (-0.002731) | 0.004609 / 0.011008 (-0.006399) | 0.067791 / 0.038508 (0.029283) | 0.112770 / 0.023109 (0.089661) | 0.380939 / 0.275898 (0.105041) | 0.416940 / 0.323480 (0.093460) | 0.006170 / 0.007986 (-0.001815) | 0.003876 / 0.004328 (-0.000452) | 0.066227 / 0.004250 (0.061976) | 0.073132 / 0.037052 (0.036080) | 0.390120 / 0.258489 (0.131631) | 0.420893 / 0.293841 (0.127052) | 0.033235 / 0.128546 (-0.095311) | 0.009659 / 0.075646 (-0.065987) | 0.072668 / 0.419271 (-0.346604) | 0.051333 / 0.043533 (0.007801) | 0.393828 / 0.255139 (0.138689) | 0.412376 / 0.283200 (0.129176) | 0.027760 / 0.141683 (-0.113923) | 1.494369 / 1.452155 (0.042214) | 1.592862 / 1.492716 (0.100145) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.345376 / 0.018006 (0.327369) | 0.609399 / 0.000490 (0.608909) | 0.000546 / 0.000200 (0.000346) | 0.000061 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035601 / 0.037411 (-0.001810) | 0.106527 / 0.014526 (0.092001) | 0.114388 / 0.176557 (-0.062168) | 0.175607 / 0.737135 (-0.561529) | 0.113009 / 0.296338 (-0.183329) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417237 / 0.215209 (0.202028) | 4.136329 / 2.077655 (2.058675) | 2.147134 / 1.504120 (0.643014) | 2.009501 / 1.541195 (0.468306) | 2.139499 / 1.468490 (0.671009) | 0.491593 / 4.584777 (-4.093184) | 3.766734 / 3.745712 (0.021022) | 5.652446 / 5.269862 (0.382585) | 3.021654 / 4.565676 (-1.544022) | 0.058458 / 0.424275 (-0.365817) | 0.008271 / 0.007607 (0.000664) | 0.488229 / 0.226044 (0.262184) | 4.861343 / 2.268929 (2.592415) | 2.694142 / 55.444624 (-52.750482) | 2.489130 / 6.876477 (-4.387346) | 2.679376 / 2.142072 (0.537304) | 0.589959 / 4.805227 (-4.215268) | 0.137939 / 6.500664 (-6.362725) | 0.066833 / 0.075469 (-0.008636) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.444871 / 1.841788 (-0.396916) | 22.874961 / 8.074308 (14.800653) | 15.842130 / 10.191392 (5.650738) | 0.175529 / 0.680424 (-0.504895) | 0.019024 / 0.534201 (-0.515177) | 0.406551 / 0.579283 (-0.172732) | 0.430335 / 0.434364 (-0.004029) | 0.475750 / 0.540337 (-0.064587) | 0.624836 / 1.386936 (-0.762100) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006068 / 0.011353 (-0.005285) | 0.003694 / 0.011008 (-0.007315) | 0.080321 / 0.038508 (0.041813) | 0.061738 / 0.023109 (0.038629) | 0.329675 / 0.275898 (0.053777) | 0.364008 / 0.323480 (0.040528) | 0.004722 / 0.007986 (-0.003263) | 0.002857 / 0.004328 (-0.001471) | 0.062447 / 0.004250 (0.058197) | 0.047006 / 0.037052 (0.009953) | 0.335730 / 0.258489 (0.077241) | 0.373047 / 0.293841 (0.079206) | 0.027273 / 0.128546 (-0.101274) | 0.007979 / 0.075646 (-0.067667) | 0.262693 / 0.419271 (-0.156579) | 0.045416 / 0.043533 (0.001883) | 0.340774 / 0.255139 (0.085635) | 0.359667 / 0.283200 (0.076468) | 0.020848 / 0.141683 (-0.120835) | 1.450110 / 1.452155 (-0.002045) | 1.489511 / 1.492716 (-0.003206) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185090 / 0.018006 (0.167084) | 0.429823 / 0.000490 (0.429334) | 0.000703 / 0.000200 (0.000503) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024398 / 0.037411 (-0.013013) | 0.072983 / 0.014526 (0.058457) | 0.084012 / 0.176557 (-0.092544) | 0.146160 / 0.737135 (-0.590975) | 0.084068 / 0.296338 (-0.212270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432204 / 0.215209 (0.216995) | 4.320593 / 2.077655 (2.242939) | 2.261260 / 1.504120 (0.757140) | 2.087148 / 1.541195 (0.545954) | 2.144520 / 1.468490 
(0.676029) | 0.501477 / 4.584777 (-4.083300) | 3.119557 / 3.745712 (-0.626156) | 3.572527 / 5.269862 (-1.697335) | 2.208836 / 4.565676 (-2.356840) | 0.057232 / 0.424275 (-0.367043) | 0.006494 / 0.007607 (-0.001113) | 0.508135 / 0.226044 (0.282091) | 5.090416 / 2.268929 (2.821488) | 2.739800 / 55.444624 (-52.704824) | 2.416105 / 6.876477 (-4.460372) | 2.616037 / 2.142072 (0.473965) | 0.583730 / 4.805227 (-4.221497) | 0.124312 / 6.500664 (-6.376352) | 0.060760 / 0.075469 (-0.014709) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256097 / 1.841788 (-0.585691) | 18.326073 / 8.074308 (10.251765) | 13.859173 / 10.191392 (3.667781) | 0.143639 / 0.680424 (-0.536785) | 0.016649 / 0.534201 (-0.517552) | 0.331671 / 0.579283 (-0.247612) | 0.365370 / 0.434364 (-0.068994) | 0.392753 / 0.540337 (-0.147584) | 0.549302 / 1.386936 (-0.837634) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006054 / 0.011353 (-0.005299) | 0.003641 / 0.011008 (-0.007367) | 0.063109 / 0.038508 (0.024601) | 0.060482 / 0.023109 (0.037372) | 0.404047 / 0.275898 (0.128149) | 0.425436 / 0.323480 (0.101956) | 0.004603 / 0.007986 (-0.003382) | 0.002905 / 0.004328 (-0.001423) | 0.063207 / 0.004250 (0.058956) | 0.048248 / 0.037052 (0.011196) | 0.404325 / 0.258489 (0.145836) | 0.432652 / 0.293841 (0.138811) | 0.027630 / 0.128546 (-0.100916) | 0.008062 / 0.075646 (-0.067584) | 0.068367 / 0.419271 (-0.350905) | 0.042169 / 0.043533 (-0.001364) | 0.384903 / 0.255139 (0.129764) | 0.418617 / 0.283200 (0.135417) | 0.020767 / 0.141683 (-0.120915) | 1.463606 / 1.452155 (0.011451) | 1.512081 / 1.492716 (0.019365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229601 / 0.018006 (0.211594) | 0.417878 / 0.000490 (0.417388) | 0.000373 / 0.000200 (0.000173) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026354 / 0.037411 (-0.011057) | 0.078100 / 0.014526 (0.063574) | 0.087122 / 0.176557 (-0.089434) | 0.140017 / 0.737135 (-0.597118) | 0.089923 / 0.296338 (-0.206415) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422405 / 0.215209 (0.207196) | 4.237383 / 2.077655 (2.159728) | 2.161104 / 1.504120 (0.656984) | 1.982337 / 1.541195 (0.441142) | 2.050216 / 1.468490 (0.581726) | 0.499281 / 4.584777 (-4.085496) | 2.996953 / 3.745712 (-0.748759) | 5.027069 / 5.269862 (-0.242792) | 2.804703 / 4.565676 (-1.760974) | 0.057707 / 0.424275 (-0.366568) | 0.006809 / 0.007607 (-0.000798) | 0.495196 / 0.226044 (0.269152) | 4.946593 / 2.268929 (2.677665) | 2.598965 / 55.444624 (-52.845660) | 2.349871 / 6.876477 (-4.526606) | 2.451665 / 2.142072 (0.309593) | 0.592314 / 4.805227 (-4.212913) | 0.125685 / 6.500664 (-6.374979) | 0.063252 / 0.075469 (-0.012217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.325422 / 1.841788 (-0.516366) | 18.521059 / 8.074308 (10.446751) | 14.046757 / 10.191392 (3.855365) | 0.133009 / 0.680424 (-0.547415) | 0.017097 / 0.534201 (-0.517104) | 0.339804 / 0.579283 (-0.239479) | 0.345464 / 0.434364 (-0.088900) | 0.387623 / 0.540337 (-0.152714) | 0.519880 / 1.386936 (-0.867056) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008671 / 0.011353 (-0.002682) | 0.004681 / 0.011008 (-0.006327) | 0.107517 / 0.038508 (0.069008) | 0.078846 / 0.023109 (0.055737) | 0.449745 / 0.275898 (0.173847) | 0.504075 / 0.323480 (0.180596) | 0.005837 / 0.007986 (-0.002148) | 0.004031 / 0.004328 (-0.000297) | 0.092021 / 0.004250 (0.087771) | 0.065954 / 0.037052 (0.028902) | 0.442082 / 0.258489 (0.183593) | 0.529349 / 0.293841 (0.235508) | 0.052527 / 0.128546 (-0.076019) | 0.013854 / 0.075646 (-0.061792) | 0.367315 / 0.419271 (-0.051956) | 0.068731 / 0.043533 (0.025199) | 0.494733 / 0.255139 (0.239594) | 0.472801 / 0.283200 (0.189601) | 0.036791 / 0.141683 (-0.104892) | 1.877648 / 1.452155 (0.425493) | 1.928399 / 1.492716 (0.435683) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231910 / 0.018006 (0.213904) | 0.553464 / 0.000490 (0.552974) | 0.011915 / 0.000200 (0.011715) | 0.000378 / 0.000054 (0.000324) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028232 / 0.037411 (-0.009179) | 0.091441 / 0.014526 (0.076916) | 0.110394 / 0.176557 (-0.066162) | 0.187638 / 0.737135 (-0.549497) | 0.111810 / 0.296338 (-0.184529) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.599987 / 0.215209 (0.384778) | 6.008709 / 2.077655 (3.931054) | 2.518769 / 1.504120 (1.014650) | 2.197029 / 1.541195 (0.655834) | 2.217165 / 1.468490 
(0.748675) | 0.894939 / 4.584777 (-3.689837) | 5.001217 / 3.745712 (1.255505) | 4.636482 / 5.269862 (-0.633379) | 3.237613 / 4.565676 (-1.328063) | 0.104227 / 0.424275 (-0.320048) | 0.008504 / 0.007607 (0.000897) | 0.750190 / 0.226044 (0.524145) | 7.514571 / 2.268929 (5.245642) | 3.358003 / 55.444624 (-52.086621) | 2.585649 / 6.876477 (-4.290827) | 2.731129 / 2.142072 (0.589056) | 1.088828 / 4.805227 (-3.716400) | 0.217308 / 6.500664 (-6.283356) | 0.076410 / 0.075469 (0.000941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.620087 / 1.841788 (-0.221701) | 23.145743 / 8.074308 (15.071435) | 20.583403 / 10.191392 (10.392011) | 0.225467 / 0.680424 (-0.454956) | 0.029063 / 0.534201 (-0.505138) | 0.480563 / 0.579283 (-0.098720) | 0.539083 / 0.434364 (0.104719) | 0.563787 / 0.540337 (0.023449) | 0.782902 / 1.386936 (-0.604034) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010113 / 0.011353 (-0.001239) | 0.004997 / 0.011008 (-0.006011) | 0.082974 / 0.038508 (0.044466) | 0.090375 / 0.023109 (0.067266) | 0.440273 / 0.275898 (0.164375) | 0.476939 / 0.323480 (0.153459) | 0.005955 / 0.007986 (-0.002031) | 0.004375 / 0.004328 (0.000046) | 0.080459 / 0.004250 (0.076209) | 0.061787 / 0.037052 (0.024734) | 0.477211 / 0.258489 (0.218722) | 0.487164 / 0.293841 (0.193323) | 0.054198 / 0.128546 (-0.074348) | 0.013945 / 0.075646 (-0.061701) | 0.093006 / 0.419271 (-0.326266) | 0.062685 / 0.043533 (0.019152) | 0.461373 / 0.255139 (0.206234) | 0.475766 / 0.283200 (0.192567) | 0.032059 / 0.141683 (-0.109623) | 1.857989 / 1.452155 (0.405834) | 1.837993 / 1.492716 (0.345277) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243048 / 0.018006 (0.225042) | 0.535850 / 0.000490 (0.535360) | 0.007204 / 0.000200 (0.007004) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032584 / 0.037411 (-0.004827) | 0.098151 / 0.014526 (0.083625) | 0.109691 / 0.176557 (-0.066866) | 0.172803 / 0.737135 (-0.564333) | 0.110469 / 0.296338 (-0.185869) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.635086 / 0.215209 (0.419877) | 6.500864 / 2.077655 (4.423210) | 2.996727 / 1.504120 (1.492607) | 2.537218 / 1.541195 (0.996023) | 2.572310 / 1.468490 (1.103820) | 0.870868 / 4.584777 (-3.713909) | 4.989744 / 3.745712 (1.244032) | 4.422174 / 5.269862 (-0.847687) | 2.935874 / 4.565676 (-1.629803) | 0.097118 / 0.424275 (-0.327157) | 0.009360 / 0.007607 (0.001753) | 0.790447 / 0.226044 (0.564403) | 7.859519 / 2.268929 (5.590591) | 3.975616 / 55.444624 (-51.469009) | 3.018271 / 6.876477 (-3.858206) | 3.111173 / 2.142072 (0.969101) | 1.085577 / 4.805227 (-3.719651) | 0.225719 / 6.500664 (-6.274945) | 0.080576 / 0.075469 (0.005107) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.802284 / 1.841788 (-0.039504) | 23.487921 / 8.074308 (15.413613) | 20.595171 / 10.191392 (10.403779) | 0.196610 / 0.680424 (-0.483814) | 0.027483 / 0.534201 (-0.506718) | 0.485840 / 0.579283 (-0.093443) | 0.542661 / 0.434364 (0.108297) | 0.580602 / 0.540337 (0.040265) | 0.768195 / 1.386936 (-0.618741) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1863
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1863/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1863/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1863/events
|
https://github.com/huggingface/datasets/issues/1863
| 806,171,311 |
MDU6SXNzdWU4MDYxNzEzMTE=
| 1,863 |
Add WikiCREM
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
open
| false | null | 2 |
2021-02-11T08:16:00Z
|
2021-03-07T07:27:13Z
| null | null |
## Adding a Dataset
- **Name:** WikiCREM
- **Description:** A large unsupervised corpus for coreference resolution.
- **Paper:** https://arxiv.org/abs/1905.06290
- **Github repo:** https://github.com/vid-koci/bert-commonsense
- **Data:** https://ora.ox.ac.uk/objects/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3
- **Motivation:** Coreference resolution, common sense reasoning
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1863/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1863/timeline
| null | null | null | null | false |
[
"Hi @NielsRogge I would like to work on this dataset.\r\n\r\nThanks!",
"Hi @udapy, are you working on this?"
] |
https://api.github.com/repos/huggingface/datasets/issues/5673
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5673/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5673/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5673/events
|
https://github.com/huggingface/datasets/pull/5673
| 1,641,066,352 |
PR_kwDODunzps5M6wc3
| 5,673 |
Pass down storage options
|
[] |
closed
| false | null | 5 |
2023-03-26T20:09:37Z
|
2023-03-28T15:03:38Z
|
2023-03-28T14:54:17Z
| null |
Remove implementation-specific kwargs from `file_utils.fsspec_get` and `file_utils.fsspec_head`, instead allowing them to be passed down via `storage_options`. This fixes an issue where s3fs did not recognize a timeout arg, as well as the issue mentioned in https://github.com/huggingface/datasets/issues/5281, by allowing users to pass `storage_options` all the way down from `datasets.load_dataset` to support implementation-specific credentials.
This supports something like the following to provide credentials explicitly instead of relying on boto's methods of locating them:
```python
load_dataset(..., data_files=["s3://..."], storage_options={"profile": "..."})
```
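As an illustration of what ends up being configured, here is a minimal sketch of the fsspec side rather than the `datasets` internals (the profile and bucket names below are made up):
```python
import fsspec

# `storage_options` are forwarded as keyword arguments to the fsspec filesystem
# for the URL's protocol, so anything the backend (here s3fs) accepts can be passed.
fs = fsspec.filesystem("s3", profile="my-profile")  # hypothetical AWS profile
with fs.open("s3://my-bucket/train.jsonl") as f:    # hypothetical bucket/file
    print(f.readline())
```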
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5673/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5673/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/5673.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5673",
"merged_at": "2023-03-28T14:54:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5673.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5673"
}
| true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> download_and_prepare is not called when streaming a dataset, so we may need to have storage_options in the DatasetBuilder.__init__ ? This way it could also be passed later to as_streaming_dataset and the StreamingDownloadManager\r\n\r\n> Currently the storage_options parameter in download_and_prepare are for the target filesystem where the dataset must be downloaded and prepared as arrow files\r\n\r\nAh, I noted this when looking for ways to plumb down `storage_options` although I think I was looking at adding to `BuilderConfig`. The `DatasetBuilder` constructor looks more appropriate for this, will get that added in a future commit",
"Noting as experimental SGTM. The only tests I can think of to add at the moment would be mocks that assert the storage options get passed all the way down using `mock.assert_called_with` but if Hugging Face has some S3/GCS buckets for testing, maybe those would be better in a future PR. Let me know what you think",
"I think adding tests with the mockfs fixture will do the job. Tests and docs can be added when request_etag and is_remote_url support fsspec (right now they would fail with mockfs).\r\n\r\nLet's see in a subsequent PR, this is exciting ! :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009217 / 0.011353 (-0.002136) | 0.006275 / 0.011008 (-0.004733) | 0.124361 / 0.038508 (0.085853) | 0.035680 / 0.023109 (0.012570) | 0.395255 / 0.275898 (0.119357) | 0.426104 / 0.323480 (0.102624) | 0.006822 / 0.007986 (-0.001163) | 0.004467 / 0.004328 (0.000138) | 0.099404 / 0.004250 (0.095153) | 0.051919 / 0.037052 (0.014867) | 0.388286 / 0.258489 (0.129797) | 0.426361 / 0.293841 (0.132520) | 0.053100 / 0.128546 (-0.075446) | 0.019453 / 0.075646 (-0.056194) | 0.433139 / 0.419271 (0.013867) | 0.063240 / 0.043533 (0.019707) | 0.381175 / 0.255139 (0.126036) | 0.411686 / 0.283200 (0.128487) | 0.104843 / 0.141683 (-0.036840) | 1.853582 / 1.452155 (0.401427) | 1.935644 / 1.492716 (0.442928) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218969 / 0.018006 (0.200963) | 0.515011 / 0.000490 (0.514522) | 0.004017 / 0.000200 (0.003818) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028975 / 0.037411 (-0.008437) | 0.125239 / 0.014526 (0.110713) | 0.131371 / 0.176557 (-0.045185) | 0.203864 / 0.737135 (-0.533271) | 0.140784 / 0.296338 (-0.155554) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.620701 / 0.215209 (0.405492) | 6.263557 / 2.077655 (4.185903) | 2.510058 / 1.504120 (1.005938) | 2.085892 / 1.541195 (0.544697) | 2.170362 / 1.468490 
(0.701872) | 1.325600 / 4.584777 (-3.259177) | 5.583355 / 3.745712 (1.837642) | 5.092791 / 5.269862 (-0.177071) | 2.814766 / 4.565676 (-1.750911) | 0.153568 / 0.424275 (-0.270707) | 0.014850 / 0.007607 (0.007243) | 0.787011 / 0.226044 (0.560967) | 7.948813 / 2.268929 (5.679885) | 3.320831 / 55.444624 (-52.123793) | 2.526327 / 6.876477 (-4.350150) | 2.691651 / 2.142072 (0.549579) | 1.521199 / 4.805227 (-3.284028) | 0.269738 / 6.500664 (-6.230926) | 0.082959 / 0.075469 (0.007490) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.740056 / 1.841788 (-0.101732) | 17.699732 / 8.074308 (9.625424) | 22.450689 / 10.191392 (12.259297) | 0.229350 / 0.680424 (-0.451073) | 0.027486 / 0.534201 (-0.506715) | 0.536153 / 0.579283 (-0.043130) | 0.608166 / 0.434364 (0.173802) | 0.629144 / 0.540337 (0.088807) | 0.732671 / 1.386936 (-0.654265) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010147 / 0.011353 (-0.001206) | 0.006484 / 0.011008 (-0.004524) | 0.098664 / 0.038508 (0.060156) | 0.036400 / 0.023109 (0.013291) | 0.432895 / 0.275898 (0.156997) | 0.466433 / 0.323480 (0.142954) | 0.008102 / 0.007986 (0.000117) | 0.004554 / 0.004328 (0.000225) | 0.100466 / 0.004250 (0.096216) | 0.054066 / 0.037052 (0.017013) | 0.439177 / 0.258489 (0.180688) | 0.502907 / 0.293841 (0.209066) | 0.059210 / 0.128546 (-0.069336) | 0.020220 / 0.075646 (-0.055426) | 0.124671 / 0.419271 (-0.294600) | 0.064278 / 0.043533 (0.020746) | 0.435659 / 0.255139 (0.180520) | 0.459670 / 0.283200 (0.176471) | 0.115574 / 0.141683 (-0.026109) | 1.826360 / 1.452155 (0.374205) | 1.943199 / 1.492716 (0.450483) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238463 / 0.018006 (0.220457) | 0.534889 / 0.000490 (0.534400) | 0.000404 / 0.000200 (0.000204) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033210 / 0.037411 (-0.004201) | 0.133529 / 0.014526 (0.119003) | 0.143813 / 0.176557 (-0.032743) | 0.213079 / 0.737135 (-0.524056) | 0.148427 / 0.296338 (-0.147912) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.656819 / 0.215209 (0.441610) | 6.414860 / 2.077655 (4.337205) | 2.756182 / 1.504120 (1.252062) | 2.405268 / 1.541195 (0.864073) | 2.436418 / 1.468490 (0.967928) | 1.289828 / 4.584777 (-3.294949) | 5.572731 / 3.745712 (1.827018) | 3.185432 / 5.269862 (-2.084429) | 2.093220 / 4.565676 (-2.472457) | 0.144817 / 0.424275 (-0.279458) | 0.015674 / 0.007607 (0.008067) | 0.801238 / 0.226044 (0.575194) | 7.955925 / 2.268929 (5.686996) | 3.605670 / 55.444624 (-51.838955) | 2.837568 / 6.876477 (-4.038908) | 2.873848 / 2.142072 (0.731775) | 1.493512 / 4.805227 (-3.311715) | 0.266251 / 6.500664 (-6.234413) | 0.082417 / 0.075469 (0.006948) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.608685 / 1.841788 (-0.233103) | 18.587875 / 8.074308 (10.513567) | 21.786119 / 10.191392 (11.594727) | 0.261748 / 0.680424 (-0.418675) | 0.026228 / 0.534201 (-0.507973) | 0.553538 / 0.579283 (-0.025745) | 0.599780 / 0.434364 (0.165416) | 0.665663 / 0.540337 (0.125325) | 0.792785 / 1.386936 (-0.594151) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5033
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5033/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5033/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5033/events
|
https://github.com/huggingface/datasets/pull/5033
| 1,388,842,236 |
PR_kwDODunzps4_wGSE
| 5,033 |
Remove redundant code from some dataset module factories
|
[] |
closed
| false | null | 1 |
2022-09-28T07:06:26Z
|
2022-09-28T16:57:51Z
|
2022-09-28T16:55:12Z
| null |
This PR removes some redundant code introduced by mistake after a refactoring in:
- #4576
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5033/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5033/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/5033.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5033",
"merged_at": "2022-09-28T16:55:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5033.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5033"
}
| true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1467
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1467/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1467/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1467/events
|
https://github.com/huggingface/datasets/pull/1467
| 761,557,290 |
MDExOlB1bGxSZXF1ZXN0NTM2MjA3NDcx
| 1,467 |
adding snow_simplified_japanese_corpus
|
[] |
closed
| false | null | 2 |
2020-12-10T19:45:03Z
|
2020-12-17T13:22:48Z
|
2020-12-17T11:25:34Z
| null |
Adding simplified Japanese corpus "SNOW T15" and "SNOW T23".
They contain original Japanese, simplified Japanese, and original English (the original text is taken from an en-ja translation corpus). Hence, they can be used not only for Japanese simplification but also for en-ja translation.
- http://www.jnlp.org/SNOW/T15
- http://www.jnlp.org/SNOW/T23
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1467/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1467/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/1467.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1467",
"merged_at": "2020-12-17T11:25:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1467.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1467"
}
| true |
[
"merging since the CI is fixed on master",
"Thank you for the updates and merging!"
] |
https://api.github.com/repos/huggingface/datasets/issues/945
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/945/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/945/comments
|
https://api.github.com/repos/huggingface/datasets/issues/945/events
|
https://github.com/huggingface/datasets/pull/945
| 754,273,920 |
MDExOlB1bGxSZXF1ZXN0NTMwMjAyMDM1
| 945 |
Adding Babi dataset - English version
|
[] |
closed
| false | null | 1 |
2020-12-01T10:35:36Z
|
2020-12-04T15:43:05Z
|
2020-12-04T15:42:54Z
| null |
Adding the English version of bAbI.
Samples are taken from ParlAI for consistency with the main users at the moment.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/945/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/945/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/945.diff",
"html_url": "https://github.com/huggingface/datasets/pull/945",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/945.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/945"
}
| true |
[
"Replaced by #1126"
] |
https://api.github.com/repos/huggingface/datasets/issues/211
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/211/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/211/comments
|
https://api.github.com/repos/huggingface/datasets/issues/211/events
|
https://github.com/huggingface/datasets/issues/211
| 626,565,994 |
MDU6SXNzdWU2MjY1NjU5OTQ=
| 211 |
[Arrow writer, Trivia_qa] Could not convert TagMe with type str: converting to null type
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false | null | 7 |
2020-05-28T14:38:14Z
|
2020-07-23T10:15:16Z
|
2020-07-23T10:15:16Z
| null |
Running the following code
```
import nlp
ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards...
ds.map(lambda x: x, load_from_cache_file=False)
```
triggers an `ArrowInvalid: Could not convert TagMe with type str: converting to null type` error.
On the other hand if we remove a certain column of `trivia_qa` which seems responsible for the bug, it works:
```
import nlp
ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards...
ds.map(lambda x: x, remove_columns=["entity_pages"], load_from_cache_file=False)
```
It seems quite hard to debug what's going on here... @lhoestq @thomwolf - do you have a good first guess as to what the problem could be?
**Note** BTW: I think this could be a good test to check that the datasets work correctly: Take a tiny portion of the dataset and check that it can be written correctly.
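For reference, here is a minimal, self-contained sketch of the kind of Arrow type inference that can produce this message (a hypothetical illustration, not the actual trivia_qa data):
```python
import pyarrow as pa

# If a type is inferred from a sample whose list field is empty, the column
# becomes list<null>, and later string values cannot be converted to it.
null_typed = pa.array([[]])
print(null_typed.type)  # list<item: null>
try:
    pa.array([["TagMe"]], type=null_typed.type)
except pa.ArrowInvalid as e:
    print(e)  # same class of error as above
```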
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/211/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/211/timeline
| null |
completed
| null | null | false |
[
"Here the full error trace:\r\n\r\n```\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-1-7aaf3f011358> in <module>\r\n 1 import nlp\r\n 2 ds = nlp.load_dataset(\"trivia_qa\", \"rc\", split=\"validation[:1%]\") # this might take 2.3 min to download but it's cached afterwards...\r\n----> 3 ds.map(lambda x: x, load_from_cache_file=False)\r\n\r\n~/python_bin/nlp/arrow_dataset.py in map(self, function, with_indices, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, arrow_schema, disable_nullable)\r\n 549\r\n 550 if update_data:\r\n--> 551 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\r\n 552\r\n 553 # Create new Dataset from buffer or file\r\n\r\n~/python_bin/nlp/arrow_writer.py in finalize(self, close_stream)\r\n 182 def finalize(self, close_stream=True):\r\n 183 if self.pa_writer is not None:\r\n--> 184 self.write_on_file()\r\n 185 self.pa_writer.close()\r\n 186 if close_stream:\r\n\r\n~/python_bin/nlp/arrow_writer.py in write_on_file(self)\r\n 104 \"\"\"\r\n 105 if self.current_rows:\r\n--> 106 pa_array = pa.array(self.current_rows, type=self._type)\r\n 107 first_example = pa.array(self.current_rows[0:1], type=self._type)[0]\r\n 108 # Sanity check\r\n\r\n~/hugging_face/venv_3.7/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array()\r\n\r\n~/hugging_face/venv_3.7/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\n~/hugging_face/venv_3.7/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Could not convert TagMe with type str: converting to null type\r\n```",
"Actually thinking a bit more about it, it's probably a data sample that is not correct in `trivia_qa`. But I'm a bit surprised though that we managed to write it in .arrow format and now cannot write it anymore after an \"identity\" mapping.",
"I don't have this error :x",
"Interesting, maybe I have a very old cache of trivia_qa...thanks for checking",
"I'm running it right now on colab to double check",
"Actually, I know what the problem is...I'm quite sure it's a bug. Here we take some test inputs: https://github.com/huggingface/nlp/blob/0e0ef12c14d2175e0b0bd7d8aa814b09e2cd7e1f/src/nlp/arrow_dataset.py#L472\r\n\r\nIt might be that in the test inputs, a `Sequence` type value is an emtpy list. So in my case I have `ds[0][\"entity_pages'][\"wiki_context\"] = []`. => this leads to an `arrow_schema` equal to `null` for `[\"entity_pages'][\"wiki_context\"]` => see line: https://github.com/huggingface/nlp/blob/0e0ef12c14d2175e0b0bd7d8aa814b09e2cd7e1f/src/nlp/arrow_dataset.py#L501 instead of list of string which it should for other examples. \r\n\r\nGuess it's an edge case, but it can happen.",
"Good point, I think the schema should be infered at the writing stage where we have a `writer_batch_size` number of examples (typically 10k) so it's even less likely to run into this scenario."
] |
https://api.github.com/repos/huggingface/datasets/issues/1208
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1208/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1208/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1208/events
|
https://github.com/huggingface/datasets/pull/1208
| 757,961,368 |
MDExOlB1bGxSZXF1ZXN0NTMzMjIyMzQ4
| 1,208 |
Add HKCanCor
|
[] |
closed
| false | null | 0 |
2020-12-06T16:14:43Z
|
2020-12-06T20:23:17Z
|
2020-12-06T20:21:54Z
| null |
(Apologies, didn't manage the branches properly and the PR got too messy. Going to open a new PR with everything in order)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1208/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1208/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/1208.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1208",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1208.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1208"
}
| true |
[] |
https://api.github.com/repos/huggingface/datasets/issues/2749
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2749/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2749/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2749/events
|
https://github.com/huggingface/datasets/issues/2749
| 958,968,748 |
MDU6SXNzdWU5NTg5Njg3NDg=
| 2,749 |
Raise a proper exception when trying to stream a dataset that requires to manually download files
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false | null | 2 |
2021-08-03T10:26:27Z
|
2021-08-09T08:53:35Z
|
2021-08-04T11:36:30Z
| null |
## Describe the bug
At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it fails.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("reclor", streaming=True)
```
## Expected results
Ideally: raise a specific exception, something like `ManualDownloadError`.
Or at least give the reason in the message, as when we load in normal mode:
```python
from datasets import load_dataset
dataset = load_dataset("reclor")
```
```
AssertionError: The dataset reclor with config default requires manual data.
Please follow the manual download instructions: to use ReClor you need to download it manually. Please go to its homepage (http://whyu.me/reclor/) fill the google
form and you will receive a download link and a password to extract it.Please extract all files in one folder and use the path folder in datasets.load_dataset('reclor', data_dir='path/to/folder/folder_name')
.
Manual data can be loaded with `datasets.load_dataset(reclor, data_dir='<path/to/manual/data>')
```
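As a side note, manual-download datasets can already be identified programmatically via the builder's `manual_download_instructions` attribute (a minimal sketch, not a proposed fix):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("reclor")
if builder.manual_download_instructions is not None:
    # The dataset needs manually downloaded files, so streaming cannot work
    # without pointing data_dir at the extracted archive.
    print(builder.manual_download_instructions)
```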
## Actual results
```
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
## Environment info
- `datasets` version: 1.11.0
- Platform: macOS-11.5-x86_64-i386-64bit
- Python version: 3.8.11
- PyArrow version: 4.0.1
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2749/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2749/timeline
| null |
completed
| null | null | false |
[
"Hi @severo, thanks for reporting.\r\n\r\nAs discussed, datasets requiring manual download should be:\r\n- programmatically identifiable\r\n- properly handled with more clear error message when trying to load them with streaming\r\n\r\nIn relation with programmatically identifiability, note that for datasets requiring manual download, their builder have a property `manual_download_instructions` which is not None:\r\n```python\r\n# Dataset requiring manual download:\r\nbuilder.manual_download_instructions is not None\r\n```",
"Thanks @albertvillanova "
] |
https://api.github.com/repos/huggingface/datasets/issues/5902
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5902/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5902/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5902/events
|
https://github.com/huggingface/datasets/pull/5902
| 1,727,342,194 |
PR_kwDODunzps5RbPS9
| 5,902 |
Fix `Overview.ipynb` & detach Jupyter Notebooks from `datasets` repository
|
[] |
closed
| false | null | 13 |
2023-05-26T10:25:01Z
|
2023-07-25T13:50:06Z
|
2023-07-25T13:38:33Z
| null |
## What's in this PR?
This PR solves #5887, since there was a mismatch between the tokenizer and the model used: the tokenizer was `bert-base-cased` while the model was `distilbert-base-cased`, both for the PyTorch and TensorFlow alternatives. Since DistilBERT doesn't use/need `token_type_ids`, the `**batch` unpacking was failing, as the batch contained `input_ids`, `attention_mask`, `token_type_ids`, `start_positions`, and `end_positions`, and `token_type_ids` is not accepted by the model.
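As an illustration of the underlying idea (only a sketch, the exact notebook cells may differ): keeping the tokenizer and the model on the same checkpoint means the tokenizer only produces inputs the model accepts.
```python
from transformers import AutoTokenizer

# DistilBERT tokenizers do not emit `token_type_ids`, so a matching checkpoint
# avoids passing unexpected keys when unpacking the batch with `**batch`.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
batch = tokenizer("What is a dataset?", "A collection of examples.", return_tensors="pt")
print(batch.keys())  # no token_type_ids here
```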
Besides that, at the end `seqeval` was being used to evaluate the model predictions, and just `evaluate` was being installed, so I've also included the `seqeval` installation.
Finally, I've re-run everything in Google Colab, and every cell was successfully executed!
## What was done on top of the original PR?
Based on the comments from @mariosasko and @stevhliu, I've updated the contents of this PR to also review `quickstart.mdx` and update what was needed. Besides that, we may eventually move the `Overview.ipynb` notebook to `huggingface/notebooks`, following @stevhliu's suggestions.
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5902/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5902/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/5902.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5902",
"merged_at": "2023-07-25T13:38:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5902.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5902"
}
| true |
[
"Random fact: previous run was showing that the Hub was hosting 13336 datasets, while the most recent run shows 36662 👀🎉",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks! \r\n\r\nHowever, I think we should stop linking this notebook and use the notebook version of the Quickstart doc page instead of it for easier maintenance (we would have the \"Open in Colab\" button in the Quickstart doc as Transformers [does](https://huggingface.co/docs/transformers/quicktour)). \r\n\r\n@stevhliu should be able to help with this. If I'm not mistaken, this can be done by adding the `[[open in colab]]` marker to the doc page.\r\n\r\nAlso, if some useful info from the Overview notebook is not in the docs, feel free to add it so we don't lose it 🙂.",
"Cool, makes sense @mariosasko, then I'll check both notebooks and see whether there's something in `Overview.ipynb` worth including in the `docs/source/quickstart.mdx` and remove `Overview.ipynb` and update references in favour of `docs/source/quickstart.mdx`\r\n\r\nAre you OK if I do that @stevhliu @mariosasko? Thanks 🤗 ",
"For the moment I've just updated the `quickstart.mdx` to be more similar to [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx), but regarding the `Overview.ipynb` notebook I was planning to create a PR in https://github.com/huggingface/notebooks to add it there, does that make sense @stevhliu? And then to create a `README.md` in this repository in `notebooks/` as `transformers` does to point to the related notebooks hosted in https://github.com/huggingface/notebooks, WDYT? 🤗 ",
"Hi @stevhliu thanks for the feedback! Already applied your suggestions, I'll also add the pointers to both audio and image datasets in the \"What's next\" section.\r\n\r\nBesides that, let me know if I can help with the notebook being hosted in `huggingface/notebooks` instead, and I'll happily do so!",
"Thanks a lot for the detailed feedback @mariosasko, I'll apply the changes today!",
"> Besides that, let me know if I can help with the notebook being hosted in `huggingface/notebooks` instead, and I'll happily do so!\r\n\r\nAwesome! If you're up for it, I think you can go ahead and open a PR with the changes I've outlined [here](https://github.com/huggingface/datasets/pull/5902#pullrequestreview-1475236887) to add the notebook building workflow. ",
"Hi @stevhliu @mariosasko, sorry for the delay I had a busy week, I'll tackle this either today or tomorrow to ideally close it before the weekend, thanks again for the help and guidance 😄 ",
"Hi guys @stevhliu @mariosasko sorry for the delay! I've resolved all the comments and applied your reviews 👍🏻 Let me know if this works and we can finally close this PR, thanks for the help in the meantime!",
"> Thanks for iterating on this and wrapping it up! 🤗\r\n\r\nNo need to! Always a pleasure to collaborate with you guys 🤗 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009814 / 0.011353 (-0.001539) | 0.004632 / 0.011008 (-0.006376) | 0.103059 / 0.038508 (0.064551) | 0.090277 / 0.023109 (0.067167) | 0.389344 / 0.275898 (0.113446) | 0.464536 / 0.323480 (0.141056) | 0.008196 / 0.007986 (0.000210) | 0.003872 / 0.004328 (-0.000457) | 0.081912 / 0.004250 (0.077662) | 0.073197 / 0.037052 (0.036145) | 0.407545 / 0.258489 (0.149056) | 0.458035 / 0.293841 (0.164194) | 0.037485 / 0.128546 (-0.091061) | 0.010141 / 0.075646 (-0.065505) | 0.365998 / 0.419271 (-0.053273) | 0.065218 / 0.043533 (0.021685) | 0.414091 / 0.255139 (0.158952) | 0.435617 / 0.283200 (0.152417) | 0.028850 / 0.141683 (-0.112833) | 1.883510 / 1.452155 (0.431355) | 1.979986 / 1.492716 (0.487269) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236623 / 0.018006 (0.218616) | 0.467128 / 0.000490 (0.466638) | 0.008273 / 0.000200 (0.008074) | 0.000699 / 0.000054 (0.000645) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033061 / 0.037411 (-0.004350) | 0.101381 / 0.014526 (0.086856) | 0.110862 / 0.176557 (-0.065695) | 0.180982 / 0.737135 (-0.556154) | 0.113791 / 0.296338 (-0.182548) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450805 / 0.215209 (0.235596) | 4.478374 / 2.077655 (2.400719) | 2.190814 / 1.504120 (0.686694) | 1.976726 / 1.541195 (0.435532) | 2.078527 / 1.468490 
(0.610037) | 0.569150 / 4.584777 (-4.015627) | 4.557790 / 3.745712 (0.812078) | 3.794964 / 5.269862 (-1.474898) | 2.555689 / 4.565676 (-2.009987) | 0.067380 / 0.424275 (-0.356896) | 0.008741 / 0.007607 (0.001134) | 0.536913 / 0.226044 (0.310868) | 5.364588 / 2.268929 (3.095659) | 2.725602 / 55.444624 (-52.719022) | 2.332012 / 6.876477 (-4.544465) | 2.560550 / 2.142072 (0.418477) | 0.672490 / 4.805227 (-4.132738) | 0.153629 / 6.500664 (-6.347035) | 0.070583 / 0.075469 (-0.004886) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.620083 / 1.841788 (-0.221704) | 23.094248 / 8.074308 (15.019939) | 17.797625 / 10.191392 (7.606233) | 0.167993 / 0.680424 (-0.512430) | 0.021151 / 0.534201 (-0.513050) | 0.470216 / 0.579283 (-0.109067) | 0.515492 / 0.434364 (0.081128) | 0.666359 / 0.540337 (0.126021) | 0.772928 / 1.386936 (-0.614008) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007853 / 0.011353 (-0.003500) | 0.004627 / 0.011008 (-0.006381) | 0.079803 / 0.038508 (0.041295) | 0.091562 / 0.023109 (0.068453) | 0.488537 / 0.275898 (0.212639) | 0.579207 / 0.323480 (0.255728) | 0.006579 / 0.007986 (-0.001406) | 0.003946 / 0.004328 (-0.000382) | 0.080224 / 0.004250 (0.075973) | 0.074499 / 0.037052 (0.037446) | 0.488292 / 0.258489 (0.229803) | 0.569246 / 0.293841 (0.275405) | 0.039994 / 0.128546 (-0.088553) | 0.012867 / 0.075646 (-0.062780) | 0.092563 / 0.419271 (-0.326709) | 0.061656 / 0.043533 (0.018124) | 0.488271 / 0.255139 (0.233132) | 0.550651 / 0.283200 (0.267451) | 0.032078 / 0.141683 (-0.109605) | 1.874440 / 1.452155 (0.422286) | 1.973480 / 1.492716 (0.480763) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238789 / 0.018006 (0.220782) | 0.460237 / 0.000490 (0.459748) | 0.000500 / 0.000200 (0.000300) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034961 / 0.037411 (-0.002450) | 0.102696 / 0.014526 (0.088170) | 0.117772 / 0.176557 (-0.058784) | 0.183865 / 0.737135 (-0.553270) | 0.119216 / 0.296338 (-0.177122) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.528894 / 0.215209 (0.313685) | 5.303954 / 2.077655 (3.226300) | 2.897505 / 1.504120 (1.393385) | 2.475898 / 1.541195 (0.934703) | 2.553479 / 1.468490 (1.084988) | 0.625847 / 4.584777 (-3.958930) | 4.656595 / 3.745712 (0.910882) | 3.745170 / 5.269862 (-1.524691) | 2.470922 / 4.565676 (-2.094755) | 0.066908 / 0.424275 (-0.357367) | 0.009172 / 0.007607 (0.001565) | 0.572695 / 0.226044 (0.346650) | 5.753428 / 2.268929 (3.484499) | 3.033226 / 55.444624 (-52.411398) | 2.677280 / 6.876477 (-4.199197) | 2.908857 / 2.142072 (0.766785) | 0.681595 / 4.805227 (-4.123632) | 0.154602 / 6.500664 (-6.346062) | 0.072608 / 0.075469 (-0.002861) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.738550 / 1.841788 (-0.103237) | 25.090637 / 8.074308 (17.016329) | 18.371478 / 10.191392 (8.180086) | 0.207357 / 0.680424 (-0.473067) | 0.023396 / 0.534201 (-0.510805) | 0.505663 / 0.579283 (-0.073620) | 0.503137 / 0.434364 (0.068773) | 0.598015 / 0.540337 (0.057678) | 0.714122 / 1.386936 (-0.672814) |\n\n</details>\n</details>\n\n\n",
"Just as a heads up @mariosasko, the `quickstart.ipynb` Jupyter Notebook has been built at https://github.com/huggingface/notebooks/blob/main/datasets_doc/en/quickstart.ipynb, while the URLs in here point to https://github.com/huggingface/notebooks/blob/main/datasets_doc/quickstart.ipynb instead, should we update that?"
] |
https://api.github.com/repos/huggingface/datasets/issues/3644
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3644/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3644/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3644/events
|
https://github.com/huggingface/datasets/issues/3644
| 1,116,519,670 |
I_kwDODunzps5CjLz2
| 3,644 |
Add a GROUP BY operator
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false | null | 9 |
2022-01-27T16:57:54Z
|
2023-03-14T14:45:59Z
| null | null |
**Is your feature request related to a problem? Please describe.**
Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example:
```python
# features:
# {
# "example_id": datasets.Value("int32"),
# "text": datasets.Value("string")
# }
ds = datasets.Dataset()
def split(examples):
sentences = [text.split(".") for text in examples["text"]]
return {
"example_id": [
example_id
for example_id, sents in zip(examples["example_id"], sentences)
for _ in sents
],
"sentence": [sent for sents in sentences for sent in sents],
"sentence_id": [i for sents in sentences for i in range(len(sents))],
}
split_ds = ds.map(split, batched=True)
def process(examples):
outputs = some_neural_network_that_works_on_sentences(examples["sentence"])
return {"outputs": outputs}
split_ds = split_ds.map(process, batched=True)
```
I have a dataset consisting of texts that I would like to process sentence by sentence in a batched way. Afterwards, I would like to put it back together as it was, merging the outputs together.
**Describe the solution you'd like**
Ideally, it would look something like this:
```python
def join(examples):
order = np.argsort(examples["sentence_id"])
text = ".".join(examples["text"][i] for i in order)
outputs = [examples["outputs"][i] for i in order]
return {"text": text, "outputs": outputs}
ds = split_ds.group_by("example_id", join)
```
**Describe alternatives you've considered**
Right now, we can do this:
```python
def merge(example):
meeting_id = example["example_id"]
parts = split_ds.filter(lambda x: x["example_id"] == meeting_id).sort("segment_no")
return {"outputs": list(parts["outputs"])}
ds = ds.map(merge)
```
Of course, we could process the dataset like this:
```python
def process(example):
outputs = some_neural_network_that_works_on_sentences(example["text"].split("."))
return {"outputs": outputs}
ds = ds.map(process, batched=True)
```
However, that does not allow using an arbitrary batch size and may lead to very inefficient use of resources if the batch size is much larger than the number of sentences in one example.
I would very much appreciate some kind of group by operator to merge examples based on the value of one column.
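In the meantime, a minimal workaround sketch is a round trip through pandas (assuming the grouped data fits in memory; the column names mirror the example above and the values are made up):
```python
import datasets

# toy stand-in for the split dataset above
split_ds = datasets.Dataset.from_dict({
    "example_id": [0, 0, 1, 1, 1],
    "sentence_id": [0, 1, 0, 1, 2],
    "outputs": [0.1, 0.2, 0.3, 0.4, 0.5],
})

# group by example_id, keeping the outputs ordered by sentence_id
df = split_ds.to_pandas()
joined = (
    df.sort_values("sentence_id")
      .groupby("example_id", as_index=False)
      .agg({"outputs": list})
)
ds = datasets.Dataset.from_pandas(joined)
print(ds[0])  # {'example_id': 0, 'outputs': [0.1, 0.2]}
```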
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3644/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3644/timeline
| null | null | null | null | false |
[
"Hi ! At the moment you can use `to_pandas()` to get a pandas DataFrame that supports `group_by` operations (make sure your dataset fits in memory though)\r\n\r\nWe use Arrow as a back-end for `datasets` and it doesn't have native group by (see https://github.com/apache/arrow/issues/2189) unfortunately.\r\n\r\nI just drafted what it could look like to have `group_by` in `datasets`:\r\n```python\r\nfrom datasets import concatenate_datasets\r\n\r\ndef group_by(d, col, join): \r\n \"\"\"from: https://github.com/huggingface/datasets/issues/3644\"\"\"\r\n # Get the indices of each group\r\n groups = {key: [] for key in d.unique(col)} \r\n def create_groups_indices(key, i): \r\n groups[key].append(i) \r\n d.map(create_groups_indices, with_indices=True, input_columns=col) \r\n # Get one dataset object per group\r\n groups = {key: d.select(indices) for key, indices in groups.items()} \r\n # Apply join function\r\n groups = {\r\n key: dataset_group.map(join, batched=True, batch_size=len(dataset_group), remove_columns=d.column_names)\r\n for key, dataset_group in groups.items()\r\n } \r\n # Return concatenation of all the joined groups\r\n return concatenate_datasets(groups.values())\r\n```\r\n\r\nexample of usage:\r\n```python\r\n\r\ndef join(batch): \r\n # take the batch of all the examples of a group, and return a batch with one aggregated example\r\n # (we could aggregate examples into several rows instead of one, if you want)\r\n return {\"total\": [batch[\"i\"]]} \r\n\r\nd = Dataset.from_dict({\r\n \"i\": [i for i in range(50)],\r\n \"group_key\": [i % 4 for i in range(50)],\r\n})\r\nprint(group_by(d, \"group_key\", join))\r\n# total\r\n# 0 [0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48]\r\n# 1 [1, 5, 9, 13, 17, 21, 25, 29, 33, 37, 41, 45, 49]\r\n# 2 [2, 6, 10, 14, 18, 22, 26, 30, 34, 38, 42, 46]\r\n# 3 [3, 7, 11, 15, 19, 23, 27, 31, 35, 39, 43, 47]\r\n```\r\n\r\nLet me know if that helps !\r\n\r\ncc @albertvillanova @mariosasko for visibility",
"@lhoestq As of PyArrow 7.0.0, `pa.Table` has the [`group_by` method](https://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table.group_by), so we should also consider using that function for grouping. ",
"Any update on this?",
"You can use https://github.com/mariosasko/datasets_sql by @mariosasko to go group by operations using SQL queries",
"Hi, I have a similar issue as OP but the suggested solutions do not work for my case. Basically, I process documents through a model to extract the last_hidden_state, using the \"map\" method on a Dataset object, but would like to average the result over a categorical column at the end (i.e. groupby this column).\r\n- A to_pandas() saturates the memory, although it gives me the desired result through a .groupby().apply(np.mean, axis=0) on a smaller use-case,\r\n- The solution posted on Feb 4 is much too slow,\r\n- datasets_sql seems to not like the fact that I'm averaging np.arrays.\r\nSo I'm kinda out of \"non brute force\" options... Any help appreciated",
"> Hi, I have a similar issue as OP but the suggested solutions do not work for my case. Basically, I process documents through a model to extract the last_hidden_state, using the \"map\" method on a Dataset object, but would like to average the result over a categorical column at the end (i.e. groupby this column).\r\n \r\nIf you haven't yet, you could explore using [Polars](https://www.pola.rs/) for this. It's a new DataFrame library written in Rust with Python bindings. It is Pandas like it in many ways ,but does have some biggish differences in syntax/approach so it's definitely not a drop-in replacement. \r\n\r\nPolar's also uses Arrow as a backend but also supports out-of-memory operations; in this case, it's probably easiest to write out your dataset to parquet and then use the polar's `scan_parquet` method (this will lazily read from the parquet file). The thing you get back from that is a `LazyDataFrame` i.e. nothing is loaded into memory until you specify a query and call a `collect` method. \r\n\r\nExample below of doing a groupby on a dataset which definitely wouldn't fit into memory on my machine:\r\n\r\n```\r\nfrom datasets import load_dataset\r\nimport polars as pl\r\n\r\nds = load_dataset(\"blbooks\")\r\nds['train'].to_parquet(\"test.parquet\")\r\ndf = pl.scan_parquet(\"test.parquet\")\r\ndf.groupby('date').agg([pl.count()]).collect()\r\n```\r\n\r\n>datasets_sql seems to not like the fact that I'm averaging np.arrays.\r\n\r\nI am not certain how Polars will handle this either. It does have NumPy support (https://pola-rs.github.io/polars-book/user-guide/howcani/interop/numpy.html) but I assume Polars will need to have at least enough memory in each group you want to average over so you may still end up needing more memory depending on the size of your dataset/groups. \r\n\r\n\r\n",
"Hi @davanstrien , thanks a lot, I didn't know about this library and the answer works! I need to try it on the full dataset now, but I'm hopeful. Here's what my code looks like:\r\n```\r\nlist_size = 768\r\ndf.groupby(\"date\").agg(\r\n pl.concat_list(\r\n [\r\n pl.col(\"hidden_state\")\r\n .arr.slice(n, 1)\r\n .arr.first()\r\n .mean()\r\n for n in range(0, list_size)\r\n ]\r\n ).collect()\r\n```\r\n\r\nFor some reasons, the following code was giving me a \"mean() got unexpected argument 'axis'\":\r\n```\r\ndf2 = df.groupby('date').agg(\r\n pl.col(\"hidden_state\").map(np.mean).alias(\"average_hidden_state\")\r\n).collect()\r\n\r\n```\r\n\r\nEDIT: The solution works on my large dataset, the memory does not crash and the time is reasonable, thanks a lot again!",
"@jeremylhour glad this worked for you :) ",
"I find this functionality missing in my workflow as well and the workarounds with SQL and Polars unsatisfying. Since PyArrow has exposed this functionality, I hope this soon makes it into a release. (:"
] |
https://api.github.com/repos/huggingface/datasets/issues/1948
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1948/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1948/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1948/events
|
https://github.com/huggingface/datasets/issues/1948
| 816,689,329 |
MDU6SXNzdWU4MTY2ODkzMjk=
| 1,948 |
dataset loading logger level
|
[] |
closed
| false | null | 3 |
2021-02-25T18:33:37Z
|
2023-07-12T17:19:30Z
|
2023-07-12T17:19:30Z
| null |
on master I get this with `--dataset_name wmt16 --dataset_config ro-en`:
```
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42e26.arrow
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-ac3bebaf4f91f776.arrow
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-810c3e61259d73a9.arrow
```
Why are those WARNINGs? They should be INFO, no?
Warnings should only be used when a user needs to pay attention to something; this message is just informative - I'd even say it should be DEBUG, but definitely not WARNING.
Thank you.
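In the meantime, a minimal sketch for silencing these messages (assuming the `datasets.logging` helpers and the `datasets.arrow_dataset` logger name shown in the output above):
```python
import logging
import datasets

# only show ERROR-level messages from the whole `datasets` library
datasets.logging.set_verbosity_error()

# or, more selectively, raise the level of just the logger emitting these messages
logging.getLogger("datasets.arrow_dataset").setLevel(logging.ERROR)
```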
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1948/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1948/timeline
| null |
completed
| null | null | false |
[
"These warnings are showed when there's a call to `.map` to say to the user that a dataset is reloaded from the cache instead of being recomputed.\r\nThey are warnings since we want to make sure the users know that it's not recomputed.",
"Thank you for explaining the intention, @lhoestq \r\n\r\n1. Could it be then made more human-friendly? Currently the hex gibberish tells me nothing of what's really going on. e.g. the following is instructive, IMHO:\r\n\r\n```\r\nWARNING: wmt16/ro-en/train dataset was loaded from cache instead of being recomputed\r\nWARNING: wmt16/ro-en/validation dataset was loaded from cache instead of being recomputed\r\nWARNING: wmt16/ro-en/test dataset was loaded from cache instead of being recomputed\r\n```\r\nnote that it removes the not so useful hex info and tells the user instead which split it's referring to - but probably no harm in keeping the path if it helps the debug. But the key is that now the warning is telling me what it is it's warning me about.\r\n```\r\nWarning:Loading cache path\r\n```\r\non the other hand isn't telling what it is warning about.\r\n\r\nAnd I still suggest this is INFO level, otherwise you need to turn all 'using cache' statements to WARNING to be consistent. The user is most likely well aware the cache is used for models, etc. So this feels very similar.\r\n\r\n2. Should there be a way for a user to void warranty by having a flag - `I know I'm expecting the cached version to load if it's available - please do not warn me about it=True`\r\n\r\nTo explain the need: Warnings are a problem, they constantly take attention away because they could be the harbinger of a problem. Therefore I prefer not to have any warnings in the log, and if I get any I usually try to deal with those so that my log is clean. \r\n\r\nIt's less of an issue for somebody doing long runs. It's a huge issue for someone who does a new run every few minutes and on the lookout for any potential problems which is what I have been doing a lot of integrating DeepSpeed and other things. And since there are already problems to deal with during the integration it's nice to have a clean log to start with. \r\n\r\nI hope my need is not unreasonable and I was able to explain it adequately. \r\n\r\nThank you.",
"Hey, any news about the issue? So many warnings when I'm really ok with the dataset not being recomputed :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2245
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2245/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2245/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2245/events
|
https://github.com/huggingface/datasets/pull/2245
| 863,191,655 |
MDExOlB1bGxSZXF1ZXN0NjE5NjQzMjQ3
| 2,245 |
Add `key` type and duplicates verification with hashing
|
[] |
closed
| false | null | 17 |
2021-04-20T20:03:19Z
|
2021-05-10T18:04:37Z
|
2021-05-10T17:31:22Z
| null |
Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of a certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5` (a rough sketch of the idea follows below)
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]
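For illustration, a rough sketch of the hashing and duplicate check described above (the names and details are illustrative only, not the actual `datasets.keyhash` implementation):
```python
import hashlib

def hash_key(key, hash_salt: str = "") -> int:
    """Map a str/int/bytes key to a deterministic 128-bit integer (illustrative only)."""
    if isinstance(key, int):
        key = str(key)
    if isinstance(key, str):
        key = key.encode("utf-8")
    if not isinstance(key, bytes):
        raise TypeError(f"Invalid key type {type(key)}; keys should be str, int or bytes")
    digest = hashlib.md5(hash_salt.encode("utf-8") + key).digest()  # 16 bytes = 128 bits
    return int.from_bytes(digest, "big")

seen = set()
for key in ["id-0", "id-1", "id-0"]:
    h = hash_key(key, hash_salt="train")
    if h in seen:
        print(f"Duplicate key found: {key}")  # triggers on the second "id-0"
    seen.add(h)
```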
@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2245/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2245/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/2245.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2245",
"merged_at": "2021-05-10T17:31:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2245.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2245"
}
| true |
[
"@lhoestq The tests for key type and duplicate keys have been added and verified successfully.\r\nAfter generating with an intentionally faulty `mnist` script, when there is an incompatible key type, it shows:\r\n\r\n```\r\nDownloading and preparing dataset mnist/mnist (download: 11.06 MiB, generated: 60.62 MiB, post-processed: Unknown size, total: 71.67 MiB) to C:\\Users\\nikhil\\.cache\\huggingface\\datasets\\mnist\\mnist\\1.0.0\\5064c25e57a1678f700d2dc798ffe8a6d519405cca7d33670fffda477857a994...\r\n0 examples [00:00, ? examples/s]2021-04-26 02:50:03.703836: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll\r\n\r\nFAILURE TO GENERATE DATASET: Invalid key type detected\r\nFound Key [0, 0] of type <class 'list'>\r\nKeys should be either str, int or bytes type\r\n```\r\n\r\nIn the case of duplicate keys, it now gives:\r\n\r\n```\r\nDownloading and preparing dataset mnist/mnist (download: 11.06 MiB, generated: 60.62 MiB, post-processed: Unknown size, total: 71.67 MiB) to C:\\Users\\nikhil\\.cache\\huggingface\\datasets\\mnist\\mnist\\1.0.0\\5064c25e57a1678f700d2dc798ffe8a6d519405cca7d33670fffda477857a994...\r\n0 examples [00:00, ? examples/s]2021-04-26 02:53:13.498579: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\load.py\", line 746, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 587, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 665, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 1002, in _prepare_split\r\n writer.write(example, key)\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\arrow_writer.py\", line 321, in write\r\n self.check_duplicates()\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\arrow_writer.py\", line 331, in check_duplicates\r\n raise DuplicatedKeysError(key)\r\ndatasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 234467\r\nKeys should be unique and deterministic in nature\r\n```\r\nPlease let me know if this is what we wanted to implement. Thanks!",
"This looks pretty cool !\r\nWe can make focus on the GeneratorBasedBuilder for now yes.\r\n\r\nDo you think we could make the ArrowWriter not look for duplicates by default ?\r\nThis way we can just enable duplicate detections when instantiating the writer in the GeneratorBasedBuilder for now.",
"Thank you @lhoestq\r\n\r\n\r\n\r\n> Do you think we could make the ArrowWriter not look for duplicates by default ?\r\n\r\nWe can definitely do that by including a `check_duplicates` argument while instantiating `ArrowWriter()`. \r\n\r\nHowever, since only `GeneratorBasedBuilder` uses the `write()` function (which includes the detection code) and the others like `ArrowBasedBuilder` use `write_table()` which remains as it was (without duplicate detection). I don't think it would be necessary.\r\n\r\nNonetheless, doing this would require just some small changes. Please let me know your thoughts on this. Thanks!",
"I like the idea of having the duplicate detection optional for other uses of the ArrowWriter.\r\nThis class is the main tool to write python data in arrow format so I'd expect it to be flexible.\r\nThat's why I think by default it shouldn't require users to provide keys or do any duplicates detection.\r\n\r\nAn alternative would be to subclass the writer to include duplicates detection in another class.\r\n\r\nBoth options are fine for me, let me know what you think !",
"> This class is the main tool to write python data in arrow format so I'd expect it to be flexible.\r\n> That's why I think by default it shouldn't require users to provide keys or do any duplicates detection.\r\n\r\nWell, that makes sense as the writer can indeed be used for other purposes as well.\r\n\r\n> We can definitely do that by including a `check_duplicates` argument while instantiating `ArrowWriter()`.\r\n\r\nI think that this would be the simplest and the more efficient option for achieving this as subclassing the writer only for this would lead to unnecessary complexity and code duplication (in case of `writer()`). \r\n\r\nI will be adding the changes soon. Thanks for the feedback @lhoestq!",
"@lhoestq I have pushed the final changes just now. \r\nNow, the keys and duplicate checking will be necessary only when the `ArrowWriter` is initialized with `check_duplicates=True` specifically (in this case, for `GeneratorBasedBuilders`)\r\n\r\nLet me know if this is what was required. Thanks!",
"@lhoestq Thanks for the feedback! I will be adding the tests for the same very soon. \r\n\r\nHowever, I'm not sure as to what exactly is causing the `segmentation fault` in the failing CI tests. It seems to be something from `test_concatenation_table_cast` from `test_table.py`, but I'm not sure as to what exactly. Would be great if you could help. Thanks!",
"You can merge master into your branch to fix this issue.\r\nBasically pyarrow 4.0.0 has a segfault issue (which has now been resolved on the master branch of pyarrow).\r\nSo until 4.0.1 comes out we changed to using `pyarrow<4.0.0` recently.",
"@lhoestq Thanks for the help with the CI failures. Apologies for the multiple merge commits. My local repo got messy while merging which led to this.\r\nWill be pushing the commit for the tests soon!",
"Hey @lhoestq, I've just added the required tests for checking key duplicates and invalid key data types.\r\nI think we have caught a nice little issue as 27 datasets are currently using non-unique keys (hence, the failing tests: All these datasets are giving `DuplicateKeysError` during testing). \r\nThese datasets were not detected earlier as there was no key checking when `num_examples < writer_batch_size` due to which they passed the dummy data generation test. This bug was fixed by adding the test to `writer.finalize()` method as well for checking any leftover examples from batches. \r\n\r\nI'd like to make changes to the faulty datasets' scripts. However, I was wondering if I should do that in this PR itself or open a new PR as this might get messy in the same PR. Let me know your thoughts on this. Thanks!",
"Hi ! Once https://github.com/huggingface/datasets/pull/2333 is merged, feel free to merge master into your branch to fix the CI :)",
"Thanks a lot for the help @lhoestq. Besides merging the new changes, I guess this PR is completed for now :)",
"I just merged the PR, feel free to merge `master` into your branch. It should fix most most of the CI issues. If there are some left we can fix them in this PR :)",
"@lhoestq Looks like the PR is completed now. Thanks for helping me out so much in this :)",
"Hey @lhoestq, I've added the test and corrected the Cl errors as well. Do let me know if this requires any change. Thanks!",
"Merging. I'll update the comment on the master branch (for some reason I can edit files on this branch)",
"@lhoestq Thank you for the help and feedback. Feels great to contribute!"
] |
https://api.github.com/repos/huggingface/datasets/issues/1370
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1370/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1370/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1370/events
|
https://github.com/huggingface/datasets/pull/1370
| 760,264,132 |
MDExOlB1bGxSZXF1ZXN0NTM1MTI1MTc3
| 1,370 |
Add OPUS PHP Dataset
|
[] |
closed
| false | null | 0 |
2020-12-09T11:53:30Z
|
2020-12-10T15:37:25Z
|
2020-12-10T15:37:24Z
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1370/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1370/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/1370.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1370",
"merged_at": "2020-12-10T15:37:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1370.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1370"
}
| true |
[] |
|
https://api.github.com/repos/huggingface/datasets/issues/2069
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2069/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2069/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2069/events
|
https://github.com/huggingface/datasets/pull/2069
| 833,768,926 |
MDExOlB1bGxSZXF1ZXN0NTk0NzA5ODYw
| 2,069 |
Add and fix docstring for NamedSplit
|
[] |
closed
| false | null | 1 |
2021-03-17T13:19:28Z
|
2021-03-18T10:27:40Z
|
2021-03-18T10:27:40Z
| null |
Add and fix docstring for `NamedSplit`, which was missing.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2069/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2069/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/2069.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2069",
"merged_at": "2021-03-18T10:27:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2069.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2069"
}
| true |
[
"Maybe we should add some other split classes?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2276
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2276/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2276/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2276/events
|
https://github.com/huggingface/datasets/issues/2276
| 870,010,511 |
MDU6SXNzdWU4NzAwMTA1MTE=
| 2,276 |
concatenate_datasets loads all the data into memory
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false | null | 7 |
2021-04-28T14:27:21Z
|
2021-05-03T08:41:55Z
|
2021-05-03T08:41:55Z
| null |
## Describe the bug
When I try to concatenate 2 datasets (10GB each), all the data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.

## Steps to reproduce the bug
```python
from datasets import concatenate_datasets, load_from_disk
test_sampled_pro = load_from_disk("test_sampled_pro")
val_sampled_pro = load_from_disk("val_sampled_pro")
big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro])
# Loaded to memory
big_set.save_to_disk("big_set")
# Loaded to memory
big_set = concatenate_datasets([big_set, val_sampled_pro])
```
## Expected results
The data should be loaded into memory in batches and then saved directly to disk.
## Actual results
The entire dataset is loaded into memory and then saved to the hard disk.
## Versions
Paste the output of the following code:
```python
- Datasets: 1.6.1
- Python: 3.8.8 (default, Apr 13 2021, 19:58:26)
[GCC 7.3.0]
- Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10
```
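A minimal sketch for measuring the memory growth during the concatenation (using `psutil`; the paths come from the reproduction above, and the comment threshold is only indicative):
```python
import psutil  # pip install psutil
from datasets import concatenate_datasets, load_from_disk

rss_before = psutil.Process().memory_info().rss

test_sampled_pro = load_from_disk("test_sampled_pro")
val_sampled_pro = load_from_disk("val_sampled_pro")
big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro])

rss_growth_gib = (psutil.Process().memory_info().rss - rss_before) / 1024**3
print(f"RSS grew by {rss_growth_gib:.2f} GiB")  # should stay far below the ~20GB on disk
```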
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2276/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2276/timeline
| null |
completed
| null | null | false |
[
"Therefore, when I try to concatenate larger datasets (5x 35GB data sets) I also get an out of memory error, since over 90GB of swap space was used at the time of the crash:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nMemoryError Traceback (most recent call last)\r\n<ipython-input-6-9766d77530b9> in <module>\r\n 20 print(file_name)\r\n 21 cv_batch = load_from_disk(file_name)\r\n---> 22 cv_sampled_train = concatenate_datasets([cv_sampled_train, cv_batch])\r\n 23 \r\n 24 print(\"Saving to disk!\")\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\arrow_dataset.py in concatenate_datasets(dsets, info, split, axis)\r\n 2891 \r\n 2892 # Concatenate tables\r\n-> 2893 table = concat_tables([dset._data for dset in dsets if len(dset._data) > 0], axis=axis)\r\n 2894 table = update_metadata_with_features(table, None)\r\n 2895 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in concat_tables(tables, axis)\r\n 837 if len(tables) == 1:\r\n 838 return tables[0]\r\n--> 839 return ConcatenationTable.from_tables(tables, axis=axis)\r\n 840 \r\n 841 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in from_tables(cls, tables, axis)\r\n 697 return result\r\n 698 \r\n--> 699 blocks = to_blocks(tables[0])\r\n 700 for table in tables[1:]:\r\n 701 table_blocks = to_blocks(table)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in to_blocks(table)\r\n 669 return [[InMemoryTable(table)]]\r\n 670 elif isinstance(table, ConcatenationTable):\r\n--> 671 return copy.deepcopy(table.blocks)\r\n 672 else:\r\n 673 return [[table]]\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 151 copier = getattr(x, \"__deepcopy__\", None)\r\n 152 if copier is not None:\r\n--> 153 y = copier(memo)\r\n 154 else:\r\n 155 reductor = dispatch_table.get(cls)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in __deepcopy__(self, memo)\r\n 143 # by adding it to the memo, self.table won't be copied\r\n 144 memo[id(self.table)] = self.table\r\n--> 145 return _deepcopy(self, memo)\r\n 146 \r\n 147 def __getstate__(self):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in _deepcopy(x, memo)\r\n 62 memo[id(x)] = result\r\n 63 for k, v in x.__dict__.items():\r\n---> 64 setattr(result, k, copy.deepcopy(v, memo))\r\n 65 return result\r\n 66 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if 
issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 170 y = x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 173 \r\n 174 # If is its own copy, don't memoize.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 262 if deep and args:\r\n 263 args = (deepcopy(arg, memo) for arg in args)\r\n--> 264 y = func(*args)\r\n 265 if deep:\r\n 266 memo[id(x)] = y\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <genexpr>(.0)\r\n 261 deep = memo is not None\r\n 262 if deep and args:\r\n--> 263 args = (deepcopy(arg, memo) for arg in args)\r\n 264 y = func(*args)\r\n 265 if deep:\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 170 y = x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 173 \r\n 174 # If is its own copy, don't memoize.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 262 if deep and args:\r\n 263 args = (deepcopy(arg, memo) for arg in args)\r\n--> 264 y = func(*args)\r\n 265 if deep:\r\n 266 memo[id(x)] = y\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <genexpr>(.0)\r\n 261 deep = memo is not None\r\n 262 if deep and args:\r\n--> 263 args = (deepcopy(arg, memo) for arg in args)\r\n 264 y = func(*args)\r\n 265 if deep:\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_tuple(x, memo, deepcopy)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <listcomp>(.0)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = 
_deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_tuple(x, memo, deepcopy)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <listcomp>(.0)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 159 reductor = getattr(x, \"__reduce_ex__\", None)\r\n 160 if reductor is not None:\r\n--> 161 rv = reductor(4)\r\n 162 else:\r\n 163 reductor = getattr(x, \"__reduce__\", None)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\pyarrow\\io.pxi in pyarrow.lib.Buffer.__reduce_ex__()\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\pyarrow\\io.pxi in pyarrow.lib.Buffer.to_pybytes()\r\n\r\nMemoryError: \r\n\r\n```",
"Hi ! this looks like an important issue. Let me try to reproduce this.\r\nCc @samsontmr this might be related to the memory issue you have in #2134 ",
"@lhoestq Just went to open a similar issue.\r\n\r\nIt seems like deep copying (tested on master) the dataset object writes the table's record batches (`dset._data._batches`) into RAM.\r\n\r\nTo find the bug, I modified the `_deepcopy` function in `table.py` as follows:\r\n```python\r\ndef _deepcopy(x, memo: dict):\r\n \"\"\"deepcopy a regular class instance\"\"\"\r\n import psutil # pip install this package\r\n import time\r\n cls = x.__class__\r\n result = cls.__new__(cls)\r\n memo[id(x)] = result\r\n for k, v in x.__dict__.items():\r\n print(\"=\"* 50)\r\n print(\"Current memory:\", psutil.virtual_memory().percent)\r\n print(f\"Saving object {k} with value {v}\")\r\n setattr(result, k, copy.deepcopy(v, memo))\r\n time.sleep(5)\r\n print(\"Memory after copy:\", psutil.virtual_memory().percent)\r\n return result\r\n```\r\nTest script:\r\n```python\r\nimport copy\r\nfrom datasets import load_dataset\r\nbk = load_dataset(\"bookcorpus\", split=\"train\")\r\nbk_copy = copy.deepcopy(bk)\r\n```",
"Thanks for the insights @mariosasko ! I'm working on a fix.\r\nSince this is a big issue I'll make a patch release as soon as this is fixed",
"Hi @samsontmr @TaskManager91 the fix is on the master branch, feel free to install `datasets` from source and let us know if you still have issues",
"We just released `datasets` 1.6.2 that includes the fix :)",
"thanks it works like a charm! :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4523
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4523/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4523/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4523/events
|
https://github.com/huggingface/datasets/pull/4523
| 1,275,002,639 |
PR_kwDODunzps452hgh
| 4,523 |
Update download url and improve card of `cats_vs_dogs` dataset
|
[] |
closed
| false | null | 1 |
2022-06-17T12:59:44Z
|
2022-06-21T14:23:26Z
|
2022-06-21T14:13:08Z
| null |
Improve the download URL (reported here: https://huggingface.co/datasets/cats_vs_dogs/discussions/1), remove the `image_file_path` column (not used in Transformers, so it should be safe) and add more info to the card.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4523/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4523/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/4523.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4523",
"merged_at": "2022-06-21T14:13:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4523.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4523"
}
| true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4426
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4426/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4426/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4426/events
|
https://github.com/huggingface/datasets/issues/4426
| 1,253,887,311 |
I_kwDODunzps5KvM1P
| 4,426 |
Add loading variable number of columns for different splits
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false | null | 1 |
2022-05-31T13:40:16Z
|
2022-06-03T16:25:25Z
|
2022-06-03T16:25:25Z
| null |
**Is your feature request related to a problem? Please describe.**
The original dataset `blended_skill_talk` has different sets of columns for the different splits: the (test/valid) splits have an additional data column `label_candidates` that the (train) split doesn't have.
When loading such data, an exception occurs in `table.py:cast_table_to_schema` because of the mismatched columns.
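Until the loading script handles this, a minimal workaround sketch is to drop the columns that only exist in some splits before combining them (the columns below are made up to mirror the description):
```python
from datasets import Dataset, concatenate_datasets

train = Dataset.from_dict({"premise": ["a", "b"], "label": [0, 1]})
valid = Dataset.from_dict(
    {"premise": ["c"], "label": [1], "label_candidates": [["x", "y"]]}
)

# drop the column(s) that only exist in the valid/test splits
valid_aligned = valid.remove_columns(
    [c for c in valid.column_names if c not in train.column_names]
)
combined = concatenate_datasets([train, valid_aligned])
print(combined.column_names)  # ['premise', 'label']
```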
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4426/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4426/timeline
| null |
completed
| null | null | false |
[
"Hi! Indeed the column is missing, but you shouldn't get an error? Have you made some modifications (locally) to the loading script? I've opened a PR to add the missing columns to the script. "
] |
https://api.github.com/repos/huggingface/datasets/issues/2262
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2262/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2262/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2262/events
|
https://github.com/huggingface/datasets/issues/2262
| 867,325,351 |
MDU6SXNzdWU4NjczMjUzNTE=
| 2,262 |
NewsPH NLI dataset script fails to access test data.
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false | null | 1 |
2021-04-26T06:44:41Z
|
2021-04-29T09:32:03Z
|
2021-04-29T09:30:20Z
| null |
In the NewsPH-NLI dataset (#1192), the script fails to access the test data.
According to the script below, the download manager will download the train data when trying to download the test data.
https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee0c7ed98b/datasets/newsph_nli/newsph_nli.py#L71
If you download it according to the script above, you can see that train and test receive the same data as shown below.
```python
>>> from datasets import load_dataset
>>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py")
>>> newsph_nli
DatasetDict({
train: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
test: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
validation: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 90000
})
})
>>> newsph_nli["train"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
>>> newsph_nli["test"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
```
Locally, I modified the source code as below and got the correct result.
```python
71 test_path = os.path.join(download_path, "test.csv")
```
```python
>>> from datasets import load_dataset
>>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py")
>>> newsph_nli
DatasetDict({
train: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
test: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 9000
})
validation: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 90000
})
})
>>> newsph_nli["train"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
>>> newsph_nli["test"][0]
{'hypothesis': '-- JAI (@JaiPaller) September 13, 2019',
'label': 1,
'premise': 'Pinag-iingat ng Konsulado ng Pilipinas sa Dubai ang publiko, partikular ang mga donor, laban sa mga scam na gumagamit ng mga charitable organization.'}
```
I don't have experience with open-source pull requests, so I suggest that you apply these changes to the source.
Thank you for reading :)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2262/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2262/timeline
| null |
completed
| null | null | false |
[
"Thanks @bhavitvyamalik for the fix !\r\nThe fix will be available in the next release.\r\nIt's already available on the `master` branch. For now you can either install `datasets` from source or use `script_version=\"master\"` in `load_dataset` to use the fixed version of this dataset."
] |
https://api.github.com/repos/huggingface/datasets/issues/2217
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2217/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2217/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2217/events
|
https://github.com/huggingface/datasets/pull/2217
| 857,011,314 |
MDExOlB1bGxSZXF1ZXN0NjE0NTAxNjIz
| 2,217 |
Revert breaking change in cache_files property
|
[] |
closed
| false | null | 0 |
2021-04-13T14:20:04Z
|
2021-04-14T14:24:24Z
|
2021-04-14T14:24:23Z
| null |
#2025 changed the format of `Dataset.cache_files`.
Before it was formatted like
```python
[{"filename": "path/to/file.arrow", "start": 0, "end": 1337}]
```
and it was changed to
```python
["path/to/file.arrow"]
```
since there's no start/end offsets available anymore.
To make this less breaking, I'm setting the format back to a list of dicts:
```python
[{"filename": "path/to/file.arrow"}]
```
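A minimal sketch of reading the paths back with this format (an in-memory dataset simply has no cache files):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
cache_paths = [cache_file["filename"] for cache_file in ds.cache_files]
print(cache_paths)  # [] for this in-memory dataset; a cached dataset lists its .arrow files
```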
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2217/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2217/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/2217.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2217",
"merged_at": "2021-04-14T14:24:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2217.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2217"
}
| true |
[] |
https://api.github.com/repos/huggingface/datasets/issues/3993
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3993/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3993/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3993/events
|
https://github.com/huggingface/datasets/issues/3993
| 1,178,201,495 |
I_kwDODunzps5GOe2X
| 3,993 |
Streaming dataset + interleave + DataLoader hangs with multiple workers
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false | null | 5 |
2022-03-23T14:27:29Z
|
2023-02-28T14:14:24Z
| null | null |
## Describe the bug
Interleaving multiple iterable datasets that use `load_dataset` in streaming mode hangs when passed to `torch.utils.data.DataLoader` with multiple workers.
## Steps to reproduce the bug
```python
from datasets import interleave_datasets, load_dataset
from torch.utils.data import DataLoader
en_dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True)
fr_dataset = load_dataset('oscar', "unshuffled_deduplicated_fr", split='train', streaming=True)
it_dataset = load_dataset('oscar', "unshuffled_deduplicated_it", split='train', streaming=True)
de_dataset = load_dataset('oscar', "unshuffled_deduplicated_de", split='train', streaming=True)
multilingual_dataset = interleave_datasets([en_dataset, fr_dataset, de_dataset, it_dataset])
multilingual_dataset = multilingual_dataset.with_format('torch')
next(iter(multilingual_dataset)) # works fairly fast
dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=4)
for batch in dataloader:
print(len(batch)) # prints nothing after 30 min of waiting
dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=0)
for batch in dataloader:
print(len(batch)) # prints right away
```
## Expected results
It should be able to iterate the dataset with multiple workers.
## Actual results
It prints results with `next(iter(multilingual_dataset))` and with `num_workers=0`, but it prints nothing with `num_workers=4` or any number above 0.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.1.dev0
- `pytorch` version: 1.10.0+cu113
- Python version: 3.7
- PyArrow version: 6.0.1
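For reference, a hedged sketch for newer `datasets` versions that support multi-worker streaming (assuming `IterableDataset` exposes `n_shards`, so `num_workers` can be capped to avoid idle workers):
```python
from datasets import interleave_datasets, load_dataset
from torch.utils.data import DataLoader

en_dataset = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
fr_dataset = load_dataset("oscar", "unshuffled_deduplicated_fr", split="train", streaming=True)
multilingual_dataset = interleave_datasets([en_dataset, fr_dataset]).with_format("torch")

# assumption: recent versions expose `n_shards` on IterableDataset; capping
# num_workers at that value avoids workers that have no shard to stream
num_workers = min(4, multilingual_dataset.n_shards)
dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=num_workers)
```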
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3993/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3993/timeline
| null | null | null | null | false |
[
"Same thing occurs when streaming files loaded from disk.",
"Hi ! Thanks for reporting, could this be related to https://github.com/huggingface/datasets/issues/3950 ?\r\n\r\nCurrently streaming datasets only works in single process, but we're working on having in work in distributed setups as well :) (EDIT: done)",
"Hi, thanks for your reply. It seems related :)",
"+1",
"Please update `datasets` if you're having this issue. What version are you using ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2520
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2520/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2520/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2520/events
|
https://github.com/huggingface/datasets/issues/2520
| 925,015,004 |
MDU6SXNzdWU5MjUwMTUwMDQ=
| 2,520 |
Datasets with tricky task templates
|
[
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] |
closed
| false | null | 1 |
2021-06-18T15:33:57Z
|
2023-07-20T13:20:32Z
|
2023-07-20T13:20:32Z
| null |
I'm collecting a list of datasets here that don't follow the "standard" taxonomy and require further investigation to implement task templates for.
## Text classification
* [hatexplain](https://huggingface.co/datasets/hatexplain): ostensibly a form of text classification, but not in the standard `(text, target)` format and each sample appears to be tokenized.
* [muchocine](https://huggingface.co/datasets/muchocine): contains two candidate text columns (long-form and summary) which in principle requires two `TextClassification` templates which is not currently supported
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2520/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2520/timeline
| null |
completed
| null | null | false |
[
"The `task_templates` API is deprecated in favor of the `train-eval-index` YAML field, so I'm closing this issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/3879
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3879/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3879/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3879/events
|
https://github.com/huggingface/datasets/pull/3879
| 1,164,311,612 |
PR_kwDODunzps40MP7f
| 3,879 |
SQuAD v2 metric: create README.md
|
[] |
closed
| false | null | 1 |
2022-03-09T18:47:56Z
|
2022-03-10T16:48:59Z
|
2022-03-10T16:48:59Z
| null |
Proposing SQuAD v2 metric card
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3879/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3879/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/3879.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3879",
"merged_at": "2022-03-10T16:48:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3879.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3879"
}
| true |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3879). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/5367
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5367/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5367/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5367/events
|
https://github.com/huggingface/datasets/pull/5367
| 1,499,174,749 |
PR_kwDODunzps5FlevK
| 5,367 |
Fix remove columns from lazy dict
|
[] |
closed
| false | null | 1 |
2022-12-15T22:04:12Z
|
2022-12-15T22:27:53Z
|
2022-12-15T22:24:50Z
| null |
This was introduced in https://github.com/huggingface/datasets/pull/5252 and causing the transformers CI to break: https://app.circleci.com/pipelines/github/huggingface/transformers/53886/workflows/522faf2e-a053-454c-94f8-a617fde33393/jobs/648597
Basically this code should return a dataset with only one column:
```python
from datasets import *
ds = Dataset.from_dict({"a": range(5)})
def f(x):
x["b"] = x["a"]
return x
ds = ds.map(f, remove_columns=["a"])
assert ds.column_names == ["b"]
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5367/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5367/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/5367.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5367",
"merged_at": "2022-12-15T22:24:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5367.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5367"
}
| true |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3609
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3609/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3609/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3609/events
|
https://github.com/huggingface/datasets/pull/3609
| 1,109,579,112 |
PR_kwDODunzps4xVrsG
| 3,609 |
Fixes to pubmed dataset download function
|
[] |
closed
| false | null | 3 |
2022-01-20T17:31:35Z
|
2022-03-03T16:18:52Z
|
2022-03-03T14:23:35Z
| null |
PubMed has updated its settings for 2022, and thus the existing download script does not work.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3609/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3609/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/3609.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3609",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3609.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3609"
}
| true |
[
"Hi ! I think we can simply add a new configuration for the 2022 data instead of replacing them.\r\nYou can add the new configuration here:\r\n```python\r\n BUILDER_CONFIGS = [\r\n datasets.BuilderConfig(name=\"2021\", description=\"The 2021 annual record\", version=datasets.Version(\"1.0.0\")),\r\n datasets.BuilderConfig(name=\"2022\", description=\"The 2022 annual record\", version=datasets.Version(\"1.0.0\")),\r\n ]\r\n```\r\n\r\nAnd we can have the URLs for these two versions this way:\r\n```python\r\n_URLs = {\r\n \"2021\": f\"ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n{i:04d}.xml.gz\" for i in range(1, 1063)],\r\n \"2022\": f\"ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed22n{i:04d}.xml.gz\" for i in range(1, 1114)]\r\n}\r\n```\r\nand depending on the configuration name (you can get it with `self.config.name`) we can pick the URLs of 2021 or the ones of 2022 and pass them to the `dl_manager` in `_split_generators`\r\n\r\nFeel free to ping me if you have questions or if I can help !",
"Hi @spacemanidol, thanks for your contribution.\r\n\r\nThe update of the PubMed dataset URL (besides the update of the corresponding metadata and the dummy data) was already merged to master branch in this other PR:\r\n- #3692 \r\n\r\nI'm closing this PR then.\r\n\r\n@lhoestq please take into account that 2021 data is no longer accessible: every year PubMed releases the baseline data (containing all previous data until that year) and from that on, they release daily updates. ",
"> @lhoestq please take into account that 2021 data is no longer accessible: every year PubMed releases the baseline data (containing all previous data until that year) and from that on, they release daily updates.\r\n\r\nOh ok I didn't know, thanks"
] |
https://api.github.com/repos/huggingface/datasets/issues/1791
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1791/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1791/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1791/events
|
https://github.com/huggingface/datasets/pull/1791
| 796,924,519 |
MDExOlB1bGxSZXF1ZXN0NTY0MDE5OTk3
| 1,791 |
Small fix with corrected logging of train vectors
|
[] |
closed
| false | null | 0 |
2021-01-29T14:26:06Z
|
2021-01-29T18:51:10Z
|
2021-01-29T17:05:07Z
| null |
Now you can set `train_size` to the whole dataset size via `train_size = -1` (or even to a value larger than the dataset length), and the log no longer reads `Training the index with the first -1 vectors` but (for example) `Training the index with the first 16123 vectors`. Logging will be correct.
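A minimal sketch of the capping this implies (illustrative names, not the actual library code):
```python
# Sketch: resolve a requested train_size of -1 (or one larger than the
# dataset) to the real number of vectors before logging and training.
def resolve_train_size(requested: int, num_vectors: int) -> int:
    if requested < 0 or requested > num_vectors:
        return num_vectors
    return requested

num_vectors = 16123
for requested in (-1, 1000, 1_000_000):
    effective = resolve_train_size(requested, num_vectors)
    print(f"Training the index with the first {effective} vectors")
```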
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1791/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1791/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/1791.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1791",
"merged_at": "2021-01-29T17:05:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1791.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1791"
}
| true |
[] |
https://api.github.com/repos/huggingface/datasets/issues/50
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/50/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/50/comments
|
https://api.github.com/repos/huggingface/datasets/issues/50/events
|
https://github.com/huggingface/datasets/pull/50
| 612,583,126 |
MDExOlB1bGxSZXF1ZXN0NDEzNTAwMjE0
| 50 |
[Tests] test only for fast test as a default
|
[] |
closed
| false | null | 1 |
2020-05-05T12:59:22Z
|
2020-05-05T13:02:18Z
|
2020-05-05T13:02:16Z
| null |
Test only one config on CircleCI to speed up testing. Add the all-configs test as a slow test.
@mariamabarham @thomwolf
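As an illustrative sketch of this kind of gating (the `RUN_SLOW` variable and `slow` decorator here are assumptions, not necessarily what this PR implements):
```python
import os
import unittest

def slow(test_case):
    # Skip expensive tests unless RUN_SLOW=1 is set, so CI only runs the
    # fast single-config test by default.
    if os.environ.get("RUN_SLOW", "0") != "1":
        return unittest.skip("slow test: set RUN_SLOW=1 to run")(test_case)
    return test_case

class DatasetScriptTest(unittest.TestCase):
    def test_first_config(self):
        self.assertTrue(True)  # fast: always runs on CI

    @slow
    def test_all_configs(self):
        self.assertTrue(True)  # slow: only runs when RUN_SLOW=1

if __name__ == "__main__":
    unittest.main()
```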
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/50/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/50/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/50.diff",
"html_url": "https://github.com/huggingface/datasets/pull/50",
"merged_at": "2020-05-05T13:02:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/50.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/50"
}
| true |
[
"Test failure is not related to change in test file.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/887
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/887/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/887/comments
|
https://api.github.com/repos/huggingface/datasets/issues/887/events
|
https://github.com/huggingface/datasets/issues/887
| 750,868,831 |
MDU6SXNzdWU3NTA4Njg4MzE=
| 887 |
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false | null | 14 |
2020-11-25T14:32:21Z
|
2021-09-09T17:03:40Z
| null | null |
I set up a new dataset with a sequence of arrays (really, I want an array of shape (None, 137, 2), where the first dimension is dynamic):
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
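For reference, a sketch of the nested `Sequence` workaround discussed in the comments below (plain nested sequences of floats instead of `Sequence(Array2D(...))`):
```python
import numpy as np
import datasets

# Workaround sketch: express (None, 137, 2) poses as nested Sequence
# features, since ArrayXD cannot be used as a subtype yet.
features = datasets.Features(
    {
        "pose": datasets.features.Sequence(
            datasets.features.Sequence(
                datasets.features.Sequence(datasets.Value("float32"), length=2),
                length=137,
            )
        )
    }
)

pose = np.zeros(shape=(137, 2), dtype=np.float32)
ds = datasets.Dataset.from_dict({"pose": [[pose.tolist()]]}, features=features)
print(ds.features)
```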
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/887/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/887/timeline
| null | null | null | null | false |
[
"Yes right now `ArrayXD` can only be used as a column feature type, not a subtype.\r\nWith the current Arrow limitations I don't think we'll be able to make it work as a subtype, however it should be possible to allow dimensions of dynamic sizes (`Array3D(shape=(None, 137, 2), dtype=\"float32\")` for example since the [underlying arrow type](https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L236) allows dynamic sizes.\r\n\r\nFor now I'd suggest the use of nested `Sequence` types. Once we have the dynamic sizes you can update the dataset.\r\nWhat do you think ?",
"> Yes right now ArrayXD can only be used as a column feature type, not a subtype. \r\n\r\nMeaning it can't be nested under `Sequence`?\r\nIf so, for now I'll just make it a python list and make it with the nested `Sequence` type you suggested.",
"Yea unfortunately..\r\nThat's a current limitation with Arrow ExtensionTypes that can't be used in the default Arrow Array objects.\r\nWe already have an ExtensionArray that allows us to use them as column types but not for subtypes.\r\nMaybe we can extend it, I haven't experimented with that yet",
"Cool\r\nSo please consider this issue as a feature request for:\r\n```\r\nArray3D(shape=(None, 137, 2), dtype=\"float32\")\r\n```\r\n\r\nits a way to represent videos, poses, and other cool sequences",
"@lhoestq well, so sequence of sequences doesn't work either...\r\n\r\n```\r\npyarrow.lib.ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648\r\n```\r\n\r\n\r\n",
"Working with Arrow can be quite fun sometimes.\r\nYou can fix this issue by trying to reduce the writer batch size (same trick than the one used to reduce the RAM usage in https://github.com/huggingface/datasets/issues/741).\r\n\r\nLet me know if it works.\r\nI haven't investigated yet on https://github.com/huggingface/datasets/issues/741 since I was preparing this week's sprint to add datasets but this is in my priority list for early next week.",
"The batch size fix doesn't work... not for #741 and not for this dataset I'm trying (DGS corpus)\r\nLoading the DGS corpus takes 400GB of RAM, which is fine with me as my machine is large enough\r\n",
"Sorry it doesn't work. Will let you know once I fixed it",
"Hi @lhoestq , any update on dynamic sized arrays?\r\n(`Array3D(shape=(None, 137, 2), dtype=\"float32\")`)",
"Not yet, I've been pretty busy with the dataset sprint lately but this is something that's been asked several times already. So I'll definitely work on this as soon as I'm done with the sprint and with the RAM issue you reported.",
"Hi @lhoestq,\r\nAny chance you have some updates on the supporting `ArrayXD` as a subtype or support of dynamic sized arrays?\r\n\r\ne.g.:\r\n`datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype=\"float32\"))`\r\n`Array3D(shape=(None, 137, 2), dtype=\"float32\")`",
"Hi ! We haven't worked in this lately and it's not in our very short-term roadmap since it requires a bit a work to make it work with arrow. Though this will definitely be added at one point.",
"@lhoestq, thanks for the update.\r\n\r\nI actually tried to modify some piece of code to make it work. Can you please tell if I missing anything here?\r\nI think that for vast majority of cases it's enough to make first dimension of the array dynamic i.e. `shape=(None, 100, 100)`. For that, it's enough to modify class [ArrayExtensionArray](https://github.com/huggingface/datasets/blob/9ca24250ea44e7611c4dabd01ecf9415a7f0be6c/src/datasets/features.py#L397) to output list of arrays of different sizes instead of list of arrays of same sizes (current version)\r\nBelow are my modifications of this class.\r\n\r\n```\r\nclass ArrayExtensionArray(pa.ExtensionArray):\r\n def __array__(self):\r\n zero_copy_only = _is_zero_copy_only(self.storage.type)\r\n return self.to_numpy(zero_copy_only=zero_copy_only)\r\n\r\n def __getitem__(self, i):\r\n return self.storage[i]\r\n\r\n def to_numpy(self, zero_copy_only=True):\r\n storage: pa.ListArray = self.storage\r\n size = 1\r\n for i in range(self.type.ndims):\r\n size *= self.type.shape[i]\r\n storage = storage.flatten()\r\n numpy_arr = storage.to_numpy(zero_copy_only=zero_copy_only)\r\n numpy_arr = numpy_arr.reshape(len(self), *self.type.shape)\r\n return numpy_arr\r\n\r\n def to_list_of_numpy(self, zero_copy_only=True):\r\n storage: pa.ListArray = self.storage\r\n shape = self.type.shape\r\n arrays = []\r\n for dim in range(1, self.type.ndims):\r\n assert shape[dim] is not None, f\"Support only dynamic size on first dimension. Got: {shape}\"\r\n\r\n first_dim_offsets = np.array([off.as_py() for off in storage.offsets])\r\n for i in range(len(storage)):\r\n storage_el = storage[i:i+1]\r\n first_dim = first_dim_offsets[i+1] - first_dim_offsets[i]\r\n # flatten storage\r\n for dim in range(self.type.ndims):\r\n storage_el = storage_el.flatten()\r\n\r\n numpy_arr = storage_el.to_numpy(zero_copy_only=zero_copy_only)\r\n arrays.append(numpy_arr.reshape(first_dim, *shape[1:]))\r\n\r\n return arrays\r\n\r\n def to_pylist(self):\r\n zero_copy_only = _is_zero_copy_only(self.storage.type)\r\n if self.type.shape[0] is None:\r\n return self.to_list_of_numpy(zero_copy_only=zero_copy_only)\r\n else:\r\n return self.to_numpy(zero_copy_only=zero_copy_only).tolist()\r\n```\r\n\r\nI ran few tests and it works as expected. Let me know what you think.",
"Thanks for diving into this !\r\n\r\nIndeed focusing on making the first dimensions dynamic make total sense (and users could still re-order their dimensions to match this constraint).\r\nYour code looks great :) I think it can even be extended to support several dynamic dimensions if we want to.\r\n\r\nFeel free to open a PR to include these changes, then we can update our test suite to make sure it works in all use cases.\r\nIn particular I think we might need a few tweaks to allow it to be converted to pandas (though I haven't tested yet):\r\n\r\n```python\r\nfrom datasets import Dataset, Features, Array3D\r\n\r\n# this works\r\nmatrix = [[1, 0], [0, 1]]\r\nfeatures = Features({\"a\": Array3D(dtype=\"int32\", shape=(1, 2, 2))})\r\nd = Dataset.from_dict({\"a\": [[matrix], [matrix]]})\r\nprint(d.to_pandas())\r\n\r\n# this should work as well\r\nmatrix = [[1, 0], [0, 1]]\r\nfeatures = Features({\"a\": Array3D(dtype=\"int32\", shape=(None, 2, 2))})\r\nd = Dataset.from_dict({\"a\": [[matrix], [matrix] * 2]})\r\nprint(d.to_pandas())\r\n```\r\n\r\nI'll be happy to help you on this :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3729
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3729/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3729/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3729/events
|
https://github.com/huggingface/datasets/issues/3729
| 1,139,398,442 |
I_kwDODunzps5D6dcq
| 3,729 |
Wrong number of examples when loading a text dataset
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false | null | 2 |
2022-02-16T01:13:31Z
|
2022-03-15T16:16:09Z
|
2022-03-15T16:16:09Z
| null |
## Describe the bug
When I use `load_dataset` to read a txt file, I find that the number of samples is incorrect.
## Steps to reproduce the bug
```
fr = open('train.txt','r',encoding='utf-8').readlines()
print(len(fr)) # 1199637
datasets = load_dataset('text', data_files={'train': ['train.txt']}, streaming=False)
print(len(datasets['train'])) # 1199649
```
I also used a command-line operation to verify it:
```
$ wc -l train.txt
1199637 train.txt
```
## Expected results
The number of samples should match the line count of the file (1199637). Please fix this issue.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.3
- Platform:windows&linux
- Python version:3.7
- PyArrow version:6.0.1
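A quick check for the cause identified in the comments below (the text loader honours Unicode separators such as U+2029, while `readlines()` and `wc -l` only count `\n`):
```python
# Sketch: compare newline-based and splitlines-based counts and report
# the Unicode separators that explain the difference.
path = "train.txt"  # the file from the report above

with open(path, "r", encoding="utf-8") as f:
    text = f.read()

print("newline count (roughly what `wc -l` and readlines() report):", text.count("\n"))
print("splitlines() count:", len(text.splitlines()))
for ch, name in [("\u2028", "LINE SEPARATOR"), ("\u2029", "PARAGRAPH SEPARATOR")]:
    print(f"U+{ord(ch):04X} {name}: {text.count(ch)} occurrence(s)")
```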
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3729/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3729/timeline
| null |
completed
| null | null | false |
[
"Hi @kg-nlp, thanks for reporting.\r\n\r\nThat is weird... I guess we would need some sample data file where this behavior appears to reproduce the bug for further investigation... ",
"ok, I found the reason why that two results are not same.\r\nthere is /u2029 in the text, the datasets will split sentence according to the /u2029,but when I use open function will not do that .\r\nso I want to know which function shell do that\r\nthanks"
] |
https://api.github.com/repos/huggingface/datasets/issues/4806
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4806/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4806/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4806/events
|
https://github.com/huggingface/datasets/pull/4806
| 1,332,664,038 |
PR_kwDODunzps482yiS
| 4,806 |
Fix opus_gnome dataset card
|
[] |
closed
| false | null | 20 |
2022-08-09T03:40:15Z
|
2022-08-09T12:06:46Z
|
2022-08-09T11:52:04Z
| null |
I fixed issue #4805.
I changed `"gnome"` to `"opus_gnome"` in [README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary).
Fix #4805
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4806/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4806/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/4806.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4806",
"merged_at": "2022-08-09T11:52:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4806.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4806"
}
| true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@gojiteji why have you closed this PR and created an identical one?\r\n- #4807 ",
"@albertvillanova \r\nI forgot to follow \"How to create a Pull\" in CONTRIBUTING.md in this branch.",
"Both are identical. And you can push additional commits to this branch.",
"I see. Thank you for your comment.",
"Anyway, @gojiteji thanks for your contribution and this fix.",
"Once you have modified the `opus_gnome` dataset card, our Continuous Integration test suite performs some tests on it that make some additional requirements: the errors that appear have nothing to do with your contribution, but with these additional quality requirements.",
"> the errors that appear have nothing to do with your contribution, but with these additional quality requirements.\r\n\r\nIs there anything I should do?",
"If you would like to address them as well in this PR, it would be awesome: https://github.com/huggingface/datasets/runs/7741104780?check_suite_focus=true\r\n",
"These are the 2 error messages:\r\n```\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README Validation:\r\nE The following issues were found for the README at `/home/runner/work/datasets/datasets/datasets/opus_gnome/README.md`:\r\nE -\tNo first-level heading starting with `Dataset Card for` found in README. Skipping further validation for this README.\r\n\r\nE The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE Could not validate the metadata, found the following errors:\r\nE * field 'language':\r\nE \t['ara', 'cat', 'foo', 'gr', 'nqo', 'tmp'] are not registered tags for 'language', reference at https://github.com/huggingface/datasets/tree/main/src/datasets/utils/resources/languages.json\r\n```",
"In principle there are 2 errors:\r\n\r\nThe first one says, the title of the README does not start with `Dataset Card for`:\r\n- The README title is: `# Dataset Card Creation Guide`\r\n- According to the [template here](https://github.com/huggingface/datasets/blob/main/templates/README.md), it should be: `# Dataset Card for [Dataset Name]`",
"In relation with the languages:\r\n- you should check whether the language codes are properly spelled\r\n- and if so, adding them to our `languages.json` file, so that they are properly validated",
"Thank you for the detailed information. I'm checking it now.",
"```\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README Validation:\r\nE The following issues were found for the README at `/home/runner/work/datasets/datasets/datasets/opus_gnome/README.md`:\r\nE -\tExpected some content in section `Data Instances` but it is empty.\r\nE -\tExpected some content in section `Data Fields` but it is empty.\r\nE -\tExpected some content in section `Data Splits` but it is empty.\r\n```",
"I added `ara`, `cat`, `gr`, and `nqo` to `languages.json` and removed `foo` and `tmp` from `README.md`.\r\nI also write Data Instances, Data Fields, and Data Splits in `README.md`.",
"Thanks for your investigation and fixes to the dataset card structure! I'm just making some suggestions before merging this PR: see below.",
"Should I create PR for `config.json` to add ` ara cat gr nqo` first?\r\nI think I can pass this failing after that.\r\n\r\nOr removing `ara, cat, gr, nqo, foo, tmp` from `README.md`. ",
"Once you address these issues, all the CI tests will pass.",
"Once the remaining changes are addressed (see unresolved above), we will be able to merge this:\r\n- [ ] Remove \"ara\" from README\r\n- [ ] Remove \"cat\" from README\r\n- [ ] Remove \"gr\" from README\r\n- [ ] Replace \"tmp\" with \"tyj\" in README\r\n- [ ] Add \"tyj\" to `languages.json`:\r\n ```\r\n \"tyj\": \"Tai Do; Tai Yo\",",
"I did the five changes."
] |
https://api.github.com/repos/huggingface/datasets/issues/5409
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5409/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5409/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5409/events
|
https://github.com/huggingface/datasets/pull/5409
| 1,520,374,219 |
PR_kwDODunzps5Gs3nL
| 5,409 |
Fix deprecation warning when use_auth_token passed to download_and_prepare
|
[] |
closed
| false | null | 2 |
2023-01-05T09:10:58Z
|
2023-01-06T11:06:16Z
|
2023-01-06T10:59:13Z
| null |
The `DatasetBuilder.download_and_prepare` argument `use_auth_token` was deprecated in:
- #5302
However, `use_auth_token` is still passed to `download_and_prepare` in our built-in `io` readers (csv, json, parquet,...).
This PR fixes it, so that no deprecation warning is raised.
Fix #5407.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5409/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5409/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/5409.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5409",
"merged_at": "2023-01-06T10:59:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5409.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5409"
}
| true |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008627 / 0.011353 (-0.002726) | 0.004572 / 0.011008 (-0.006436) | 0.099653 / 0.038508 (0.061145) | 0.030010 / 0.023109 (0.006901) | 0.300492 / 0.275898 (0.024594) | 0.360443 / 0.323480 (0.036963) | 0.007125 / 0.007986 (-0.000860) | 0.003431 / 0.004328 (-0.000897) | 0.078103 / 0.004250 (0.073852) | 0.036884 / 0.037052 (-0.000168) | 0.312289 / 0.258489 (0.053800) | 0.345795 / 0.293841 (0.051954) | 0.034001 / 0.128546 (-0.094545) | 0.011405 / 0.075646 (-0.064242) | 0.321258 / 0.419271 (-0.098013) | 0.040591 / 0.043533 (-0.002942) | 0.301114 / 0.255139 (0.045975) | 0.337226 / 0.283200 (0.054027) | 0.088055 / 0.141683 (-0.053628) | 1.451892 / 1.452155 (-0.000263) | 1.494881 / 1.492716 (0.002164) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.186749 / 0.018006 (0.168743) | 0.414089 / 0.000490 (0.413600) | 0.002475 / 0.000200 (0.002275) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022413 / 0.037411 (-0.014999) | 0.097547 / 0.014526 (0.083021) | 0.104196 / 0.176557 (-0.072361) | 0.139819 / 0.737135 (-0.597316) | 0.108345 / 0.296338 (-0.187994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424750 / 0.215209 (0.209541) | 4.261513 / 2.077655 (2.183859) | 2.150888 / 1.504120 (0.646768) | 1.935925 / 1.541195 (0.394730) | 1.867456 / 1.468490 
(0.398966) | 0.694384 / 4.584777 (-3.890393) | 3.370539 / 3.745712 (-0.375173) | 1.886714 / 5.269862 (-3.383148) | 1.256542 / 4.565676 (-3.309135) | 0.082841 / 0.424275 (-0.341434) | 0.012344 / 0.007607 (0.004737) | 0.529801 / 0.226044 (0.303757) | 5.315438 / 2.268929 (3.046509) | 2.460517 / 55.444624 (-52.984107) | 2.261840 / 6.876477 (-4.614637) | 2.338710 / 2.142072 (0.196638) | 0.818433 / 4.805227 (-3.986794) | 0.150571 / 6.500664 (-6.350093) | 0.066524 / 0.075469 (-0.008945) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253086 / 1.841788 (-0.588702) | 13.862614 / 8.074308 (5.788306) | 14.145149 / 10.191392 (3.953757) | 0.165867 / 0.680424 (-0.514557) | 0.029269 / 0.534201 (-0.504932) | 0.397579 / 0.579283 (-0.181704) | 0.401113 / 0.434364 (-0.033251) | 0.463269 / 0.540337 (-0.077068) | 0.551494 / 1.386936 (-0.835442) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006610 / 0.011353 (-0.004743) | 0.004583 / 0.011008 (-0.006425) | 0.096680 / 0.038508 (0.058172) | 0.027352 / 0.023109 (0.004242) | 0.409292 / 0.275898 (0.133394) | 0.445790 / 0.323480 (0.122310) | 0.004987 / 0.007986 (-0.002999) | 0.003462 / 0.004328 (-0.000866) | 0.074472 / 0.004250 (0.070221) | 0.037875 / 0.037052 (0.000822) | 0.411496 / 0.258489 (0.153007) | 0.454721 / 0.293841 (0.160880) | 0.031884 / 0.128546 (-0.096662) | 0.011682 / 0.075646 (-0.063964) | 0.318831 / 0.419271 (-0.100441) | 0.041781 / 0.043533 (-0.001752) | 0.411247 / 0.255139 (0.156108) | 0.436215 / 0.283200 (0.153016) | 0.090021 / 0.141683 (-0.051662) | 1.492385 / 1.452155 (0.040231) | 1.565182 / 1.492716 (0.072465) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221263 / 0.018006 (0.203257) | 0.399074 / 0.000490 (0.398584) | 0.000405 / 0.000200 (0.000205) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025139 / 0.037411 (-0.012272) | 0.097952 / 0.014526 (0.083426) | 0.106078 / 0.176557 (-0.070479) | 0.143231 / 0.737135 (-0.593904) | 0.109177 / 0.296338 (-0.187161) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441668 / 0.215209 (0.226459) | 4.403247 / 2.077655 (2.325592) | 2.072749 / 1.504120 (0.568629) | 1.866248 / 1.541195 (0.325053) | 1.906418 / 1.468490 (0.437927) | 0.697234 / 4.584777 (-3.887543) | 3.412016 / 3.745712 (-0.333696) | 1.852572 / 5.269862 (-3.417289) | 1.168270 / 4.565676 (-3.397407) | 0.082132 / 0.424275 (-0.342144) | 0.013191 / 0.007607 (0.005584) | 0.548932 / 0.226044 (0.322888) | 5.503891 / 2.268929 (3.234962) | 2.539784 / 55.444624 (-52.904841) | 2.181292 / 6.876477 (-4.695184) | 2.242197 / 2.142072 (0.100125) | 0.804027 / 4.805227 (-4.001200) | 0.151649 / 6.500664 (-6.349015) | 0.067088 / 0.075469 (-0.008381) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296267 / 1.841788 (-0.545520) | 13.986484 / 8.074308 (5.912176) | 13.440705 / 10.191392 (3.249313) | 0.140787 / 0.680424 (-0.539637) | 0.017132 / 0.534201 (-0.517069) | 0.381899 / 0.579283 (-0.197384) | 0.385535 / 0.434364 (-0.048829) | 0.439957 / 0.540337 (-0.100380) | 0.532980 / 1.386936 (-0.853956) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1516
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1516/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1516/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1516/events
|
https://github.com/huggingface/datasets/pull/1516
| 764,032,327 |
MDExOlB1bGxSZXF1ZXN0NTM4MjkzOTMw
| 1,516 |
adding wrbsc
|
[] |
closed
| false | null | 2 |
2020-12-12T16:38:40Z
|
2020-12-18T09:41:33Z
|
2020-12-18T09:41:33Z
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1516/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1516/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/1516.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1516",
"merged_at": "2020-12-18T09:41:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1516.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1516"
}
| true |
[
"@lhoestq thanks for the comments! Should be fixed in the latest commit, I assume the CI errors are unrelated. ",
"merging since the CI is fixed on master"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/2291
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2291/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2291/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2291/events
|
https://github.com/huggingface/datasets/pull/2291
| 871,216,757 |
MDExOlB1bGxSZXF1ZXN0NjI2MjcyNzE5
| 2,291 |
Don't copy recordbatches in memory during a table deepcopy
|
[] |
closed
| false | null | 0 |
2021-04-29T16:26:05Z
|
2021-04-29T16:34:35Z
|
2021-04-29T16:34:34Z
| null |
Fix issue #2276 and hopefully #2134
The recordbatches of the `IndexedTableMixin` used to speed up queries to the table were copied in memory during a table deepcopy.
This resulted in `concatenate_datasets`, `load_from_disk` and other methods always bringing the data into memory.
I fixed the copy similarly to #2287 and updated the test to make sure it doesn't happen again (added a test for deepcopy + make sure that the immutable arrow objects are passed to the copied table without being copied).
The issue was not caught by our tests because the total allocated bytes value in PyArrow isn't updated when deepcopying recordbatches: the copy in memory wasn't detected. This behavior looks like a bug in PyArrow, I'll open a ticket on JIRA.
Thanks @samsontmr , @TaskManager91 and @mariosasko for the help
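The general pattern, shown as a sketch with an illustrative class (not the actual `IndexedTableMixin` code): register the immutable Arrow objects in the `deepcopy` memo so they are shared instead of rebuilt in memory.
```python
import copy
import pyarrow as pa

class IndexedTable:
    def __init__(self, table: pa.Table):
        self.table = table
        self.batches = table.to_batches()

    def __deepcopy__(self, memo):
        # Arrow tables and record batches are immutable, so the copy can
        # share them with the original instead of duplicating the buffers.
        memo[id(self.table)] = self.table
        memo[id(self.batches)] = self.batches
        cls = self.__class__
        new = cls.__new__(cls)
        memo[id(self)] = new
        for key, value in self.__dict__.items():
            setattr(new, key, copy.deepcopy(value, memo))
        return new

t = IndexedTable(pa.table({"a": list(range(5))}))
t_copy = copy.deepcopy(t)
assert t_copy.table is t.table  # shared, not copied in memory
```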
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2291/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2291/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/2291.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2291",
"merged_at": "2021-04-29T16:34:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2291.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2291"
}
| true |
[] |
https://api.github.com/repos/huggingface/datasets/issues/2660
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2660/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2660/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2660/events
|
https://github.com/huggingface/datasets/pull/2660
| 946,316,180 |
MDExOlB1bGxSZXF1ZXN0NjkxNTA4NzE0
| 2,660 |
Move checks from _map_single to map
|
[] |
closed
| false | null | 3 |
2021-07-16T13:53:33Z
|
2021-09-06T14:12:23Z
|
2021-09-06T14:12:23Z
| null |
The goal of this PR is to remove duplicated checks in the `map` logic and execute them only once whenever possible (`fn_kwargs`, `input_columns`, ...). Additionally, this PR makes the `remove_columns` check consistent with `input_columns` by adding support for a single string value, which is then wrapped into a list.
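The `remove_columns` normalization amounts to something like this (a sketch, not the exact diff):
```python
from typing import List, Optional, Union

def _normalize_columns(columns: Optional[Union[str, List[str]]]) -> Optional[List[str]]:
    # Wrap a single column name into a list, mirroring how `input_columns`
    # is handled, so the check runs once in `map` rather than per shard.
    if isinstance(columns, str):
        return [columns]
    return columns

print(_normalize_columns("text"))          # ['text']
print(_normalize_columns(["text", "id"]))  # ['text', 'id']
print(_normalize_columns(None))            # None
```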
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2660/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2660/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/2660.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2660",
"merged_at": "2021-09-06T14:12:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2660.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2660"
}
| true |
[
"@lhoestq This one has been open for a while. Could you please take a look?",
"@lhoestq Ready for the final review!",
"I forgot to update the signature of `DatasetDict.map`, so did that now."
] |
https://api.github.com/repos/huggingface/datasets/issues/2048
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2048/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2048/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2048/events
|
https://github.com/huggingface/datasets/issues/2048
| 830,953,431 |
MDU6SXNzdWU4MzA5NTM0MzE=
| 2,048 |
github is not always available - probably need a back up
|
[] |
closed
| false | null | 0 |
2021-03-13T18:03:32Z
|
2022-04-01T15:27:10Z
|
2022-04-01T15:27:10Z
| null |
Yesterday morning github wasn't working:
```
:/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
--2021-03-12 18:35:59--  https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2021-03-12 18:36:11 ERROR 500: Internal Server Error.
```
Suggestion: have a failover system that replicates the data on another system and falls back to it if GitHub isn't reachable. Perhaps GitHub could be the master and the replica a slave, so there is only one source of truth.
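A minimal sketch of the client-side fallback this suggestion implies (the mirror URL is a placeholder, not a real endpoint):
```python
import requests

PRIMARY = "https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py"
MIRROR = "https://mirror.example.org/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py"  # hypothetical

def fetch_with_fallback(urls, timeout=10):
    # Try each host in order and return the first successful response body.
    last_error = None
    for url in urls:
        try:
            response = requests.get(url, timeout=timeout)
            if response.status_code == 200:
                return response.content
            last_error = RuntimeError(f"{url} returned {response.status_code}")
        except requests.RequestException as error:
            last_error = error
    raise last_error

script = fetch_with_fallback([PRIMARY, MIRROR])
```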
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2048/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2048/timeline
| null |
completed
| null | null | false |
[] |
https://api.github.com/repos/huggingface/datasets/issues/2756
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2756/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2756/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2756/events
|
https://github.com/huggingface/datasets/pull/2756
| 959,255,646 |
MDExOlB1bGxSZXF1ZXN0NzAyMzk4Mjk1
| 2,756 |
Fix metadata JSON for ubuntu_dialogs_corpus dataset
|
[] |
closed
| false | null | 0 |
2021-08-03T15:48:59Z
|
2021-08-04T09:43:25Z
|
2021-08-04T09:43:25Z
| null |
Related to #2743.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2756/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2756/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/2756.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2756",
"merged_at": "2021-08-04T09:43:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2756.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2756"
}
| true |
[] |
https://api.github.com/repos/huggingface/datasets/issues/258
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/258/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/258/comments
|
https://api.github.com/repos/huggingface/datasets/issues/258/events
|
https://github.com/huggingface/datasets/issues/258
| 635,859,525 |
MDU6SXNzdWU2MzU4NTk1MjU=
| 258 |
Why is the dataset after tokenization far larger than the original one?
|
[] |
closed
| false | null | 4 |
2020-06-10T01:27:07Z
|
2020-06-10T12:46:34Z
|
2020-06-10T12:46:34Z
| null |
I tokenize the wiki dataset with `map` and cache the results.
```
def tokenize_tfm(example):
example['input_ids'] = hf_fast_tokenizer.convert_tokens_to_ids(hf_fast_tokenizer.tokenize(example['text']))
return example
wiki = nlp.load_dataset('wikipedia', '20200501.en', cache_dir=cache_dir)['train']
wiki.map(tokenize_tfm, cache_file_name=cache_dir/"wikipedia/20200501.en/1.0.0/tokenized_wiki.arrow")
```
and when I check their sizes:
```
ls -l --block-size=M
17460M wikipedia-train.arrow
47511M tokenized_wiki.arrow
```
The tokenized one is over 2x the size of the original one.
Is there something I did wrong?
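For reference, the fix suggested in the comments below amounts to dropping the original columns in the same `map` call (a sketch reusing the objects from the snippet above):
```python
# Keep only the new `input_ids` column so the cached Arrow file does not
# also store the original `title` and `text` columns.
wiki = wiki.map(
    tokenize_tfm,
    remove_columns=["title", "text"],
    cache_file_name=cache_dir/"wikipedia/20200501.en/1.0.0/tokenized_wiki.arrow",
)
```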
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/258/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/258/timeline
| null |
completed
| null | null | false |
[
"Hi ! This is because `.map` added the new column `input_ids` to the dataset, and so all the other columns were kept. Therefore the dataset size increased a lot.\r\n If you want to only keep the `input_ids` column, you can stash the other ones by specifying `remove_columns=[\"title\", \"text\"]` in the arguments of `.map`",
"Hi ! Thanks for your reply.\r\n\r\nBut since size of `input_ids` < size of `text`, I am wondering why\r\nsize of `input_ids` + `text` > 2x the size of `text` 🤔",
"Hard to tell... This is probably related to the way apache arrow compresses lists of integers, that may be different from the compression of strings.",
"Thanks for your point. 😀, It might be answer.\r\nSince this is hard to know, I'll close this issue.\r\nBut if somebody knows more details, please comment below ~ 😁"
] |
https://api.github.com/repos/huggingface/datasets/issues/4211
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4211/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4211/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4211/events
|
https://github.com/huggingface/datasets/issues/4211
| 1,214,361,837 |
I_kwDODunzps5IYbDt
| 4,211 |
DatasetDict containing Datasets with different features when pushed to hub gets remapped features
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false | null | 10 |
2022-04-25T11:22:54Z
|
2023-04-06T19:25:50Z
|
2022-05-20T15:15:30Z
| null |
Hi there,
I am trying to push a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features, but if I `push_to_hub` and then `load_dataset`, the features are all the same.
Dataset and code to reproduce available [here](https://huggingface.co/datasets/pietrolesci/robust_nli).
In short:
I have 3 feature mappings:
```python
Tri_features = Features(
{
"idx": Value(dtype="int64"),
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
}
)
Ent_features = Features(
{
"idx": Value(dtype="int64"),
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=2, names=["non-entailment", "entailment"]),
}
)
Con_features = Features(
{
"idx": Value(dtype="int64"),
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]),
}
)
```
Then I create different datasets
```python
dataset_splits = {}
for split in df["split"].unique():
print(split)
df_split = df.loc[df["split"] == split].copy()
if split in Tri_dataset:
df_split["label"] = df_split["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
ds = Dataset.from_pandas(df_split, features=Tri_features)
elif split in Ent_bin_dataset:
df_split["label"] = df_split["label"].map({"non-entailment": 0, "entailment": 1})
ds = Dataset.from_pandas(df_split, features=Ent_features)
elif split in Con_bin_dataset:
df_split["label"] = df_split["label"].map({"non-contradiction": 0, "contradiction": 1})
ds = Dataset.from_pandas(df_split, features=Con_features)
else:
print("ERROR:", split)
dataset_splits[split] = ds
datasets = DatasetDict(dataset_splits)
```
I then push to hub
```python
datasets.push_to_hub("pietrolesci/robust_nli", token="<token>")
```
Finally, I load it from the hub
```python
datasets_loaded_from_hub = load_dataset("pietrolesci/robust_nli")
```
And I get that
```python
datasets["LI_TS"].features != datasets_loaded_from_hub["LI_TS"].features
```
since
```python
"label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"])
```
gets remapped to
```python
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"])
```
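A sketch of the configuration-based approach mentioned in the comments below, assuming a `datasets` version whose `push_to_hub` accepts a `config_name` argument:
```python
from datasets import load_dataset

# Push each differently-featured subset as its own configuration instead
# of as splits of a single DatasetDict (config_name support is assumed).
for name, ds in dataset_splits.items():
    ds.push_to_hub("pietrolesci/robust_nli", config_name=name)

# Reload one configuration later with its own features preserved.
li_ts = load_dataset("pietrolesci/robust_nli", "LI_TS")
```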
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4211/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4211/timeline
| null |
completed
| null | null | false |
[
"Hi @pietrolesci, thanks for reporting.\r\n\r\nPlease note that this is a design purpose: a `DatasetDict` has the same features for all its datasets. Normally, a `DatasetDict` is composed of several sub-datasets each corresponding to a different **split**.\r\n\r\nTo handle sub-datasets with different features, we use another approach: use different **configurations** instead of **splits**.\r\n\r\nHowever, for the moment `push_to_hub` does not support specifying different configurations. IMHO, we should implement this.",
"Hi @albertvillanova,\r\n\r\nThanks a lot for your reply! I got it now. The strange thing for me was to have it correctly working (i.e., DatasetDict with different features in some datasets) locally and not on the Hub. It would be great to have configuration supported by `push_to_hub`. Personally, this latter functionality allowed me to iterate rather quickly on dataset curation.\r\n\r\nAgain, thanks for your time @albertvillanova!\r\n\r\nBest,\r\nPietro",
"Hi! Yes, we should override `DatasetDict.__setitem__` and throw an error if features dictionaries are different. `DatasetDict` is a subclass of `dict`, so `DatasetDict.{update/setdefault}` need to be overridden as well. We could avoid this by subclassing `UserDict`, but then we would get the name collision - `DatasetDict.data` vs. `UserDict.data`. This makes me think we should rename the `data` attribute of `DatasetDict`/`Dataset` for easier dict subclassing (would also simplify https://github.com/huggingface/datasets/pull/3997) and to follow good Python practices. Another option is to have a custom `UserDict` class in `py_utils`, but it can be hard to keep this class consistent with the built-in `UserDict`. \r\n\r\n@albertvillanova @lhoestq wdyt?",
"I would keep things simple and keep subclassing dict. Regarding the features check, I guess this can be done only for `push_to_hub` right ? It is the only function right now that requires the underlying datasets to be splits (e.g. train/test) and have the same features.\r\n\r\nNote that later you will be able to push datasets with different features as different dataset **configurations** (similarly to the [GLUE subsets](https://huggingface.co/datasets/glue) for example). We will work on this soon",
"Hi @lhoestq,\r\n\r\nReturning to this thread to ask whether the possibility to create `DatasetDict` with different configurations will be supported in the future.\r\n\r\nBest,\r\nPietro",
"DatasetDict is likely to always require the datasets to have the same columns and types, while different configurations may have different columns and types.\r\n\r\nWhy would you like to see that ?\r\nIf it's related to push_to_hub, we plan to allow pushing several configs, but not using DatasetDict",
"Hi @lhoestq and @pietrolesci,\r\n\r\nI have been curious about this question as well. I don't have experience working with different configurations, but I can give a bit more detail on the work flow that I have been using with `Dataset_dict`.\r\n\r\nAs @pietrolesci mentions, I have been using `push_to_hub` to quickly iterate on dataset curation for different ML experiments - locally I create a set of dataset splits e.g. `train/val/test/inference`, then convert them to `HF_Datasets` and finally a to `Dataset_Dict` to `push_to_hub`. Where I have run into issues is when I want to include different metadata for different splits. For example, I have situations where I only have meta-data for one of the splits (e.g. test) or situations where I am working with `inference` data that does not have labels. Currently I use a rather hacky work around by adding \"dummy\" columns for missing columns to avoid the error:\r\n\r\n```\r\nValueError: All datasets in `DatasetDict` should have the same features\r\n```\r\n\r\nI am curious why `DatasetDict` will likely not support this functionality? I don't know much about working with different configurations, but allowing for different columns between datasets / splits would be a very helpful use-case for me. Are there any docs for using different configuration OR a more info about incorporating it with `push_to_hub`.\r\n\r\nBest wishes,\r\nJonathan\r\n\r\n",
"+1",
"> I am curious why DatasetDict will likely not support this functionality?\r\n\r\nThere's a possibility we may merge the Dataset and DatasetDict classes. The DatasetDict purpose was to define a way to get the train/test splits of a dataset.\r\n\r\nsee the discussions at https://github.com/huggingface/datasets/issues/5189\r\n\r\n> Are there any docs for using different configuration OR a more info about incorporating it with push_to_hub.\r\n\r\nThere's a PR open to allow to upload a dataset with a certain configuration name. Then later you can reload this specific configuration using `load_dataset(ds_name, config_name)`\r\n\r\nsee the PR at https://github.com/huggingface/datasets/pull/5213",
"Hi, regarding the following information:\r\n\r\n> Please note that this is a design purpose: a `DatasetDict` has the same features for all its datasets. Normally, a `DatasetDict` is composed of several sub-datasets each corresponding to a different **split**.\r\n> \r\n> To handle sub-datasets with different features, we use another approach: use different **configurations** instead of **splits**.\r\n\r\nAltough this is often implied (such as how else would `DatasetDict` be able to process multiple splits in the same way?), I would expect it to be written somewhere in the docs plainly and maybe even in bold. Also I would expect to see it in multiple places such as:\r\n\r\n- in docstring of `DatasetDict`\r\n- in nlp/image/audio guides on how to create a dataset\r\n- [in conceptual guide on how to create a loading script](https://huggingface.co/docs/datasets/main/en/about_dataset_load)\r\n\r\n\r\nI think this addition would benefit the docs, especially when you guide a newbie (such as me) through the process of creating a dataset. As I said, you somehow suspect that this is in fact the case, but without reading it in the docs you cannot be sure."
] |
https://api.github.com/repos/huggingface/datasets/issues/1778
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1778/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1778/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1778/events
|
https://github.com/huggingface/datasets/pull/1778
| 793,474,507 |
MDExOlB1bGxSZXF1ZXN0NTYxMTU2Mzk1
| 1,778 |
Narrative QA Manual
|
[] |
closed
| false | null | 6 |
2021-01-25T15:22:31Z
|
2021-01-29T09:35:14Z
|
2021-01-29T09:34:51Z
| null |
Submitting the manual version of the Narrative QA script, which requires a manual download from the original repository.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1778/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1778/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/1778.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1778",
"merged_at": "2021-01-29T09:34:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1778.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1778"
}
| true |
[
"@lhoestq sorry I opened a new pull request because of some issues with the previous code base. This pull request is originally from #1364",
"Excellent comments. Thanks for those valuable suggestions. I changed everything as you have pointed out :) ",
"I've copied the same template as NarrativeQA now. Please let me know if this is fine. ",
"> Awesome thank you !!\r\n> This looks all good :)\r\n> \r\n> Just before we merge, I was wondering if you knew why the number of examples in the train set went from 1102 to 32747 in your last commit ? I can't see why the changes in the code would cause such a big difference\r\n\r\nOk the change was the way I presented the data. \r\nIn my previous code, I presented a story with a list of questions-answers related to the story per sample. So the total 1102 was the number of stories (not questions) in the train set. \r\n\r\nIn the case of `NarrativeQA`, the code presented each sample data with one single question. So the story gets replicated as many times based on number of questions per story. I felt this was not really memory efficient so I had coded the way I did earlier. \r\n\r\nBut since this would be inconsistent as you pointed out, I modified my code to suit the `NarrativeQA` approach. Hope it's clear now :) ",
"Ok I see ! that makes sense",
"Thanks for your time and helping me with all this :) Really appreciate the hardwork you guys do. "
] |
https://api.github.com/repos/huggingface/datasets/issues/1789
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1789/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1789/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1789/events
|
https://github.com/huggingface/datasets/pull/1789
| 796,229,721 |
MDExOlB1bGxSZXF1ZXN0NTYzNDQyMTc2
| 1,789 |
[BUG FIX] typo in the import path for metrics
|
[] |
closed
| false | null | 0 |
2021-01-28T18:01:37Z
|
2021-01-28T18:13:56Z
|
2021-01-28T18:13:56Z
| null |
This tiny PR fixes a typo introduced in https://github.com/huggingface/datasets/pull/1726 which prevents loading new metrics
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1789/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1789/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/1789.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1789",
"merged_at": "2021-01-28T18:13:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1789.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1789"
}
| true |
[] |
https://api.github.com/repos/huggingface/datasets/issues/4123
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4123/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4123/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4123/events
|
https://github.com/huggingface/datasets/issues/4123
| 1,196,367,512 |
I_kwDODunzps5HTx6Y
| 4,123 |
Building C4 takes forever
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false | null | 1 |
2022-04-07T17:41:30Z
|
2023-06-26T22:01:29Z
|
2023-06-26T22:01:29Z
| null |
## Describe the bug
C4-en is a 300 GB dataset. However, when I try to download it through the hub it takes over _six hours_ to generate the train/test split from the downloaded files. This is an absurd amount of time and an unnecessary waste of resources.
## Steps to reproduce the bug
```python
c4 = datasets.load_dataset("c4", "en")
```
## Expected results
I would like to be able to download pre-split data.
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.13.0-35-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
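In the meantime, one way to avoid paying the generation cost more than once (for example when the default cache directory is not shared between machines) is to persist the built dataset and reload it directly. A minimal sketch; the target path is illustrative, and the ~2 TB estimate comes from the size of the generated Arrow data:

```python
from datasets import load_dataset, load_from_disk

# First (slow) run: download the files and build the Arrow tables once.
c4_en = load_dataset("c4", "en")
c4_en.save_to_disk("/data/c4_en")  # illustrative path, needs roughly 2 TB free

# Later runs (or other machines with access to /data): reload directly,
# without re-running split generation.
c4_en = load_from_disk("/data/c4_en")
```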
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4123/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4123/timeline
| null |
completed
| null | null | false |
[
"Hi @StellaAthena, thanks for reporting.\r\n\r\nPlease note, that our `datasets` library performs several operations in order to load a dataset, among them:\r\n- it downloads all the required files: for C4 \"en\", 378.69 GB of JSON GZIPped files\r\n- it parses their content to generate the dataset\r\n- it caches the dataset in an Arrow file: for C4 \"en\", this file size is 1.87 TB\r\n- it memory-maps the Arrow file\r\n\r\nIf it suits your use case, you might load this dataset in streaming mode:\r\n- no Arrow file is generated\r\n- you can iterate over elements immediately (no need to wait to download all the entire files)\r\n\r\n```python\r\nIn [45]: from datasets import load_dataset\r\n ...: ds = load_dataset(\"c4\", \"en\", split=\"train\", streaming=True)\r\n ...: for item in ds:\r\n ...: print(item)\r\n ...: break\r\n ...: \r\n{'text': 'Beginners BBQ Class Taking Place in Missoula!\\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.', 'timestamp': '2019-04-25T12:57:54Z', 'url': 'https://klyq.com/beginners-bbq-class-taking-place-in-missoula/'}\r\n```\r\nI hope this is useful for your use case."
] |
https://api.github.com/repos/huggingface/datasets/issues/41
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/41/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/41/comments
|
https://api.github.com/repos/huggingface/datasets/issues/41/events
|
https://github.com/huggingface/datasets/pull/41
| 611,739,219 |
MDExOlB1bGxSZXF1ZXN0NDEyODQzNDQy
| 41 |
[Load module] allow kwargs into load module
|
[] |
closed
| false | null | 0 |
2020-05-04T09:42:11Z
|
2020-05-04T19:39:07Z
|
2020-05-04T19:39:06Z
| null |
Currently it is not possible to force a re-download of the dataset script.
This simple change allows passing ``force_reload=True`` as ``builder_kwargs`` in the ``load.py`` function.
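A hypothetical usage sketch, based only on the description above (the ``force_reload`` name and the exact entry point are taken from this PR, not from a documented public API):

```python
import datasets

# Hypothetical: ask the loader to re-fetch the dataset script instead of
# reusing the locally cached copy, via the kwargs forwarded by this change.
squad = datasets.load_dataset("squad", force_reload=True)
```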
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/41/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/41/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/41.diff",
"html_url": "https://github.com/huggingface/datasets/pull/41",
"merged_at": "2020-05-04T19:39:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/41.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/41"
}
| true |
[] |
https://api.github.com/repos/huggingface/datasets/issues/3051
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3051/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3051/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3051/events
|
https://github.com/huggingface/datasets/issues/3051
| 1,021,852,234 |
I_kwDODunzps486DpK
| 3,051 |
Non-Matching Checksum Error with crd3 dataset
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false | null | 2 |
2021-10-10T01:32:43Z
|
2022-03-15T15:54:26Z
|
2022-03-15T15:54:26Z
| null |
## Describe the bug
When I try loading the crd3 dataset (https://huggingface.co/datasets/crd3), an error is thrown.
## Steps to reproduce the bug
```python
dataset = load_dataset('crd3', split='train')
```
## Expected results
I expect no error to be thrown.
## Actual results
A non-matching checksum error is thrown.
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/RevanthRameshkumar/CRD3/archive/master.zip']
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-4.4.0-173-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.10
- PyArrow version: 5.0.0
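While the recorded checksums in the dataset script are being updated, a common workaround is to skip the verification step. A hedged sketch, assuming the `datasets` 1.x keyword name (later versions replace this option); note it only bypasses the check rather than fixing the mismatch:

```python
from datasets import load_dataset

# Skip the checksum/size verification recorded in the dataset script.
# This only masks the mismatch; the script's metadata still needs fixing upstream.
dataset = load_dataset("crd3", split="train", ignore_verifications=True)
```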
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3051/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3051/timeline
| null |
completed
| null | null | false |
[
"I got the same error for another dataset (`multi_woz_v22`):\r\n\r\n```\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json']\r\n```",
"I'm seeing the same issue as @RylanSchaeffer:\r\nPython 3.7.11, macOs 11.4\r\ndatasets==1.14.0\r\n\r\nfails on:\r\n```python\r\ndataset = datasets.load_dataset(\"multi_woz_v22\")\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/6005
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6005/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6005/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6005/events
|
https://github.com/huggingface/datasets/pull/6005
| 1,788,103,576 |
PR_kwDODunzps5UoJ91
| 6,005 |
Drop Python 3.7 support
|
[] |
closed
| false | null | 7 |
2023-07-04T15:02:37Z
|
2023-07-06T15:32:41Z
|
2023-07-06T15:22:43Z
| null |
`hfh` and `transformers` have dropped Python 3.7 support, so we should do the same :).
(Based on the stats, it seems fewer than 10% of users use `datasets` with Python 3.7)
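For reference, a sketch of what the corresponding packaging change usually looks like; this is illustrative only, and the exact files and version classifiers touched by this PR may differ:

```python
# setup.py (illustrative excerpt, not the actual diff of this PR)
from setuptools import setup

setup(
    name="datasets",
    python_requires=">=3.8.0",  # previously ">=3.7.0"
    classifiers=[
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
    ],
    # ... other arguments unchanged ...
)
```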
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6005/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6005/timeline
| null | null | false |
{
"diff_url": "https://github.com/huggingface/datasets/pull/6005.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6005",
"merged_at": "2023-07-06T15:22:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6005.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6005"
}
| true |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006152 / 0.011353 (-0.005200) | 0.003916 / 0.011008 (-0.007092) | 0.097355 / 0.038508 (0.058847) | 0.037228 / 0.023109 (0.014119) | 0.315753 / 0.275898 (0.039855) | 0.387949 / 0.323480 (0.064470) | 0.004804 / 0.007986 (-0.003181) | 0.002975 / 0.004328 (-0.001353) | 0.076932 / 0.004250 (0.072682) | 0.053497 / 0.037052 (0.016445) | 0.331143 / 0.258489 (0.072654) | 0.388347 / 0.293841 (0.094506) | 0.027535 / 0.128546 (-0.101011) | 0.008509 / 0.075646 (-0.067137) | 0.312639 / 0.419271 (-0.106632) | 0.047212 / 0.043533 (0.003679) | 0.316875 / 0.255139 (0.061736) | 0.352191 / 0.283200 (0.068992) | 0.021380 / 0.141683 (-0.120303) | 1.541401 / 1.452155 (0.089247) | 1.519420 / 1.492716 (0.026704) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206332 / 0.018006 (0.188326) | 0.412252 / 0.000490 (0.411762) | 0.005119 / 0.000200 (0.004919) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023856 / 0.037411 (-0.013556) | 0.098216 / 0.014526 (0.083691) | 0.106553 / 0.176557 (-0.070003) | 0.168767 / 0.737135 (-0.568369) | 0.109244 / 0.296338 (-0.187094) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457580 / 0.215209 (0.242371) | 4.583246 / 2.077655 (2.505591) | 2.296356 / 1.504120 (0.792236) | 2.096216 / 1.541195 (0.555021) | 2.159086 / 1.468490 
(0.690596) | 0.557905 / 4.584777 (-4.026872) | 3.345910 / 3.745712 (-0.399802) | 1.767436 / 5.269862 (-3.502426) | 1.021583 / 4.565676 (-3.544094) | 0.067265 / 0.424275 (-0.357011) | 0.011411 / 0.007607 (0.003804) | 0.559841 / 0.226044 (0.333797) | 5.586892 / 2.268929 (3.317963) | 2.735520 / 55.444624 (-52.709104) | 2.429393 / 6.876477 (-4.447084) | 2.544901 / 2.142072 (0.402829) | 0.667603 / 4.805227 (-4.137625) | 0.136244 / 6.500664 (-6.364421) | 0.066961 / 0.075469 (-0.008508) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206529 / 1.841788 (-0.635259) | 13.988306 / 8.074308 (5.913998) | 13.481813 / 10.191392 (3.290421) | 0.161901 / 0.680424 (-0.518523) | 0.016850 / 0.534201 (-0.517351) | 0.367657 / 0.579283 (-0.211626) | 0.393343 / 0.434364 (-0.041021) | 0.465288 / 0.540337 (-0.075050) | 0.559888 / 1.386936 (-0.827048) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005956 / 0.011353 (-0.005397) | 0.003734 / 0.011008 (-0.007274) | 0.077841 / 0.038508 (0.039333) | 0.036532 / 0.023109 (0.013422) | 0.438923 / 0.275898 (0.163025) | 0.490133 / 0.323480 (0.166653) | 0.004651 / 0.007986 (-0.003335) | 0.002881 / 0.004328 (-0.001448) | 0.077868 / 0.004250 (0.073618) | 0.051700 / 0.037052 (0.014647) | 0.448018 / 0.258489 (0.189529) | 0.500304 / 0.293841 (0.206464) | 0.029051 / 0.128546 (-0.099496) | 0.008498 / 0.075646 (-0.067148) | 0.082932 / 0.419271 (-0.336339) | 0.043665 / 0.043533 (0.000132) | 0.431613 / 0.255139 (0.176474) | 0.458749 / 0.283200 (0.175549) | 0.021951 / 0.141683 (-0.119731) | 1.556043 / 1.452155 (0.103888) | 1.588391 / 1.492716 (0.095675) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220674 / 0.018006 (0.202667) | 0.415408 / 0.000490 (0.414918) | 0.002613 / 0.000200 (0.002413) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025548 / 0.037411 (-0.011863) | 0.103633 / 0.014526 (0.089107) | 0.115193 / 0.176557 (-0.061364) | 0.163971 / 0.737135 (-0.573164) | 0.114754 / 0.296338 (-0.181585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456823 / 0.215209 (0.241614) | 4.569950 / 2.077655 (2.492296) | 2.196339 / 1.504120 (0.692219) | 1.985822 / 1.541195 (0.444628) | 2.044083 / 1.468490 (0.575593) | 0.567919 / 4.584777 (-4.016858) | 3.397515 / 3.745712 (-0.348197) | 1.741087 / 5.269862 (-3.528775) | 1.041237 / 4.565676 (-3.524440) | 0.068963 / 0.424275 (-0.355313) | 0.011677 / 0.007607 (0.004070) | 0.565010 / 0.226044 (0.338966) | 5.625886 / 2.268929 (3.356957) | 2.670658 / 55.444624 (-52.773967) | 2.300279 / 6.876477 (-4.576198) | 2.392178 / 2.142072 (0.250106) | 0.680226 / 4.805227 (-4.125001) | 0.139119 / 6.500664 (-6.361545) | 0.067953 / 0.075469 (-0.007516) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303280 / 1.841788 (-0.538507) | 14.458686 / 8.074308 (6.384378) | 14.409369 / 10.191392 (4.217977) | 0.144581 / 0.680424 (-0.535843) | 0.016634 / 0.534201 (-0.517567) | 0.364607 / 0.579283 (-0.214676) | 0.394521 / 0.434364 (-0.039843) | 0.433417 / 0.540337 (-0.106921) | 0.527127 / 1.386936 (-0.859809) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006245 / 0.011353 (-0.005108) | 0.003871 / 0.011008 (-0.007138) | 0.098823 / 0.038508 (0.060315) | 0.039853 / 0.023109 (0.016744) | 0.314989 / 0.275898 (0.039091) | 0.376733 / 0.323480 (0.053254) | 0.004754 / 0.007986 (-0.003232) | 0.002971 / 0.004328 (-0.001357) | 0.078451 / 0.004250 (0.074201) | 0.053160 / 0.037052 (0.016107) | 0.324443 / 0.258489 (0.065954) | 0.361488 / 0.293841 (0.067647) | 0.027942 / 0.128546 (-0.100604) | 0.008535 / 0.075646 (-0.067111) | 0.315526 / 0.419271 (-0.103745) | 0.045706 / 0.043533 (0.002174) | 0.329614 / 0.255139 (0.074475) | 0.336339 / 0.283200 (0.053139) | 0.021278 / 0.141683 (-0.120405) | 1.529710 / 1.452155 (0.077555) | 1.566833 / 1.492716 (0.074116) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215263 / 0.018006 (0.197257) | 0.440320 / 0.000490 (0.439830) | 0.002627 / 0.000200 (0.002427) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023971 / 0.037411 (-0.013441) | 0.100549 / 0.014526 (0.086023) | 0.106995 / 0.176557 (-0.069561) | 0.169630 / 0.737135 (-0.567505) | 0.111614 / 0.296338 (-0.184724) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424911 / 0.215209 (0.209702) | 4.246920 / 2.077655 (2.169266) | 1.923321 / 1.504120 (0.419202) | 1.714795 / 1.541195 (0.173600) | 1.772906 / 1.468490 
(0.304416) | 0.554676 / 4.584777 (-4.030101) | 3.478896 / 3.745712 (-0.266816) | 2.800494 / 5.269862 (-2.469368) | 1.382630 / 4.565676 (-3.183047) | 0.067271 / 0.424275 (-0.357004) | 0.010967 / 0.007607 (0.003360) | 0.526769 / 0.226044 (0.300725) | 5.288564 / 2.268929 (3.019636) | 2.337459 / 55.444624 (-53.107165) | 1.999975 / 6.876477 (-4.876502) | 2.102680 / 2.142072 (-0.039392) | 0.672181 / 4.805227 (-4.133046) | 0.135097 / 6.500664 (-6.365567) | 0.066950 / 0.075469 (-0.008519) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.264365 / 1.841788 (-0.577423) | 14.282440 / 8.074308 (6.208132) | 14.220200 / 10.191392 (4.028808) | 0.139055 / 0.680424 (-0.541369) | 0.016681 / 0.534201 (-0.517520) | 0.367936 / 0.579283 (-0.211348) | 0.393959 / 0.434364 (-0.040404) | 0.424438 / 0.540337 (-0.115900) | 0.508065 / 1.386936 (-0.878872) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006514 / 0.011353 (-0.004839) | 0.003890 / 0.011008 (-0.007118) | 0.078871 / 0.038508 (0.040363) | 0.038080 / 0.023109 (0.014971) | 0.358282 / 0.275898 (0.082384) | 0.430654 / 0.323480 (0.107174) | 0.005712 / 0.007986 (-0.002273) | 0.003030 / 0.004328 (-0.001299) | 0.078636 / 0.004250 (0.074386) | 0.057771 / 0.037052 (0.020719) | 0.368814 / 0.258489 (0.110325) | 0.437047 / 0.293841 (0.143206) | 0.029470 / 0.128546 (-0.099076) | 0.008523 / 0.075646 (-0.067124) | 0.083334 / 0.419271 (-0.335938) | 0.044505 / 0.043533 (0.000972) | 0.357484 / 0.255139 (0.102345) | 0.393839 / 0.283200 (0.110639) | 0.023340 / 0.141683 (-0.118343) | 1.561033 / 1.452155 (0.108878) | 1.595560 / 1.492716 (0.102844) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204149 / 0.018006 (0.186143) | 0.442747 / 0.000490 (0.442257) | 0.003105 / 0.000200 (0.002905) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027002 / 0.037411 (-0.010409) | 0.105595 / 0.014526 (0.091070) | 0.108695 / 0.176557 (-0.067861) | 0.163182 / 0.737135 (-0.573953) | 0.114999 / 0.296338 (-0.181339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.483713 / 0.215209 (0.268504) | 4.836063 / 2.077655 (2.758409) | 2.488072 / 1.504120 (0.983952) | 2.289556 / 1.541195 (0.748361) | 2.342912 / 1.468490 (0.874422) | 0.565937 / 4.584777 (-4.018840) | 3.479085 / 3.745712 (-0.266627) | 1.770922 / 5.269862 (-3.498940) | 1.046084 / 4.565676 (-3.519592) | 0.067857 / 0.424275 (-0.356418) | 0.011283 / 0.007607 (0.003676) | 0.592966 / 0.226044 (0.366921) | 5.932842 / 2.268929 (3.663914) | 2.956252 / 55.444624 (-52.488372) | 2.602704 / 6.876477 (-4.273772) | 2.715625 / 2.142072 (0.573552) | 0.674299 / 4.805227 (-4.130929) | 0.136039 / 6.500664 (-6.364625) | 0.067629 / 0.075469 (-0.007840) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.333734 / 1.841788 (-0.508054) | 14.561943 / 8.074308 (6.487634) | 14.455385 / 10.191392 (4.263993) | 0.132020 / 0.680424 (-0.548404) | 0.016893 / 0.534201 (-0.517308) | 0.367146 / 0.579283 (-0.212137) | 0.399623 / 0.434364 (-0.034741) | 0.432658 / 0.540337 (-0.107680) | 0.530475 / 1.386936 (-0.856461) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006045 / 0.011353 (-0.005308) | 0.003906 / 0.011008 (-0.007103) | 0.097558 / 0.038508 (0.059050) | 0.038827 / 0.023109 (0.015718) | 0.393564 / 0.275898 (0.117666) | 0.442459 / 0.323480 (0.118980) | 0.004792 / 0.007986 (-0.003194) | 0.002984 / 0.004328 (-0.001345) | 0.076419 / 0.004250 (0.072169) | 0.053606 / 0.037052 (0.016554) | 0.409743 / 0.258489 (0.151254) | 0.445753 / 0.293841 (0.151912) | 0.027753 / 0.128546 (-0.100793) | 0.008428 / 0.075646 (-0.067219) | 0.310267 / 0.419271 (-0.109004) | 0.057582 / 0.043533 (0.014049) | 0.396624 / 0.255139 (0.141485) | 0.416288 / 0.283200 (0.133089) | 0.029048 / 0.141683 (-0.112635) | 1.495362 / 1.452155 (0.043207) | 1.546331 / 1.492716 (0.053615) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203832 / 0.018006 (0.185826) | 0.423649 / 0.000490 (0.423160) | 0.004533 / 0.000200 (0.004333) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023084 / 0.037411 (-0.014328) | 0.100503 / 0.014526 (0.085977) | 0.105058 / 0.176557 (-0.071499) | 0.168506 / 0.737135 (-0.568629) | 0.112019 / 0.296338 (-0.184320) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425877 / 0.215209 (0.210668) | 4.251278 / 2.077655 (2.173624) | 1.931339 / 1.504120 (0.427219) | 1.730578 / 1.541195 (0.189383) | 1.750637 / 1.468490 
(0.282147) | 0.559307 / 4.584777 (-4.025470) | 3.461665 / 3.745712 (-0.284047) | 2.826959 / 5.269862 (-2.442903) | 1.418448 / 4.565676 (-3.147229) | 0.067881 / 0.424275 (-0.356394) | 0.011394 / 0.007607 (0.003787) | 0.533226 / 0.226044 (0.307181) | 5.341849 / 2.268929 (3.072921) | 2.367832 / 55.444624 (-53.076792) | 2.027240 / 6.876477 (-4.849236) | 2.095852 / 2.142072 (-0.046220) | 0.673790 / 4.805227 (-4.131437) | 0.136044 / 6.500664 (-6.364620) | 0.066350 / 0.075469 (-0.009119) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203740 / 1.841788 (-0.638048) | 13.720879 / 8.074308 (5.646571) | 13.405939 / 10.191392 (3.214547) | 0.146792 / 0.680424 (-0.533632) | 0.016844 / 0.534201 (-0.517357) | 0.373455 / 0.579283 (-0.205828) | 0.394596 / 0.434364 (-0.039768) | 0.464715 / 0.540337 (-0.075623) | 0.558931 / 1.386936 (-0.828005) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006118 / 0.011353 (-0.005235) | 0.003817 / 0.011008 (-0.007191) | 0.077494 / 0.038508 (0.038985) | 0.037507 / 0.023109 (0.014398) | 0.387030 / 0.275898 (0.111132) | 0.437352 / 0.323480 (0.113872) | 0.004810 / 0.007986 (-0.003176) | 0.002935 / 0.004328 (-0.001394) | 0.077143 / 0.004250 (0.072892) | 0.053986 / 0.037052 (0.016933) | 0.393164 / 0.258489 (0.134675) | 0.449603 / 0.293841 (0.155762) | 0.029303 / 0.128546 (-0.099244) | 0.008481 / 0.075646 (-0.067165) | 0.083363 / 0.419271 (-0.335908) | 0.043877 / 0.043533 (0.000344) | 0.378175 / 0.255139 (0.123036) | 0.403996 / 0.283200 (0.120797) | 0.021688 / 0.141683 (-0.119995) | 1.541606 / 1.452155 (0.089452) | 1.552996 / 1.492716 (0.060280) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236759 / 0.018006 (0.218752) | 0.416221 / 0.000490 (0.415732) | 0.000862 / 0.000200 (0.000662) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025543 / 0.037411 (-0.011868) | 0.101731 / 0.014526 (0.087206) | 0.108482 / 0.176557 (-0.068075) | 0.160290 / 0.737135 (-0.576845) | 0.111392 / 0.296338 (-0.184946) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457767 / 0.215209 (0.242558) | 4.565976 / 2.077655 (2.488321) | 2.245413 / 1.504120 (0.741294) | 2.031458 / 1.541195 (0.490264) | 2.073193 / 1.468490 (0.604702) | 0.560461 / 4.584777 (-4.024316) | 3.422536 / 3.745712 (-0.323176) | 2.977017 / 5.269862 (-2.292845) | 1.377021 / 4.565676 (-3.188655) | 0.068444 / 0.424275 (-0.355831) | 0.011036 / 0.007607 (0.003429) | 0.571501 / 0.226044 (0.345456) | 5.702652 / 2.268929 (3.433723) | 2.727132 / 55.444624 (-52.717492) | 2.399269 / 6.876477 (-4.477208) | 2.574281 / 2.142072 (0.432208) | 0.682600 / 4.805227 (-4.122627) | 0.136943 / 6.500664 (-6.363722) | 0.067126 / 0.075469 (-0.008343) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.322196 / 1.841788 (-0.519592) | 14.239509 / 8.074308 (6.165201) | 14.235779 / 10.191392 (4.044387) | 0.148262 / 0.680424 (-0.532162) | 0.016566 / 0.534201 (-0.517635) | 0.364034 / 0.579283 (-0.215249) | 0.399157 / 0.434364 (-0.035207) | 0.426348 / 0.540337 (-0.113990) | 0.520804 / 1.386936 (-0.866132) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007808 / 0.011353 (-0.003545) | 0.004706 / 0.011008 (-0.006303) | 0.100530 / 0.038508 (0.062022) | 0.052052 / 0.023109 (0.028943) | 0.419300 / 0.275898 (0.143402) | 0.488451 / 0.323480 (0.164971) | 0.006350 / 0.007986 (-0.001636) | 0.003875 / 0.004328 (-0.000453) | 0.076489 / 0.004250 (0.072238) | 0.077554 / 0.037052 (0.040502) | 0.435863 / 0.258489 (0.177373) | 0.483241 / 0.293841 (0.189400) | 0.037518 / 0.128546 (-0.091028) | 0.009857 / 0.075646 (-0.065789) | 0.340933 / 0.419271 (-0.078339) | 0.087046 / 0.043533 (0.043514) | 0.410721 / 0.255139 (0.155582) | 0.428995 / 0.283200 (0.145795) | 0.041701 / 0.141683 (-0.099982) | 1.821017 / 1.452155 (0.368862) | 1.837021 / 1.492716 (0.344305) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228444 / 0.018006 (0.210438) | 0.480446 / 0.000490 (0.479956) | 0.004963 / 0.000200 (0.004763) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032485 / 0.037411 (-0.004926) | 0.096500 / 0.014526 (0.081974) | 0.111547 / 0.176557 (-0.065010) | 0.178842 / 0.737135 (-0.558294) | 0.111099 / 0.296338 (-0.185240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.467159 / 0.215209 (0.251950) | 4.701676 / 2.077655 (2.624021) | 2.390560 / 1.504120 (0.886440) | 2.197722 / 1.541195 (0.656528) | 2.264705 / 1.468490 
(0.796215) | 0.568667 / 4.584777 (-4.016110) | 4.200724 / 3.745712 (0.455012) | 3.777625 / 5.269862 (-1.492236) | 2.372451 / 4.565676 (-2.193225) | 0.067562 / 0.424275 (-0.356714) | 0.008947 / 0.007607 (0.001340) | 0.556910 / 0.226044 (0.330865) | 5.528927 / 2.268929 (3.259998) | 2.902780 / 55.444624 (-52.541844) | 2.507933 / 6.876477 (-4.368544) | 2.734627 / 2.142072 (0.592554) | 0.683305 / 4.805227 (-4.121922) | 0.158288 / 6.500664 (-6.342376) | 0.071252 / 0.075469 (-0.004217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.487502 / 1.841788 (-0.354286) | 22.193341 / 8.074308 (14.119033) | 15.922607 / 10.191392 (5.731215) | 0.172189 / 0.680424 (-0.508235) | 0.021502 / 0.534201 (-0.512699) | 0.471198 / 0.579283 (-0.108085) | 0.475979 / 0.434364 (0.041615) | 0.544675 / 0.540337 (0.004338) | 0.756102 / 1.386936 (-0.630834) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007635 / 0.011353 (-0.003717) | 0.004614 / 0.011008 (-0.006394) | 0.075852 / 0.038508 (0.037344) | 0.049700 / 0.023109 (0.026591) | 0.425957 / 0.275898 (0.150059) | 0.512590 / 0.323480 (0.189110) | 0.006921 / 0.007986 (-0.001065) | 0.003714 / 0.004328 (-0.000615) | 0.075536 / 0.004250 (0.071286) | 0.070206 / 0.037052 (0.033153) | 0.455706 / 0.258489 (0.197217) | 0.512231 / 0.293841 (0.218390) | 0.036685 / 0.128546 (-0.091861) | 0.009793 / 0.075646 (-0.065853) | 0.084208 / 0.419271 (-0.335064) | 0.065262 / 0.043533 (0.021729) | 0.423761 / 0.255139 (0.168622) | 0.456791 / 0.283200 (0.173591) | 0.044539 / 0.141683 (-0.097144) | 1.797029 / 1.452155 (0.344874) | 1.864124 / 1.492716 (0.371408) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.366840 / 0.018006 (0.348834) | 0.479254 / 0.000490 (0.478765) | 0.070383 / 0.000200 (0.070183) | 0.000762 / 0.000054 (0.000707) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034233 / 0.037411 (-0.003178) | 0.103140 / 0.014526 (0.088614) | 0.117099 / 0.176557 (-0.059457) | 0.178532 / 0.737135 (-0.558603) | 0.120092 / 0.296338 (-0.176247) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492993 / 0.215209 (0.277784) | 4.878776 / 2.077655 (2.801121) | 2.566666 / 1.504120 (1.062547) | 2.356383 / 1.541195 (0.815188) | 2.454723 / 1.468490 (0.986233) | 0.571432 / 4.584777 (-4.013345) | 4.240554 / 3.745712 (0.494842) | 7.509259 / 5.269862 (2.239398) | 4.040294 / 4.565676 (-0.525382) | 0.067409 / 0.424275 (-0.356866) | 0.008657 / 0.007607 (0.001050) | 0.585751 / 0.226044 (0.359707) | 5.967668 / 2.268929 (3.698739) | 3.195573 / 55.444624 (-52.249052) | 2.839772 / 6.876477 (-4.036704) | 2.806319 / 2.142072 (0.664246) | 0.681502 / 4.805227 (-4.123725) | 0.158673 / 6.500664 (-6.341991) | 0.073224 / 0.075469 (-0.002245) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.623335 / 1.841788 (-0.218453) | 22.490806 / 8.074308 (14.416498) | 16.762435 / 10.191392 (6.571043) | 0.180961 / 0.680424 (-0.499463) | 0.022716 / 0.534201 (-0.511485) | 0.472910 / 0.579283 (-0.106373) | 0.471616 / 0.434364 (0.037252) | 0.548192 / 0.540337 (0.007854) | 0.734357 / 1.386936 (-0.652579) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005858 / 0.011353 (-0.005495) | 0.003512 / 0.011008 (-0.007497) | 0.079739 / 0.038508 (0.041231) | 0.057736 / 0.023109 (0.034627) | 0.317640 / 0.275898 (0.041742) | 0.354157 / 0.323480 (0.030677) | 0.004772 / 0.007986 (-0.003214) | 0.002824 / 0.004328 (-0.001504) | 0.063288 / 0.004250 (0.059037) | 0.049542 / 0.037052 (0.012489) | 0.323974 / 0.258489 (0.065485) | 0.372149 / 0.293841 (0.078308) | 0.026841 / 0.128546 (-0.101705) | 0.007846 / 0.075646 (-0.067800) | 0.262546 / 0.419271 (-0.156725) | 0.051952 / 0.043533 (0.008420) | 0.319439 / 0.255139 (0.064300) | 0.343862 / 0.283200 (0.060663) | 0.027021 / 0.141683 (-0.114662) | 1.445211 / 1.452155 (-0.006944) | 1.485006 / 1.492716 (-0.007711) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183174 / 0.018006 (0.165167) | 0.422794 / 0.000490 (0.422304) | 0.004148 / 0.000200 (0.003948) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023037 / 0.037411 (-0.014374) | 0.071300 / 0.014526 (0.056775) | 0.083022 / 0.176557 (-0.093535) | 0.146215 / 0.737135 (-0.590920) | 0.082549 / 0.296338 (-0.213789) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422846 / 0.215209 (0.207637) | 4.215280 / 2.077655 (2.137626) | 2.256802 / 1.504120 (0.752682) | 2.056867 / 1.541195 (0.515673) | 2.102478 / 1.468490 
(0.633988) | 0.497552 / 4.584777 (-4.087225) | 3.049716 / 3.745712 (-0.695996) | 4.209227 / 5.269862 (-1.060635) | 2.599947 / 4.565676 (-1.965730) | 0.059131 / 0.424275 (-0.365144) | 0.006459 / 0.007607 (-0.001148) | 0.495047 / 0.226044 (0.269003) | 4.952332 / 2.268929 (2.683404) | 2.675260 / 55.444624 (-52.769365) | 2.333223 / 6.876477 (-4.543254) | 2.449573 / 2.142072 (0.307500) | 0.583420 / 4.805227 (-4.221807) | 0.125140 / 6.500664 (-6.375524) | 0.060209 / 0.075469 (-0.015260) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.215033 / 1.841788 (-0.626755) | 18.101107 / 8.074308 (10.026799) | 13.489222 / 10.191392 (3.297830) | 0.147122 / 0.680424 (-0.533302) | 0.016567 / 0.534201 (-0.517634) | 0.329909 / 0.579283 (-0.249374) | 0.340952 / 0.434364 (-0.093412) | 0.379166 / 0.540337 (-0.161172) | 0.510767 / 1.386936 (-0.876169) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005942 / 0.011353 (-0.005411) | 0.003628 / 0.011008 (-0.007380) | 0.061975 / 0.038508 (0.023467) | 0.058331 / 0.023109 (0.035221) | 0.393277 / 0.275898 (0.117379) | 0.410740 / 0.323480 (0.087261) | 0.004546 / 0.007986 (-0.003440) | 0.002826 / 0.004328 (-0.001503) | 0.062216 / 0.004250 (0.057966) | 0.049801 / 0.037052 (0.012748) | 0.394070 / 0.258489 (0.135581) | 0.414407 / 0.293841 (0.120566) | 0.027161 / 0.128546 (-0.101385) | 0.007901 / 0.075646 (-0.067746) | 0.066778 / 0.419271 (-0.352493) | 0.041354 / 0.043533 (-0.002179) | 0.379432 / 0.255139 (0.124293) | 0.402966 / 0.283200 (0.119766) | 0.020279 / 0.141683 (-0.121404) | 1.416986 / 1.452155 (-0.035169) | 1.474335 / 1.492716 (-0.018382) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226147 / 0.018006 (0.208140) | 0.404361 / 0.000490 (0.403871) | 0.000358 / 0.000200 (0.000158) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025105 / 0.037411 (-0.012306) | 0.075849 / 0.014526 (0.061323) | 0.084781 / 0.176557 (-0.091775) | 0.137415 / 0.737135 (-0.599720) | 0.086288 / 0.296338 (-0.210051) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445925 / 0.215209 (0.230716) | 4.453478 / 2.077655 (2.375823) | 2.419048 / 1.504120 (0.914928) | 2.246363 / 1.541195 (0.705168) | 2.304022 / 1.468490 (0.835532) | 0.499132 / 4.584777 (-4.085645) | 3.001336 / 3.745712 (-0.744376) | 2.902593 / 5.269862 (-2.367269) | 1.819843 / 4.565676 (-2.745834) | 0.057210 / 0.424275 (-0.367065) | 0.006338 / 0.007607 (-0.001269) | 0.523280 / 0.226044 (0.297236) | 5.235969 / 2.268929 (2.967040) | 2.897585 / 55.444624 (-52.547039) | 2.541586 / 6.876477 (-4.334891) | 2.564233 / 2.142072 (0.422160) | 0.584714 / 4.805227 (-4.220513) | 0.124611 / 6.500664 (-6.376053) | 0.061774 / 0.075469 (-0.013695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.349799 / 1.841788 (-0.491988) | 18.225076 / 8.074308 (10.150768) | 13.781518 / 10.191392 (3.590126) | 0.130562 / 0.680424 (-0.549862) | 0.016434 / 0.534201 (-0.517767) | 0.331607 / 0.579283 (-0.247676) | 0.343456 / 0.434364 (-0.090908) | 0.380437 / 0.540337 (-0.159900) | 0.522793 / 1.386936 (-0.864143) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.013721 / 0.011353 (0.002368) | 0.005715 / 0.011008 (-0.005293) | 0.090116 / 0.038508 (0.051608) | 0.087185 / 0.023109 (0.064075) | 0.427813 / 0.275898 (0.151915) | 0.390614 / 0.323480 (0.067135) | 0.006976 / 0.007986 (-0.001009) | 0.004231 / 0.004328 (-0.000098) | 0.078320 / 0.004250 (0.074070) | 0.066235 / 0.037052 (0.029183) | 0.439904 / 0.258489 (0.181415) | 0.424119 / 0.293841 (0.130278) | 0.050362 / 0.128546 (-0.078184) | 0.014992 / 0.075646 (-0.060654) | 0.293519 / 0.419271 (-0.125753) | 0.066906 / 0.043533 (0.023373) | 0.449657 / 0.255139 (0.194518) | 0.393800 / 0.283200 (0.110600) | 0.032258 / 0.141683 (-0.109425) | 1.539534 / 1.452155 (0.087379) | 1.675292 / 1.492716 (0.182576) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210515 / 0.018006 (0.192508) | 0.506817 / 0.000490 (0.506327) | 0.001938 / 0.000200 (0.001738) | 0.000118 / 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026019 / 0.037411 (-0.011393) | 0.080635 / 0.014526 (0.066109) | 0.103050 / 0.176557 (-0.073507) | 0.160597 / 0.737135 (-0.576538) | 0.095844 / 0.296338 (-0.200495) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506359 / 0.215209 (0.291150) | 5.041586 / 2.077655 (2.963931) | 2.198288 / 1.504120 (0.694168) | 1.987544 / 1.541195 (0.446349) | 1.866790 / 1.468490 
(0.398300) | 0.681642 / 4.584777 (-3.903135) | 4.719306 / 3.745712 (0.973593) | 7.669869 / 5.269862 (2.400008) | 4.466082 / 4.565676 (-0.099595) | 0.092974 / 0.424275 (-0.331301) | 0.008196 / 0.007607 (0.000589) | 0.707656 / 0.226044 (0.481612) | 6.974507 / 2.268929 (4.705579) | 3.254206 / 55.444624 (-52.190418) | 2.499019 / 6.876477 (-4.377457) | 2.509089 / 2.142072 (0.367017) | 0.915952 / 4.805227 (-3.889276) | 0.192119 / 6.500664 (-6.308545) | 0.065473 / 0.075469 (-0.009996) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.309078 / 1.841788 (-0.532710) | 19.660348 / 8.074308 (11.586040) | 16.659582 / 10.191392 (6.468190) | 0.194315 / 0.680424 (-0.486109) | 0.027773 / 0.534201 (-0.506428) | 0.401241 / 0.579283 (-0.178042) | 0.515799 / 0.434364 (0.081435) | 0.488772 / 0.540337 (-0.051566) | 0.604790 / 1.386936 (-0.782146) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006823 / 0.011353 (-0.004530) | 0.003940 / 0.011008 (-0.007068) | 0.061533 / 0.038508 (0.023025) | 0.065241 / 0.023109 (0.042132) | 0.411790 / 0.275898 (0.135892) | 0.475720 / 0.323480 (0.152241) | 0.005376 / 0.007986 (-0.002609) | 0.003433 / 0.004328 (-0.000895) | 0.065703 / 0.004250 (0.061452) | 0.050736 / 0.037052 (0.013683) | 0.435890 / 0.258489 (0.177401) | 0.436698 / 0.293841 (0.142857) | 0.040357 / 0.128546 (-0.088189) | 0.011578 / 0.075646 (-0.064069) | 0.072831 / 0.419271 (-0.346440) | 0.055698 / 0.043533 (0.012165) | 0.408225 / 0.255139 (0.153086) | 0.439551 / 0.283200 (0.156352) | 0.030469 / 0.141683 (-0.111214) | 1.443866 / 1.452155 (-0.008289) | 1.502022 / 1.492716 (0.009306) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290338 / 0.018006 (0.272332) | 0.540726 / 0.000490 (0.540236) | 0.003244 / 0.000200 (0.003044) | 0.000170 / 0.000054 (0.000116) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030865 / 0.037411 (-0.006547) | 0.090866 / 0.014526 (0.076340) | 0.106224 / 0.176557 (-0.070332) | 0.166583 / 0.737135 (-0.570553) | 0.104448 / 0.296338 (-0.191891) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.518025 / 0.215209 (0.302816) | 6.027065 / 2.077655 (3.949410) | 2.671840 / 1.504120 (1.167720) | 2.273949 / 1.541195 (0.732754) | 2.414892 / 1.468490 (0.946402) | 0.774318 / 4.584777 (-3.810459) | 5.020364 / 3.745712 (1.274652) | 4.146927 / 5.269862 (-1.122934) | 2.584598 / 4.565676 (-1.981078) | 0.089519 / 0.424275 (-0.334756) | 0.009181 / 0.007607 (0.001574) | 0.654467 / 0.226044 (0.428423) | 6.421595 / 2.268929 (4.152666) | 3.091589 / 55.444624 (-52.353036) | 2.554798 / 6.876477 (-4.321679) | 2.441354 / 2.142072 (0.299282) | 0.943386 / 4.805227 (-3.861841) | 0.173641 / 6.500664 (-6.327023) | 0.072209 / 0.075469 (-0.003260) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.557147 / 1.841788 (-0.284641) | 19.980747 / 8.074308 (11.906439) | 17.816813 / 10.191392 (7.625421) | 0.212078 / 0.680424 (-0.468346) | 0.025435 / 0.534201 (-0.508766) | 0.396200 / 0.579283 (-0.183084) | 0.546249 / 0.434364 (0.111885) | 0.459632 / 0.540337 (-0.080705) | 0.616548 / 1.386936 (-0.770388) |\n\n</details>\n</details>\n\n\n"
] |