---
dataset_info:
  features:
  - name: source
    dtype: string
  - name: task_type
    dtype: string
  - name: in_source_id
    dtype: string
  - name: problem
    dtype: string
  - name: gold_standard_solution
    dtype: string
  - name: problem_id
    dtype: string
  - name: metadata
    struct:
    - name: difficulty
      dtype: string
    - name: memory_limit
      dtype: string
    - name: memory_limit_bytes
      dtype: int64
    - name: problem_url
      dtype: string
    - name: time_limit
      dtype: string
  - name: verification_info
    struct:
    - name: language
      dtype: string
    - name: test_cases
      list:
      - name: fn_name
        dtype: string
      - name: input
        dtype: string
      - name: output
        dtype: string
      - name: type
        dtype: string
  splits:
  - name: train
    num_bytes: 385930754
    num_examples: 27839
  download_size: 235246000
  dataset_size: 385930754
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
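The dataset can be loaded with the `datasets` library. The snippet below is a minimal sketch for inspecting the fields listed in the schema above; it assumes the repository id matches the `push_to_hub` target used in the build script further down.

```python
from datasets import load_dataset

# Assumed repository id: taken from the push_to_hub call in the build script below.
ds = load_dataset(
    "rasdani/verifiable-coding-problems-python_decontaminated_fewer_test_cases",
    split="train",
)

example = ds[0]
print(example["problem"][:200])                         # problem statement
print(example["metadata"]["time_limit"])                # per-problem limits from the metadata struct
print(len(example["verification_info"]["test_cases"]))  # number of retained test cases
```

The dataset was built from `open-r1/verifiable-coding-problems-python_decontaminated` by capping the number of test cases kept per example, with the cap sampled uniformly between 1 and 6: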
```python
import random

import datasets


def limit_test_cases_uniformly(example, max_test_cases=6):
    # Sample a cap uniformly from 1..max_test_cases and truncate the
    # example's test cases to that many.
    num_test_cases = random.randint(1, max_test_cases)
    example['verification_info']['test_cases'] = example['verification_info']['test_cases'][:num_test_cases]
    return example


ds = datasets.load_dataset("open-r1/verifiable-coding-problems-python_decontaminated", split="train")
ds_filtered = ds.map(limit_test_cases_uniformly, num_proc=10)
ds_filtered.push_to_hub("rasdani/verifiable-coding-problems-python_decontaminated_fewer_test_cases")
```
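A quick way to check the effect of the cap is to recount the test cases per example after the `map` step. A small sketch, reusing `ds_filtered` from the script above:

```python
from collections import Counter

# Distribution of test-case counts after truncation; each example should now
# keep at most 6 test cases.
counts = Counter(len(ex["verification_info"]["test_cases"]) for ex in ds_filtered)
for n, freq in sorted(counts.items()):
    print(f"{n} test cases: {freq} examples")
```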