---
license: cc-by-4.0
---

# ICPC World Finals Dataset

## Dataset Description

This dataset collects 146 problems from the ICPC World Finals, spanning the years 2011 to 2023, and serves as a benchmark for code generation. It can be used to assess how well language models generate code from natural-language problem specifications.

## Dataset Structure

```python
from datasets import load_dataset

load_dataset("HumanLastCodeExam/icpc-world-finals")
```
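
To get a feel for the data, the loaded `DatasetDict` can be inspected directly. A minimal sketch, which makes no assumption about the split names and simply takes the first available split:

```python
from datasets import load_dataset

ds = load_dataset("HumanLastCodeExam/icpc-world-finals")

# Take the first available split and look at one example.
split = next(iter(ds))
example = ds[split][0]
print(split, len(ds[split]))
print(example["name"])
print(example["description"][:300])
```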

## Data Fields

- `name`: The name of the contest. Note that names may coincide across different sources.
- `description`: A natural-language description of a programming problem.
- `public_tests`: Public tests are available before submitting a solution, typically as part of the description itself. Each is represented as a paired input and output that can be used to test potential solutions. They are therefore acceptable inputs to a model.
- `private_tests`: Private tests are not visible before submitting a solution, so they should not be made available as inputs to a model.
- `generated_tests`: Generated tests are produced automatically by mutating inputs from the public and private tests and validating the results against known correct solutions.
- `source`: The original source of the problem, with possible values UNKNOWN_SOURCE (0), CODECHEF (1), CODEFORCES (2), HACKEREARTH (3), CODEJAM (4), ATCODER (5) and AIZU (6). See the decoding sketch after this list.
- `difficulty`: A representation of the difficulty of the problem, with possible values UNKNOWN_DIFFICULTY (0), EASY (1), MEDIUM (2), HARD (3), HARDER (4), HARDEST (5), EXTERNAL (6), A (7), B (8), C (9), D (10), E (11), F (12), G (13), H (14), I (15), J (16), K (17), L (18), M (19), N (20), O (21), P (22), Q (23), R (24), S (25), T (26), U (27) and V (28). Note that different sources use different, non-comparable gradings. For Codeforces problems, `cf_rating` is a more reliable measure of difficulty when available.
- `solutions`: Correct solutions to the problem. Contrast with `incorrect_solutions` below.
- `incorrect_solutions`: Incorrect solutions to the problem.
- `cf_contest_id`: The contest ID. Note that contest IDs are not monotonic with respect to time.
- `cf_index`: Problem index, e.g. "A", "B" or "C".
- `cf_points`: Points for the problem, e.g. 1000.0.
- `cf_rating`: Problem rating (difficulty), e.g. 1100.
- `cf_tags`: Problem tags, e.g. `['greedy', 'math']`.
- `is_description_translated`: Whether the problem was translated to English.
- `untranslated_description`: The untranslated description, available only for translated problems.
- `time_limit`: The time limit to enforce when executing solutions, represented as a dictionary with two keys, `seconds` and `nanos`. This field is `None` if not defined.
- `memory_limit_bytes`: The memory limit to enforce when executing solutions.
- `input_file`: Most problems read input from stdin; some expect a specific file to be used instead.
- `output_file`: Most problems write output to stdout; some expect a specific file to be used instead.
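
The integer codes above can be mapped back to readable labels for convenience. A minimal sketch; the `SOURCE_NAMES` list and the `describe` helper are hypothetical, built directly from the enumeration listed above:

```python
# Hypothetical helper: readable labels for the source codes listed above.
SOURCE_NAMES = [
    "UNKNOWN_SOURCE", "CODECHEF", "CODEFORCES", "HACKEREARTH",
    "CODEJAM", "ATCODER", "AIZU",
]

def describe(example):
    """Print a short human-readable summary of one problem's metadata."""
    print("source:", SOURCE_NAMES[example["source"]])
    print("difficulty code:", example["difficulty"])
    # cf_rating, when present, is more comparable across Codeforces problems.
    if example.get("cf_rating"):
        print("cf_rating:", example["cf_rating"])
```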

All tests are represented as a paired input and output that can be used to check potential solutions; a sketch of such a check appears below. Each solution comprises a language, with possible values UNKNOWN_LANGUAGE (0), PYTHON (1) (solutions written in Python 2), CPP (2), PYTHON3 (3) and JAVA (4), and a solution string written in that language. The fields prefixed with `cf_` carry extra metadata for Codeforces problems.
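
As an illustration of how the test fields and `time_limit` can be used together, here is a minimal sketch of checking a candidate Python solution against a problem's public tests. It assumes `public_tests` stores parallel `input` and `output` lists (the CodeContests convention); verify this against the actual schema before relying on it:

```python
import subprocess

def passes_public_tests(example, solution_code):
    """Run a candidate Python solution against the public tests.

    Assumes public_tests holds parallel "input"/"output" lists; the
    field layout should be verified against the actual dataset schema.
    """
    limit = example["time_limit"]
    # time_limit is a {"seconds": ..., "nanos": ...} dict, or None.
    timeout = limit["seconds"] + limit["nanos"] / 1e9 if limit else 10.0
    tests = example["public_tests"]
    for stdin, expected in zip(tests["input"], tests["output"]):
        try:
            result = subprocess.run(
                ["python3", "-c", solution_code],
                input=stdin, capture_output=True, text=True, timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            return False
        if result.stdout.strip() != expected.strip():
            return False
    return True
```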