RoSchoolToUniQA
language:
  - ro
license: cc-by-4.0

Dataset Overview

Contents:

  • Total questions: 1,200
  • Twelve subjects of 100 questions each:
    • Biology - Baccalaureate
    • Chemistry - Baccalaureate
    • Computer Science - Baccalaureate
    • Economics - Baccalaureate
    • Logic - Baccalaureate
    • Philosophy - Baccalaureate
    • Physics - Baccalaureate
    • Psychology - Baccalaureate
    • Sociology - Baccalaureate
    • History - University Admission
    • Mathematics - University Admission
    • Romanian - University Admission
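For reference, the dataset can be loaded with the Hugging Face `datasets` library. The repository id below is inferred from this README's location and should be treated as an assumption:

```python
def load_roschooltouniqa():
    """Load the benchmark from the Hugging Face Hub.

    The repository id is an assumption inferred from this README's location;
    requires `pip install datasets` and network access.
    """
    from datasets import load_dataset  # imported lazily so the sketch stays self-contained
    return load_dataset("DragosGhinea/RoSchoolToUniQA")
```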

Dataset Datasheet

Inspiration: Microsoft Research

Motivation for Dataset Creation

Why was the dataset created?

This dataset was created to improve the evaluation of large language models (LLMs) in Romanian, across various academic subjects, using questions from Romania's national Baccalaureate and university admission exams. It aims to assess the ability of LLMs to understand and reason about the diverse topics covered in these standardized tests.

Dataset Composition

What are the instances?

Each instance in the dataset consists of a multiple-choice question sourced from Romanian Baccalaureate and university admission exams. Every question is accompanied by its correct answer, as given in the official answer keys. Additionally, each instance includes relevant metadata, detailed in the following sections.

The twelve subjects collected are: biology, chemistry, computer science, economics, history, logic, math, philosophy, physics, psychology, Romanian, and sociology.

Are relationships between instances made explicit in the data?

Yes, the dataset includes metadata on the basis of which instances can be grouped. For example, questions can be categorized by subject and exam year. For university admission exams, the dataset specifies the corresponding university. In addition, metadata tags provide extra identifying details, such as subject variants, exam sessions, and subcategories, allowing for more precise classification and analysis.

How many instances of each type are there?

The dataset comprises a total of 1,200 multiple-choice questions, with 100 questions for each of the twelve subjects included.

What data does each instance consist of?

Each instance contains the following fields:

  • question_number: the question's number, as an integer.
  • question: the question text.
  • type: currently only 'single-choice'; questions with multiple correct answers may appear if more data is added in the future.
  • options: a list of answer options (usually statements or lists of items) that, in combination with the question text, can be judged true or false.
  • year: the year (as a string) in which the exam was administered.
  • correct_answer: the letter of the correct option.
  • source: either "BAC" for the Baccalaureate or "Admission, {university name}" for university admission exams.
  • subject: one of the twelve subjects collected.
  • tags: a list of additional metadata providing finer classifications, such as the variant of the Baccalaureate subject (drawn at random on the morning of the exam to deter cheating), the question's subcategory, and the exam session in which the question was given.
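A hypothetical instance illustrating the schema above (all values below are invented for illustration; only the field names come from this datasheet):

```python
# Invented example instance; field names follow the datasheet, values are illustrative.
example_instance = {
    "question_number": 7,
    "question": "Care dintre următoarele afirmații este adevărată?",
    "type": "single-choice",
    "options": ["prima afirmație", "a doua afirmație", "a treia afirmație"],
    "year": "2023",
    "correct_answer": "b",
    "source": "BAC",
    "subject": "biology",
    "tags": ["varianta 2"],
}

# Every field listed in the datasheet.
REQUIRED_FIELDS = {
    "question_number", "question", "type", "options",
    "year", "correct_answer", "source", "subject", "tags",
}

def has_expected_schema(item: dict) -> bool:
    """Check that an instance carries every field listed in the datasheet."""
    return REQUIRED_FIELDS <= item.keys()
```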

To uniquely identify a question/instance, we recommend the following combination of fields:

(item['year'], item['source'], item['tags'], item['question_number'])
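A sketch of that recommendation in Python (the helper names are ours; `tags` is a list, so it is frozen into a tuple to make the key hashable):

```python
def instance_key(item: dict) -> tuple:
    """Build the recommended unique key: year, source, tags, question_number."""
    return (
        item["year"],
        item["source"],
        tuple(item["tags"]),  # lists are unhashable, so freeze to a tuple
        item["question_number"],
    )

def deduplicate(items):
    """Keep only the first occurrence of each unique key."""
    seen, unique = set(), []
    for item in items:
        key = instance_key(item)
        if key not in seen:
            seen.add(key)
            unique.append(item)
    return unique
```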

Is everything included or does the data rely on external resources?

Everything is included.

Are there recommended data splits or evaluation measures?

The data is meant to be used as a benchmark, so everything can be considered a test split.
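Since the entire dataset acts as a test split, evaluation typically reduces to exact-match accuracy on the answer letter. A minimal sketch (the prediction format below is our assumption, not part of the dataset):

```python
def accuracy(items, predictions):
    """Exact-match accuracy of predicted answer letters.

    `items` is a list of dataset instances; `predictions` maps each item's
    index to a predicted letter such as "a" (compared case-insensitively).
    """
    if not items:
        return 0.0
    correct = sum(
        1
        for i, item in enumerate(items)
        if predictions.get(i, "").strip().lower() == item["correct_answer"].lower()
    )
    return correct / len(items)
```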

Data Collection Process

How was the data collected?

The dataset was constructed using publicly available archives of exam materials, primarily in PDF format. The questions were sourced from centralized repositories that aggregate past exam subjects. Two examples of such websites are subiectebac.ro and oradeistorie.ro.

Who was involved in the data collection process?

The PDF data was collected by us.

Over what time-frame was the data collected?

It took roughly two weeks to collect the data.

How was the data associated with each instance acquired?

The data was initially collected as PDF files. The PDFs either contained parsable text or had text embedded in images. Gemini 2.0 Flash was used to extract the data. However, the model occasionally struggled with parsing certain mathematical expressions (e.g., fractions) and identifying underlined text. To ensure accuracy, we manually reviewed and curated the extracted data.

Does the dataset contain all possible instances?

No. The dataset is limited to a maximum of 100 questions per subject. When selecting questions, we prioritized the most recent exam data to ensure relevance.

If the dataset is a sample, then what is the population?

The dataset can grow both vertically (more questions per subject) and horizontally (new subjects). As mentioned earlier, we impose a limit of 100 questions per subject, but additional questions from earlier years are available for selection. Furthermore, the dataset can be expanded by incorporating new subjects from other university admission exams or by including non-choice-based questions from the Baccalaureate.

Is there information missing from the dataset and why?

Some exam subjects from different years or sessions contain duplicated questions. To maintain diversity and avoid redundancy, we randomly remove duplicates, ensuring that only one instance of each repeated question remains in the dataset.

Are there any known errors, sources of noise, or redundancies in the data?

None known. Duplicated questions across exam years and sessions were removed during collection, and extraction errors were corrected during manual review.

Data Preprocessing

What pre-processing/cleaning was done?

After extraction, we applied pre-processing and cleaning steps to standardize and structure the data:

  1. Extracted the question number from the question text and placed it in a separate field.
  2. Standardized quotes by replacing Romanian quotation marks („ ”) with straight English ones (").
  3. Normalized diacritics to proper Romanian characters (e.g., ș, ț, â, ă).
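The steps above can be sketched roughly as follows; the regex and character mappings are our assumptions about the cleaning, not the exact scripts that were used:

```python
import re

# Step 2: Romanian quotation marks replaced with straight English quotes.
QUOTES = str.maketrans({"„": '"', "”": '"'})
# Step 3: cedilla variants (ş, ţ) normalized to comma-below letters (ș, ț).
DIACRITICS = str.maketrans({"ş": "ș", "ţ": "ț", "Ş": "Ș", "Ţ": "Ț"})

def clean_question(raw: str):
    """Split off a leading question number, then standardize quotes and diacritics."""
    match = re.match(r"\s*(\d+)\s*[.)]\s*(.*)", raw, flags=re.DOTALL)
    number, text = (int(match.group(1)), match.group(2)) if match else (None, raw)
    return number, text.translate(QUOTES).translate(DIACRITICS)
```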

Was the "raw" data saved in addition to the preprocessed/cleaned data?

No.

Is the pre-processing software available?

No.

Does this dataset collection/processing procedure achieve the motivation for creating the dataset stated in the first section of this datasheet?

Yes, this dataset effectively provides a diverse set of questions across twelve subjects, making it suitable for benchmarking purposes as originally intended.