---
language:
  - en
license: mit
size_categories:
  - 10K<n<100K
pretty_name: sciq
tags:
  - multiple-choice
  - benchmark
  - evaluation
dataset_info:
  features:
    - name: id
      dtype: int32
    - name: context
      dtype: string
    - name: question
      dtype: string
    - name: choices
      sequence: string
    - name: answerID
      dtype: int32
  splits:
    - name: eval
      num_bytes: 575927
      num_examples: 1000
    - name: train
      num_bytes: 6686331
      num_examples: 11679
  download_size: 4308552
  dataset_size: 7262258
configs:
  - config_name: default
    data_files:
      - split: eval
        path: data/eval-*
      - split: train
        path: data/train-*
---

# sciq Dataset

## Dataset Information

- Original Hugging Face Dataset: `sciq`
- Subset: `default`
- Evaluation Split: `test` (published in this repository as the `eval` split)
- Training Split: `train`
- Task Type: `multiple_choice_with_context`
- Processing Function: `process_sciq`
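
The split names and example counts declared in the metadata can be checked programmatically. A minimal sketch using the standard `datasets` builder API:

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("DatologyAI/sciq")
for name, split in builder.info.splits.items():
    # Expected from the metadata: eval -> 1000 examples, train -> 11679 examples
    print(name, split.num_examples)
```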

## Processing Function

The following function was used to process the dataset from its original source:

```python
import random
from typing import Dict, List, Tuple


def process_sciq(example: Dict) -> Tuple[str, str, List[str], int]:
    """Process a SciQ dataset example."""
    context = example["support"]
    query = example["question"]

    # Handle distractors correctly - they should be individual strings, not lists
    correct_answer = example["correct_answer"]
    distractor1 = example["distractor1"]
    distractor2 = example["distractor2"]
    distractor3 = example["distractor3"]

    # Create the choices list
    choices = [correct_answer, distractor1, distractor2, distractor3]

    # Shuffle the choices
    random.shuffle(choices)

    # Find the index of the correct answer after shuffling
    answer_index = choices.index(correct_answer)

    return context, query, choices, answer_index
```
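
For reference, the snippet below is a rough sketch of how this function could be applied to the original SciQ splits to produce rows in this repository's schema, assuming `process_sciq` (defined above) is in scope. The source Hub id (`allenai/sciq`), the fixed random seed, and the sequential `id` assignment are assumptions for illustration, not details taken from this card.

```python
import random
from datasets import load_dataset

random.seed(0)  # assumption: some fixed seed; the actual seed is not documented here
source = load_dataset("allenai/sciq")  # assumption: original SciQ Hub id

records = []
for idx, example in enumerate(source["test"]):  # original test split -> this repo's eval split
    context, question, choices, answer_index = process_sciq(example)
    records.append({
        "id": idx,  # assumption: ids are assigned sequentially
        "context": context,
        "question": question,
        "choices": choices,
        "answerID": answer_index,
    })
```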

## Overview

This repository contains a processed version of the SciQ dataset, formatted as a collection of multiple-choice questions with supporting context.

## Dataset Structure

Each example in the dataset contains the following fields:

```json
{
  "id": 0,
  "context": "Oxidants and Reductants Compounds that are capable of accepting electrons, such as O 2 or F2, are calledoxidants (or oxidizing agents) because they can oxidize other compounds. In the process of accepting electrons, an oxidant is reduced. Compounds that are capable of donating electrons, such as sodium metal or cyclohexane (C6H12), are calledreductants (or reducing agents) because they can cause the reduction of another compound. In the process of donating electrons, a reductant is oxidized. These relationships are summarized in Equation 3.30: Equation 3.30 Saylor URL: http://www. saylor. org/books.",
  "question": "Compounds that are capable of accepting electrons, such as o 2 or f2, are called what?",
  "choices": [
    "oxidants",
    "antioxidants",
    "residues",
    "Oxygen"
  ],
  "answerID": 0
}
```

## Fields Description

- `id`: Unique identifier for each example
- `context`: Supporting passage that provides background for the question
- `question`: The question or prompt text
- `choices`: List of possible answers
- `answerID`: Index of the correct answer in the `choices` list (0-based)
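
As a quick sanity check, every `answerID` should be a valid 0-based index into its `choices` list. A minimal sketch:

```python
from datasets import load_dataset

dataset = load_dataset("DatologyAI/sciq")
for split in dataset:
    for example in dataset[split]:
        # answerID must point at one of the listed choices
        assert 0 <= example["answerID"] < len(example["choices"])
```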

## Loading the Dataset

You can load this dataset using the Hugging Face datasets library:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("DatologyAI/sciq")

# Access the data
for example in dataset['train']:
    print(example)
```
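
You can also load a single split directly, for example the evaluation split:

```python
from datasets import load_dataset

eval_set = load_dataset("DatologyAI/sciq", split="eval")
print(len(eval_set))  # 1000 examples according to the metadata
```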

## Example Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("DatologyAI/sciq")

# Get a sample question
sample = dataset['train'][0]

# Print the question, its choices, and the correct answer
print("Question:", sample['question'])
print("Choices:")
for idx, choice in enumerate(sample['choices']):
    print(f"{idx}. {choice}")
print("Correct Answer:", sample['choices'][sample['answerID']])
```