---
language:
- en
license: mit
size_categories:
- 10K<n<100K
pretty_name: sciq
tags:
- multiple-choice
- benchmark
- evaluation
dataset_info:
features:
- name: id
dtype: int32
- name: context
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answerID
dtype: int32
splits:
- name: eval
num_bytes: 575927
num_examples: 1000
- name: train
num_bytes: 6686331
num_examples: 11679
download_size: 4308552
dataset_size: 7262258
configs:
- config_name: default
data_files:
- split: eval
path: data/eval-*
- split: train
path: data/train-*
---
# sciq Dataset
## Dataset Information
- **Original Hugging Face Dataset**: `sciq`
- **Subset**: `default`
- **Evaluation Split**: `test`
- **Training Split**: `train`
- **Task Type**: `multiple_choice_with_context`
- **Processing Function**: `process_sciq`
## Processing Function
The following function was used to process the dataset from its original source:
```python
import random
from typing import Dict, List, Tuple


def process_sciq(example: Dict) -> Tuple[str, str, List[str], int]:
    """Process a SciQ dataset example."""
    context = example["support"]
    query = example["question"]
    # The distractors are individual strings, not lists
    correct_answer = example["correct_answer"]
    distractor1 = example["distractor1"]
    distractor2 = example["distractor2"]
    distractor3 = example["distractor3"]
    # Assemble the answer choices
    choices = [correct_answer, distractor1, distractor2, distractor3]
    # Shuffle so the correct answer is not always in position 0
    random.shuffle(choices)
    # Record where the correct answer landed after shuffling
    answer_index = choices.index(correct_answer)
    return context, query, choices, answer_index
```
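As a sketch of how this function behaves, it can be exercised on a hand-written raw record. The sample record below is hypothetical (not taken from SciQ), and `random` is seeded so the shuffle is reproducible:

```python
import random
from typing import Dict, List, Tuple


def process_sciq(example: Dict) -> Tuple[str, str, List[str], int]:
    """Process a SciQ dataset example (as defined in this card)."""
    context = example["support"]
    query = example["question"]
    correct_answer = example["correct_answer"]
    distractor1 = example["distractor1"]
    distractor2 = example["distractor2"]
    distractor3 = example["distractor3"]
    # Assemble and shuffle the answer choices
    choices = [correct_answer, distractor1, distractor2, distractor3]
    random.shuffle(choices)
    answer_index = choices.index(correct_answer)
    return context, query, choices, answer_index


# Hypothetical raw record, written by hand for illustration only
raw = {
    "support": "Water boils at 100 degrees Celsius at sea level.",
    "question": "At what temperature does water boil at sea level?",
    "correct_answer": "100 degrees Celsius",
    "distractor1": "50 degrees Celsius",
    "distractor2": "212 degrees Celsius",
    "distractor3": "0 degrees Celsius",
}

random.seed(0)  # seed only for a reproducible demonstration
context, query, choices, answer_index = process_sciq(raw)
print(choices[answer_index])  # the correct answer survives the shuffle
```

Whatever order the shuffle produces, `choices[answer_index]` always recovers the correct answer, which is what `answerID` encodes in the processed dataset.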
## Overview
This repository contains the processed version of the sciq dataset. The dataset is formatted as a collection of multiple-choice questions, each paired with a supporting context passage.
## Dataset Structure
Each example in the dataset contains the following fields:
```json
{
"id": 0,
"context": "Oxidants and Reductants Compounds that are capable of accepting electrons, such as O 2 or F2, are calledoxidants (or oxidizing agents) because they can oxidize other compounds. In the process of accepting electrons, an oxidant is reduced. Compounds that are capable of donating electrons, such as sodium metal or cyclohexane (C6H12), are calledreductants (or reducing agents) because they can cause the reduction of another compound. In the process of donating electrons, a reductant is oxidized. These relationships are summarized in Equation 3.30: Equation 3.30 Saylor URL: http://www. saylor. org/books.",
"question": "Compounds that are capable of accepting electrons, such as o 2 or f2, are called what?",
"choices": [
"oxidants",
"antioxidants",
"residues",
"Oxygen"
],
"answerID": 0
}
```
## Fields Description
- `id`: Unique identifier for each example
- `context`: Supporting passage the question is drawn from
- `question`: The question or prompt text
- `choices`: List of possible answers
- `answerID`: Index of the correct answer in the `choices` list (0-based)
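The field constraints above can be checked with a small sanity-test helper. `validate_example` is a hypothetical name introduced here for illustration; it simply asserts the invariants the card describes:

```python
def validate_example(example: dict) -> None:
    """Minimal sanity checks on one processed example (illustrative helper)."""
    assert isinstance(example["id"], int)
    assert isinstance(example["question"], str) and example["question"]
    assert isinstance(example["choices"], list) and len(example["choices"]) == 4
    # answerID must be a valid 0-based index into choices
    assert 0 <= example["answerID"] < len(example["choices"])


# Toy example mirroring the documented structure (not real dataset content)
sample = {
    "id": 0,
    "context": "Water boils at 100 degrees Celsius at sea level.",
    "question": "At what temperature does water boil at sea level?",
    "choices": ["100 C", "50 C", "212 C", "0 C"],
    "answerID": 0,
}
validate_example(sample)  # raises AssertionError if a field is malformed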
## Loading the Dataset
You can load this dataset using the Hugging Face datasets library:
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("DatologyAI/sciq")

# Access the data
for example in dataset['train']:
    print(example)
```
## Example Usage
```python
# Load the dataset
dataset = load_dataset("DatologyAI/sciq")

# Get a sample question
sample = dataset['train'][0]

# Print the question
print("Question:", sample['question'])
print("Choices:")
for idx, choice in enumerate(sample['choices']):
    print(f"{idx}. {choice}")
print("Correct Answer:", sample['choices'][sample['answerID']])
```
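Since `answerID` is a 0-based index, scoring a model on this dataset reduces to comparing predicted indices against it. Below is a minimal sketch of such a scorer; the `accuracy` helper, the `predict` callable, and the toy examples are all assumptions made for illustration, not part of the dataset or its tooling:

```python
from typing import Callable, Dict, List


def accuracy(examples: List[Dict], predict: Callable[[Dict], int]) -> float:
    """Fraction of examples where the predicted choice index equals answerID."""
    correct = sum(predict(ex) == ex["answerID"] for ex in examples)
    return correct / len(examples)


# Toy examples mirroring the processed schema (not real dataset content)
toy = [
    {"question": "q1", "choices": ["a", "b", "c", "d"], "answerID": 2},
    {"question": "q2", "choices": ["a", "b", "c", "d"], "answerID": 0},
]

def always_first(ex: Dict) -> int:
    """Trivial baseline: always pick choice 0."""
    return 0

print(accuracy(toy, always_first))  # 0.5 on this toy set
```

Replacing `always_first` with a real model's argmax over the four choices turns this into a standard multiple-choice evaluation loop.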