# πŸ“ Question Answers Roberta Model

This repository demonstrates how to **fine-tune** and **quantize** the [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) model for Question Answering using the SQuAD dataset from the Hugging Face Hub.

---

## πŸš€ Model Overview
- **Base Model:** `deepset/roberta-base-squad2`
- **Task:** Extractive Question Answering  
- **Precision:** Supports FP32, FP16 (half-precision), and INT8 (quantized)
- **Dataset:** [`squad`](https://huggingface.co/datasets/squad) β€” Stanford Question Answering Dataset (Hugging Face Datasets)

---

## πŸ“¦ Dataset Used
We use the **`squad`** dataset from Hugging Face:
```bash
pip install datasets
```

Load the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("squad")
```
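
Each SQuAD example contains a `question`, a `context`, and a character-level answer span. For extractive QA fine-tuning, the question/context pairs are tokenized and the answer spans mapped to token start/end positions. Below is a minimal preprocessing sketch; the `preprocess` function name and `max_length=384` are illustrative choices, and the exact preprocessing used in this repository may differ:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
dataset = load_dataset("squad")

def preprocess(examples):
    # Tokenize question/context pairs; truncate only the context when too long
    tokenized = tokenizer(
        examples["question"],
        examples["context"],
        truncation="only_second",
        max_length=384,          # assumed value; tune for your hardware
        padding="max_length",
        return_offsets_mapping=True,
    )
    start_positions, end_positions = [], []
    for i, offsets in enumerate(tokenized["offset_mapping"]):
        answer = examples["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        sequence_ids = tokenized.sequence_ids(i)
        # Token range of the context (sequence id 1)
        context_start = sequence_ids.index(1)
        context_end = len(sequence_ids) - 1 - sequence_ids[::-1].index(1)
        if offsets[context_start][0] > start_char or offsets[context_end][1] < end_char:
            # Answer was truncated away; point both labels at position 0
            start_positions.append(0)
            end_positions.append(0)
        else:
            # Walk inward to find the first/last tokens covering the answer span
            idx = context_start
            while idx <= context_end and offsets[idx][0] <= start_char:
                idx += 1
            start_positions.append(idx - 1)
            idx = context_end
            while idx >= context_start and offsets[idx][1] >= end_char:
                idx -= 1
            end_positions.append(idx + 1)
    tokenized["start_positions"] = start_positions
    tokenized["end_positions"] = end_positions
    tokenized.pop("offset_mapping")
    return tokenized

tokenized_dataset = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)
```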

## 🧠 Load Model & Tokenizer

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
```
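
With the model, tokenizer, and the preprocessed `tokenized_dataset` from the step above in place, fine-tuning can be driven by the 🤗 `Trainer`. This is a minimal sketch: the hyperparameters and the output directory name are illustrative placeholders, and `fp16=True` enables half-precision training on a CUDA GPU.

```python
from transformers import Trainer, TrainingArguments, default_data_collator

training_args = TrainingArguments(
    output_dir="roberta-base-squad2-finetuned",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=2,
    weight_decay=0.01,
    fp16=True,  # FP16 (half-precision) fine-tuning; requires a CUDA GPU
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    data_collator=default_data_collator,
)

trainer.train()
trainer.save_model("roberta-base-squad2-finetuned")
```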

# βœ… Results
Feature	Benefit
FP16 Fine-Tuning -	Faster Training + Lower Memory
INT8 Quantization -	Smaller Model + Fast Inference
Dataset -	Stanford QA Dataset (SQuAD)
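
One way to obtain the INT8 model referenced above is PyTorch dynamic quantization, which stores the weights of the linear layers as INT8 and quantizes activations on the fly for faster CPU inference. The sketch below assumes this approach (the repository may use a different quantization path), and the question/context strings are purely illustrative:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")

# Dynamic quantization: nn.Linear weights become INT8; CPU inference only
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Quick sanity check with a hypothetical question/context pair
question = "What dataset is used for fine-tuning?"
context = "The model is fine-tuned on SQuAD, the Stanford Question Answering Dataset."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = quantized_model(**inputs)

# Pick the most likely start/end tokens and decode the answer span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```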