DeepakKumarMSL committed on
Commit ffd7d21 · verified · 1 Parent(s): c791fdc

Create README.md

Files changed (1)
  1. README.md +43 -0
README.md ADDED
@@ -0,0 +1,43 @@
+ # 📝 Question Answering RoBERTa Model
+
+ This repository demonstrates how to **fine-tune** and **quantize** the [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) model for Question Answering using a sample dataset from the Hugging Face Hub.
+
+ ---
+
+ ## 🚀 Model Overview
+ - **Base Model:** `deepset/roberta-base-squad2`
+ - **Task:** Extractive Question Answering
+ - **Precision:** Supports FP32, FP16 (half-precision), and INT8 (quantized)
+ - **Dataset:** [`squad`](https://huggingface.co/datasets/squad) — Stanford Question Answering Dataset (Hugging Face Datasets)
+
+ ---
+
+ ## 📦 Dataset Used
+ We use the **`squad`** dataset from Hugging Face:
+ ```bash
+ pip install datasets
+ ```
+
+ ## Load the Dataset
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("squad")
+ ```
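+
+ Each SQuAD record pairs a question with a context paragraph and its answer span(s). A quick way to sanity-check the schema (the field names below are the dataset's actual columns; the printed output is illustrative):
+
+ ```python
+ # Peek at one training example to see the schema
+ sample = dataset["train"][0]
+ print(sample["question"])
+ print(sample["context"][:200])   # context paragraphs can be long
+ print(sample["answers"])         # {'text': [...], 'answer_start': [...]}
+ ```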
+
+ ## Load Model & Tokenizer
+
+ ```python
+ # TrainingArguments and Trainer are used for the fine-tuning sketch below
+ from transformers import AutoModelForQuestionAnswering, AutoTokenizer, TrainingArguments, Trainer
+ from datasets import load_dataset
+
+ # Pretrained extractive-QA checkpoint and its matching tokenizer
+ model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")
+ tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
+ dataset = load_dataset("squad")
+ ```
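+
+ The snippet above stops at loading, so here is a minimal fine-tuning sketch continuing from those variables; it is not necessarily the exact recipe used for this checkpoint. SQuAD stores answers as character offsets, so preprocessing has to map each answer span onto token positions. The output path and hyperparameters are illustrative assumptions, and `fp16=True` enables the half-precision training mentioned above (requires a CUDA GPU):
+
+ ```python
+ def preprocess(examples):
+     inputs = tokenizer(
+         [q.strip() for q in examples["question"]],
+         examples["context"],
+         max_length=384,
+         truncation="only_second",   # truncate only the context, never the question
+         return_offsets_mapping=True,
+         padding="max_length",
+     )
+     start_positions, end_positions = [], []
+     for i, offsets in enumerate(inputs["offset_mapping"]):
+         answer = examples["answers"][i]
+         start_char = answer["answer_start"][0]
+         end_char = start_char + len(answer["text"][0])
+         seq_ids = inputs.sequence_ids(i)
+         # Token range covering the context (sequence id 1)
+         ctx_start = seq_ids.index(1)
+         ctx_end = len(seq_ids) - 1 - seq_ids[::-1].index(1)
+         if offsets[ctx_start][0] > start_char or offsets[ctx_end][1] < end_char:
+             # Answer was truncated away; point both labels at the CLS token
+             start_positions.append(0)
+             end_positions.append(0)
+         else:
+             # Walk inward from each end to find the answer's token boundaries
+             idx = ctx_start
+             while idx <= ctx_end and offsets[idx][0] <= start_char:
+                 idx += 1
+             start_positions.append(idx - 1)
+             idx = ctx_end
+             while idx >= ctx_start and offsets[idx][1] >= end_char:
+                 idx -= 1
+             end_positions.append(idx + 1)
+     inputs.pop("offset_mapping")  # not needed by the model
+     inputs["start_positions"] = start_positions
+     inputs["end_positions"] = end_positions
+     return inputs
+
+ tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset["train"].column_names)
+
+ args = TrainingArguments(
+     output_dir="roberta-squad2-finetuned",  # hypothetical output path
+     learning_rate=2e-5,                     # illustrative hyperparameters
+     per_device_train_batch_size=16,
+     num_train_epochs=2,
+     fp16=True,                              # half-precision training
+ )
+
+ trainer = Trainer(
+     model=model,
+     args=args,
+     train_dataset=tokenized["train"],
+     eval_dataset=tokenized["validation"],
+ )
+ trainer.train()
+ ```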
+
+ ## ✅ Results
+
+ | Feature | Benefit |
+ | --- | --- |
+ | FP16 Fine-Tuning | Faster Training + Lower Memory |
+ | INT8 Quantization | Smaller Model + Fast Inference |
+ | Dataset | Stanford QA Dataset (SQuAD) |
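+
+ The table lists INT8 quantization, but the repo does not include the recipe. One common approach, an assumption here rather than this repo's confirmed method, is PyTorch dynamic quantization, which stores the weights of `Linear` layers as INT8 and quantizes activations on the fly during CPU inference:
+
+ ```python
+ import torch
+ from transformers import AutoModelForQuestionAnswering
+
+ model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")
+
+ # Swap Linear layers for dynamically quantized INT8 versions (CPU inference only)
+ quantized_model = torch.quantization.quantize_dynamic(
+     model, {torch.nn.Linear}, dtype=torch.qint8
+ )
+
+ # Save the quantized weights; "roberta_qa_int8.pt" is a hypothetical filename
+ torch.save(quantized_model.state_dict(), "roberta_qa_int8.pt")
+ ```
+
+ Dynamic quantization needs no calibration data and typically shrinks the quantized weights roughly 4x at a small accuracy cost.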