
πŸ“ Question Answers Roberta Model

This repository demonstrates how to fine-tune and quantize the deepset/roberta-base-squad2 model for extractive Question Answering using the SQuAD dataset from the Hugging Face Hub.


🚀 Model Overview

  • Base Model: deepset/roberta-base-squad2
  • Task: Extractive Question Answering
  • Precision: Supports FP32, FP16 (half-precision), and INT8 (quantized)
  • Dataset: squad (Stanford Question Answering Dataset, via Hugging Face Datasets)

📦 Dataset Used

We use the squad dataset from Hugging Face:

pip install transformers datasets

Load the dataset:

from datasets import load_dataset

dataset = load_dataset("squad")

Load Model & Tokenizer:


from transformers import AutoModelForQuestionAnswering, AutoTokenizer, TrainingArguments, Trainer
from datasets import load_dataset

model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
dataset = load_dataset("squad")

✅ Results

  • FP16 Fine-Tuning: faster training and lower memory use
  • INT8 Quantization: smaller model and faster inference
  • Dataset: Stanford QA Dataset (SQuAD)
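
As a rough illustration of the FP16 and INT8 paths above, the following sketch uses standard PyTorch/Transformers facilities (half-precision loading via torch_dtype and post-training dynamic quantization of the Linear layers); it is an assumed workflow, not code shipped with this repository.

import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "deepset/roberta-base-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# FP16: load the weights in half precision (best suited to GPU inference).
fp16_model = AutoModelForQuestionAnswering.from_pretrained(model_id, torch_dtype=torch.float16)

# INT8: post-training dynamic quantization of the Linear layers (CPU inference).
fp32_model = AutoModelForQuestionAnswering.from_pretrained(model_id)
int8_model = torch.quantization.quantize_dynamic(fp32_model, {torch.nn.Linear}, dtype=torch.qint8)

# Quick extractive-QA check with the quantized model.
question = "Where was SQuAD created?"
context = "The Stanford Question Answering Dataset (SQuAD) was created at Stanford University."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = int8_model(**inputs)
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))

Dynamic INT8 quantization keeps activations in floating point and quantizes weights on the fly, which is why it shrinks the model and speeds up CPU inference without requiring a separate calibration pass.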
