DistilBERT-Based Quantized Model for Spam Message Filtering

This repository hosts a quantized DistilBERT model fine-tuned for spam message filtering. The model balances a lightweight architecture with high accuracy, making it well suited for real-time applications and deployment in resource-constrained environments.

Model Details

  • Model Architecture: DistilBERT (distilbert-base-uncased)
  • Task: Text Classification (Spam vs Ham)
  • Dataset: SMS Spam Collection
  • Quantization: Float16
  • Fine-tuning Framework: Hugging Face Transformers

Usage

Installation

pip install transformers torch

Loading the Model

 
from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification
import torch

# Load tokenizer
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

# Load the fine-tuned quantized model
# (replace the placeholder path with this repository ID or a local directory)
quantized_model = DistilBertForSequenceClassification.from_pretrained("path/to/quantized-model")
quantized_model.eval()

# Define a test sentence
test_sentence = "Congratulations! You have won a free iPhone. Click here to claim your prize."

# Tokenize input
inputs = tokenizer(test_sentence, return_tensors="pt", padding=True, truncation=True, max_length=128)

# Ensure input tensors are the correct dtype
inputs["input_ids"] = inputs["input_ids"].long()
inputs["attention_mask"] = inputs["attention_mask"].long()

# Make prediction
with torch.no_grad():
    outputs = quantized_model(**inputs)

# Get predicted class
predicted_class = torch.argmax(outputs.logits, dim=1).item()
print(f"Predicted Class: {predicted_class}")

# Map the class index to a human-readable label
label_mapping = {0: "Ham", 1: "Spam"}
predicted_label = label_mapping[predicted_class]
print(f"Predicted Label: {predicted_label}")

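Several messages can also be classified in a single batch. The snippet below is a minimal sketch that continues the example above, reusing the tokenizer, quantized_model, and label_mapping already defined; the example messages are purely illustrative.

# Classify several messages at once (batch inference)
messages = [
    "Win a brand new car! Reply WIN to enter the draw now.",
    "Are we still meeting for lunch tomorrow?",
]
batch = tokenizer(messages, return_tensors="pt", padding=True, truncation=True, max_length=128)

with torch.no_grad():
    logits = quantized_model(**batch).logits

for message, idx in zip(messages, logits.argmax(dim=1).tolist()):
    print(f"{label_mapping[idx]}: {message}")
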

Performance Metrics

  • Accuracy: 0.994619
  • Precision: 0.979866
  • Recall: 0.986486
  • F1: 0.973333
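
Metrics of this kind can be computed on a held-out test split with scikit-learn. The sketch below uses dummy label lists for illustration; in practice, y_true comes from the test split and y_pred from predictions produced as in the inference example above.

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Dummy labels for illustration only (0 = Ham, 1 = Spam);
# replace with the real test labels and model predictions.
y_true = [0, 0, 1, 1, 0, 1]
y_pred = [0, 0, 1, 0, 0, 1]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))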

Fine-Tuning Details

Dataset

The dataset is the SMS Spam Collection, taken from Kaggle.

Training

  • Number of epochs: 3
  • Batch size: 16
  • Evaluation strategy: epoch
  • Learning rate: 2e-5
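
For reference, a minimal fine-tuning sketch with these hyperparameters is shown below. It assumes train_dataset and eval_dataset are already tokenized datasets with "input_ids", "attention_mask", and "labels" columns; those variable names and the output directory are illustrative, not part of this repository.

from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments

# Assumes `train_dataset` and `eval_dataset` are tokenized datasets
# with "input_ids", "attention_mask", and "labels" columns (0 = Ham, 1 = Spam).
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    evaluation_strategy="epoch",
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

trainer.train()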

Quantization

Post-training quantization was applied using PyTorch's built-in quantization framework to reduce the model size and improve inference efficiency.
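
A minimal sketch of the float16 conversion is shown below, assuming the fine-tuned full-precision model is available locally; the paths are placeholders.

from transformers import DistilBertForSequenceClassification

# Load the fine-tuned full-precision model (placeholder path)
model = DistilBertForSequenceClassification.from_pretrained("./results/fine-tuned-model")

# Convert weights to float16 (post-training quantization to half precision)
model = model.half()

# Save the quantized model for later inference
model.save_pretrained("./quantized-model")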

Repository Structure


.
β”œβ”€β”€ config.json
β”œβ”€β”€ tokenizer_config.json    
β”œβ”€β”€ special_tokens_map.json 
β”œβ”€β”€ tokenizer.json        
β”œβ”€β”€ model.safetensors    # Fine-tuned model weights
β”œβ”€β”€ README.md            # Model documentation

Limitations

  • The model may not generalize well to domains outside the fine-tuning dataset.
  • Quantization may result in minor accuracy degradation compared to full-precision models.

Contributing

Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.
