
🧠 NERClassifier-BERT-WikiAnn

A BERT-based Named Entity Recognition (NER) model fine-tuned on the English portion of the WikiAnn dataset. It classifies tokens into three entity types: Person (PER), Location (LOC), and Organization (ORG), making it suitable for applications such as document tagging, resume parsing, and chatbots.


✨ Model Highlights

  • 📌 Based on bert-base-cased
  • 🔍 Fine-tuned on the WikiAnn (en) NER dataset
  • ⚡ Supports prediction of 3 core entity types: PER, LOC, and ORG
  • 💾 Lightweight and compatible with both CPU and GPU inference environments

🧠 Intended Uses

  • ✅ Resume and document parsing
  • ✅ News article analysis
  • ✅ Question answering pipelines
  • ✅ Chatbots and virtual assistants
  • ✅ Information retrieval and tagging

🚫 Limitations

  • ❌ Trained on English-only Wiki-based text
  • ❌ Performance may degrade on informal or non-English text
  • ❌ Not designed for nested or overlapping entities
  • ❌ Accuracy may drop on very long sequences (>128 tokens); see the chunking sketch below
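
One way to work around the 128-token limit is to chunk long documents before inference. The sketch below is not part of the original card: the sentence split and the 400-character budget are illustrative heuristics, and it reuses the ner_pipeline built in the Usage section further down.

def ner_long_text(text, ner_pipeline, max_chars=400):
    # Greedily pack sentence-ish pieces into chunks that stay well under
    # the 128-token window; 400 characters is a rough heuristic budget.
    chunks, current = [], ""
    for sentence in text.split(". "):
        if current and len(current) + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    results = []
    for chunk in chunks:
        results.extend(ner_pipeline(chunk))  # note: offsets are chunk-local
    return results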


πŸ‹οΈβ€β™‚οΈ Training Details

| Field      | Value                          |
|------------|--------------------------------|
| Base Model | bert-base-cased                |
| Dataset    | WikiAnn (English)              |
| Framework  | PyTorch with 🤗 Transformers   |
| Epochs     | 3                              |
| Batch Size | 16                             |
| Max Length | 128 tokens                     |
| Optimizer  | AdamW                          |
| Loss       | CrossEntropyLoss (token-level) |
| Device     | CUDA-enabled GPU               |
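
The exact training script is not published with this card; the sketch below reproduces the settings in the table (WikiAnn en, 3 epochs, batch size 16, max length 128; AdamW and token-level cross-entropy are the Trainer and model defaults). Names like tokenize_and_align and the output directory are illustrative.

from datasets import load_dataset
from transformers import (BertTokenizerFast, BertForTokenClassification,
                          TrainingArguments, Trainer,
                          DataCollatorForTokenClassification)

dataset = load_dataset("wikiann", "en")
label_list = dataset["train"].features["ner_tags"].feature.names

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForTokenClassification.from_pretrained("bert-base-cased",
                                                   num_labels=len(label_list))

def tokenize_and_align(batch):
    # Tokenize pre-split words and align word-level tags to subword tokens
    enc = tokenizer(batch["tokens"], is_split_into_words=True,
                    truncation=True, max_length=128)
    labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        word_ids, prev, ids = enc.word_ids(batch_index=i), None, []
        for wid in word_ids:
            if wid is None or wid == prev:
                ids.append(-100)        # ignored by the token-level loss
            else:
                ids.append(tags[wid])   # label only the first subword
            prev = wid
        labels.append(ids)
    enc["labels"] = labels
    return enc

tokenized = dataset.map(tokenize_and_align, batched=True)

args = TrainingArguments(output_dir="ner-wikiann",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["validation"],
                  data_collator=DataCollatorForTokenClassification(tokenizer))
trainer.train()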

📊 Evaluation Metrics

| Metric    | Score |
|-----------|-------|
| Accuracy  | 0.92  |
| F1-Score  | 0.92  |
| Precision | 0.92  |
| Recall    | 0.92  |
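
The card does not include its evaluation code; entity-level precision, recall, F1, and accuracy for NER are conventionally computed with the seqeval metric via the 🤗 evaluate library (an assumption about the setup, not a published script):

import evaluate  # pip install evaluate seqeval

seqeval = evaluate.load("seqeval")

# Toy example: predictions and references are per-sentence lists of tag strings
predictions = [["B-PER", "I-PER", "O", "B-ORG", "O"]]
references  = [["B-PER", "I-PER", "O", "B-LOC", "O"]]

scores = seqeval.compute(predictions=predictions, references=references)
print(scores["overall_precision"], scores["overall_recall"],
      scores["overall_f1"], scores["overall_accuracy"])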

🔎 Label Mapping

| Label ID | Entity Type |
|----------|-------------|
| 0        | O           |
| 1        | B-PER       |
| 2        | I-PER       |
| 3        | B-ORG       |
| 4        | I-ORG       |
| 5        | B-LOC       |
| 6        | I-LOC       |


🚀 Usage

from transformers import BertTokenizerFast, BertForTokenClassification, pipeline

model_name = "AventIQ-AI/NER-AI-wikiann-model"
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = BertForTokenClassification.from_pretrained(model_name)
model.eval()

# Label mapping (matches the table above)
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
model.config.id2label = {i: label for i, label in enumerate(label_list)}
model.config.label2id = {label: i for i, label in enumerate(label_list)}

# Inference pipeline; "simple" aggregation merges subword pieces into entity spans
ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

test_sentence = "Bill Gates is the CEO of Microsoft and lives in the United States."
ner_results = ner_pipeline(test_sentence)

print("\n📌 Inference Results:")
for entity in ner_results:
    print(f"Entity: {entity['word']}\tType: {entity['entity_group']}\tConfidence: {entity['score']:.3f}")

🧩 Quantization

Post-training static quantization was applied using PyTorch to reduce model size and accelerate inference on edge devices.
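
The quantization script itself is not published here. As a hedged sketch, PyTorch's post-training dynamic quantization shown below converts the Linear layers to int8; the static recipe mentioned above would additionally require inserting observers and running a calibration pass:

import torch
from transformers import BertForTokenClassification

model = BertForTokenClassification.from_pretrained("AventIQ-AI/NER-AI-wikiann-model")
model.eval()

# Weights of all Linear layers become int8; activations remain float32
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
torch.save(quantized_model.state_dict(), "model_quantized.pt")  # illustrative filename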

🗂 Repository Structure

.
├── model/               # Quantized model files
├── tokenizer_config/    # Tokenizer and vocab files
├── model.safetensors    # Fine-tuned model in safetensors format
└── README.md            # Model card

🤝 Contributing

Open to improvements and feedback! Feel free to submit a pull request or open an issue if you find any bugs or want to enhance the model.
