AzerBERT

  • Type: BERT-based transformer language model
  • Description: AzerBERT is a pre-trained language model tailored for Iranian Azerbaijani. It can be used for various NLP tasks, including text classification, named entity recognition, and more.

How to use

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="language-ml-lab/AzerBert")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("language-ml-lab/AzerBert")
model = AutoModelForMaskedLM.from_pretrained("language-ml-lab/AzerBert")
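As a quick sanity check of the loaded model, the pipeline above can be called on a sentence containing the tokenizer's mask token. This is a minimal sketch, not from the model card: the example sentence is a hypothetical placeholder (note that Iranian Azerbaijani is typically written in the Perso-Arabic script, so substitute text appropriate to the model's training data).

# Example inference (downloads the model weights on first use)
from transformers import pipeline

pipe = pipeline("fill-mask", model="language-ml-lab/AzerBert")

# Build an input containing the tokenizer's mask token; the sentence
# itself is only an illustrative placeholder.
masked = f"This is a {pipe.tokenizer.mask_token} sentence."

# Each prediction is a dict with the filled-in token and its score
for pred in pipe(masked, top_k=3):
    print(pred["token_str"], round(pred["score"], 3))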