Model Description

This model fine-tunes bert-base-uncased for token-level binary grammatical error detection on the English FCE dataset provided by the MultiGED-2023 shared task.

Get Started with the Model

import torch
from transformers import AutoModelForTokenClassification, BertTokenizer

# Load the model
model = AutoModelForTokenClassification.from_pretrained("sahilnishad/BERT-GED-FCE-FT")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Function to perform inference
def infer(sentence):
    inputs = tokenizer(sentence, return_tensors="pt", add_special_tokens=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # Predicted label id for each subword token (including [CLS]/[SEP])
    return outputs.logits.argmax(-1)

# Example usage
print(infer("Your example sentence here"))
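The argmax above yields one label id per subword token, including the special tokens the tokenizer adds, so it is usually helpful to align predictions back to tokens. A minimal sketch of that alignment step, using hypothetical model output and assuming label id 1 marks an erroneous token (the label mapping is not published on this card):

```python
def align_predictions(tokens, label_ids, special=("[CLS]", "[SEP]", "[PAD]")):
    """Pair each subword token with its predicted label id, dropping special tokens."""
    return [(t, int(l)) for t, l in zip(tokens, label_ids) if t not in special]

# Hypothetical tokenization and predictions for "He go to school":
tokens = ["[CLS]", "he", "go", "to", "school", "[SEP]"]
label_ids = [0, 0, 1, 0, 0, 0]  # assumed mapping: 1 = erroneous token
print(align_predictions(tokens, label_ids))
# [('he', 0), ('go', 1), ('to', 0), ('school', 0)]
```

In real use, the tokens come from `tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])` and the label ids from the argmax of the model's logits.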

BibTeX:

@misc{sahilnishad_bert_ged_fce_ft,
  author       = {Sahil Nishad},
  title        = {Fine-tuned BERT Model for Grammatical Error Detection on the FCE Dataset},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/sahilnishad/BERT-GED-FCE-FT}},
  note         = {Model available on the Hugging Face Hub},
}
Model size: 109M parameters (F32, safetensors)