BERT Hierarchical Classification Model

This is a fine-tuned BERT model for hierarchical classification of questions aligned to the Common Core State Standards.

Model Description

The model classifies input texts into the following hierarchical levels:

  • Grade
  • Domain
  • Cluster
  • Standard

It is based on BERT ("bert-base-uncased") and has been fine-tuned on a dataset of Common Core Standard-aligned questions.
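
For intuition, a single Common Core identifier already encodes all four of these levels. The sketch below is illustrative only, using an arbitrarily chosen standard (CCSS.MATH.CONTENT.3.OA.A.1); the exact label strings produced by this model depend on its label encoders and may differ.

# Illustrative decomposition of one Common Core identifier into the four levels.
# The strings are examples, not necessarily this model's exact label vocabulary.
example = {
    "Grade": "3",
    "Domain": "Operations and Algebraic Thinking",
    "Cluster": "Represent and solve problems involving multiplication and division",
    "Standard": "CCSS.MATH.CONTENT.3.OA.A.1",
}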

Intended Use

This model is intended for educators and developers who need to categorize educational content according to the Common Core Standards. It can be used to:

  • Automatically label questions or exercises with the appropriate standard.
  • Facilitate curriculum alignment and content organization.

Training Data

The model was trained on a dataset of text questions, each labeled with its corresponding Common Core Standard and the associated grade, domain, and cluster.

Training Procedure

  • Optimizer: AdamW
  • Learning Rate: 2e-5
  • Epochs: 10
  • Batch Size: 16
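
The training script itself is not part of this card; as a rough sketch, the hyperparameters above could be wired together as follows. The dataset and dataloader names and the loss formulation (a plain sum of the four per-level cross-entropies) are assumptions, not the author's verified recipe.

import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader

# Hypothetical fine-tuning loop using the hyperparameters listed above.
# `model` is the BertHierarchicalClassification instance and `train_dataset`
# yields tokenized inputs plus integer labels for each hierarchy level.
optimizer = AdamW(model.parameters(), lr=2e-5)
criterion = torch.nn.CrossEntropyLoss()
loader = DataLoader(train_dataset, batch_size=16, shuffle=True)

for epoch in range(10):
    model.train()
    for batch in loader:
        optimizer.zero_grad()
        grade_logits, domain_logits, cluster_logits, standard_logits = model(
            batch['input_ids'], batch['attention_mask']
        )
        # Assumed multi-task objective: sum of the four cross-entropy losses
        loss = (criterion(grade_logits, batch['grade'])
                + criterion(domain_logits, batch['domain'])
                + criterion(cluster_logits, batch['cluster'])
                + criterion(standard_logits, batch['standard']))
        loss.backward()
        optimizer.step()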

Evaluation

The model was evaluated on four classification tasks: cluster, domain, grade, and standard classification. Performance was measured with Accuracy, F1 Score, Precision, and Recall. The results below were obtained after training for 10 epochs.

Overall Loss

  • Average Training Loss: 0.2508
  • Average Validation Loss: 1.9785
  • Training Loss: 0.1843

Cluster Classification

Metric      Value
---------   ------
Accuracy    0.8797
F1 Score    0.8792
Precision   0.8840
Recall      0.8797

Domain Classification

Metric      Value
---------   ------
Accuracy    0.9177
F1 Score    0.9175
Precision   0.9183
Recall      0.9177

Grade Classification

Metric      Value
---------   ------
Accuracy    0.8858
F1 Score    0.8861
Precision   0.8896
Recall      0.8858

Standard Classification

Metric      Value
---------   ------
Accuracy    0.8334
F1 Score    0.8323
Precision   0.8433
Recall      0.8334
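
The averaging scheme behind F1, Precision, and Recall is not stated on the card; the sketch below assumes weighted averaging with scikit-learn, which is consistent with F1 closely tracking Accuracy, and computes the metrics for a single level.

from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical per-level metric computation (e.g., for the Standard head).
# "weighted" averaging is an assumption, not confirmed by the model card.
def compute_metrics(y_true, y_pred):
    return {
        "Accuracy": accuracy_score(y_true, y_pred),
        "F1 Score": f1_score(y_true, y_pred, average="weighted"),
        "Precision": precision_score(y_true, y_pred, average="weighted"),
        "Recall": recall_score(y_true, y_pred, average="weighted"),
    }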

How to Use

import torch
from transformers import BertTokenizer, BertConfig
from huggingface_hub import hf_hub_download
import joblib
import importlib.util

# Load the tokenizer and configuration from the model repository
tokenizer = BertTokenizer.from_pretrained('iolimat482/common-core-bert-hierarchical-classification')
config = BertConfig.from_pretrained('iolimat482/common-core-bert-hierarchical-classification')

# Download 'modeling.py'
modeling_file = hf_hub_download(repo_id='iolimat482/common-core-bert-hierarchical-classification', filename='modeling.py')

# Load the model class
spec = importlib.util.spec_from_file_location("modeling", modeling_file)
modeling = importlib.util.module_from_spec(spec)
spec.loader.exec_module(modeling)

BertHierarchicalClassification = modeling.BertHierarchicalClassification

# Instantiate the model
model = BertHierarchicalClassification(config)

# Load model weights
model_weights = hf_hub_download(repo_id='iolimat482/common-core-bert-hierarchical-classification', filename='best_model.pt')
model.load_state_dict(torch.load(model_weights, map_location=torch.device('cpu')))

model.eval()  # switch to evaluation mode for inference

label_encoders_path = hf_hub_download(repo_id='iolimat482/common-core-bert-hierarchical-classification', filename='label_encoders.joblib')
label_encoders = joblib.load(label_encoders_path)
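
# Optional: inspect the label vocabulary for each hierarchy level. This assumes
# the saved objects are scikit-learn LabelEncoder instances, which is consistent
# with the inverse_transform calls used below.
for level, encoder in label_encoders.items():
    print(f"{level}: {len(encoder.classes_)} classes")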

def predict_standard(model, tokenizer, label_encoders, text):
    # Tokenize input text
    inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True)

    # Perform inference
    with torch.no_grad():
        grade_logits, domain_logits, cluster_logits, standard_logits = model(inputs['input_ids'], inputs['attention_mask'])

    # Get the predicted class indices
    grade_pred = torch.argmax(grade_logits, dim=1).item()
    domain_pred = torch.argmax(domain_logits, dim=1).item()
    cluster_pred = torch.argmax(cluster_logits, dim=1).item()
    standard_pred = torch.argmax(standard_logits, dim=1).item()

    # Map indices to labels
    grade_label = label_encoders['Grade'].inverse_transform([grade_pred])[0]
    domain_label = label_encoders['Domain'].inverse_transform([domain_pred])[0]
    cluster_label = label_encoders['Cluster'].inverse_transform([cluster_pred])[0]
    standard_label = label_encoders['Standard'].inverse_transform([standard_pred])[0]

    return {
        'Grade': grade_label,
        'Domain': domain_label,
        'Cluster': cluster_label,
        'Standard': standard_label
    }

# Example questions
questions = [
    "Add 4 and 5 together. What is the sum?",
    "What is 7 times 8?",
    "Find the area of a rectangle with length 5 and width 3.",
]

for question in questions:
    prediction = predict_standard(model, tokenizer, label_encoders, question)
    print(f"Question: {question}")
    print("Predicted Standards:")
    for key, value in prediction.items():
        print(f"  {key}: {value}")
    print("\n")

Limitations

  • The model's performance reflects the scope and distribution of the data it was trained on.
  • It may not generalize well to questions that differ significantly from the training data.

Citation

If you use this model in your work, please cite:

@misc{olaimat2025commoncore,
    author = {Olaimat, Ibrahim},
    title = {Common Core BERT Hierarchical Classification},
    year = {2025},
    howpublished = {\url{https://huggingface.co/iolimat482/common-core-bert-hierarchical-classification}}
}
