---
base_model: hfl/chinese-macbert-base
datasets:
  - CIRCL/Vulnerability-CNVD
library_name: transformers
license: apache-2.0
metrics:
  - accuracy
tags:
  - generated_from_trainer
  - text-classification
  - classification
  - nlp
  - chinese
  - vulnerability
pipeline_tag: text-classification
language: zh
model-index:
  - name: vulnerability-severity-classification-chinese-macbert-base
    results: []
---

# VLAI: A RoBERTa-Based Model for Automated Vulnerability Severity Classification

This model, VLAI, is a fine-tuned version of hfl/chinese-macbert-base on the CIRCL/Vulnerability-CNVD dataset.

The model was presented in the paper VLAI: A RoBERTa-Based Model for Automated Vulnerability Severity Classification.

Abstract: VLAI is a transformer-based model that predicts software vulnerability severity levels directly from text descriptions. Built on RoBERTa, VLAI is fine-tuned on over 600,000 real-world vulnerabilities and achieves over 82% accuracy in predicting severity categories, enabling faster and more consistent triage ahead of manual CVSS scoring. The model and dataset are open-source and integrated into the Vulnerability-Lookup service.

For more information, visit the Vulnerability-Lookup project page or the ML-Gateway GitHub repository, which demonstrates its usage in a FastAPI server.

It achieves the following results on the evaluation set:

- Loss: 0.5994
- Accuracy: 0.7900

## How to use

You can use this model directly with the Hugging Face transformers library for text classification:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="CIRCL/vulnerability-severity-classification-chinese-macbert-base"
)

# Example: a Chinese vulnerability description (a buffer overflow in the
# TOTOLINK A3600R router: the File parameter of the UploadCustomModule
# function in /cgi-bin/cstecgi.cgi fails to validate input length, allowing
# arbitrary code execution or denial of service)
description_chinese = "TOTOLINK A3600R是中国吉翁电子(TOTOLINK)公司的一款6天线1200M无线路由器。TOTOLINK A3600R存在缓冲区溢出漏洞,该漏洞源于/cgi-bin/cstecgi.cgi文件的UploadCustomModule函数中的File参数未能正确验证输入数据的长度大小,攻击者可利用该漏洞在系统上执行任意代码或者导致拒绝服务。"
result_chinese = classifier(description_chinese)
print(result_chinese)
# Expected output example: [{'label': '高', 'score': 0.9802}]  ('高' = "High")
```
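If you need the full probability distribution over severity classes rather than just the top label, you can load the tokenizer and model directly. This is a minimal sketch using the standard `AutoTokenizer`/`AutoModelForSequenceClassification` API; the shortened example description is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "CIRCL/vulnerability-severity-classification-chinese-macbert-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# A short Chinese description: "TOTOLINK A3600R has a buffer overflow;
# an attacker can exploit it to execute arbitrary code on the system."
description = "TOTOLINK A3600R存在缓冲区溢出漏洞,攻击者可利用该漏洞在系统上执行任意代码。"
inputs = tokenizer(description, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the severity classes; config.id2label maps class index -> name
probs = torch.softmax(logits, dim=-1)[0]
for idx, p in enumerate(probs):
    print(f"{model.config.id2label[idx]}: {p:.4f}")
```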

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
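With a linear scheduler and no reported warmup, the learning rate decays from 3e-05 to 0 over the 16,940 optimizer steps (3,388 steps per epoch × 5 epochs, per the training log). A minimal sketch of that schedule, assuming zero warmup steps:

```python
def linear_lr(step, base_lr=3e-05, total_steps=16940, warmup_steps=0):
    """Linear warmup then linear decay to zero, mirroring the behavior of
    transformers' linear schedule (warmup assumed to be 0 here)."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(0))      # start of training: 3e-05
print(linear_lr(8470))   # halfway through: 1.5e-05
print(linear_lr(16940))  # end of training: 0.0
```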

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.65          | 1.0   | 3388  | 0.5772          | 0.7561   |
| 0.582         | 2.0   | 6776  | 0.5656          | 0.7620   |
| 0.5284        | 3.0   | 10164 | 0.5274          | 0.7881   |
| 0.3406        | 4.0   | 13552 | 0.5555          | 0.7869   |
| 0.3224        | 5.0   | 16940 | 0.5994          | 0.7900   |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1