---
base_model: hfl/chinese-macbert-base
datasets:
- CIRCL/Vulnerability-CNVD
library_name: transformers
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
- text-classification
- classification
- nlp
- chinese
- vulnerability
pipeline_tag: text-classification
language: zh
model-index:
- name: vulnerability-severity-classification-chinese-macbert-base
results: []
---
# VLAI: A MacBERT-Based Model for Automated Vulnerability Severity Classification (Chinese Text)
This model is a fine-tuned version of [hfl/chinese-macbert-base](https://huggingface.co/hfl/chinese-macbert-base) on the dataset [CIRCL/Vulnerability-CNVD](https://huggingface.co/datasets/CIRCL/Vulnerability-CNVD).
For more information, visit the [Vulnerability-Lookup project page](https://vulnerability.circl.lu) or the [ML-Gateway GitHub repository](https://github.com/vulnerability-lookup/ML-Gateway), which demonstrates how to serve this model from a FastAPI server.
It achieves the following results on the evaluation set:
- Loss: 0.6172
- Accuracy: 0.7817
## How to use
You can use this model directly with the Hugging Face `transformers` library for text classification:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="CIRCL/vulnerability-severity-classification-chinese-macbert-base",
)

# Example: a Chinese vulnerability description.
# English gloss: the TOTOLINK A3600R (a 6-antenna 1200M wireless router) has a
# buffer overflow in the UploadCustomModule function of /cgi-bin/cstecgi.cgi,
# where the File parameter fails to validate the length of input data; an
# attacker can exploit this to execute arbitrary code or cause a denial of service.
description_chinese = "TOTOLINK A3600R是中国吉翁电子(TOTOLINK)公司的一款6天线1200M无线路由器。TOTOLINK A3600R存在缓冲区溢出漏洞,该漏洞源于/cgi-bin/cstecgi.cgi文件的UploadCustomModule函数中的File参数未能正确验证输入数据的长度大小,攻击者可利用该漏洞在系统上执行任意代码或者导致拒绝服务。"

result_chinese = classifier(description_chinese)
print(result_chinese)
# Expected output example: [{'label': 'High', 'score': 0.9644894003868103}]
```
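If you want the score for every severity class rather than only the top label, you can call the model directly through the `transformers` auto classes. This is a minimal sketch, reusing `description_chinese` from the example above; the label names are read from the model's own `id2label` config rather than assumed:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "CIRCL/vulnerability-severity-classification-chinese-macbert-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer(description_chinese, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax turns the raw logits into one probability per severity label.
probs = torch.softmax(logits, dim=-1)[0]
for idx, p in enumerate(probs):
    print(f"{model.config.id2label[idx]}: {p.item():.4f}")
```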
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch mirroring them follows the list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
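For reference, here is a hedged sketch of how these hyperparameters map onto `transformers.TrainingArguments`. The `output_dir` and per-epoch evaluation are assumptions, not taken from the actual training script:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="vulnerability-severity-classification-chinese-macbert-base",  # assumption
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",       # AdamW with betas=(0.9, 0.999), eps=1e-08 (library defaults)
    lr_scheduler_type="linear",
    num_train_epochs=5,
    eval_strategy="epoch",     # assumption: per-epoch eval matches the results table below
)
```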
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6329 | 1.0 | 3412 | 0.5832 | 0.7546 |
| 0.5215 | 2.0 | 6824 | 0.5531 | 0.7750 |
| 0.4827 | 3.0 | 10236 | 0.5521 | 0.7768 |
| 0.3448 | 4.0 | 13648 | 0.5822 | 0.7814 |
| 0.3865 | 5.0 | 17060 | 0.6172 | 0.7817 |
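The accuracy column is standard argmax accuracy over the predicted labels. A minimal `compute_metrics` sketch that reproduces this with the `evaluate` library (an assumption; the actual training script may compute it differently):

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # Pick the highest-scoring class per example, then compare to the references.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```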
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1