cedricbonhomme committed (verified) · Commit 13714a4 · 1 Parent(s): c23c750

Update README.md

Files changed (1)
  1. README.md +28 -7
README.md CHANGED
@@ -16,24 +16,45 @@ should probably proofread and complete it, then remove this comment. -->
 
  # vulnerability-severity-classification-roberta-base
 
- This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
+ This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the dataset [CIRCL/vulnerability-scores](https://huggingface.co/datasets/CIRCL/vulnerability-scores).
+
  It achieves the following results on the evaluation set:
  - Loss: 0.5078
  - Accuracy: 0.8279
 
  ## Model description
 
- More information needed
+ It is a classification model intended to assist in classifying vulnerabilities by severity based on their descriptions.
+
+
+ ## How to get started with the model
+
+ ```python
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+ import torch
+
+ labels = ["low", "medium", "high", "critical"]
+
+ model_name = "CIRCL/vulnerability-severity-classification-roberta-base"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForSequenceClassification.from_pretrained(model_name)
+ model.eval()
 
- ## Intended uses & limitations
+ test_description = "langchain_experimental 0.0.14 allows an attacker to bypass the CVE-2023-36258 fix and execute arbitrary code via the PALChain in the python exec method."
+ inputs = tokenizer(test_description, return_tensors="pt", truncation=True, padding=True)
 
- More information needed
+ # Run inference
+ with torch.no_grad():
+     outputs = model(**inputs)
+     predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
 
- ## Training and evaluation data
 
- More information needed
+ # Print results
+ print("Predictions:", predictions)
+ predicted_class = torch.argmax(predictions, dim=-1).item()
+ print("Predicted severity:", labels[predicted_class])
+ ```
 
- ## Training procedure
 
  ### Training hyperparameters
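
The updated card points the reader to the training data, [CIRCL/vulnerability-scores](https://huggingface.co/datasets/CIRCL/vulnerability-scores). A minimal sketch for inspecting that dataset with the `datasets` library, assuming only the repository id given above (split and column names are not stated in this commit, so they are printed rather than assumed):

```python
from datasets import load_dataset

# Load the dataset referenced in the updated model card.
dataset = load_dataset("CIRCL/vulnerability-scores")

# Print the available splits, their columns, and their sizes
# instead of assuming any particular schema.
for split_name, split in dataset.items():
    print(split_name, split.column_names, len(split))
```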
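
The usage snippet above hard-codes the label order as `["low", "medium", "high", "critical"]`. A safer variant, sketched here under the assumption that the fine-tuned checkpoint stores meaningful label names in its config (the repository id is taken from the card title), reads the mapping from the model itself:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: the card title is also the Hub repository id.
model_name = "CIRCL/vulnerability-severity-classification-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

description = "Sample vulnerability description to classify."
inputs = tokenizer(description, return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
# id2label comes from the model config; if it only holds generic names
# such as LABEL_0, fall back to the hard-coded list in the card.
print("Predicted severity:", model.config.id2label[predicted_class])
```

Reading `id2label` from the config avoids silent mislabeling if the class order ever changes between fine-tuning runs.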