---
library_name: transformers
tags:
- cybersecurity
- mpnet
- classification
- fine-tuned
license: creativeml-openrail-m
language:
- en
base_model:
- sentence-transformers/all-mpnet-base-v2
---

# AttackGroup-MPNET - Model Card for MPNet Cybersecurity Classifier

This is a fine-tuned MPNet model specialized for classifying cybersecurity threat groups based on textual descriptions of their tactics and techniques.

## Model Details

### Model Description

This model is a fine-tuned MPNet classifier specialized in categorizing cybersecurity threat groups based on textual descriptions of their tactics, techniques, and procedures (TTPs).

- **Developed by:** Dženan Hamzić
- **Model type:** Transformer-based classification model (MPNet)
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** microsoft/mpnet-base (with intermediate MLM fine-tuning)

### Model Sources

- **Base Model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base)

## Uses

### Direct Use

This model classifies textual cybersecurity descriptions into known cybersecurity threat groups.

### Downstream Use

Integration into Cyber Threat Intelligence platforms, SOC incident analysis tools, and automated threat detection systems.

### Out-of-Scope Use

- General language tasks or any classification outside the cybersecurity threat-intelligence domain

## Bias, Risks, and Limitations

This model specializes in cybersecurity contexts. Predictions for unrelated contexts may be inaccurate.

### Recommendations

Always verify predictions with cybersecurity analysts before using in critical decision-making scenarios.

## How to Get Started with the Model (Classification)

```python
import torch
import json
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


label_to_groupid_file = hf_hub_download(
    repo_id="selfconstruct3d/AttackGroup-MPNET",
    filename="label_to_groupid.json"
)

with open(label_to_groupid_file, "r") as f:
    label_to_groupid = json.load(f)

# Load the fine-tuned MPNet model
classifier_model = AutoModelForSequenceClassification.from_pretrained("selfconstruct3d/AttackGroup-MPNET", num_labels=len(label_to_groupid)).to(device)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("selfconstruct3d/AttackGroup-MPNET")

def predict_group(sentence):
    classifier_model.eval()
    encoding = tokenizer(
        sentence,
        truncation=True,
        padding="max_length",
        max_length=128,
        return_tensors="pt"
    )
    input_ids = encoding["input_ids"].to(device)
    attention_mask = encoding["attention_mask"].to(device)

    with torch.no_grad():
        outputs = classifier_model(input_ids=input_ids, attention_mask=attention_mask)
        logits = outputs.logits
        predicted_label = torch.argmax(logits, dim=1).cpu().item()

    predicted_groupid = label_to_groupid[str(predicted_label)]
    return predicted_groupid

# Example usage:
sentence = "APT38 has used phishing emails with malicious links to distribute malware."
predicted_class = predict_group(sentence)
print(f"Predicted GroupID: {predicted_class}")
```
Expected output: `Predicted GroupID: G0001` (MITRE ATT&CK entry: https://attack.mitre.org/groups/G0001/).
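`predict_group` returns only the argmax label. When a confidence estimate is useful, the same logits can be softmaxed and the top-k groups returned. The helper below is a sketch: `topk_from_logits` and the demo label map are illustrative, not part of the released model, and the toy logits stand in for real model output.

```python
import torch

def topk_from_logits(logits, label_to_groupid, k=3):
    """Return the k most likely group IDs with their softmax confidences."""
    probs = torch.softmax(logits, dim=-1)
    values, indices = torch.topk(probs, k, dim=-1)
    return [(label_to_groupid[str(i.item())], v.item())
            for v, i in zip(values[0], indices[0])]

# Toy demo with hand-made logits over 4 hypothetical classes:
demo_logits = torch.tensor([[2.0, 0.5, -1.0, 0.0]])
demo_labels = {"0": "G0001", "1": "G0007", "2": "G0016", "3": "G0032"}
print(topk_from_logits(demo_logits, demo_labels, k=2))
```

With the real model, replace `demo_logits` with `outputs.logits` from the `predict_group` forward pass and `demo_labels` with the downloaded `label_to_groupid` map.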


## How to Get Started with the Model (Embeddings)

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from huggingface_hub import hf_hub_download
import json

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


label_to_groupid_file = hf_hub_download(
    repo_id="selfconstruct3d/AttackGroup-MPNET",
    filename="label_to_groupid.json"
)

with open(label_to_groupid_file, "r") as f:
    label_to_groupid = json.load(f)


# Load the fine-tuned classification model
model_name = "selfconstruct3d/AttackGroup-MPNET"
tokenizer = AutoTokenizer.from_pretrained(model_name)
classifier_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=len(label_to_groupid)).to(device)

def get_embedding(sentence):
    classifier_model.eval()

    encoding = tokenizer(
        sentence,
        truncation=True,
        padding="max_length",
        max_length=128,
        return_tensors="pt"
    )
    input_ids = encoding["input_ids"].to(device)
    attention_mask = encoding["attention_mask"].to(device)

    with torch.no_grad():
        outputs = classifier_model.mpnet(input_ids=input_ids, attention_mask=attention_mask)
        cls_embedding = outputs.last_hidden_state[:, 0, :].cpu().numpy().flatten()

    return cls_embedding

# Example usage:
sentence = "APT38 has used phishing emails with malicious links to distribute malware."
embedding = get_embedding(sentence)
print("Embedding shape:", embedding.shape)
print("Embedding values:", embedding)
```
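Embeddings produced by `get_embedding` can be compared with cosine similarity, e.g. for nearest-neighbor retrieval over known group descriptions. A minimal numpy helper (the toy 4-d vectors below stand in for real 768-d MPNet embeddings):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for 768-d MPNet embeddings:
emb_a = np.array([0.2, 0.9, 0.1, 0.4])
emb_b = np.array([0.1, 0.8, 0.2, 0.5])
print(cosine_similarity(emb_a, emb_b))   # close to 1.0: similar directions
print(cosine_similarity(emb_a, -emb_a))  # -1.0: opposite directions
```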



## Training Details

### Training Data

To be announced.

### Training Procedure

- Fine-tuned from an MLM fine-tuned MPNet checkpoint (`mpnet_mlm_cyber_finetuned-v2`)
- Epochs: 32
- Learning rate: 5e-6
- Batch size: 16
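The hyperparameters above can be expressed as a `transformers` `TrainingArguments` sketch. Only the three listed hyperparameters come from this card; `output_dir` and everything else is an illustrative placeholder.

```python
from transformers import TrainingArguments, Trainer

# Hyperparameters from this card; all other arguments are illustrative defaults.
training_args = TrainingArguments(
    output_dir="attackgroup-mpnet",   # hypothetical output path
    num_train_epochs=32,
    learning_rate=5e-6,
    per_device_train_batch_size=16,
)

# trainer = Trainer(model=classifier_model, args=training_args,
#                   train_dataset=train_dataset)  # dataset not yet released
# trainer.train()
```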

## Evaluation

### Testing Data, Factors & Metrics

- **Testing Data:** Stratified sample held out from the original dataset.
- **Metrics:** Accuracy, Weighted F1 Score
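Weighted F1 averages the per-class F1 scores with weights proportional to class support. A minimal pure-Python reference implementation (sklearn's `f1_score(..., average="weighted")` computes the same quantity):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """F1 per class, averaged with weights proportional to class support."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in support:  # classes with zero support carry zero weight
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (support[c] / total) * f1
    return score

print(weighted_f1([0, 0, 1, 1], [0, 0, 1, 0]))  # 0.7333...
```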

### Results

| Metric                 | Value   |
|------------------------|---------|
| Cl. Accuracy (Test)    | 0.9564 |
| W. F1 Score (Test)     | 0.9577 |


## Evaluation Results

| Model                 | Accuracy | F1 Macro | F1 Weighted | Embedding Variability |
|-----------------------|----------|----------|-------------|-----------------------|
| **AttackGroup-MPNET** | **0.85** | **0.759**| **0.847**   | 0.234                 |
| GTE Large             | 0.66     | 0.571    | 0.667       | 0.266                 |
| E5 Large v2           | 0.64     | 0.541    | 0.650       | 0.355                 |
| Original MPNet        | 0.63     | 0.534    | 0.619       | 0.092                 |
| BGE Large             | 0.53     | 0.418    | 0.519       | 0.366                 |
| SupSimCSE             | 0.50     | 0.373    | 0.479       | 0.227                 |
| MLM Fine-tuned MPNet  | 0.44     | 0.272    | 0.411       | 0.125                 |
| SecBERT               | 0.41     | 0.315    | 0.410       | 0.591                 |
| SecureBERT_Plus       | 0.36     | 0.252    | 0.349       | 0.267                 |
| CySecBERT             | 0.34     | 0.235    | 0.323       | 0.229                 |
| ATTACK-BERT           | 0.33     | 0.240    | 0.316       | 0.096                 |
| Secure_BERT           | 0.00     | 0.000    | 0.000       | 0.007                 |
| CyBERT                | 0.00     | 0.000    | 0.000       | 0.015                 |


| Model                | Similarity Search Recall@5 | Few-shot Accuracy | In-dist Similarity | OOD Similarity | Robustness Similarity |
|----------------------|----------------------------|-------------------|--------------------|----------------|-----------------------|
| **AttackGroup-MPNET**| **0.934**                  | **0.857**         | 0.235              | 0.017          | 0.948                 |
| Original MPNet       | 0.786                      | 0.643             | 0.217              | -0.004         | 0.941                 |
| E5 Large v2          | 0.778                      | 0.679             | 0.727              | 0.013          | 0.977                 |
| GTE Large            | 0.746                      | 0.786             | 0.845              | 0.002          | 0.984                 |
| BGE Large            | 0.632                      | 0.750             | 0.533              | -0.006         | 0.970                 |
| SupSimCSE            | 0.616                      | 0.571             | 0.683              | -0.015         | 0.978                 |
| SecBERT              | 0.468                      | 0.429             | 0.586              | -0.001         | 0.970                 |
| CyBERT               | 0.452                      | 0.250             | 1.000              | -0.001         | 1.000                 |
| ATTACK-BERT          | 0.362                      | 0.571             | 0.157              | -0.005         | 0.950                 |
| CySecBERT            | 0.424                      | 0.500             | 0.734              | -0.015         | 0.954                 |
| Secure_BERT          | 0.424                      | 0.250             | 0.990              | 0.050          | 0.998                 |
| SecureBERT_Plus      | 0.406                      | 0.464             | 0.981              | 0.040          | 0.998                 |



## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).

- **Hardware Type:** [To be filled by user]
- **Hours used:** [To be filled by user]
- **Cloud Provider:** [To be filled by user]
- **Compute Region:** [To be filled by user]
- **Carbon Emitted:** [To be filled by user]

## Technical Specifications

### Model Architecture

- MPNet architecture with classification head (768 -> 512 -> num_labels)
- Only the last 10 transformer layers were fine-tuned
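The classification head described above can be sketched in plain torch. `num_labels=150` is a placeholder (the real value comes from `label_to_groupid`), and the ReLU and dropout between the two linear layers are assumptions: the card only gives the layer sizes.

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Sketch of the 768 -> 512 -> num_labels head (layer sizes from the card,
    activation and dropout assumed)."""
    def __init__(self, hidden_size=768, mid=512, num_labels=150):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, mid),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(mid, num_labels),
        )

    def forward(self, x):
        return self.net(x)

head = ClassificationHead()
logits = head(torch.randn(4, 768))  # batch of 4 pooled MPNet embeddings
print(logits.shape)  # torch.Size([4, 150])
```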


## Model Card Authors

- Dženan Hamzić

## Model Card Contact

- https://www.linkedin.com/in/dzenan-hamzic/