---
library_name: transformers
license: mit
base_model: openai-community/gpt2-large
tags:
- generated_from_trainer
model-index:
- name: vulnerability-description-generation-gpt2-large
  results: []
datasets:
- CIRCL/vulnerability
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# vulnerability-description-generation-gpt2-large

This model is a fine-tuned version of [openai-community/gpt2-large](https://huggingface.co/openai-community/gpt2-large) on the dataset [CIRCL/vulnerability](https://huggingface.co/datasets/CIRCL/vulnerability).

Training time: approximately 27 hours on two NVIDIA L40S GPUs.

It achieves the following results on the evaluation set:
- Loss: 1.2889
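Since the reported loss is a mean cross-entropy (in nats), the corresponding validation perplexity can be derived as `exp(loss)`; this figure is not in the original training logs, just a standard conversion:

```python
import math

# Validation loss reported above (mean cross-entropy, in nats)
val_loss = 1.2889

# Perplexity is the exponential of the cross-entropy loss
perplexity = math.exp(val_loss)
print(round(perplexity, 2))  # → 3.63
```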

Energy-related information:
- RAM: 2.726718 kWh consumed (RAM power: 94.34 W)
- CPUs: 1.228363 kWh consumed (total CPU power: 42.5 W)
- GPUs: 14.324482 kWh consumed (total GPU power: 171.64 W)
- Total electricity used since the beginning of training: 18.279562 kWh
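The per-component figures are consistent with the reported total; a quick check, with the values copied from the list above:

```python
ram_kwh = 2.726718
cpu_kwh = 1.228363
gpu_kwh = 14.324482

# The components sum to the reported overall consumption of
# 18.279562 kWh (the last digit differs only by rounding)
total_kwh = ram_kwh + cpu_kwh + gpu_kwh
print(round(total_kwh, 6))  # → 18.279563
```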

## Model description

This is a text-generation model intended to assist in writing vulnerability descriptions.


## How to get started with the model

```python
>>> from transformers import pipeline
>>> pipe = pipeline("text-generation", model="CIRCL/vulnerability-description-generation-gpt2-large")
>>> print(pipe("A new vulnerability in OpenSSL allows", max_length=300))
[{'generated_text': 'A new vulnerability in OpenSSL allows remote attackers to create insecure connections. The impact of this vulnerability is that one or more TLS connections will be created under one username or one username/logon in a session for which another username or logon is valid. An attacker that can control the username or logon string of an openSSL host can effectively manipulate the OpenSSL host in a way that enables the attacker to create arbitrary openSSL connections by calling `http-server-create` in a non-secure sequence across other hosts. The vulnerability may be used to perform a man-in-the-middle attack, making the attacker completely different to the attacker. An exploitation may include MITM attacks and man-in-the-middle attacks. NOTE: the vendor states that "SUSE OpenSSL\'s implementation of \'openSSL_connect`, is not vulnerable to MITM attacks. If the attack vector is a MITM attack, OpenSSL will work under any circumstances." The CVE has been assigned for tracking purposes. In no way does the vendor\'s position change that an OpenSSL client should not use openSSL in the context of another OpenSSL server, but an attacker must choose the vulnerability according to their configuration if they are to exploit their attack. NOTE: the vendor indicates that it has considered the impact of this vulnerability "moderate". If by any measure, an OpenSSL client is susceptible to MITM attacks, that vulnerability would be considered low because it would be difficult to exploit a vulnerability that'}]
```

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
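These settings map onto a standard `transformers.TrainingArguments` configuration. The fragment below is a sketch under two assumptions: the reported batch sizes are per-device values, and `"training-output"` is a placeholder directory (model and dataset loading are omitted):

```python
from transformers import TrainingArguments

# Hyperparameters as listed above; "training-output" is a placeholder path
training_args = TrainingArguments(
    output_dir="training-output",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
)
```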

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7507        | 1.0   | 24394 | 1.4799          |
| 0.6586        | 2.0   | 48788 | 1.3366          |
| 0.5925        | 3.0   | 73182 | 1.2889          |
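The `Step` column is cumulative across epochs: each epoch adds the same 24,394 optimizer steps, which the checkpoint steps in the table confirm:

```python
steps_per_epoch = 24394

# Checkpoint steps reported in the table are cumulative multiples
checkpoints = [steps_per_epoch * epoch for epoch in (1, 2, 3)]
print(checkpoints)  # → [24394, 48788, 73182]
```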


### Framework versions

- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.1