gte-large-en-v1.5-based-ft-prompt-injection-detection-241205Weighted
This model is a fine-tuned version of Alibaba-NLP/gte-large-en-v1.5 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3220
- F1: 0.9471
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
Training results
Training Loss | Epoch | Step | Validation Loss | F1 |
---|---|---|---|---|
0.4432 | 0.2527 | 100 | 0.2279 | 0.9104 |
0.1996 | 0.5054 | 200 | 0.1793 | 0.9343 |
0.165 | 0.7581 | 300 | 0.1437 | 0.9450 |
0.1528 | 1.0107 | 400 | 0.1273 | 0.9531 |
0.1062 | 1.2634 | 500 | 0.1355 | 0.9490 |
0.1127 | 1.5161 | 600 | 0.1349 | 0.9544 |
0.1186 | 1.7688 | 700 | 0.1523 | 0.9496 |
0.1173 | 2.0215 | 800 | 0.1516 | 0.9483 |
0.0785 | 2.2742 | 900 | 0.1503 | 0.9528 |
0.0849 | 2.5268 | 1000 | 0.1623 | 0.9514 |
0.0898 | 2.7795 | 1100 | 0.1539 | 0.9460 |
0.0891 | 3.0322 | 1200 | 0.2415 | 0.9515 |
0.065 | 3.2849 | 1300 | 0.1589 | 0.9541 |
0.062 | 3.5376 | 1400 | 0.1499 | 0.9470 |
0.0677 | 3.7903 | 1500 | 0.1788 | 0.9445 |
0.0638 | 4.0430 | 1600 | 0.3220 | 0.9471 |
Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
- Downloads last month
- 4
Inference API (serverless) does not yet support model repos that contain custom code.
Model tree for J1N2/gte-large-en-v1.5-based-ft-prompt-injection-detection-241205Weighted
Base model
Alibaba-NLP/gte-large-en-v1.5