Update README.md

README.md

---
model-index:
- name: ADAPMIT-multilabel-bge
  results: []
datasets:
- GIZ/policy_classification
library_name: sentence-transformers

co2_eq_emissions:
  emissions: 40.5174303026829
  source: codecarbon
  training_type: fine-tuning
  on_cloud: true
  cpu_model: Intel(R) Xeon(R) CPU @ 2.00GHz
  ram_total_size: 12.6747894287109
  hours_used: 0.994
  hardware_used: 1 x Tesla T4
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# ADAPMIT-multilabel-bge

This model is a fine-tuned version of [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the [Policy-Classification](https://huggingface.co/datasets/GIZ/policy_classification) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3101
- Precision-micro: 0.9058

## Model description

The purpose of this model is to predict multiple labels simultaneously for a given input text. Specifically, the model predicts two labels - AdaptationLabel and MitigationLabel - indicating whether a passage is relevant to climate adaptation and/or mitigation.
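
A minimal inference sketch is shown below. The repository id `GIZ/ADAPMIT-multilabel-bge` and the 0.5 decision threshold are assumptions for illustration, not values taken from this card.

```python
# Hedged inference sketch for a multilabel classifier fine-tuned from a BGE
# encoder. The repo id and the 0.5 threshold are assumptions, not from this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "GIZ/ADAPMIT-multilabel-bge"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "The plan expands coastal flood defences and scales up solar generation."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel: apply a sigmoid per label and threshold each one independently.
probs = torch.sigmoid(logits).squeeze(0)
predicted = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(predicted)  # e.g. ["AdaptationLabel", "MitigationLabel"]
```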

## Intended uses & limitations

More information needed

## Training and evaluation data

- Training dataset: 12538 examples

| Class           | Positive count of class |
|:----------------|:------------------------|
| AdaptationLabel | 5439                     |
| MitigationLabel | 6659                     |

- Validation dataset: 1190 examples

| Class           | Positive count of class |
|:----------------|:------------------------|
| AdaptationLabel | 533                      |
| MitigationLabel | 604                      |
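
The counts above can in principle be recomputed from the source dataset; the split and column names in this sketch are assumptions about GIZ/policy_classification, not documented in this card.

```python
# Sketch: recount positive labels from the source dataset. The split name and
# label column names are assumptions about GIZ/policy_classification.
from datasets import load_dataset

ds = load_dataset("GIZ/policy_classification", split="train")  # assumed split
for column in ("AdaptationLabel", "MitigationLabel"):          # assumed columns
    print(column, sum(int(v) for v in ds[column]))
```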

## Training procedure

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision-micro | Precision-samples | Precision-weighted | Recall-micro | Recall-samples | Recall-weighted | F1-micro | F1-samples | F1-weighted |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:-----------------:|:------------------:|:------------:|:--------------:|:---------------:|:--------:|:----------:|:-----------:|
| 0.1807        | 2.0   | 1568 | 0.2549          | 0.9092          | 0.8643            | 0.9094             | 0.9156       | 0.8571         | 0.9156          | 0.9124   | 0.8571     | 0.9123      |
| 0.0955        | 3.0   | 2352 | 0.2988          | 0.9069          | 0.8660            | 0.9072             | 0.9252       | 0.8655         | 0.9252          | 0.9160   | 0.8613     | 0.9160      |
| 0.0495        | 4.0   | 3136 | 0.3101          | 0.9058          | 0.8647            | 0.9058             | 0.9305       | 0.8693         | 0.9305          | 0.9180   | 0.8622     | 0.9180      |
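
The hyperparameter list is not visible in this excerpt, but the step counts above pin down the effective batch size; a quick consistency check (an inference, not a stated value):

```python
# 3136 optimizer steps over 4 epochs = 784 steps per epoch; with 12538 training
# examples that implies an effective batch size of 16 (ceil(12538 / 16) == 784).
import math

steps_per_epoch = 3136 // 4        # 784, from the results table
print(12538 / steps_per_epoch)     # ~15.99 examples per optimizer step
print(math.ceil(12538 / 16))       # 784 steps per epoch at batch size 16
```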

Per-label performance on the validation set:

| Label           | Precision | Recall | F1-score | Support |
|:---------------:|:---------:|:------:|:--------:|:-------:|
| AdaptationLabel | 0.910     | 0.928  | 0.919    | 533     |
| MitigationLabel | 0.902     | 0.932  | 0.917    | 604     |
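
Per-label tables of this shape are commonly produced with scikit-learn's `classification_report`; a toy sketch, where the arrays stand in for the real validation labels and thresholded model outputs:

```python
# Toy sketch of a per-label report like the one above; y_true and y_pred are
# placeholders for real validation labels and thresholded predictions.
import numpy as np
from sklearn.metrics import classification_report

y_true = np.array([[1, 0], [1, 1], [0, 1], [0, 0]])
y_pred = np.array([[1, 0], [1, 1], [1, 1], [0, 0]])
print(classification_report(y_true, y_pred,
                            target_names=["AdaptationLabel", "MitigationLabel"]))
```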

### Environmental Impact

Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).

- **Carbon Emitted**: 0.04051 kg of CO2
- **Hours Used**: 0.994 hours
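
A sketch of how such a figure is captured with CodeCarbon; the tracker wraps the training loop, and its arguments (project name, output directory) are optional:

```python
# Sketch: wrap a training run with CodeCarbon's tracker; stop() returns the
# estimated emissions in kg CO2eq and writes an emissions.csv report.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... fine-tuning loop would run here ...
emissions_kg = tracker.stop()
print(f"{emissions_kg:.5f} kg CO2eq")
```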

### Training Hardware

- **On Cloud**: yes
- **GPU Model**: 1 x Tesla T4
- **CPU Model**: Intel(R) Xeon(R) CPU @ 2.00GHz
- **RAM Size**: 12.67 GB

### Framework versions

- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2