Update README.md
README.md CHANGED

@@ -46,16 +46,21 @@ base_model = AutoModelForCausalLM.from_pretrained("NX-AI/xLSTM-7b")
 model = PeftModel.from_pretrained(base_model, "mrs83/FlowerTune-xLSTM-7b-NLP-PEFT")
 ```
 
-
-
-- **STEM**:
-- **Social Sciences**: 14.
-- **Humanities**:
-- **Average**:
+### Evaluation Results (Accuracy)
+
+- **STEM**: 13.67 %
+- **Social Sciences**: 14.84 %
+- **Humanities**: 17.55 %
+- **Average**: 15.35 %
+
+### Communication Budget
+
+60609.38 Megabytes
 
 ## Training procedure
 
 The following `bitsandbytes` quantization config was used during training:
+
 - quant_method: QuantizationMethod.BITS_AND_BYTES
 - _load_in_8bit: False
 - _load_in_4bit: True
@@ -72,5 +77,5 @@ The following `bitsandbytes` quantization config was used during training:
 
 ### Framework versions
 
-- PEFT 0.
+- PEFT 0.14.0
 - Flower 1.13.0
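
The evaluation numbers added by this commit are internally consistent: assuming the **Average** is the unweighted mean of the three per-domain accuracies (an assumption, since the card does not state the weighting), a quick check reproduces 15.35 %:

```python
# Sanity check: the "Average" accuracy added in this commit matches the
# unweighted mean of the three per-domain scores (assumed weighting).
scores = {"STEM": 13.67, "Social Sciences": 14.84, "Humanities": 17.55}
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # -> 15.35
```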