mrs83 committed on
Commit f0c1692 · verified · 1 Parent(s): 865480a

Update README.md

Files changed (1): README.md (+11 −6)
README.md CHANGED
@@ -46,16 +46,21 @@ base_model = AutoModelForCausalLM.from_pretrained("NX-AI/xLSTM-7b")
 model = PeftModel.from_pretrained(base_model, "mrs83/FlowerTune-xLSTM-7b-NLP-PEFT")
 ```
 
-## Evaluation Results (accuracy)
+### Evaluation Results (Accuracy)
 
-- **STEM**: 12.62 %
-- **Social Sciences**: 14.95 %
-- **Humanities**: 13.56 %
-- **Average**: 13.71 %
+- **STEM**: 13.67 %
+- **Social Sciences**: 14.84 %
+- **Humanities**: 17.55 %
+- **Average**: 15.35 %
+
+### Communication Budget
+
+60609.38 Megabytes
 
 ## Training procedure
 
 The following `bitsandbytes` quantization config was used during training:
+
 - quant_method: QuantizationMethod.BITS_AND_BYTES
 - _load_in_8bit: False
 - _load_in_4bit: True
@@ -72,5 +77,5 @@ The following `bitsandbytes` quantization config was used during training:
 
 ### Framework versions
 
-- PEFT 0.6.2
+- PEFT 0.14.0
 - Flower 1.13.0
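
As a quick sanity check on the updated numbers (a hypothetical snippet, not part of the repository), the new **Average** in the diff is the arithmetic mean of the three category accuracies:

```python
# Updated per-category accuracies from the diff above
scores = {"STEM": 13.67, "Social Sciences": 14.84, "Humanities": 17.55}

# The README's "Average" is the plain mean of the three categories
average = round(sum(scores.values()) / len(scores), 2)
print(f"{average} %")  # 15.35 %
```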