---
base_model: NX-AI/xLSTM-7b
library_name: peft
license: apache-2.0
datasets:
- vicgalle/alpaca-gpt4
language:
- en
pipeline_tag: text-generation
---
# Model Card for FlowerTune-xLSTM-7b-NLP-PEFT
This PEFT adapter has been trained using [Flower](https://flower.ai/), a friendly federated AI framework.
The adapter and benchmark results have been submitted to the [FlowerTune LLM NLP Leaderboard](https://flower.ai/benchmarks/llm-leaderboard/nlp/).
## Model Details
Please check the following GitHub project for model details and evaluation results:
[https://github.com/mrs83/FlowerTune-xLSTM-7b-NLP](https://github.com/mrs83/FlowerTune-xLSTM-7b-NLP)
## How to Get Started with the Model
First, install the `xlstm` and `mlstm_kernels` packages:
```bash
pip install xlstm
pip install mlstm_kernels
```
Until the xLSTM integration is merged upstream, install the `transformers` repository fork from NX-AI:
```bash
pip install 'transformers @ git+https://github.com/NX-AI/transformers.git@integrate_xlstm'
```
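To confirm the environment is set up correctly, a quick import check can help (a minimal sketch, assuming the installs above succeeded; the printed version string is only informative):
```python
# Sanity check: confirm the required packages are importable
import xlstm
import mlstm_kernels
import transformers

# The fork should report a transformers version that includes xLSTM support
print(transformers.__version__)
```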
Use the model as follows:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the fine-tuned PEFT adapter on top of it
base_model = AutoModelForCausalLM.from_pretrained("NX-AI/xLSTM-7b")
model = PeftModel.from_pretrained(base_model, "mrs83/FlowerTune-xLSTM-7b-NLP-PEFT")
```
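From there, text generation works as with any causal LM (a sketch; the prompt and sampling settings below are illustrative, and the tokenizer is assumed to ship with the base model):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NX-AI/xLSTM-7b")

# Illustrative prompt; adjust generation settings as needed
inputs = tokenizer("What is federated learning?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```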
### Evaluation Results (Accuracy)
- **STEM**: 13.67 %
- **Social Sciences**: 17.55 %
- **Humanities**: 14.84 %
- **Average**: 15.35 %
### Communication Budget
60,609.38 MB
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_4bit: True
- load_in_8bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
- bnb_4bit_quant_storage: uint8
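For reference, this corresponds roughly to the following `BitsAndBytesConfig` (a sketch against the current `transformers` API; exact field names may differ in the version used for training):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit fp4 quantization without double quantization, float32 compute dtype,
# mirroring the configuration listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
    llm_int8_threshold=6.0,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "NX-AI/xLSTM-7b", quantization_config=bnb_config
)
```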
### Framework versions
- PEFT 0.14.0
- Flower 1.13.0