---
language:
- de
- en
license: llama2
library_name: transformers
tags:
- llama2
- deutsch
- german
- seedbox
datasets:
- seedboxai/multitask_german_examples_32k
pipeline_tag: text-generation
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/645ded34a45b4182d7f5c385/Lu_-yOozdIQLBe4FrmWUI.png)
# KafkaLM-7B-DARE_TIES-LaserRMT-QLoRA-DPO-v0.5
**KafkaLM 7b** is based on [leo-mistral-hessianai-7b](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b), a Mistral 7b model further pre-trained on a large German dataset by Björn Plüster and LAION, and was fine-tuned on an ensemble of popular, high-quality open-source instruction sets (translated from English to German).
KafkaLM 7b is a [Seedbox](https://huggingface.co/seedboxai) project trained by [Dennis Dickmann](https://huggingface.co/doubledsbv).
**Why Kafka?**
The models are proficient, yet creative, and show a tendency to push linguistic boundaries 😊
## THE MODEL CAN BE TESTED HERE: [Kafka-7B HF Space](https://huggingface.co/spaces/doubledsbv/Kafka-7B-DARE-TIES-QLoRa-LaserRMT-DPO)
## Model Details
The purpose of releasing the **KafkaLM series** is to contribute to the German AI community with a set of fine-tuned LLMs that are easy to use in everyday applications across a variety of tasks.
The main goal was to provide LLMs proficient in German, especially to be used in German-speaking business contexts where English alone is not sufficient.
## DPO Training with laserRMT and QLoRA
Building on the brilliant work of the [laserRMT](https://github.com/cognitivecomputations/laserRMT/) team, I used their SNR implementation to identify candidate layers for the DPO training.
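The gist of that selection step, sketched purely as an illustration (this is not the laserRMT code; see the linked repository for the actual implementation): a weight matrix's singular values are split into "signal" above and "noise" below a Marchenko-Pastur-style threshold, and the resulting ratio is used to rank layers.

```python
import torch

def snr_score(weight: torch.Tensor) -> float:
    """Rough SNR estimate from a weight matrix's singular value spectrum."""
    W = weight.detach().float()
    sigma = torch.linalg.svdvals(W)  # singular values in descending order
    m, n = W.shape
    # Largest singular value expected from a pure-noise matrix with the same
    # entry variance is roughly sqrt(var) * (sqrt(m) + sqrt(n))
    edge = W.var().sqrt() * (m ** 0.5 + n ** 0.5)
    signal = sigma[sigma > edge].sum()
    noise = sigma[sigma <= edge].sum().clamp(min=1e-8)
    return (signal / noise).item()

# Hypothetical usage: rank each decoder layer's MLP down-projection by SNR
# scores = {name: snr_score(p) for name, p in model.named_parameters()
#           if name.endswith("mlp.down_proj.weight")}
```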
### Dataset
I used an 8k filtered version of [seedboxai/multitask_german_examples_32k](https://huggingface.co/datasets/seedboxai/multitask_german_examples_32k).
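For illustration, the full dataset can be loaded with the `datasets` library; the exact criteria behind the 8k filter are not published here, so the selection below is a placeholder assumption:

```python
from datasets import load_dataset

# Load the full 32k multitask set referenced above
ds = load_dataset("seedboxai/multitask_german_examples_32k", split="train")

# Placeholder only: the actual 8k filter used for training is not specified
ds_8k = ds.select(range(8_000))
```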
### Prompt Format
This model uses the following prompt format:
```
<|system|>
Du bist ein freundlicher und hilfsbereiter KI-Assistent. Du beantwortest Fragen faktenorientiert und präzise, ohne dabei relevante Fakten auszulassen.</s>
<|user|>
Welche Möglichkeiten der energetischen Sanierung habe ich neben Solar und Energiespeicher?</s>
<|assistant|>
```
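For a quick sanity check, the template can also be assembled by hand; a minimal helper (hypothetical, the tokenizer's chat template shown in the Usage section below is the authoritative way):

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a prompt in the <|system|>/<|user|>/<|assistant|> format above."""
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_prompt(
    "Du bist ein freundlicher und hilfsbereiter KI-Assistent. Du beantwortest "
    "Fragen faktenorientiert und präzise, ohne dabei relevante Fakten auszulassen.",
    "Welche Möglichkeiten der energetischen Sanierung habe ich neben Solar und Energiespeicher?",
)
```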
## 🧩 Configuration
```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: seedboxai/KafkaLM-7B-German-V0.1
    parameters:
      density: 0.65
      weight: 0.50
  - model: mlabonne/Monarch-7B
    parameters:
      density: 0.60
      weight: 0.30
  - model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
    parameters:
      density: 0.60
      weight: 0.20
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
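To reproduce a merge from this configuration, the mergekit CLI can be used; a notebook-style sketch, assuming the YAML above is saved as `config.yaml` (the flag set is illustrative, adjust to your hardware):

```python
!pip install -qU mergekit
!mergekit-yaml config.yaml ./merged-model --copy-tokenizer --cuda --lazy-unpickle
```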
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "seedboxai/KafkaLM-7B-DARE_TIES-LaserRMT-QLoRA-DPO-v0.5"
messages = [{"role": "user", "content": "Was ist der Sinn des Lebens?"}]

# Render the chat messages into the model's prompt format (see above)
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model across available devices and generate
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
This model should only be used for research purposes. The original Llama2 license and all restrictions of datasets used to train this model apply. |