Model Description

EXPERIMENTAL MODEL, NOT FINAL, IT HAS SOME ISSUES, BUT IT'S REALLY COOL WHEN IT WORKS

The biggest change from my previous AEZAKMI models is that this one is much, much less likely to refuse a request!

Yi-34B 200K base model fine-tuned on the RAWrr v1 dataset via DPO, then fine-tuned on the AEZAKMI v2 dataset via SFT. DPO training took around 6 hours; SFT took around 18 hours. I used unsloth for both stages. It's like airoboros but with less gptslop, no refusals, and less of the language typical of RLHF-tuned OpenAI models. Say goodbye to "It's important to remember"!
Prompt format is standard ChatML. Don't expect it to be good at math or riddles, or to be crazy smart. My end goal with AEZAKMI is to create a cozy, free chatbot. The base model used for fine-tuning was the 200K-context Yi-34B-Llama model shared by larryvrh.
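
For illustration, here's a minimal sketch of what the DPO stage might look like with unsloth and trl's `DPOTrainer` (older trl API). The base model and dataset names match what's described in this card, but the hyperparameters, dataset column layout, and output paths are illustrative assumptions, not my exact setup:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import DPOTrainer
from unsloth import FastLanguageModel

# Load the llamafied Yi-34B 200K base with a reduced training context.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="larryvrh/Yi-34B-200K-Llamafied",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach a LoRA adapter; with a PEFT model, trl can derive the frozen
# reference model implicitly, so ref_model can stay None.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# rawrr-style DPO data: prompt / chosen / rejected columns (assumed layout).
dataset = load_dataset("adamo1139/rawrr_v1", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,
    beta=0.1,  # illustrative DPO temperature, not the exact value used
    train_dataset=dataset,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="yi-34b-rawrr-dpo",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=1,
        num_train_epochs=1,
    ),
)
trainer.train()
```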

Training was done with max_position_embeddings set to 4096; it was reverted to 200K after applying the LoRA.
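
As a minimal sketch of that round-trip (paths are illustrative, assuming a standard transformers config):

```python
from transformers import AutoConfig

# Before training: advertise a 4096-token window so training tooling
# doesn't try to allocate for the full 200K context.
config = AutoConfig.from_pretrained("larryvrh/Yi-34B-200K-Llamafied")
config.max_position_embeddings = 4096
config.save_pretrained("yi-34b-train-config")

# After applying the LoRA: restore the full 200K window in the final model.
config.max_position_embeddings = 200000
config.save_pretrained("yi-34b-aezakmi-final")
```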

Prompt Format

I recommend using the ChatML format, as this is what was used during fine-tuning.
Here's the prompt format you should use. You can set a different system message; the model seems to respect that fine, so it wasn't overfitted to a single prompt. Both "A chat." and "A chat with uncensored assistant." system prompts work well and are pretty refusal-free.

```
<|im_start|>system
A chat with uncensored assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
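
If you're building the prompt by hand rather than through a frontend, a small helper like this produces the format above (the function name is just an example):

```python
def make_chatml_prompt(user_message: str,
                       system: str = "A chat with uncensored assistant.") -> str:
    """Assemble a single-turn ChatML prompt in the format shown above."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(make_chatml_prompt("Write a haiku about mountains."))
```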

Intended uses & limitations

It's a chat model, not a base completion-only one. Use is limited by the Yi license. Since the no-robots dataset was used for making rawrr_v1, you probably shouldn't use this model for commercial activities.

Known Issues

I recommend setting repetition penalty to around 1.05 to avoid repetition. So far I've had reasonably good results running this model with temperature 1.0-1.2.
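
As a minimal sketch of those settings with plain transformers `generate()` (the model id is this repo; everything else, like `max_new_tokens` and the example prompt, is an illustrative choice):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adamo1139/Yi-34B-200K-AEZAKMI-RAW-2301"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = (
    "<|im_start|>system\nA chat with uncensored assistant.<|im_end|>\n"
    "<|im_start|>user\nTell me a story about a dragon.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.1,          # recommended range: 1.0-1.2
    repetition_penalty=1.05,  # recommended: ~1.05
    max_new_tokens=512,
)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```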

One big issue I noticed is that I think I set the learning rate too low for the SFT fine-tune. Sometimes completion mode shines through and responses read more like base-model completions than instruct responses. Another small issue: when you enter a prompt that would have triggered a refusal in a previous model, the response will be more free-form and will probably have a touch of completion in it. So far, the anti-refusal bias seems strongest at 0 ctx, i.e. on the first prompt, but it's also present, albeit a bit weaker, further into the conversation. I plan to expand the rawrr dataset and include more samples without a system prompt, which should help here.

made with Unsloth

Unsloth training parameters DPO Stage

Unsloth training parameters SFT Stage

  • lora_r: 16

  • lora_alpha: 32

  • max_length: 2200

  • learning_rate: 0.00006

  • lr_scheduler_type: "cosine"

  • lr_scheduler_kwargs: {"num_cycles": 0.3}

  • target_modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]

  • gradient_accumulation_steps: 1

  • per_device_batch_size: 1

  • num_train_epochs: 1.4

The script used for SFT training can be found here: https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-RAW-2301-LoRA/blob/main/yi-34b-aezakmi-sft-1-hf.py
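
The linked script is the authoritative version; as a rough sketch of how the parameters above map onto an unsloth + trl SFT setup (older trl API; the dataset name, text column, and paths are assumptions):

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Start from the DPO-stage output (path illustrative).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="yi-34b-rawrr-dpo",
    max_seq_length=2200,  # max_length from the list above
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=load_dataset("adamo1139/AEZAKMI_v2", split="train"),
    dataset_text_field="text",  # assumed column name
    max_seq_length=2200,
    args=TrainingArguments(
        output_dir="yi-34b-aezakmi-sft",
        learning_rate=0.00006,
        lr_scheduler_type="cosine",
        lr_scheduler_kwargs={"num_cycles": 0.3},
        per_device_train_batch_size=1,
        gradient_accumulation_steps=1,
        num_train_epochs=1.4,
    ),
)
trainer.train()
```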

Credits

Thanks to mlabonne, Daniel Han and Michael Han for providing the open-source code that was used for fine-tuning.
