|
---
datasets:
- jondurbin/airoboros-gpt4-1.4.1
---
|
# NTK-Aware Scaled RoPE QLoRA Finetune of airoboros-33b-gpt4-1.4.1 (LoRA)
|
|
|
GPTQ quantized weights can be found here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ
|
|
|
fp16 weights can be found here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-fp16
|
|
|
Analogue using the RoPE Position Interpolation (PI) technique: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-LoRA
|
|
|
## Overview
|
|
|
This is [Jon Durbin's Airoboros 33B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4) (LoRA) with several key modifications:
|
- Context length extended to 16384 via NTK-aware scaled RoPE embeddings, NOT via the SuperHOT LoRA; I started from base Llama-33b (see the sketch after this list).

- Training sequences beyond 2048 have the target truncated to equal 2048.

- Used the airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4.
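
For reference, the NTK-aware scheme rescales the rotary base rather than interpolating position indices as PI does. Below is a minimal sketch of the frequency calculation only; the scaling factor `alpha = 8` (2048 → 16384) and the head dimension of 128 are my own illustrative assumptions, not values read out of the training code.

```python
import torch

def ntk_scaled_inv_freq(dim: int = 128, base: float = 10000.0, alpha: float = 8.0) -> torch.Tensor:
    # NTK-aware scaling: instead of compressing position indices (as PI does),
    # enlarge the rotary base so high-frequency components stay largely intact
    # while low-frequency components are stretched to cover the longer context.
    scaled_base = base * alpha ** (dim / (dim - 2))
    return 1.0 / (scaled_base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))

# alpha = 8 corresponds to an 8x extension of Llama's native 2048-token context (16384).
inv_freq = ntk_scaled_inv_freq()
```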
|
|
|
Otherwise, I emulated the training process as closely as possible (rank 64 QLoRA). It was trained on 1x RTX 6000 Ada for ~43 hours.
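
For readers unfamiliar with the setup, here is a rough sketch of what a rank-64 QLoRA configuration looks like with `peft` and `bitsandbytes`. The target modules, `lora_alpha`, dropout, and compute dtype shown are placeholders of mine, not the exact values used for this finetune.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # placeholder
)

# Rank-64 LoRA adapters trained on top of the quantized base model.
lora_config = LoraConfig(
    r=64,               # rank 64, as noted above
    lora_alpha=16,      # placeholder
    lora_dropout=0.05,  # placeholder
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # placeholder selection
)
```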
|
|
|
## NTK Patch
|
|
|
To use with HF transformers, AutoGPTQ, etc., see the [NTK monkey patch](https://github.com/bhenrym14/qlora-airoboros-longcontext/blob/main/scaledllama/llama_rope_ntk_monkey_patch.py).
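
The linked file is the authoritative implementation; purely as a sketch of the idea, the following assumes the transformers ~4.30-era `LlamaRotaryEmbedding.__init__` signature and an assumed scaling factor of 8 (2048 → 16384). The patch has to be applied before the model is instantiated.

```python
import torch
import transformers
from transformers.models.llama import modeling_llama

ALPHA = 8  # assumed: 8x extension of the native 2048 context -> 16384
_orig_init = modeling_llama.LlamaRotaryEmbedding.__init__

def _ntk_scaled_init(self, dim, max_position_embeddings=2048, base=10000, device=None):
    # Enlarge the rotary base (NTK-aware scaling) before the cos/sin cache is built.
    base = base * ALPHA ** (dim / (dim - 2))
    _orig_init(self, dim, max_position_embeddings=max_position_embeddings,
               base=base, device=device)

modeling_llama.LlamaRotaryEmbedding.__init__ = _ntk_scaled_init

# Load the model only after patching.
model = transformers.AutoModelForCausalLM.from_pretrained(
    "bhenrym14/airoboros-33b-gpt4-1.4.1-NTK-16384-fp16",
    torch_dtype=torch.float16,
    device_map="auto",
)
```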