Training Details:
Trained at 8K context -> expanded to 32K context via PoSE-based context extension (a sketch of the idea follows the config below).

Dataset Modifications:
- Further cleaned up roleplaying samples -> quality check
- Removed low-quality samples flagged during a manual check
- More creative writing samples -> 2x
- Remade and refined the detailed instruct data

Needle in a Haystack Results:

![Results](Linkhere)

Coherent at 32K context. Not as good as a natively trained 32K model, but much better than regular RoPE scaling.

```
sequence_len: 8192
use_pose: true
pose_max_context_len: 32768

overrides_of_model_config:
  rope_theta: 2000000.0
  max_position_embeddings: 32768

# peft_use_dora: true
adapter: lora
peft_use_rslora: true
lora_model_dir:
lora_r: 256
lora_alpha: 256
lora_dropout: 0.1
lora_target_linear: true
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

warmup_steps: 80
gradient_accumulation_steps: 6
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine_with_min_lr
learning_rate: 0.00004
lr_scheduler_kwargs:
  min_lr: 0.000004
```
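For context on `use_pose` / `pose_max_context_len`: PoSE trains on short chunks (here 8K tokens) while sampling position IDs from the full target window (here 32K), so the model sees long-range positions without ever attending over long sequences. Below is a minimal, simplified sketch of that idea using a two-segment split; it illustrates the technique and is not axolotl's actual implementation.

```python
import torch

def pose_position_ids(seq_len: int = 8192, target_len: int = 32768) -> torch.Tensor:
    """Position IDs for one PoSE training chunk (simplified two-segment variant).

    The chunk is only `seq_len` tokens long, but its position IDs are spread
    across the full `target_len` window by inserting a random skip between
    two contiguous segments, so the model sees rotary positions up to 32K
    while only computing attention over 8K tokens.
    """
    split = int(torch.randint(1, seq_len, (1,)))      # random segment boundary
    max_skip = target_len - seq_len                   # largest allowed shift
    skip = int(torch.randint(0, max_skip + 1, (1,)))  # random offset for segment 2

    first = torch.arange(0, split)                    # positions 0 .. split-1
    second = torch.arange(split, seq_len) + skip      # positions shifted toward the 32K range
    return torch.cat([first, second])                 # shape: (seq_len,)

# Example: an 8K chunk whose position IDs can reach up to 32767.
pos = pose_position_ids()
print(pos.shape, int(pos.max()))
```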
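One detail worth noting in the adapter settings: with `peft_use_rslora: true`, the LoRA update is scaled by alpha / sqrt(r) instead of the standard alpha / r (rank-stabilized LoRA). With r = 256 and alpha = 256 that changes the effective scale from 1.0 to 16.0, as this quick check shows (assuming the standard definitions of both scalings):

```python
import math

lora_r, lora_alpha = 256, 256

print(lora_alpha / lora_r)             # 1.0  -> standard LoRA scaling (alpha / r)
print(lora_alpha / math.sqrt(lora_r))  # 16.0 -> rsLoRA scaling (alpha / sqrt(r)) used here
```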