Experimental model focused on RP and storytelling. This method attempts to bring some of the intrigue and style of the base model back into the instruct model.
The model is trained in four stages (use with Llama-8B-Instruct or Llama-8B-Instruct abliterations):
Base model -- 1 GB of semi-structured pretraining data (sample lengths uniformly distributed between 512 and 8192 tokens, centered around a 4096 ctx length); a chunking sketch follows the phase list below.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/637f3b03932a61b89aefbf5c/hpdbVRrM1yt65-gNtRIfT.png)
- Base pretraining phase 1 (Constant LR, text completion -- 20,000 steps 2/3 epoch)
- Base pretraining phase 2 (Cosine LR, text completion -- 10,000 steps 1/3 epoch)
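A minimal sketch of how variable-length completion chunks matching that distribution could be produced; `chunk_token_ids`, its bounds, and the seed are illustrative assumptions, not the actual preprocessing code.

```python
import random

def chunk_token_ids(token_ids, lo=512, hi=8192, seed=3407):
    """Split a pre-tokenized text stream into completion chunks whose
    lengths are drawn uniformly between lo and hi (centered around ~4096)."""
    rng = random.Random(seed)
    chunks, pos = [], 0
    while pos < len(token_ids):
        length = rng.randint(lo, hi)        # pick a chunk length for this sample
        chunks.append(token_ids[pos:pos + length])
        pos += length
    return chunks
```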
Merge the LoRA into the instruct model -- 100 MB of structured story-instruct data (all samples aim to fill close to the full 8192 ctx); a merge sketch follows the phase list below.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/637f3b03932a61b89aefbf5c/V1Jf07k8JdI0_OzIDc7FF.png)
- Story-instruct tune phase 1 (Constant LR, ~1250 steps, 1 epoch)
- Story-instruct tune phase 2 (Cosine LR, ~1250 steps, 1 epoch)
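The LoRA produced by the base-pretraining phases can be folded into the instruct weights with PEFT before the story-instruct tune. A minimal sketch, assuming the stock Llama-3-8B-Instruct checkpoint and a hypothetical adapter path:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Assumed checkpoint IDs/paths -- substitute the actual instruct model (or abliteration)
# and the adapter saved at the end of base pretraining.
instruct = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
with_lora = PeftModel.from_pretrained(instruct, "outputs/checkpoint-30000")
merged = with_lora.merge_and_unload()   # fold the LoRA weights into the instruct weights
merged.save_pretrained("llama-8b-instruct-storybase")
```

Unsloth also exposes `save_pretrained_merged` to write a merged checkpoint directly after training, which avoids the separate PEFT step.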
Trained using <https://github.com/unslothai/unsloth>
Rough script:
```python
from unsloth import FastLanguageModel
import torch
from trl import SFTTrainer
from transformers import TrainingArguments, IntervalStrategy

# `model`, `tokenizer`, `train_dataset`, and `max_seq_length` are assumed to be
# defined beforehand (see the loading sketch below).
model = FastLanguageModel.get_peft_model(
    model,
    r = 64,
    target_modules = ["q_proj", "v_proj", "k_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 32,
    lora_dropout = 0.05,  # 0 for base pretraining
    bias = "none",
    use_gradient_checkpointing = "unsloth",
    random_state = 3407,
    max_seq_length = max_seq_length,
    use_rslora = True,
    loftq_config = None,
)

trainer = SFTTrainer(
    model = model,
    train_dataset = train_dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    tokenizer = tokenizer,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        warmup_steps = 45,
        num_train_epochs = 2,  # 1 for base pretraining
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 15,
        logging_dir = "logs",
        report_to = "tensorboard",
        output_dir = "outputs",
        save_strategy = IntervalStrategy.STEPS,
        save_steps = 100,
        save_total_limit = 30,
        optim = "adamw_torch_fused",
        lr_scheduler_type = "cosine",  # constant LR in phase 1, cosine in phase 2
        learning_rate = 5e-5,
        weight_decay = 0.10,  # 0.15 for base pretraining
        adam_beta1 = 0.88,  # 0.9 for base pretraining
        adam_beta2 = 0.99,  # 0.999 for base pretraining
    ),
)
```
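The rough script leaves out the model, tokenizer, and dataset setup. A minimal loading sketch; the model name, dataset file, and the 8192 context length are assumptions rather than the exact values used.

```python
from unsloth import FastLanguageModel
from datasets import load_dataset

max_seq_length = 8192  # matches the near-8192 ctx story-instruct samples

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "meta-llama/Meta-Llama-3-8B-Instruct",  # or an abliterated variant
    max_seq_length = max_seq_length,
    dtype = None,           # auto-select bf16/fp16
    load_in_4bit = False,
)

# Hypothetical data file -- the training data is not published.
train_dataset = load_dataset("json", data_files = "story_instruct.jsonl", split = "train")

# ...build the PEFT model and SFTTrainer as in the script above, then:
# trainer.train()
```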