tl;dr: This is Phi 3 Medium finetuned for roleplaying.

We needed more explicit moist.

It failed.

Training Details:
- 8x H100 80GB SXM GPUs
- 10 minutes training duration
- A continued finetune of Cream-Phi-3-14B-v1b (now released as the official v1)

Results for Roleplay Mode (i.e., not Instruct format):
- Workable RP formatting with occasional mistakes. (Yep, it got worse)
- Long-ish and moist responses. It cooks fast.
- Slightly incoherent. Can go hard on moist scenes but with poor spatial and anatomical understanding.
- Important: My testing is lazy and flawed. Take it with a grain of salt and test the GGUFs before taking notes (see the loading sketch below).
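
If you want a quick way to poke at the quants yourself, here's a minimal sketch using llama-cpp-python. The GGUF file name and the sample prompt are hypothetical, adjust to whichever quant you grab:

```python
# Minimal sketch for testing a GGUF quant locally with llama-cpp-python.
# Assumptions: llama-cpp-python is installed, and a quantized GGUF of this
# model has been downloaded; the file name below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="Cream-Phi-3-14B.Q4_K_M.gguf",  # hypothetical quant file
    n_ctx=6144,  # matches the sequence_len used for this finetune
)

# Plain completion ("Roleplay Mode", no instruct template): the model just
# continues the scene directly from the prompt text.
out = llm(
    "A rainy night in the city. She stepped into the bar and",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```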

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/BjN92w1x9XbsOj0RpALiz.png)
(No eval split = no eval metrics ^)


Axolotl Config (some fields omitted)
```yaml
base_model: BeaverAI/Cream-Phi-3-14B-v1b
load_in_4bit: true
bf16: auto
fp16:
tf32: false
flash_attention: true

sequence_len: 6144
datasets:
  - path: SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed
    type: customphi3

num_epochs: 2
warmup_steps: 5
weight_decay: 0.1

adapter: lora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.1
lora_target_linear: true

gradient_accumulation_steps: 2
micro_batch_size: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true

sample_packing: true
pad_to_sequence_len: true

optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0001
max_grad_norm: 1.0
```
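
For the curious, a back-of-the-envelope effective batch size implied by the config above, assuming plain data parallelism across all 8 GPUs (an assumption; the parallelism setup is among the omitted fields):

```python
# Effective batch size per optimizer step, assuming simple data parallelism
# across the 8x H100s listed in the training details.
micro_batch_size = 1
gradient_accumulation_steps = 2
num_gpus = 8
effective_batch = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch)  # 16 packed sequences (up to 6144 tokens each) per step
```

A config like this is typically launched with `accelerate launch -m axolotl.cli.train config.yaml`.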