---
license: cc-by-nc-4.0
datasets:
- Setiaku/Stheno-v3.4-Instruct
- Setiaku/Stheno-3.4-Creative-2
language:
- en
---

---

Thanks to Backyard.ai for the compute to train this. :)

---

Llama-3.1-8B-Stheno-v3.4

This model has gone through a multi-stage finetuning process.
```
- 1st: over a multi-turn Conversational-Instruct dataset.
- 2nd: over a Creative Writing / Roleplay dataset, along with some Creative-based Instruct datasets.
- The datasets consist of a mixture of Human and Claude data.
```
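
Since this is a standard Llama 3.1 finetune, it should load with the usual transformers flow. A minimal inference sketch, assuming the stock Llama 3.1 chat template inherited from the base model; the system prompt and sampling values below are placeholders, not recommendations:

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sao10K/Llama-3.1-8B-Stheno-v3.4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a creative roleplay assistant."},
    {"role": "user", "content": "Describe a rainy harbor town at dusk."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, temperature=0.9, do_sample=True)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```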

Changes since previous Stheno Datasets:
```
- Included Multi-turn Conversation-based Instruct Datasets to boost multi-turn coherency. # This is a separate set, not the ones made by Kalomaze and Nopm that are used in Magnum. They're completely different data.
- Replaced Single-Turn Instruct with Better Prompts and Answers by Claude 3.5 Sonnet and Claude 3 Opus.
- Removed c2 Samples -> currently being re-filtered and masked for use with custom prefills. TBD
- Included 85% more Roleplaying Examples based on [Gryphe's](https://huggingface.co/datasets/Gryphe/Sonnet3.5-Charcard-Roleplay) Charcard RP Sets. Further filtered and cleaned.
- Included 40% More Creative Writing Examples.
- Included Datasets targeting System Prompt Adherence.
- Included Datasets targeting Reasoning / Spatial Awareness.
- Filtered for the usual errors, slop and stuff at the end. Some may have slipped through, but I removed nearly all of it.
```

Personal Opinions:
```
- Llama 3.1 was more disappointing in its Instruct tune; it felt overbaked, at least. Likely due to the DPO stage being done after their SFT stage.
- Still though, I think I did an okay job.
```

Below are some graphs for you to observe.

---

`Turn Distribution # 1 Turn is considered as 1 combined Human/GPT pair in a ShareGPT format. 4 Turns means 1 System Row + 8 Human/GPT rows in total.`

![Turn](https://huggingface.co/Sao10K/Llama-3.1-8B-Stheno-v3.4/resolve/main/turns_distribution_bar_graph.png)
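
For reference, a hedged sketch of how a turn count under this definition could be computed from ShareGPT-style records; the field names (`from`, `human`, `gpt`, `system`) are assumptions about the data layout, not taken from the actual tooling used here:

```
# Hypothetical turn counter for ShareGPT-style records, matching the
# definition above: 1 turn = 1 combined human/gpt pair; the system row
# is not counted as a turn.
def count_turns(conversation):
    human = sum(1 for msg in conversation if msg.get("from") == "human")
    gpt = sum(1 for msg in conversation if msg.get("from") == "gpt")
    return min(human, gpt)  # a full turn needs both halves of the pair

sample = [
    {"from": "system", "value": "..."},
    {"from": "human", "value": "Hi!"},
    {"from": "gpt", "value": "Hello there."},
]
print(count_turns(sample))  # -> 1
```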

`Token Count Histogram # Based on the Llama 3 Tokenizer`

![Tokens](https://huggingface.co/Sao10K/Llama-3.1-8B-Stheno-v3.4/resolve/main/token_count_histogram.png)
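
To reproduce a histogram like this for your own data, a minimal sketch using the tokenizer bundled with this repo (any Llama 3 tokenizer should give matching counts, since the vocabulary is shared):

```
# Minimal sketch: per-sample token counts with the Llama 3 tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Sao10K/Llama-3.1-8B-Stheno-v3.4")

texts = ["Hello there.", "A longer example sentence for counting tokens."]
counts = [len(tokenizer(t)["input_ids"]) for t in texts]
print(counts)  # token counts per sample, ready to histogram
```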

---

Have a good one.