Casual-Autopsy committed
Commit cf11a5b (1 parent: 7410758): Update README.md

README.md CHANGED
@@ -47,6 +47,10 @@ base_model:
***
# L3-Super-Nova-RP-8B

+ This is a role-playing model designed with the goal of good creativity and intelligence to improve advanced role-playing experiences. The aim of L3-Super-Nova-RP-8B is to be good at Chain-of-Thought, summarizing information, and recognizing emotions. It also includes data about the human body and mind in an attempt to enhance understanding and interaction within role-playing scenarios.
+
+ The model was developed using various methods in multiple merging steps. To boost creativity, it used techniques to strengthen and adjust its output, paired with the newly released DELLA merge method. All merge calculations were done in float32 and then output as the usual bfloat16 during merging.
+
***
***
## Presets
@@ -60,9 +64,9 @@ Top K: 40
Min P: 0.075
Repetition Penalty: 1.01
# Don't make this higher, DRY handles the bulk of Squashing Repetition.
- # This is
+ # This is just to lightly nudge the bot to move the plot forward.
Rep Pen Range: 2048 # Don't make this higher either.
- Presence Penalty: 0.03 # Minor encouragement to use synonyms.
+ Presence Penalty: 0.03 # Minor encouragement to use synonyms. Probably don't make this higher either.
Smoothing Factor: 0.3

DRY Repetition Penalty:
@@ -79,7 +83,7 @@ Dynamic Temperature:

***
### Context/Instruct
- [Virt-io's SillyTavern](https://huggingface.co/Virt-io/SillyTavern-Presets)
+ [Virt-io's SillyTavern Presets](https://huggingface.co/Virt-io/SillyTavern-Presets) work really well with this.

***
***
@@ -97,9 +101,9 @@ While not required, I'd recommend building the story string prompt with Lorebook
***
## Merge Info

- The merge methods used were **Ties**, **Dare Ties**, **Breadcrumbs Ties**, **SLERP**, and **
+ The merge methods used were **Ties**, **Dare Ties**, **Breadcrumbs Ties**, **SLERP**, and **DELLA**.

- The model was finished off with both **Merge Densification**, and **Negative Weighting**
+ The model was finished off with both **Merge Densification** and **Negative Weighting** techniques to boost creativity.

All merging steps had the merge calculations done in **float32** and were output as **bfloat16**.
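A note on the sampler preset block in the diff above: the values translate almost one-to-one into a backend request. Below is a minimal Python sketch of that mapping; the dictionary name, the `apply_preset` helper, and the exact key spellings are illustrative assumptions, since SillyTavern, text-generation-webui, and KoboldCpp each name these samplers slightly differently.

```python
# Hypothetical sketch: the README's sampler preset expressed as a plain dict.
# Key names vary between backends, so treat them as labels, not a fixed schema.
SUPER_NOVA_RP_SAMPLERS = {
    "top_k": 40,                 # from the hunk context line "Top K: 40"
    "min_p": 0.075,
    "repetition_penalty": 1.01,  # keep low; DRY handles most repetition squashing
    "repetition_penalty_range": 2048,
    "presence_penalty": 0.03,    # minor nudge toward synonyms
    "smoothing_factor": 0.3,
    # DRY and Dynamic Temperature values sit outside the changed hunks,
    # so they are not reproduced here.
}

def apply_preset(request_payload: dict) -> dict:
    """Merge the preset into a request payload; keys the caller already set win."""
    merged = dict(SUPER_NOVA_RP_SAMPLERS)
    merged.update(request_payload)
    return merged

if __name__ == "__main__":
    print(apply_preset({"max_tokens": 512}))
```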
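On the Merge Info section: the actual merge recipes are not part of this commit, so the following is only a sketch of what a single DELLA step with float32 math and bfloat16 output could look like as a mergekit-style config. The model names, weights, and densities are placeholders, not the author's, and the schema details (`merge_method`, `dtype`, `out_dtype`, per-model `parameters`) should be checked against current mergekit documentation.

```python
# Illustrative only: builds a mergekit-style DELLA config and writes it to YAML.
import yaml

della_step = {
    "merge_method": "della",
    "base_model": "meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder base
    "models": [
        {"model": "example/rp-model-a", "parameters": {"weight": 0.6, "density": 0.5}},
        # A negative weight is one way the "Negative Weighting" idea can be applied.
        {"model": "example/rp-model-b", "parameters": {"weight": -0.2, "density": 0.5}},
    ],
    "dtype": "float32",       # do the merge math in float32...
    "out_dtype": "bfloat16",  # ...and save the merged weights as bfloat16
}

with open("della_step.yml", "w") as f:
    yaml.safe_dump(della_step, f, sort_keys=False)

# Then, assuming mergekit is installed, something like:
#   mergekit-yaml della_step.yml ./merged-step-1
```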