---
license: cc-by-2.0
language:
- en
tags:
- finance
- legal
- biology
- art
---

Behold, one of the first fine-tunes of Mistral's 7B 0.2 Base model. SatoshiN was trained for 4 epochs at a 2e-4 learning rate (cosine schedule) on a diverse custom dataset, followed by a polishing round on the same dataset at a 1e-4 learning rate with a linear schedule. A minimal sketch of that two-stage schedule is shown below.
It's a nice assistant that isn't afraid to ask questions and gather additional information before responding to user prompts.
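For reference, here is a hedged sketch of such a two-stage schedule using the Hugging Face `Trainer`. The dataset, output directories, and the base-model repo id are placeholders, not the author's actual training script:

```python
# Sketch of the described two-stage fine-tune with transformers' Trainer.
# The base repo id and dataset are assumptions; only the epoch count,
# learning rates, and schedulers come from the model card.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

base_id = "mistralai/Mistral-7B-v0.2"  # assumed repo id for the 7B 0.2 base
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Stage 1: 4 epochs, 2e-4 learning rate, cosine schedule.
stage1_args = TrainingArguments(
    output_dir="satoshin-stage1",
    num_train_epochs=4,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
)

# Stage 2 ("polishing"): same dataset, 1e-4 learning rate, linear schedule.
# The epoch count for this round is not stated in the card.
stage2_args = TrainingArguments(
    output_dir="satoshin-stage2",
    num_train_epochs=1,
    learning_rate=1e-4,
    lr_scheduler_type="linear",
)

# trainer = Trainer(model=model, args=stage1_args, train_dataset=custom_dataset)
# trainer.train()
```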

| Model | Wikitext Perplexity |
|---|---|
| SatoshiN | 6.27 |
| Base Model | 5.4 |
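One way to reproduce a Wikitext perplexity figure is with the `evaluate` library; the exact split, context length, and stride behind the numbers above are not stated, so treat this as a sketch and expect somewhat different values:

```python
# Hedged perplexity check on Wikitext-2; the model repo id is a placeholder.
import evaluate
from datasets import load_dataset

wikitext = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
texts = [t for t in wikitext["text"] if t.strip()]

perplexity = evaluate.load("perplexity", module_type="metric")
results = perplexity.compute(
    model_id="path/or/repo-to-SatoshiN",  # placeholder
    predictions=texts[:200],              # subset to keep the run quick
)
print(results["mean_perplexity"])
```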

**Note:** Similar to SOTA models, this model runs a bit hot; try temperatures below 0.5 if you experience nonsensical output.
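A minimal generation example following that advice; the repo id and prompt are placeholders for wherever SatoshiN is hosted:

```python
# Low-temperature sampling per the note above; repo id is a placeholder.
from transformers import pipeline

generate = pipeline("text-generation", model="path/or/repo-to-SatoshiN")
out = generate(
    "What are the key risks in a fixed-income portfolio?",
    max_new_tokens=256,
    do_sample=True,
    temperature=0.4,  # keep below 0.5 to reduce nonsensical output
)
print(out[0]["generated_text"])
```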