
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

MFANN3bv0.17 - bnb 8bits

- Model creator: https://huggingface.co/netcat420/
- Original model: https://huggingface.co/netcat420/MFANN3bv0.17/

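As a rough sketch, an 8-bit bitsandbytes quantization like this one can be loaded through `transformers`. The repo id below is a placeholder for this upload's id on the Hugging Face Hub (left unfilled here), and a CUDA GPU plus the `bitsandbytes` package are assumed:

```python
# Sketch: loading an 8-bit bitsandbytes quantization with transformers.
# MODEL_ID is a placeholder -- use this quantized repo's id on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "RichardErkhov/..."  # fill in with this repo's id

# 8-bit weight loading, matching the "bnb 8bits" quantization of this upload
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "<|endoftext|>Instruct: What is 8-bit quantization?<|endoftext|>\n<|endoftext|>Output:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```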
Original model description:

---
library_name: transformers
license: mit
datasets:
- netcat420/MFANN
language:
- en
pipeline_tag: text-generation
---

Prompt template for GPT4All:

```
<|endoftext|>Instruct: %1<|endoftext|>
<|endoftext|>Output: %2<|endoftext|>
```

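Outside GPT4All, the `%1`/`%2` placeholders can be filled with plain string substitution. A minimal sketch (the `build_prompt` helper is illustrative, not part of any GPT4All API):

```python
# Fill the GPT4All-style template: %1 is the instruction, %2 the response.
TEMPLATE = (
    "<|endoftext|>Instruct: %1<|endoftext|>\n"
    "<|endoftext|>Output: %2<|endoftext|>"
)

def build_prompt(instruction: str, response: str = "") -> str:
    """Substitute the instruction (and optionally a response) into the template."""
    return TEMPLATE.replace("%1", instruction).replace("%2", response)

# Leave %2 empty when prompting the model for a completion.
print(build_prompt("Summarize 8-bit quantization in one sentence."))
```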
Record low training loss on netcat420/MFANN for the 2.78b model: 0.538 (previous record: 0.632).

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435f27b2d0ed796668ffd8b/LEZ1gUqjlW6cUJFbNY_zb.png)