
QuantFactory/Mistral-Trismegistus-7B-GGUF

This is a quantized version of teknium/Mistral-Trismegistus-7B, created using llama.cpp.

Original Model Card

Mistral Trismegistus 7B


Model Description:

Transcendence is All You Need! Mistral Trismegistus is a model made for people interested in the esoteric, occult, and spiritual.

Here are some outputs:

Answer questions about occult artifacts: (example screenshot)

Play the role of a hypnotist: (example screenshot)

Special Features:

  • The First Powerful Occult Expert Model: ~10,000 high-quality, deep, rich instructions on the occult, esoteric, and spiritual.
  • Fast: Trained on Mistral, a state-of-the-art 7B-parameter model, so you can run this model fast, even on a CPU.
  • Not a positivity-nazi: This model was trained on all forms of esoteric tasks and knowledge, and is not burdened by the flowery nature of many other models, which chose positivity over creativity.

Acknowledgements:

Special thanks to @a16z.

Dataset:

This model was trained on a 100% synthetic, GPT-4-generated dataset of ~10,000 examples, covering a wide and diverse set of tasks and knowledge about the esoteric, occult, and spiritual.

The dataset will be released soon!

Usage:

Prompt Format:

USER: <prompt>
ASSISTANT:

OR

<system message>
USER: <prompt>
ASSISTANT:
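The two prompt formats above can be assembled programmatically. Here is a minimal sketch; the helper name `format_prompt` is illustrative and not part of any official API for this model:

```python
from typing import Optional


def format_prompt(prompt: str, system: Optional[str] = None) -> str:
    """Build a prompt in the USER/ASSISTANT format this model expects.

    If `system` is given, the system message is prepended on its own line
    before the USER turn, matching the second format shown above.
    """
    parts = []
    if system:
        parts.append(system)
    parts.append(f"USER: {prompt}")
    parts.append("ASSISTANT:")
    return "\n".join(parts)


# Example: a bare prompt, and one with a system message.
print(format_prompt("What is the Emerald Tablet?"))
print(format_prompt("Guide me into a trance.", system="You are a hypnotist."))
```

The resulting string can then be passed to whichever GGUF runtime you use (for example, llama.cpp) as the raw prompt.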

Benchmarks:

No benchmark can capture the nature and essence of the quality of spiritual and esoteric knowledge and tasks. You will have to test it yourself!

Training run on wandb here: https://wandb.ai/teknium1/occult-expert-mistral-7b/runs/coccult-expert-mistral-6/overview

Licensing:

Apache 2.0


Format: GGUF
Model size: 7.24B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
