---
library_name: transformers
base_model:
- meta-llama/Llama-3.1-8B
---
# Epos-8B
Epos-8B is a fine-tuned version of Meta's **Llama-3.1-8B** base model, optimized for storytelling, dialogue generation, and creative writing. The model specializes in generating rich narratives, immersive prose, and dynamic character interactions.

---
## Model Details
### Model Description
Epos-8B is an 8-billion-parameter language model fine-tuned for storytelling and narrative tasks. Inspired by the grandeur of epic tales, it is designed to produce high-quality, engaging content that evokes the depth and imagination of ancient myths and modern storytelling traditions.
- **Developed by:** P0x0
- **Funded by:** P0x0
- **Shared by:** P0x0
- **Model type:** Transformer-based Language Model
- **Language(s) (NLP):** Primarily English
- **License:** Apache 2.0
- **Finetuned from model:** [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B)
### Model Sources
- **Repository:** [Epos-8B on Hugging Face](https://huggingface.co/P0x0/Epos-8B)
---
## Uses
### Direct Use
Epos-8B is ideal for:
- **Storytelling:** Generate detailed, immersive, and engaging narratives.
- **Dialogue Creation:** Create realistic and dynamic character interactions for stories or games.
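
For these use cases you can load the model directly from Python with the `transformers` library (the library declared in the model metadata). The snippet below is a minimal, illustrative sketch: it assumes the full-precision weights in the `P0x0/Epos-8B` repository load with the standard text-generation pipeline and that you have enough memory for an 8B model in bfloat16; the prompt and sampling settings are examples, not tuned recommendations.

```python
# Minimal sketch: generating a story continuation with Epos-8B via transformers.
# Assumes the P0x0/Epos-8B weights load directly with the text-generation
# pipeline and that sufficient GPU/CPU memory is available (bf16 ~ 16 GB).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="P0x0/Epos-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Epos-8B is tuned from a base (non-instruct) model, so a plain prose
# opening works well as a prompt.
prompt = "The last lighthouse keeper on the northern coast lit the lamp and"
result = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```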
## How to Get Started with the Model
To run the quantized version of the model, you can use [KoboldCPP](https://github.com/LostRuins/koboldcpp), which allows you to run quantized GGUF models locally.
### Steps:
1. Download [KoboldCPP](https://github.com/LostRuins/koboldcpp).
2. Follow the setup instructions provided in the repository.
3. Download the GGUF variant of Epos-8B from [Epos-8B-GGUF](https://huggingface.co/P0x0/Epos-8B-GGUF).
4. Load the model in KoboldCPP and start generating!
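
Once the GGUF model is loaded, KoboldCPP serves a local web UI and a KoboldAI-compatible HTTP API, so you can also generate text programmatically. The sketch below is an assumption-laden example: it presumes KoboldCPP is running on the default port (5001) and exposes the `/api/v1/generate` endpoint; the URL, payload fields, and sampling values may differ depending on your KoboldCPP version and launch options.

```python
# Hypothetical sketch: querying a locally running KoboldCPP instance that has
# the Epos-8B GGUF loaded. Assumes the default port 5001 and the
# KoboldAI-style /api/v1/generate endpoint; adjust for your setup.
import requests

payload = {
    "prompt": "Two rival captains met at the edge of the storm, and",
    "max_length": 200,     # tokens to generate
    "temperature": 0.8,
    "top_p": 0.9,
}

response = requests.post("http://localhost:5001/api/v1/generate", json=payload)
response.raise_for_status()
print(response.json()["results"][0]["text"])
```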