---
library_name: transformers
base_model:
- meta-llama/Llama-3.1-8B
---
Epos-8B
Epos-8B is a fine-tuned version of Meta's Llama-3.1-8B base model, optimized for storytelling, dialogue generation, and creative writing. It specializes in rich narratives, immersive prose, and dynamic character interactions.
Model Details
Model Description
Epos-8B is an 8 billion parameter language model fine-tuned for storytelling and narrative tasks. Inspired by the grandeur of epic tales, it is designed to produce high-quality, engaging content that evokes the depth and imagination of ancient myths and modern storytelling traditions.
- Developed by: P0x0
- Funded by: [More Information Needed]
- Shared by: P0x0
- Model type: Transformer-based Language Model
- Language(s) (NLP): Primarily English
- License: Apache 2.0
- Finetuned from model: meta-llama/Llama-3.1-8B
Model Sources
- Repository: [Epos-8B on Hugging Face](https://huggingface.co/P0x0/Epos-8B)
- GGUF Variant Repository: Epos-8B-GGUF
Uses
Direct Use
Epos-8B is ideal for:
- Storytelling: Generate detailed, immersive, and engaging narratives.
- Dialogue Creation: Create realistic and dynamic character interactions for stories or games.
- Descriptive Writing: Generate vivid descriptions for settings, objects, or events.
- Creative Content Generation: Assist writers and creators in brainstorming and drafting ideas.
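A quick way to try these use cases is the transformers pipeline API. The sketch below is illustrative: the prompt is made up, and the sampling values are starting points rather than tuned recommendations for Epos-8B.

```python
from transformers import pipeline

# Text-generation pipeline; downloads the model weights on first use.
generator = pipeline("text-generation", model="P0x0/Epos-8B")

# Hypothetical storytelling prompt; sampling values are illustrative.
story = generator(
    "Write the opening scene of an epic tale about a lighthouse keeper:",
    max_new_tokens=200,
    do_sample=True,
    temperature=0.8,
)
print(story[0]["generated_text"])
```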
Out-of-Scope Use
Epos-8B is not intended for real-time decision-making or safety-critical applications, and it should not be used to generate harmful, biased, or inappropriate content. It is designed specifically for creative writing and narrative tasks.
How to Get Started with the Model
To run a quantized version of the model, you can use KoboldCPP, which runs GGUF models locally.
Steps:
- Download KoboldCPP.
- Follow the setup instructions provided in the repository.
- Download the GGUF variant of Epos-8B from Epos-8B-GGUF.
- Load the model in KoboldCPP and start generating!
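Once KoboldCPP is serving the model, you can also query it from Python over its local HTTP API. This is a minimal sketch assuming the default KoboldAI-compatible endpoint on port 5001; adjust the URL and parameters for your setup.

```python
import requests

# Assumes KoboldCPP is serving its KoboldAI-compatible API locally
# on the default port 5001; change the URL if you configured otherwise.
API_URL = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "Once upon a time in a distant land...",
    "max_length": 200,    # number of tokens to generate
    "temperature": 0.8,   # higher values give more varied prose
}

response = requests.post(API_URL, json=payload)
response.raise_for_status()
print(response.json()["results"][0]["text"])
```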
Alternatively, integrate the model directly into your code with the following snippet:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("P0x0/Epos-8B")
model = AutoModelForCausalLM.from_pretrained("P0x0/Epos-8B")

input_text = "Once upon a time in a distant land..."
inputs = tokenizer(input_text, return_tensors="pt")

# Without max_new_tokens, generate() stops after a short default length.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
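For creative writing, greedy decoding (the default in the snippet above) tends to loop and produce flat prose; sampling usually reads better. The values below are illustrative starting points, not tuned recommendations for Epos-8B:

```python
# Illustrative sampling settings for creative generation; tune to taste.
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.8,         # soften the token distribution
    top_p=0.9,               # nucleus sampling: top 90% probability mass
    repetition_penalty=1.1,  # discourage verbatim loops
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```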