Epos-8B is a fine-tuned version of the base model Llama-3.1-8B from Meta, optimized for storytelling, dialogue generation, and creative writing. The model specializes in generating rich narratives, immersive prose, and dynamic character interactions, making it ideal for creative tasks.
With 8 billion parameters, the model is well suited to storytelling, dialogue generation, and other creative writing tasks.
To run a quantized version of the model locally, you can use KoboldCPP, which supports quantized GGUF models.
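As a rough sketch, the local workflow looks like the following. Note that the GGUF repository name and quantization filename below are assumptions for illustration; check the model's page on the Hugging Face Hub for the actual quantized files.

```shell
# Download a quantized GGUF file (repo and filename are hypothetical examples)
huggingface-cli download P0x0/Epos-8B-GGUF Epos-8B.Q4_K_M.gguf --local-dir ./models

# Launch KoboldCPP pointing at the downloaded model
./koboldcpp --model ./models/Epos-8B.Q4_K_M.gguf
```

KoboldCPP then serves a local web UI where you can prompt the model interactively.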
Alternatively, load the full-precision model directly with the Hugging Face Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("P0x0/Epos-8B")
model = AutoModelForCausalLM.from_pretrained("P0x0/Epos-8B")

# Encode a story prompt and generate a continuation
input_text = "Once upon a time in a distant land..."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Base model: meta-llama/Llama-3.1-8B