Epos-8B

Epos-8B is a fine-tuned version of Meta's Llama-3.1-8B base model, optimized for storytelling, dialogue generation, and creative writing. It specializes in generating rich narratives, immersive prose, and dynamic character interactions.

Model Details

Model Description

Epos-8B is an 8 billion parameter language model fine-tuned for storytelling and narrative tasks.

  • Developed by: P0x0
  • Funded by: P0x0
  • Shared by: P0x0
  • Model type: Transformer-based Language Model
  • Language(s) (NLP): Primarily English
  • License: Apache 2.0
  • Finetuned from model: meta-llama/Llama-3.1-8B

Model Sources

  • Repository: https://huggingface.co/P0x0/Epos-8B

Uses

Direct Use

Epos-8B is ideal for:

  • Storytelling: Generate detailed, immersive, and engaging narratives.
  • Dialogue Creation: Create realistic and dynamic character interactions for stories or games.

How to Get Started with the Model

To run a quantized version of the model, you can use KoboldCPP, which runs GGUF models locally.

Steps:

  1. Download KoboldCPP.
  2. Follow the setup instructions provided in the repository.
  3. Download the GGUF variant of Epos-8B from the Epos-8B-GGUF repository.
  4. Load the model in KoboldCPP and start generating, either in the UI or via its local API (sketched below).
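
Once KoboldCPP is serving the model, you can also drive it from your own scripts through its local HTTP API. The snippet below is a minimal sketch, not part of the official instructions: it assumes the default port (5001) and the standard Kobold /api/v1/generate route, and the sampling values are arbitrary starting points rather than settings recommended for Epos-8B.

import requests

# Assumption: KoboldCPP is running locally on its default port (5001)
# and exposes the Kobold API generate endpoint.
API_URL = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "Once upon a time in a distant land...",
    "max_length": 200,    # tokens to generate
    "temperature": 0.8,   # illustrative value; higher = more varied prose
    "top_p": 0.9,
}

response = requests.post(API_URL, json=payload)
response.raise_for_status()

# The Kobold API returns the generated text under results[0]["text"]
print(response.json()["results"][0]["text"])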

Alternatively, load the full-precision model directly in your own code with the transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("P0x0/Epos-8B")
model = AutoModelForCausalLM.from_pretrained("P0x0/Epos-8B")

# Tokenize a story prompt and generate a continuation
input_text = "Once upon a time in a distant land..."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
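
By default, generate falls back to the model's default generation settings, which typically yield short, greedy continuations. For longer and more varied creative output you may want to enable sampling; the values below are illustrative starting points, not settings specified by the model card.

# Sampling-based generation for longer, more varied story continuations
outputs = model.generate(
    **inputs,
    max_new_tokens=300,   # allow a longer continuation
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.8,      # illustrative value; tune to taste
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))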