Tags: Text Generation, Transformers, GGUF, English, Not-For-All-Audiences, Inference Endpoints, conversational


QuantFactory/Apollo-2.0-Llama-3.1-8B-GGUF

This is a quantized version of Locutusque/Apollo-2.0-Llama-3.1-8B, created using llama.cpp.
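A typical way to use one of these GGUF files is to download a single quant and run it with llama.cpp's CLI. The file name below is an assumption for illustration; check the repository's file list for the exact names.

```shell
# Download one quant from this repo (file name is an assumption; verify in the repo):
huggingface-cli download QuantFactory/Apollo-2.0-Llama-3.1-8B-GGUF \
  Apollo-2.0-Llama-3.1-8B.Q4_K_M.gguf --local-dir .

# Start an interactive conversation with llama.cpp (-cnv = chat mode):
./llama-cli -m Apollo-2.0-Llama-3.1-8B.Q4_K_M.gguf -cnv -c 4096
```

Lower-bit quants trade some quality for a smaller file and lower memory use; 4-bit and 5-bit variants are common middle-ground choices.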

Original Model Card

Model Card for Locutusque/Apollo-2.0-Llama-3.1-8B

SFT of Llama-3.1 8B. I was going to use DPO, but it made the model worse.

~50-point Elo increase on the Chaiverse leaderboard over preview versions.

Model Details

Model Description

Fine-tuned Llama-3.1-8B on Locutusque/ApolloRP-2.0-SFT. The result is a good roleplaying language model that isn't dumb.

  • Developed by: Locutusque
  • Model type: Llama3.1
  • Language(s) (NLP): English
  • License: Llama 3.1 Community License Agreement


Direct Use

RP/ERP, instruction following, conversation, etc.

Bias, Risks, and Limitations

This model is completely uncensored - use at your own risk.

Recommendations

Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.

Training Details

Training Data

Locutusque/ApolloRP-2.0-SFT

The training data was cleaned of refusals and "slop".

Training Hyperparameters

  • Training regime: bf16 non-mixed precision
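"bf16 non-mixed precision" means the weights and the forward/backward passes run entirely in bfloat16, with no fp32 master weights or autocast. A minimal sketch of the dtype behavior (illustration only; the card does not specify the actual training stack):

```python
# Sketch: in non-mixed bf16 training, the model itself is cast to bfloat16,
# so activations come out in bfloat16 too -- there is no fp32 copy anywhere.
import torch

model = torch.nn.Linear(8, 8).to(torch.bfloat16)
x = torch.randn(1, 8, dtype=torch.bfloat16)
out = model(x)
print(out.dtype)  # bfloat16 end to end
```

This contrasts with mixed precision, where an fp32 master copy of the weights is kept and only selected ops run in a lower precision.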
Downloads last month: 85
Model size: 8.03B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
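A back-of-envelope way to pick a quant level is to estimate file size from the parameter count and bit width. This is only an approximation: real GGUF quants also store per-block scales and keep some tensors at higher precision, so actual files run somewhat larger than these figures.

```python
# Rough size estimate per quantization level for an 8.03B-parameter model.
# Assumes size ~= params * bits / 8 bytes; real GGUF files are somewhat larger.
PARAMS = 8.03e9

def approx_size_gb(bits_per_weight: float) -> float:
    """Approximate file size in gigabytes for a given bit width."""
    return PARAMS * bits_per_weight / 8 / 1e9

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.1f} GB")
```

For example, the 4-bit quant comes out to roughly 4 GB before overhead, which is why 4-bit files of 8B models typically fit comfortably on consumer GPUs.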


Dataset used to train QuantFactory/Apollo-2.0-Llama-3.1-8B-GGUF: Locutusque/ApolloRP-2.0-SFT