woofwolfy/ArliAI-RPMax-Phi-3.8B-v1.1-Q4_K_M-GGUF-Imatrix

This model was converted to GGUF format from ArliAI/ArliAI-RPMax-Phi-3.8B-v1.1 using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
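One common way to load a GGUF file straight from the Hub is llama-cpp-python's `Llama.from_pretrained`. A minimal sketch, assuming llama-cpp-python is installed; the exact `.gguf` filename inside the repo is not stated here, so a glob pattern is used as a guess:

```python
# Sketch: arguments for loading this repo with llama-cpp-python (assumed installed).
load_kwargs = dict(
    repo_id="woofwolfy/ArliAI-RPMax-Phi-3.8B-v1.1-Q4_K_M-GGUF-Imatrix",
    filename="*q4_k_m.gguf",  # glob pattern; the actual filename is an assumption
    n_ctx=4096,               # context window to allocate; model was trained at 16384
)

# Uncomment to actually download and run (requires llama-cpp-python and disk space):
# from llama_cpp import Llama
# llm = Llama.from_pretrained(**load_kwargs)
# out = llm("Hello", max_tokens=64)
```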

ArliAI-RPMax-3.8B-v1.1
======================

Overview

This repository is based on the Phi-3.5-Mini-Instruct model and is governed by the MIT License agreement: https://huggingface.co/microsoft/Phi-3.5-mini-instruct

Model Description

ArliAI-RPMax-3.8B-v1.1 is trained on a diverse set of curated RP datasets with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive, with a unique approach to training that minimizes repetition.
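The card does not publish the curation pipeline, but the simplest form of the deduplication it describes is dropping exact duplicates after text normalization. A hypothetical, illustrative sketch only (RPMax's actual dedup also considers characters and situations):

```python
import hashlib


def dedup_examples(examples):
    """Drop exact duplicates, comparing case- and whitespace-normalized text."""
    seen, unique = set(), []
    for text in examples:
        # Normalize: lowercase and collapse runs of whitespace, then hash.
        key = hashlib.sha256(" ".join(text.lower().split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(text)
    return unique
```

Real-world dataset dedup usually goes further (near-duplicate detection via MinHash or embeddings), but the goal is the same: reduce repeated patterns the model could memorize.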

Although this model is finetuned on the same RPMax v1.1 dataset as usual, it will not be as good as the 8B and 12B versions of RPMax due to its lower parameter count and the stricter censoring in Phi 3.5.

If you want access to the larger RPMax models, you can use them at https://arliai.com, which also directly helps fund our model training.

Or you can always download them and run them yourself if you have the hardware.

Training Details

  • Sequence Length: 16384
  • Training Duration: Approximately 1 day on RTX 4090
  • Epochs: 1 epoch training for minimized repetition sickness
  • QLoRA: rank 64, alpha 128, resulting in ~2% trainable weights
  • Learning Rate: 0.00001
  • Gradient Accumulation: Very low at 32 for better learning
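The "~2% trainable weights" figure can be sanity-checked from the QLoRA rank. A rough back-of-the-envelope sketch; the layer shapes below are assumptions taken from the public Phi-3.5-mini config (32 decoder layers, hidden size 3072, fused qkv projection, 8192 FFN width), not from this card:

```python
def lora_param_fraction(rank=64, total_params=3.82e9):
    # Assumed Phi-3.5-mini linear-layer shapes (in_features, out_features):
    # fused qkv_proj, o_proj, fused gate_up_proj, down_proj.
    layer_shapes = [(3072, 9216), (3072, 3072), (3072, 16384), (8192, 3072)]
    # A rank-r LoRA adapter on a (d_in x d_out) layer adds r * (d_in + d_out) params.
    per_layer = sum(rank * (d_in + d_out) for d_in, d_out in layer_shapes)
    trainable = 32 * per_layer  # adapters on every decoder layer's projections
    return trainable / total_params


print(f"{lora_param_fraction():.1%}")  # → 2.6%
```

That lands in the same ballpark as the ~2% quoted above; the exact figure depends on which modules the adapters were attached to.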

Quantization

The model is available in the Q4_K_M quantized format, generated with an importance matrix (imatrix).

Suggested Prompt Format

Phi 3.5 Instruct Prompt Format
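The Phi 3.5 instruct format wraps each turn in `<|system|>`, `<|user|>`, and `<|assistant|>` role tags terminated by `<|end|>`. A minimal helper that builds a single-turn prompt in that template (the character persona shown is just an illustration):

```python
def format_phi35_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt in the Phi 3.5 instruct template."""
    return (
        f"<|system|>\n{system}<|end|>\n"
        f"<|user|>\n{user}<|end|>\n"
        f"<|assistant|>\n"
    )


prompt = format_phi35_prompt(
    "You are Rin, a sarcastic ship mechanic.",  # example RP persona
    "The engine is rattling again.",
)
```

The trailing `<|assistant|>` tag cues the model to generate the character's reply; `<|end|>` should also be set as a stop token when sampling.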

Model Details

  • Format: GGUF
  • Model size: 3.82B params
  • Architecture: phi3
  • Quantization: 4-bit

