An attempt to get better results by using BlockMerge_Gradient on Pygmalion-2.

In addition, LimaRP v3 was used; it is recommended to read its documentation.

Description

This repo contains quantized files of Emerald-13B.

Models and loras used

  • PygmalionAI/pygmalion-2-13b
  • The-Face-Of-Goonery/Huginn-13b-FP16
  • lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT

Prompt template: Alpaca

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
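The Alpaca template above can be filled in programmatically before sending the text to the model. A minimal sketch in Python (the `build_prompt` helper is illustrative and not part of this repo):

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca prompt template used by this model."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_prompt("Write a short greeting.")
print(prompt)
```

The model's reply is expected to follow the `### Response:` marker, so generation should start immediately after the final newline.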

LimaRP v3 usage and suggested settings


You can use the following instruction-format settings in SillyTavern. Replace "tiny" with your desired response length:


Special thanks to Sushi.

If you want to support me, you can do so here.

Model details

  • Format: GGUF
  • Model size: 13B params
  • Architecture: llama
  • Quantizations: 4-bit, 5-bit, 6-bit, 8-bit
