ruGPT-3.5-13B-GGUF

Description

This repository contains quantized GGUF format model files for ruGPT-3.5-13B.

Prompt template:

{prompt}
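The template above is the identity: the model takes the raw prompt with no chat or instruction wrapper. A minimal helper (hypothetical name, just to make the convention explicit) would be:

```python
def format_prompt(prompt: str) -> str:
    # ruGPT-3.5-13B applies no system/instruction wrapper:
    # the raw prompt is passed to the model unchanged.
    return prompt

print(format_prompt("Привет, мир!"))  # → Привет, мир!
```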

Example llama.cpp command

./main -m ruGPT-3.5-13B-Q4_K_M.gguf -c 2048 -n -1 -p 'Стих про программиста может быть таким:'

Here `-m` selects the model file, `-c 2048` sets the context length, `-n -1` generates until an end-of-sequence token, and `-p` supplies the prompt (Russian for "A poem about a programmer could go like this:"). For other parameters and how to use them, please refer to the llama.cpp documentation.

Format: GGUF
Model size: 13.1B params
Architecture: gpt2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit.
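The quantization levels above trade file size against output quality. A back-of-the-envelope, weight-only size estimate (assuming an average of roughly 4.5 bits per weight for a 4-bit K-quant mix; real GGUF files are somewhat larger because of metadata and because some tensors are kept at higher precision):

```python
def approx_weight_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Rough weight-only size: params * bits / 8 bytes, converted to GiB."""
    return n_params * bits_per_weight / 8 / 2**30

# 13.1B parameters at an assumed ~4.5 bits/weight:
print(round(approx_weight_size_gib(13.1e9, 4.5), 1))  # → 6.9
```

This matches the intuition that a 13B model which needs ~26 GB in fp16 fits in well under 8 GB at 4-bit, which is what makes CPU inference with llama.cpp practical.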

Inference API (serverless) has been turned off for this model.