---
model_creator: ai-forever
base_model: ruGPT-3.5-13B
model_name: ruGPT-3.5-13B-GGUF
pipeline_tag: text-generation
license: mit
model_type: gpt2
inference: false
prompt_template: '{prompt}'
language:
- ru
- en
---

# ruGPT-3.5-13B-GGUF

- Model creator: [ai-forever](https://huggingface.co/ai-forever)
- Original model: [ruGPT-3.5-13B](https://huggingface.co/ai-forever/ruGPT-3.5-13B)

## Description

This repository contains quantized GGUF format model files for [ruGPT-3.5-13B](https://huggingface.co/ai-forever/ruGPT-3.5-13B).
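
If you only need one of the quantized files, `huggingface-cli` (from the `huggingface_hub` package) can download it directly. The snippet below is a minimal sketch: `<repo-id>` is a placeholder for this repository's Hugging Face id, and the `Q4_K_M` filename is borrowed from the llama.cpp example further down, so adjust both to match the files actually published here.

```shell
# Minimal sketch: fetch a single quantized GGUF file into the current directory.
# <repo-id> is a placeholder and the Q4_K_M filename is assumed from the
# llama.cpp example below; substitute the real repository id and filename.
pip install huggingface_hub
huggingface-cli download <repo-id> ruGPT-3.5-13B-Q4_K_M.gguf --local-dir .
```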

## Prompt template:

`{prompt}`

The model uses no special chat template; the prompt is passed to the model as plain text.

## Example `llama.cpp` command

```shell
# -m: path to the GGUF file, -c 2048: context size, -n -1: generate until the
# model stops, -p: prompt text (Russian: "A poem about a programmer could go like this:")
./main -m ruGPT-3.5-13B-Q4_K_M.gguf -c 2048 -n -1 -p 'Стих про программиста может быть таким:'
```

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
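
Beyond one-off generation with `./main`, llama.cpp also bundles an HTTP server example. The command below is a minimal sketch, not part of the original card: the binary name and flags can differ between llama.cpp versions, so check the server README in your checkout.

```shell
# Minimal sketch: serve the model over HTTP with llama.cpp's bundled server
# example (binary and flag names may vary between llama.cpp versions).
./server -m ruGPT-3.5-13B-Q4_K_M.gguf -c 2048 --host 127.0.0.1 --port 8080
```

Once it is running, the server accepts raw-prompt completion requests (for example via its `/completion` endpoint in current versions), which matches the plain `{prompt}` template above.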