aynig/medit-xl-F16-GGUF

This LoRA adapter was converted to GGUF format from grammarly/medit-xl via ggml.ai's GGUF-my-lora space. Refer to the original adapter repository for more details.

Use with llama.cpp

# with cli
llama-cli -m base_model.gguf --lora medit-xl-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora medit-xl-f16.gguf (...other args)

For more on LoRA usage with the llama.cpp server, refer to the llama.cpp server documentation.
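
As a quick sanity check that the adapter is being applied, you can query the running server over HTTP. The sketch below assumes llama-server is listening on its default address (localhost:8080) and uses the server's /completion endpoint; the prompt is purely illustrative and not the adapter's prescribed prompt format.

# Query a llama-server instance started with --lora (sketch; host, port,
# and prompt are assumptions, adjust to your setup)
import requests

resp = requests.post(
    "http://localhost:8080/completion",  # llama-server's default address
    json={
        "prompt": "Fix grammatical errors in this text: She go to school yesterday.",
        "n_predict": 64,  # cap on generated tokens
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["content"])  # the generated completion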

GGUF details

Model size: 8.39M params
Architecture: llama
Precision: 16-bit (F16)