Tags: PEFT, GGUF, llama-cpp, gguf-my-lora

FringeFields/mtg-ai-llama-3.3-70b-adapter-Q8_0-GGUF

This LoRA adapter was converted to GGUF format from FringeFields/mtg-ai-llama-3.3-70b-adapter via ggml.ai's GGUF-my-lora space. Refer to the original adapter repository for more details.
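
If you do not already have the converted adapter locally, one way to fetch it is with the huggingface-cli tool. This is a minimal sketch; the GGUF filename is assumed to match the one used in the llama.cpp commands below, so check the repository's file listing if the download fails.

# download the converted adapter from this repository (filename assumed)
huggingface-cli download FringeFields/mtg-ai-llama-3.3-70b-adapter-Q8_0-GGUF mtg-ai-llama-3.3-70b-adapter-q8_0.gguf --local-dir .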

Use with llama.cpp

# with cli
llama-cli -m base_model.gguf --lora mtg-ai-llama-3.3-70b-adapter-q8_0.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora mtg-ai-llama-3.3-70b-adapter-q8_0.gguf (...other args)

For more details on LoRA usage with the llama.cpp server, refer to the llama.cpp server documentation.
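
As a more concrete sketch, the invocation below runs a single prompt with the adapter applied. It assumes you have a GGUF build of the Llama 3.3 70B base model on disk; the base model filename and the prompt are illustrative, not part of this repository.

# example single-prompt run (base model filename and prompt are placeholders)
llama-cli -m Llama-3.3-70B-Instruct-Q4_K_M.gguf \
  --lora mtg-ai-llama-3.3-70b-adapter-q8_0.gguf \
  -p "Your MTG question here" \
  -n 256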

GGUF details

Architecture: llama
Model size: 207M params
Quantization: 8-bit (Q8_0)
