---
base_model: google/madlad400-10b-mt
inference: false
license: apache-2.0
model_name: madlad400-10b-mt-gguf
pipeline_tag: translation
---
# MADLAD-400-10B-MT - GGUF
- Original model: [MADLAD-400-10B-MT](https://huggingface.co/google/madlad400-10b-mt)
## Description
This repo contains GGUF-format model files for [MADLAD-400-10B-MT](https://huggingface.co/google/madlad400-10b-mt), for
use with [llama.cpp](https://github.com/ggerganov/llama.cpp) and compatible software.
The model was converted to GGUF with llama.cpp's [convert_hf_to_gguf.py](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py)
and quantized with `llama-quantize`, using llama.cpp release [b3325](https://github.com/ggerganov/llama.cpp/commits/b3325).
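As a rough sketch of usage (the exact filename below is hypothetical; substitute the quantized GGUF file you downloaded from this repo): MADLAD-400 expects the target language as a `<2xx>` token prefixed to the source text, so a translation to German might look like:

```shell
# Hypothetical filename; use the actual GGUF file from this repo.
# The <2de> prefix tells MADLAD-400 to translate into German.
./llama-cli -m madlad400-10b-mt-q4_k_m.gguf -p "<2de> How are you today?"
```

Replace `<2de>` with the token for your target language (e.g. `<2pt>` for Portuguese), per the original model card.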