This is a merge of the vision adapters from meta-llama/Llama-3.2-11B-Vision-Instruct onto mlabonne/Hermes-3-Llama-3.1-8B-lorablated.

Please respect the respective licenses of Meta Llama & Nous Research.

The method I used is detailed in this post. I also merged the tokenizer and generation configs. Example Python code for the weight merge is provided in merge_vision_example.py; it works for both the 11B and 90B models.
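
As a rough illustration of the general approach (this is not merge_vision_example.py), the sketch below grafts the donor 8B language-model weights into the text backbone of the 11B vision checkpoint while leaving the vision tower, projector, and cross-attention layers untouched. The checkpoint key prefixes, the `cross_attention_layers` config field, and the output path are assumptions based on the published repos:

```python
# Minimal sketch (not the author's merge_vision_example.py): copy the donor 8B
# language-model weights into the 11B vision checkpoint's self-attention layers.
# Assumes the raw safetensors key layouts of the two published repos.
import json
from pathlib import Path

import torch
from huggingface_hub import snapshot_download
from safetensors.torch import load_file, save_file

VISION_ID = "meta-llama/Llama-3.2-11B-Vision-Instruct"
DONOR_ID = "mlabonne/Hermes-3-Llama-3.1-8B-lorablated"


def load_checkpoint(repo_id: str) -> tuple[dict, dict]:
    """Download a repo and return (state_dict, config) from its safetensors shards."""
    path = Path(snapshot_download(repo_id, allow_patterns=["*.safetensors", "config.json"]))
    state = {}
    for shard in sorted(path.glob("*.safetensors")):
        state.update(load_file(shard))
    config = json.loads((path / "config.json").read_text())
    return state, config


vision_sd, vision_cfg = load_checkpoint(VISION_ID)
donor_sd, _ = load_checkpoint(DONOR_ID)

text_cfg = vision_cfg["text_config"]
# The vision model interleaves extra cross-attention layers among the ordinary
# self-attention layers; the latter map 1:1, in order, onto the donor's layers.
cross_attn = set(text_cfg["cross_attention_layers"])
self_attn = [i for i in range(text_cfg["num_hidden_layers"]) if i not in cross_attn]

merged = dict(vision_sd)
for donor_idx, target_idx in enumerate(self_attn):
    src = f"model.layers.{donor_idx}."
    dst = f"language_model.model.layers.{target_idx}."
    for key, tensor in donor_sd.items():
        if key.startswith(src):
            merged[dst + key[len(src):]] = tensor.to(torch.bfloat16)

# Embeddings: the vision checkpoint has a few extra rows (e.g. <|image|>),
# so only the rows shared with the donor vocabulary are overwritten.
for src, dst in [
    ("model.embed_tokens.weight", "language_model.model.embed_tokens.weight"),
    ("model.norm.weight", "language_model.model.norm.weight"),
    ("lm_head.weight", "language_model.lm_head.weight"),
]:
    target = merged[dst].clone()
    rows = min(target.shape[0], donor_sd[src].shape[0])
    target[:rows] = donor_sd[src][:rows].to(torch.bfloat16)
    merged[dst] = target

# A real script would shard the output; a single file is enough for a sketch.
save_file(merged, "hermes-vision-merge-11b.safetensors", metadata={"format": "pt"})
```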

This 11B merge is less stable than the 90B merge (which is very stable); keep the sampling temperature at or below 0.7.
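
For reference, a minimal inference sketch using the transformers Mllama classes at the recommended temperature; the model path and image file are placeholders:

```python
# Hedged usage sketch: load the merged checkpoint and sample at temperature 0.7.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

MODEL_ID = "your-username/hermes-vision-merge-11b"  # placeholder path to the merged model

model = MllamaForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

image = Image.open("example.jpg")  # placeholder image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)

# The 11B merge is less stable than the 90B, so sample conservatively.
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(processor.decode(output[0], skip_special_tokens=True))
```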

The 90B version of this merge is available here.

Model size: 10.7B parameters (Safetensors, BF16)