By freezing one or more self-attention heads of the LLaVA-1.5-7b-hf model, I hope to develop a method for correcting or mitigating hallucinations in previously generated answers.
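
The card does not specify how the heads were frozen, so below is a minimal sketch of one way to do it with `transformers` and PyTorch: since the attention heads of a decoder layer share the same `q_proj`/`k_proj`/`v_proj` weight matrices, a single head can be kept fixed during fine-tuning by zeroing the gradient rows that belong to it. The layer and head indices in `heads_to_freeze` are purely illustrative, not the ones used for this checkpoint.

```python
# Hypothetical sketch: freeze selected self-attention heads of llava-1.5-7b-hf
# by zeroing their gradient slices during fine-tuning. Layer/head indices are
# illustrative, not the ones used for this model.
import torch
from transformers import LlavaForConditionalGeneration

model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.bfloat16
)

# Heads to freeze, keyed by decoder layer index (illustrative values).
heads_to_freeze = {0: [3, 17], 15: [8]}

cfg = model.config.text_config
head_dim = cfg.hidden_size // cfg.num_attention_heads  # 4096 / 32 = 128


def zero_head_rows(head_indices, head_dim):
    """Return a backward hook that zeroes gradient rows of the given heads."""
    def hook(grad):
        grad = grad.clone()
        for h in head_indices:
            grad[h * head_dim:(h + 1) * head_dim] = 0
        return grad
    return hook


# Locate each target decoder layer's attention module by name so the sketch
# does not depend on a specific attribute path inside the LLaVA wrapper.
for name, module in model.named_modules():
    for layer_idx, heads in heads_to_freeze.items():
        if name.endswith(f"layers.{layer_idx}.self_attn"):
            # Rows of the q/k/v projection weights are laid out head by head
            # (the 7B text backbone uses standard multi-head attention, so
            # key/value heads index the same way as query heads).
            for proj in (module.q_proj, module.k_proj, module.v_proj):
                proj.weight.register_hook(zero_head_rows(heads, head_dim))
```

With the hooks in place, a normal fine-tuning loop leaves the masked heads at their pretrained values while the remaining heads continue to update.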
