By freezing one or more self-attention heads of the llava-hf/llava-1.5-7b-hf model, I hope to create a method for correcting or mitigating hallucinations in previously generated answers.
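
Below is a minimal sketch of how selected self-attention heads could be frozen during fine-tuning, assuming the standard `transformers` LLaVA implementation. It zeroes the gradient slices belonging to the chosen heads in the q/k/v/o projections; the layer and head indices are placeholders, the exact attribute path may differ between `transformers` versions, and this is not necessarily the exact setup used to train this checkpoint.

```python
import torch
from transformers import LlavaForConditionalGeneration

model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.float16
)

# Hypothetical choice: layer index -> list of head indices to freeze.
heads_to_freeze = {0: [3, 7]}

def freeze_heads(layer_idx, head_indices):
    # Attribute path valid for recent transformers releases; adjust if needed.
    attn = model.language_model.model.layers[layer_idx].self_attn
    head_dim = attn.head_dim

    def row_hook(indices):
        # q/k/v projections: output rows of the weight matrix map to heads.
        def hook(grad):
            grad = grad.clone()
            for h in indices:
                grad[h * head_dim:(h + 1) * head_dim] = 0
            return grad
        return hook

    for proj in (attn.q_proj, attn.k_proj, attn.v_proj):
        proj.weight.register_hook(row_hook(head_indices))

    def col_hook(grad):
        # o_proj consumes head outputs along its input (column) dimension.
        grad = grad.clone()
        for h in head_indices:
            grad[:, h * head_dim:(h + 1) * head_dim] = 0
        return grad

    attn.o_proj.weight.register_hook(col_hook)

for layer, heads in heads_to_freeze.items():
    freeze_heads(layer, heads)
```

With the hooks registered, a normal fine-tuning loop leaves the selected heads' parameters unchanged while the rest of the model is updated.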
