---
base_model:
- llava-hf/llava-1.5-7b-hf
---
By freezing one or more self-attention heads of the LLaVA 1.5-7b-hf model, I hope to develop a method for correcting or mitigating hallucination in previously generated answers.
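The card does not specify how heads are frozen, so here is a minimal, hypothetical sketch of one common approach: zeroing the gradient rows of an attention layer's packed Q/K/V projection that belong to the chosen heads, so those heads keep their pretrained weights during fine-tuning. The toy `nn.MultiheadAttention` layer, the head indices, and the hook-based mechanism are all assumptions for illustration; for the actual model one would apply the same idea to the attention projections inside `llava-hf/llava-1.5-7b-hf`'s language model.

```python
import torch
import torch.nn as nn

# Toy stand-in for one self-attention layer (an assumption, not LLaVA itself).
num_heads, head_dim = 4, 8
embed_dim = num_heads * head_dim
attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

frozen_heads = [0, 2]  # hypothetical choice of heads to freeze

def zero_frozen_head_grads(grad):
    # in_proj_weight stacks Q, K, V blocks of shape (embed_dim, embed_dim);
    # within each block, rows h*head_dim:(h+1)*head_dim feed head h.
    grad = grad.clone()
    for block in range(3):  # Q, K, V blocks
        base = block * embed_dim
        for h in frozen_heads:
            grad[base + h * head_dim : base + (h + 1) * head_dim].zero_()
    return grad

attn.in_proj_weight.register_hook(zero_frozen_head_grads)

# One dummy backward pass: the frozen heads' gradient rows stay zero,
# so an optimizer step would leave those heads' weights untouched.
x = torch.randn(2, 5, embed_dim)
out, _ = attn(x, x, x)
out.sum().backward()
```

Note that this sketch freezes only the input projections; a fuller version might also mask the corresponding columns of the output projection's gradient.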