---
base_model:
- llava-hf/llava-1.5-7b-hf
---

By freezing one or more self-attention heads of the llava-1.5-7b-hf model, I hope to develop a method for correcting or mitigating hallucinations in previously generated answers.
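The card does not specify how head freezing is implemented, but one common approach is to zero out the gradient slices belonging to selected heads in the attention projection matrices, so those heads keep their pretrained weights during fine-tuning. The sketch below is a hypothetical illustration of that pattern on a toy projection layer; for LLaVA one would apply the same hook to the `q_proj`/`k_proj`/`v_proj` layers of the language model, with `head_dim = hidden_size // num_heads`. The layer sizes and head indices here are illustrative, not from this repository.

```python
import torch
import torch.nn as nn

# Toy stand-in for an attention projection (e.g. q_proj in a LLaVA
# language-model layer). Output rows are grouped per attention head.
hidden_size, num_heads = 16, 4
head_dim = hidden_size // num_heads
q_proj = nn.Linear(hidden_size, hidden_size, bias=False)

frozen_heads = [1, 3]  # hypothetical choice of heads to keep fixed

def zero_head_grads(grad: torch.Tensor) -> torch.Tensor:
    """Zero the gradient rows belonging to the frozen heads."""
    grad = grad.clone()
    for h in frozen_heads:
        grad[h * head_dim:(h + 1) * head_dim, :] = 0.0
    return grad

# The hook fires on every backward pass, so the optimizer never
# updates the frozen heads' weight rows.
q_proj.weight.register_hook(zero_head_grads)

x = torch.randn(2, hidden_size)
loss = q_proj(x).pow(2).sum()
loss.backward()

# Frozen heads received zero gradient; the others did not.
for h in range(num_heads):
    rows = q_proj.weight.grad[h * head_dim:(h + 1) * head_dim]
    if h in frozen_heads:
        assert torch.all(rows == 0)
    else:
        assert rows.abs().sum() > 0
```

An alternative is to set `requires_grad=False`, but that only works at whole-parameter granularity; since all heads of a layer share one projection matrix, per-head freezing needs the gradient-masking approach shown here.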