Can llama-3-vision-alpha-hf be used as a standard LLM for text-only tasks?
#7 by hayachi · opened
Does the llama-3-vision-alpha-hf model support pure text-generation tasks (i.e., use as a standard LLM) without any image input?
I'm interested in using this model for both vision-related and text-only tasks.
You can call `model.text_model` to access the underlying `Llama3ForCausalLM` and use it for text-only generation.
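A minimal sketch of what that might look like, assuming the checkpoint id `qtnx/llama-3-vision-alpha-hf`, that the repo's custom modeling code loads via `trust_remote_code`, and that the wrapped language model is exposed as the `text_model` attribute mentioned above (names may differ in the actual repo):

```python
# Hedged sketch: model id, loading flags, and the `text_model` attribute
# are assumptions based on this discussion, not a verified API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "qtnx/llama-3-vision-alpha-hf"  # assumed checkpoint id

def generate_text(model, tokenizer, prompt, max_new_tokens=64):
    """Run text-only generation through the wrapped language model."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # model.text_model is assumed to be the underlying Llama-3 causal LM,
    # so the usual generate() API applies -- no image input required.
    output_ids = model.text_model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, trust_remote_code=True
    )
    print(generate_text(model, tokenizer, "Explain beam search in one sentence."))
```

For vision tasks you would still go through the full multimodal model; this helper only bypasses the vision tower for text-only prompts.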
qtnx changed discussion status to closed