
ConvNeXt Model Card

Model details

Model type: ConvLLaVA-ConvNeXt is an open-source visual encoder, trained alongside an LLM on multimodal caption and instruction-following data. The base model is laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup.

Model date: ConvLLaVA-ConvNeXt-1024 was trained in March 2024.

Paper or resources for more information: https://github.com/alibaba/conv-llava/

Where to send questions or comments about the model: https://github.com/alibaba/conv-llava/issues

Intended use

Primary intended uses: The primary use of ConvLLaVA-ConvNeXt is research on large multimodal models and chatbots.

Paper

https://arxiv.org/abs/2405.15738