Introducing Visual Perception Token into Multimodal Large Language Model

This repository contains models based on the paper Introducing Visual Perception Token into Multimodal Large Language Model. These models utilize Visual Perception Tokens to enhance the visual perception capabilities of multimodal large language models (MLLMs).

Code: https://github.com/yu-rp/VisualPerceptionToken
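As a hypothetical usage sketch, the checkpoint can presumably be driven through the standard Qwen2-VL chat interface in Hugging Face `transformers`. Everything below is an illustration under that assumption: the exact inference pipeline, including how Visual Perception Tokens are triggered, lives in the linked repository, and the image path and question are placeholders.

```python
# Hypothetical usage sketch, assuming this checkpoint follows the standard
# Qwen2-VL interface in Hugging Face transformers. See the linked repository
# for the actual Visual Perception Token inference code.

MODEL_ID = "rp-yu/Qwen2-VL-7b-VPT-CLIP"

# Loading (commented out because it downloads ~8.32B parameters in BF16):
# from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
# model = Qwen2VLForConditionalGeneration.from_pretrained(
#     MODEL_ID, torch_dtype="bfloat16", device_map="auto")
# processor = AutoProcessor.from_pretrained(MODEL_ID)

# Qwen2-VL-style multimodal message: one image plus a text question.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "example.jpg"},  # placeholder path
            {"type": "text", "text": "What is written on the sign?"},
        ],
    }
]
```

The message list would then typically be passed through the processor's chat template before generation, as in the upstream Qwen2-VL examples.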

Model size: 8.32B parameters (Safetensors, BF16)

Base model: Qwen/Qwen2-VL-2B