---
license: other
license_name: yi-license
license_link: LICENSE
---

Yi Vision Language Model

Better Bilingual Multimodal Model

🤗 Hugging Face • 🤖 ModelScope • ✡️ WiseModel

👩‍🚀 Ask questions or discuss ideas on GitHub!

👋 Join us on 💬 WeChat (Chinese)!

📚 Grow at the Yi Learning Hub!


📕 Table of Contents

- [What is Yi-VL?](#what-is-yi-vl)
  - [Overview](#overview)
  - [Models](#models)
  - [Features](#features)
  - [Architecture](#architecture)
  - [Training](#training)
  - [Limitations](#limitations)
  - [Citation](#citation)
- [Why Yi-VL?](#why-yi-vl)
  - [Benchmarks](#benchmarks)
- [How to use Yi-VL?](#how-to-use-yi-vl)
  - [Quick start](#quick-start)
- [Acknowledgements and attributions](#acknowledgements-and-attributions)

# What is Yi-VL?

## Overview

- The **Yi Visual Language (Yi-VL)** model is the open-source, multimodal version of the Yi **Large Language Model (LLM)** series, enabling content comprehension, recognition, and multi-round conversations about images.
- Yi-VL demonstrates exceptional performance, **ranking first** among all existing open-source models on the latest benchmarks, including [MMMU](https://mmmu-benchmark.github.io/#leaderboard) in English and [CMMMU](https://mmmu-benchmark.github.io/#leaderboard) in Chinese (based on data available up to January 2024).
- Yi-VL-34B is the **first** open-source 34B vision language model worldwide.
[ Back to top ⬆️ ]
## Models

Yi-VL has released the following versions.

| Model | Download |
|---|---|
| Yi-VL-34B | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-VL-34B) |
| Yi-VL-6B | • [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-VL-6B) |
[ Back to top ⬆️ ]
## Features

Yi-VL offers the following features:

- Multi-round text-image conversations: Yi-VL can take both text and images as inputs and produce text outputs. Currently, it supports multi-round visual question answering with one image.
- Bilingual text support: Yi-VL supports conversations in both English and Chinese, including text recognition in images.
- Strong image comprehension: Yi-VL is adept at analyzing visuals, making it an efficient tool for tasks like extracting, organizing, and summarizing information from images.
- Fine-grained image resolution: Yi-VL supports image understanding at a higher resolution of 448×448.
[ Back to top ⬆️ ]
## Architecture

Yi-VL adopts the [LLaVA](https://github.com/haotian-liu/LLaVA) architecture, which is composed of the following components:

- Vision Transformer (ViT): it's initialized with the [CLIP ViT-H/14 model](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K) and used for image encoding.
- Projection Module: it builds a bridge between the ViT and the LLM using a 2-layer MLP with layer normalization.
- Large Language Model (LLM): it's initialized with [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) or [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat).

![Yi-VL architecture]()
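To make the projection module described above concrete, here is a minimal PyTorch sketch of a 2-layer MLP with layer normalization that maps ViT image features into the LLM embedding space. This is an illustration, not the official Yi-VL implementation; the feature dimensions (1280 for ViT-H/14 features, 4096 for the Yi-6B embedding space) and the GELU activation are assumptions.

```python
# Illustrative sketch only -- not the official Yi-VL code.
# A 2-layer MLP with layer normalization that projects ViT image features
# into the LLM embedding space, as described above.
import torch
import torch.nn as nn


class ProjectionModule(nn.Module):
    def __init__(self, vit_dim: int = 1280, llm_dim: int = 4096):
        super().__init__()
        # Assumed layout: Linear -> LayerNorm -> activation -> Linear -> LayerNorm.
        self.proj = nn.Sequential(
            nn.Linear(vit_dim, llm_dim),
            nn.LayerNorm(llm_dim),
            nn.GELU(),  # the activation choice is an assumption
            nn.Linear(llm_dim, llm_dim),
            nn.LayerNorm(llm_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, num_patches, vit_dim)
        # returns:        (batch, num_patches, llm_dim)
        return self.proj(image_features)


# Example: project a dummy batch of 256 patch features.
if __name__ == "__main__":
    projector = ProjectionModule()
    dummy = torch.randn(1, 256, 1280)
    print(projector(dummy).shape)  # torch.Size([1, 256, 4096])
```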
[ Back to top ⬆️ ]
## Training

To align visual information well with the semantic space of the Yi LLM, Yi-VL undergoes a three-stage training process:

- Stage 1: The parameters of the ViT and the projection module are trained using an image resolution of 224×224. The LLM weights are frozen.
- Stage 2: The image resolution of the ViT is scaled up to 448×448, and the parameters of the ViT and the projection module are trained.
- Stage 3: The parameters of the entire model (that is, the ViT, the projection module, and the LLM) are trained.
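The staged schedule above can be pictured as progressively unfreezing parameters. The sketch below shows one way to express it in PyTorch; the submodule names (`vit`, `projector`, `llm`) are hypothetical and do not refer to the actual Yi-VL training code.

```python
# Illustrative sketch of the three-stage freezing schedule described above.
# The submodule names `vit`, `projector`, and `llm` are hypothetical.
import torch.nn as nn


def configure_stage(model: nn.Module, stage: int) -> int:
    """Set which parameters train in a given stage; return the image resolution."""
    assert stage in (1, 2, 3)
    # Stages 1 and 2: the ViT and the projection module are trained.
    for p in model.vit.parameters():
        p.requires_grad = True
    for p in model.projector.parameters():
        p.requires_grad = True
    # The LLM weights are only updated in stage 3.
    for p in model.llm.parameters():
        p.requires_grad = (stage == 3)
    # Stage 1 trains at 224x224; stages 2 and 3 use the scaled-up 448x448 input.
    return 224 if stage == 1 else 448
```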
[ Back to top ⬆️ ]
## Limitations

This is the initial release of Yi-VL, which comes with some known limitations. It is recommended to carefully evaluate potential risks before adopting any models.

- Feature limitations
  - Visual question answering is supported. Other features like text-to-3D and image-to-video are not yet supported.
  - A single image, rather than several images, can be accepted as an input.
- Hallucination problems
  - There is a certain possibility of generating content that does not exist in the image.
  - In scenes containing multiple objects, some objects might be incorrectly identified or described with insufficient detail.
- Resolution issues
  - Yi-VL is trained on images with a resolution of 448×448. During inference, inputs of any resolution are resized to 448×448. Low-resolution images may result in information loss, and higher-resolution images (above 448×448) do not bring in extra knowledge.
- Other limitations of the Yi LLM.

## Citation

If you find our work helpful, please feel free to cite us.

```
@article{tbd,
  year={2024}
}
```
[ Back to top ⬆️ ]
# Why Yi-VL?

## Benchmarks

Yi-VL outperforms all existing open-source models on [MMMU](https://mmmu-benchmark.github.io/#leaderboard) and [CMMMU](https://mmmu-benchmark.github.io/#leaderboard), two advanced benchmarks that include massive multi-discipline multimodal questions.

![Yi-VL benchmark]()
[ Back to top ⬆️ ]
# How to use Yi-VL?

## Quick start

You can perform inference using the code from [LLaVA](https://github.com/haotian-liu/LLaVA). For detailed steps, see [simple startup for pretraining](https://github.com/haotian-liu/LLaVA/pull/966).

Notes:

- You need to modify the system prompt as follows.

  ```bash
  This is a chat between an inquisitive human and an AI assistant. Assume the role of the AI assistant. Read all the images carefully, and respond to the human's questions with informative, helpful, detailed and polite answers. 这是一个好奇的人类和一个人工智能助手之间的对话。假设你扮演这个AI助手的角色。仔细阅读所有的图像,并对人类的问题做出信息丰富、有帮助、详细的和礼貌的回答。
  ### Human: What is it in the image?
  ### Assistant:
  ```

- You need to set the parameter `mm_vision_tower` in `config.json` to the local ViT path (a minimal configuration sketch appears at the end of this document).

# Acknowledgements and attributions

This project makes use of open-source software/components. We acknowledge and are grateful to these developers for their contributions to the open-source community.

## List of used open-source projects

1. LLaVA
   - Authors: Haotian Liu, Chunyuan Li, Qingyang Wu, Yuheng Li, and Yong Jae Lee
   - Source: https://github.com/haotian-liu/LLaVA
   - License: Apache-2.0 license
   - Description: The codebase is based on the LLaVA code.
2. OpenCLIP
   - Authors: Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt
   - Source: https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K
   - License: MIT
   - Description: The ViT is initialized using the weights of OpenCLIP.

## License

This project is licensed under the [yi-license](https://github.com/01-ai/Yi/blob/main/LICENSE). For more information on the license for this project, please see the LICENSE file in this repository.

## Notes

- This attribution does not claim to cover all open-source components used. Please check individual components and their respective licenses for full details.
- The use of the open-source components is subject to the terms and conditions of the respective licenses.

We appreciate the open-source community for their invaluable contributions to the technology world.
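As referenced in the Quick start notes above, the following is a minimal sketch of pointing the `mm_vision_tower` field in the model's `config.json` at a locally downloaded ViT. It uses only the Python standard library; the paths shown are placeholders, not actual file locations.

```python
# Minimal sketch: point `mm_vision_tower` in config.json at a local ViT path.
# The paths below are placeholders -- replace them with your own locations.
import json
from pathlib import Path

config_path = Path("Yi-VL-6B/config.json")   # placeholder model directory
local_vit_path = "/path/to/clip-vit-h-14"    # placeholder local ViT path

config = json.loads(config_path.read_text())
config["mm_vision_tower"] = local_vit_path
config_path.write_text(json.dumps(config, indent=2, ensure_ascii=False))
```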