---
license: apache-2.0
---

opencsg-bunny-v0.1-3B


[OpenCSG Community] [github] [wechat] [Twitter]

OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' represents Converged resources, indicating the integration and full utilization of hybrid resources. The 'S' stands for Software refinement, signifying software that is refined by large models. The 'G' represents Generative LM, which denotes widespread, inclusive, and democratized generative large models.

The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use it, provide feedback, and contribute collaboratively.

Model Description

Bunny is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Phi-1.5, StableLM-2, Qwen1.5, and Phi-2. To compensate for the smaller model size, we construct more informative training data through curated selection from a broader data source. Remarkably, our Bunny-v1.0-3B model, built upon SigLIP and Phi-2, outperforms state-of-the-art MLLMs, not only those of similar size but also larger 7B frameworks, and even achieves performance on par with 13B models.

The model is pretrained on LAION-2M and finetuned on Bunny-695K.

opencsg-bunny-v0.1-3B is a model based on Bunny-v1_0-3B that has been fine-tuned with LoRA on the opencsg-bunny-880k dataset.
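
The exact fine-tuning recipe is not published in this card, so the snippet below is only a minimal sketch of LoRA tuning with the peft library; the rank, alpha, target modules, and other hyperparameters are illustrative assumptions, not OpenCSG's actual configuration.

```python
# Minimal LoRA sketch using the `peft` library. All hyperparameters here are
# placeholders; they are NOT the configuration used for opencsg-bunny-v0.1-3B.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained(
    'BAAI/Bunny-v1_0-3B',
    torch_dtype=torch.float16,
    trust_remote_code=True)

lora_config = LoraConfig(
    r=16,                    # adapter rank (assumed)
    lora_alpha=32,           # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=['q_proj', 'k_proj', 'v_proj'],  # assumed attention projections
    task_type='CAUSAL_LM')

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
# ...train on opencsg-bunny-880k with a standard Trainer or custom loop...
```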

Model Eval

We evaluate opencsg-bunny-v0.1 on several popular benchmarks (MME perception, MME cognition, and the MMMU validation and test splits) to thoroughly assess its multimodal capabilities.

| Model | Visual Encoder | LLM | PEFT | MME (perception) | MME (cognition) | MMMU (val) | MMMU (test) |
|-------|----------------|-----|------|------------------|-----------------|------------|-------------|
| bunny-v1_0-3B | SigLIP | Phi-2 (2.7B) | LoRA | 1488.8 | 289.3 | 38.2 | 33.0 |
| opencsg-bunny-v0.1-3B | SigLIP | Phi-2 (2.7B) | LoRA | 1527.1 | 299.3 | 38.4 | 33.0 |
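
For reference, MME perception is scored by posing yes/no questions about each image (the official score additionally combines per-question accuracy with a stricter per-image "accuracy+" across subtasks). Below is a simplified sketch of such a scoring loop; the `samples` structure and `ask` callable are hypothetical stand-ins for the official evaluation tooling, with answer generation done as in the Model Usage snippet further down.

```python
# Simplified MME-style yes/no scoring loop. `samples` and `ask` are
# illustrative stand-ins, not the official MME evaluation code.
def yes_no_accuracy(samples, ask):
    """samples: iterable of dicts {'image': PIL.Image, 'question': str, 'answer': 'yes'|'no'}
    ask: callable (image, question) -> generated answer string"""
    correct = 0
    for s in samples:
        prediction = ask(s['image'], s['question']).strip().lower()
        correct += prediction.startswith(s['answer'])
    return correct / len(samples)
```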

TODO

  • We will provide more benchmark scores for fine-tuned models in the future.
  • We will provide a range of practical problems for evaluating the performance of fine-tuned models in the field of software engineering.

Model Usage

Here is a code snippet showing how to use the model with transformers.

Before running the snippet, you need to install the following dependencies:

```bash
pip install torch transformers accelerate pillow
```

```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import warnings

# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')

# set device ('cuda' is recommended; float16 inference on CPU may not be supported)
torch.set_default_device('cpu')  # or 'cuda'

# create model
model = AutoModelForCausalLM.from_pretrained(
    'opencsg/opencsg-bunny-phi-2-siglip-lora-v0.1',
    torch_dtype=torch.float16,
    device_map='auto',
    trust_remote_code=True)
# the tokenizer comes from the base Bunny model
tokenizer = AutoTokenizer.from_pretrained(
    'BAAI/Bunny-v1_0-3B',
    trust_remote_code=True)

# text prompt
prompt = 'Why is the image funny?'
text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:"
text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
# -200 is the placeholder token id that Bunny replaces with image features
input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1], dtype=torch.long).unsqueeze(0)

# load the input image; sample images can be found in the repo's images folder
image = Image.open('example_2.png')
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype)

# generate
output_ids = model.generate(
    input_ids,
    images=image_tensor,
    max_new_tokens=100,
    use_cache=True)[0]

print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```

Hardware

  • GPUs: 8 × NVIDIA A800
  • Training time: 15 hours

Software
