ChatTruth-7B
ChatTruth-7B is built on Qwen-VL and further trained on carefully curated data. Compared with Qwen-VL, it achieves substantially better performance on high-resolution inputs, and it introduces a novel Restore Module that sharply reduces the computational cost of processing high-resolution images.
Requirements
transformers 4.32.0
python 3.8 and above
pytorch 1.13 and above
CUDA 11.4 and above
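A quick way to confirm the environment satisfies these requirements is to print the installed versions (a minimal check; the expected values come from the list above):

import torch
import transformers

print(transformers.__version__)   # expected: 4.32.0
print(torch.__version__)          # expected: 1.13 or newer
print(torch.version.cuda)         # expected: 11.4 or newer
print(torch.cuda.is_available())  # expected: True on a working CUDA setup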
Quickstart
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
import torch
torch.manual_seed(1234)
model_path = 'ChatTruth-7B'  # path to your downloaded model weights
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
# load the model onto the GPU (requires a CUDA device)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda", trust_remote_code=True).eval()
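# Assumption: since ChatTruth-7B is built on Qwen-VL, the precision flags from
# Qwen-VL's loading interface should carry over; uncomment one of the lines
# below to cut GPU memory usage (untested here, verify against the model repo):
# model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda", trust_remote_code=True, bf16=True).eval()
# model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda", trust_remote_code=True, fp16=True).eval()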
model.generation_config = GenerationConfig.from_pretrained(model_path, trust_remote_code=True)
model.generation_config.top_p = 0.01  # near-greedy sampling for stable, reproducible output
# build a multimodal query: one image plus a text question
query = tokenizer.from_list_format([
    {'image': 'demo.jpeg'},       # local path (or URL) to the input image
    {'text': '图片中的文字是什么'},  # "What is the text in the image?"
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
# expected output: 昆明太厉害了 ("Kunming is amazing!")
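Since model.chat returns the updated conversation history, a follow-up question can reuse it for multi-turn dialogue. The sketch below assumes the same multi-turn interface as Qwen-VL; the follow-up prompt is illustrative:

# continue the conversation by passing the previous history back in
response, history = model.chat(tokenizer, query='图片里还有什么?', history=history)  # "What else is in the image?"
print(response)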