FlashVL committed on
Commit 8155cef · verified
1 Parent(s): edd02e2

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -1,3 +1,4 @@
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
  *.7z filter=lfs diff=lfs merge=lfs -text
  *.arrow filter=lfs diff=lfs merge=lfs -text
  *.bin filter=lfs diff=lfs merge=lfs -text
@@ -22,8 +23,6 @@
  *.pt filter=lfs diff=lfs merge=lfs -text
  *.pth filter=lfs diff=lfs merge=lfs -text
  *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.tar.* filter=lfs diff=lfs merge=lfs -text
  *.tar filter=lfs diff=lfs merge=lfs -text
  *.tflite filter=lfs diff=lfs merge=lfs -text
@@ -33,3 +32,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,111 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ datasets:
+ - lmms-lab/LLaVA-OneVision-Data
+ - BAAI/Infinity-MM
+ language:
+ - en
+ - zh
+ base_model:
+ - google/siglip2-so400m-patch16-512
+ - Qwen/Qwen2-1.5B-Instruct
+ pipeline_tag: image-text-to-text
+ library_name: transformers
+ ---
+
+ # Flash-VL-2B-Static
+ [\[📜 Flash-VL Tech Report\]](https://www.arxiv.org/abs/2505.09498)
+
+ ![image/png](https://s3plus.meituan.net/automl-datasets/mlm/logo.jpg)
+
+ ## Introduction
+
+ We are excited to introduce **Flash-VL**, a novel approach to optimizing Vision-Language Models (VLMs) for real-time applications, targeting ultra-low latency and high throughput without sacrificing accuracy. Flash-VL 2B combines tailored architectural choices, token compression, careful data curation, and dedicated training schemes with implicit semantic stitching, a novel image-processing technique that balances computational load and model performance. Across 11 standard VLM benchmarks, Flash-VL 2B achieves state-of-the-art results in both speed and accuracy, making it a promising solution for deployment in resource-constrained environments and large-scale real-time applications.
+
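+ As a concrete illustration of the token compression mentioned above, the sketch below reproduces the 2x2 pixel-shuffle step from `adapters.py` in this repository on a dummy tensor; with the released configuration (512x512 input, patch size 16, SigLIP hidden size 1152) the 1024 ViT patch tokens are folded into 256 image tokens. The helper name `pixel_shuffle_tokens` and the random input are illustrative only.
+
+ ```python
+ import torch
+
+ def pixel_shuffle_tokens(x: torch.Tensor, scale: float = 0.5) -> torch.Tensor:
+     # x: (N, H*W, C) patch tokens laid out on a square grid.
+     n, l, c = x.shape
+     h = w = int(l ** 0.5)
+     x = x.reshape(n, h, w, c)
+     # Fold each 2x2 neighbourhood into the channel dimension (C -> 4C),
+     # quartering the number of tokens.
+     x = x.reshape(n, h, int(w * scale), int(c / scale))
+     x = x.permute(0, 2, 1, 3).contiguous()
+     x = x.view(n, int(h * scale), int(w * scale), int(c / (scale * scale)))
+     return x.reshape(n, -1, x.shape[-1])
+
+ tokens = torch.randn(1, 1024, 1152)        # 32x32 patches from a 512/16 SigLIP encoder
+ print(pixel_shuffle_tokens(tokens).shape)  # torch.Size([1, 256, 4608])
+ ```
+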
+ ### Environment Setup
+
+ ```bash
+ pip install torch==2.1.2
+ pip install transformers==4.50.0.dev0
+ ```
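+
+ A quick sanity check that the pinned versions above are what is actually installed (a minimal snippet, not part of the original instructions):
+
+ ```python
+ import torch
+ import transformers
+
+ # The card pins torch 2.1.2 and a transformers 4.50.0 dev build.
+ print(torch.__version__, transformers.__version__)
+ ```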
+
+ ### How to use it?
+
+ ```python
+ import torch
+ from PIL import Image
+ import requests
+ from io import BytesIO
+ from transformers import AutoModel, AutoTokenizer, AutoProcessor
+
+ model_path = "FlashVL/FlashVL-2B-Static"
+ model = AutoModel.from_pretrained(model_path, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map='cuda')
+ model.tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model.im_trans = AutoProcessor.from_pretrained(model_path).image_processor
+
+ # single-image single-round conversation (单图单轮对话)
+ image_url = "https://s3plus.meituan.net/automl-datasets/mlm/0516.png"
+ response = requests.get(image_url)
+ image_data = BytesIO(response.content)
+ pil_image = Image.open(image_data).convert('RGB')
+ messages = [{'role': 'user', 'content': "生成图中菜品的菜谱"}]  # prompt: "Write a recipe for the dish in the image"
+ answer = model.chat(pil_image, messages, do_sample=False, max_new_tokens=256)
+ print(answer)
+
+ # single-image multi-round conversation (单图多轮对话)
+ messages = [
+     {'role': 'user', 'content': '这是什么'},
+     {'role': 'assistant', 'content': '这是一道看起来像是银耳莲子汤的甜品。\
+ 银耳是一种常见的食材,通常用于制作甜品和汤品,具有软糯的口感和清润的口感。莲 \
+ 子是莲子的干燥部分,常用于中医和食疗中,具有补脾止泻的功效。图片中还可以看到 \
+ 一些枸杞和核桃,枸杞富含维生素和抗氧化物质,核桃则提供丰富的蛋白质和健康脂肪。 \
+ 整体来看,这道甜品不仅美味,还具有一定的营养价值。'},
+     {'role': 'user', 'content': '对图中菜品卡路里分析'}
+ ]
+ answer = model.chat(pil_image, messages, do_sample=False, max_new_tokens=256)
+ print(answer)
+
+ # pure-text single-round conversation (纯文本对话)
+ messages = [{'role': 'user', 'content': "who are you"}]
+ answer = model.chat(None, messages, do_sample=False, max_new_tokens=256)
+ print(answer)
+ ```
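+
+ The same `chat` interface also accepts a locally stored image; the file name below is a placeholder, and `model` is assumed to be loaded as in the snippet above:
+
+ ```python
+ from PIL import Image
+
+ # Hypothetical local image path; replace with your own file.
+ pil_image = Image.open("my_dish.jpg").convert('RGB')
+ messages = [{'role': 'user', 'content': "Describe this dish"}]
+ answer = model.chat(pil_image, messages, do_sample=False, max_new_tokens=128)
+ print(answer)
+ ```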
+
+ ### Evaluation
+
+ | Benchmark | Qwen2-VL-2B | Aquila-VL-2B | InternVL2.5-2B | Flash-VL-2B<sub>s</sub> | Flash-VL-2B<sub>d</sub> | Flash-VL-2B<sub>d-ISS</sub> |
+ | :-------------: | :-------------: | :-------------: | :-------------: | :-------------: | :-------------: | :-------------: |
+ | MMMU<sub>val</sub> | 41.9 | 44.4 | 41.8 | 43.6 | 42.9 | 42.9 |
+ | MMBench<sup>en</sup> | 74.9 | 78.6 | 74.7 | 78.4 | 78.4 | 79.1 |
+ | MMBench<sup>cn</sup> | 73.5 | 76.3 | 71.6 | 74.7 | 74.9 | 76.7 |
+ | MMStar | 48.0 | 54.9 | 54.1 | 53.8 | 54.4 | 54.1 |
+ | MathVista<sub>testmini</sub> | 43.0 | 59.4 | 50.9 | 59.3 | 58.1 | 61.5 |
+ | AI2D<sub>test</sub> | 74.1 | 75.0 | 75.1 | 74.2 | 74.1 | 74.4 |
+ | MMVet | 49.5 | 40.9 | 61.7 | 47.3 | 52.7 | 50.7 |
+ | HallusionBench | 39.2 | 38.5 | 42.7 | 43.5 | 45.5 | 49.0 |
+ | OCRBench | 794 | 773 | 800 | 764 | 831 | 843 |
+ | MME | 1872 | 1813 | 2091 | 1715 | 1866 | 1850 |
+ | SEEDBench | 71.5 | 78.9 | 73.2 | 73.6 | 73.6 | 74.5 |
+ | Average | 60.2 | 62.6 | 63.6 | 62.4 | 64.0 | 64.8 |
+
+ We use [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) to evaluate FlashVL-2B-Static.
+
+ ## Citation
+
+ If you find this project useful in your research, please consider citing:
+
+ ```BibTeX
+ @misc{zhang2025flashvl2boptimizingvisionlanguage,
+   title={Flash-VL 2B: Optimizing Vision-Language Model Performance for Ultra-Low Latency and High Throughput},
+   author={Bo Zhang and Shuo Li and Runhe Tian and Yang Yang and Jixin Tang and Jinhao Zhou and Lin Ma},
+   year={2025},
+   eprint={2505.09498},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2505.09498},
+ }
+ ```
adapters.py ADDED
@@ -0,0 +1,59 @@
+ import os
+ import math
+ import torch
+ from torch import nn
+ from functools import partial
+ import torch.nn.functional as F
+
+ class Adapter_Template(nn.Module):
+     def __init__(self, config):
+         super().__init__()
+         self.gradient_checkpointing = False
+
+     def freeze_module(self, module):
+         for p in module.parameters():
+             p.requires_grad = False
+
+     def forward(self, inputs, add_start_end=True):
+         input_ids, hidden_states, targets, attn_mask, loss_mask = inputs
+         image_features = self.forward_adapter_modules(hidden_states)
+         return (input_ids, image_features, targets, attn_mask, loss_mask)
+
+ class AdapterSigLIP(Adapter_Template):
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.p0 = nn.Sequential(
+             nn.LayerNorm(config.vision_config.hidden_size*4),
+             nn.Linear(config.vision_config.hidden_size*4, config.intermediate_size),
+             nn.GELU(),
+             nn.Linear(config.intermediate_size, config.intermediate_size),
+             nn.GELU(),
+         )
+         self.proj = nn.Linear(config.intermediate_size, config.vision_config.proj_output_dim)
+
+     def freeze(self):
+         self.freeze_module(self.p0)
+         self.freeze_module(self.proj)
+
+     def pixel_shuffle(self, x, scale_factor=0.5):
+         n, w, h, c = x.size()
+         if w % 2 == 0 and h % 2 == 0:
+             # N, W, H, C --> N, W, H * scale, C // scale
+             x = x.reshape(n, w, int(h * scale_factor), int(c / scale_factor))
+             # N, W, H * scale, C // scale --> N, H * scale, W, C // scale
+             x = x.permute(0, 2, 1, 3).contiguous()
+             # N, H * scale, W, C // scale --> N, H * scale, W * scale, C // (scale ** 2)
+             x = x.view(n, int(h * scale_factor), int(w * scale_factor),
+                        int(c / (scale_factor * scale_factor)))
+         return x
+
+     def forward_adapter_modules(self, hidden_states):
+         h = w = int(hidden_states.shape[1] ** 0.5)
+         hidden_states = hidden_states.reshape(hidden_states.shape[0], h, w, -1)
+         hidden_states = self.pixel_shuffle(hidden_states, scale_factor=0.5)
+         hidden_states = hidden_states.reshape(hidden_states.shape[0], -1, hidden_states.shape[-1])
+
+         hidden_states = self.proj(self.p0(hidden_states))
+
+         return hidden_states
added_tokens.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "</tool_call>": 151658,
+   "<tool_call>": 151657,
+   "<|box_end|>": 151649,
+   "<|box_start|>": 151648,
+   "<|endoftext|>": 151643,
+   "<|file_sep|>": 151664,
+   "<|fim_middle|>": 151660,
+   "<|fim_pad|>": 151662,
+   "<|fim_prefix|>": 151659,
+   "<|fim_suffix|>": 151661,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644,
+   "<|image_pad|>": 151655,
+   "<|object_ref_end|>": 151647,
+   "<|object_ref_start|>": 151646,
+   "<|quad_end|>": 151651,
+   "<|quad_start|>": 151650,
+   "<|repo_name|>": 151663,
+   "<|video_pad|>": 151656,
+   "<|vision_end|>": 151653,
+   "<|vision_pad|>": 151654,
+   "<|vision_start|>": 151652
+ }
config.json ADDED
@@ -0,0 +1,112 @@
1
+ {
2
+ "_name_or_path": "/mnt/dolphinfs/ssd_pool/docker/user/hadoop-mlm/lishuo/repo/fine_tuning_package/model",
3
+ "architectures": [
4
+ "FlashVLStatic"
5
+ ],
6
+ "auto_map": {
7
+ "AutoConfig": "configuration_FlashVLStatic.FlashVLStaticConfig",
8
+ "AutoModel": "modeling_FlashVLStatic.FlashVLStatic"
9
+ },
10
+ "intermediate_size": 7168,
11
+ "image_token_num": 256,
12
+ "llm_config":{
13
+ "attention_dropout": 0.0,
14
+ "bos_token_id": 151643,
15
+ "eos_token_id": 151645,
16
+ "hidden_act": "silu",
17
+ "hidden_size": 1536,
18
+ "initializer_range": 0.02,
19
+ "intermediate_size": 8960,
20
+ "max_position_embeddings": 32768,
21
+ "max_window_layers": 21,
22
+ "model_type": "qwen2",
23
+ "num_attention_heads": 12,
24
+ "num_hidden_layers": 28,
25
+ "num_key_value_heads": 2,
26
+ "rms_norm_eps": 1e-06,
27
+ "rope_scaling": null,
28
+ "rope_theta": 1000000.0,
29
+ "sliding_window": 32768,
30
+ "tie_word_embeddings": true,
31
+ "torch_dtype": "bfloat16",
32
+ "transformers_version": "4.50.0.dev0",
33
+ "use_cache": true,
34
+ "use_sliding_window": false,
35
+ "vocab_size": 151936
36
+ },
37
+ "vision_config": {
38
+ "_attn_implementation_autoset": true,
39
+ "_name_or_path": "",
40
+ "add_cross_attention": false,
41
+ "architectures": null,
42
+ "attention_dropout": 0.0,
43
+ "bad_words_ids": null,
44
+ "begin_suppress_tokens": null,
45
+ "bos_token_id": null,
46
+ "chunk_size_feed_forward": 0,
47
+ "cross_attention_hidden_size": null,
48
+ "decoder_start_token_id": null,
49
+ "diversity_penalty": 0.0,
50
+ "do_sample": false,
51
+ "early_stopping": false,
52
+ "encoder_no_repeat_ngram_size": 0,
53
+ "eos_token_id": null,
54
+ "exponential_decay_length_penalty": null,
55
+ "finetuning_task": null,
56
+ "forced_bos_token_id": null,
57
+ "forced_eos_token_id": null,
58
+ "hidden_act": "gelu_pytorch_tanh",
59
+ "hidden_size": 1152,
60
+ "id2label": {
61
+ "0": "LABEL_0",
62
+ "1": "LABEL_1"
63
+ },
64
+ "image_size": 512,
65
+ "intermediate_size": 4304,
66
+ "is_decoder": false,
67
+ "is_encoder_decoder": false,
68
+ "label2id": {
69
+ "LABEL_0": 0,
70
+ "LABEL_1": 1
71
+ },
72
+ "layer_norm_eps": 1e-06,
73
+ "length_penalty": 1.0,
74
+ "max_length": 20,
75
+ "min_length": 0,
76
+ "model_type": "siglip_vision_model",
77
+ "no_repeat_ngram_size": 0,
78
+ "num_attention_heads": 16,
79
+ "num_beam_groups": 1,
80
+ "num_beams": 1,
81
+ "num_channels": 3,
82
+ "num_hidden_layers": 27,
83
+ "num_return_sequences": 1,
84
+ "output_attentions": false,
85
+ "output_hidden_states": false,
86
+ "output_scores": false,
87
+ "pad_token_id": null,
88
+ "patch_size": 16,
89
+ "prefix": null,
90
+ "problem_type": null,
91
+ "proj_output_dim": 1536,
92
+ "pruned_heads": {},
93
+ "remove_invalid_values": false,
94
+ "repetition_penalty": 1.0,
95
+ "return_dict": true,
96
+ "return_dict_in_generate": false,
97
+ "sep_token_id": null,
98
+ "suppress_tokens": null,
99
+ "task_specific_params": null,
100
+ "temperature": 1.0,
101
+ "tf_legacy_loss": false,
102
+ "tie_encoder_decoder": false,
103
+ "tie_word_embeddings": true,
104
+ "tokenizer_class": null,
105
+ "top_k": 50,
106
+ "top_p": 1.0,
107
+ "torch_dtype": "bfloat16",
108
+ "torchscript": false,
109
+ "typical_p": 1.0,
110
+ "use_bfloat16": false
111
+ }
112
+ }
configuration_FlashVLStatic.py ADDED
@@ -0,0 +1,27 @@
+ import transformers
+ from transformers import (Qwen2Config, Qwen2ForCausalLM, SiglipVisionConfig,
+                           SiglipVisionModel, PretrainedConfig)
+ from transformers import AutoConfig
+ import copy
+
+ class FlashVLStaticConfig(PretrainedConfig):
+     model_type = 'FlashVLStaticConfig'
+     is_composition = True
+
+     def __init__(
+         self,
+         vision_config=dict(model_type='siglip_vision_model'),
+         llm_config=dict(architectures=['Qwen2ForCausalLM']),
+         **kwargs
+     ):
+         super().__init__(**kwargs)
+         self.vision_config = SiglipVisionConfig(**vision_config)
+         self.llm_config = Qwen2Config(**llm_config)
+
+     def to_dict(self):
+
+         output = copy.deepcopy(self.__dict__)
+         output['vision_config'] = self.vision_config.to_dict()
+         output['llm_config'] = self.llm_config.to_dict()
+
+         return output
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 151643,
+   "eos_token_id": 151645,
+   "transformers_version": "4.50.0.dev0"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
mm_constants.py ADDED
@@ -0,0 +1,4 @@
+ # Model Constants
+ IGNORE_INDEX = -100
+ IMAGE_TOKEN_INDEX = -200
+ IMAGE_PAD_TOKEN_INDEX = -201
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8cd0df6fba5b9a8de26ca7b9544e4bb437fad1a1219f140af70754a020b2a5a5
+ size 4602705280
modeling_FlashVLStatic.py ADDED
@@ -0,0 +1,369 @@
1
+ import os
2
+ import math
3
+ import copy
4
+
5
+ import torch
6
+ import torch.nn.functional as F
7
+ from torch.nn import CrossEntropyLoss
8
+
9
+ from PIL import Image
10
+ from functools import partial
11
+ from typing import List, Optional, Tuple, Union, Dict
12
+ from dataclasses import dataclass
13
+
14
+ import transformers
15
+ from transformers.modeling_outputs import ModelOutput
16
+ from transformers.modeling_utils import PreTrainedModel
17
+ from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, PretrainedConfig, Qwen2Config, SiglipVisionModel
18
+
19
+ from .adapters import AdapterSigLIP
20
+ from .mm_constants import IGNORE_INDEX, IMAGE_TOKEN_INDEX
21
+ from .processing_FlashVL import tokenizer_image_token_qwen
22
+ from .configuration_FlashVLStatic import FlashVLStaticConfig
23
+
24
+ @dataclass
25
+ class FlashVLStaticOutputWithPast(ModelOutput):
26
+ loss: Optional[torch.FloatTensor] = None
27
+ logits: torch.FloatTensor = None
28
+
29
+
30
+ class FlashVLStatic(PreTrainedModel):
31
+ config_class = FlashVLStaticConfig
32
+
33
+ def __init__(self, config):
34
+ super().__init__(config)
35
+ self.llm = AutoModelForCausalLM.from_config(config.llm_config, trust_remote_code=True)
36
+ self.vit = SiglipVisionModel(config.vision_config).vision_model
37
+ self.adp = AdapterSigLIP(config)
38
+ self.image_token_num = config.image_token_num
39
+ self.image_size = config.vision_config.image_size
40
+
41
+ def merge_text_image_tokens(self, inputs):
42
+ input_ids, image_features, targets, attn_mask, loss_mask = inputs
43
+ micro_batch_size, tokens_len = input_ids.shape
44
+ device = input_ids.device
45
+
46
+ img_rows, img_cols = torch.where(input_ids == IMAGE_TOKEN_INDEX)
47
+ image_idxs = {i: [] for i in range(micro_batch_size)}
48
+ for row, col in zip(img_rows.tolist(), img_cols.tolist()):
49
+ image_idxs[row].append(col)
50
+ for row in range(micro_batch_size):
51
+ image_idxs[row] = sorted(image_idxs[row])
52
+
53
+ split_sizes = []
54
+ for row in range(micro_batch_size):
55
+ image_num = len(image_idxs[row])
56
+ if image_num == 0:
57
+ split_sizes.append(tokens_len)
58
+ continue
59
+
60
+ if image_idxs[row][0] != 0:
61
+ split_sizes.append(image_idxs[row][0])
62
+
63
+ for idx in range(image_num - 1):
64
+ split_sizes.append(self.image_token_num)
65
+ if image_idxs[row][idx + 1] > image_idxs[row][idx] + self.image_token_num:
66
+ split_sizes.append(image_idxs[row][idx + 1] - (image_idxs[row][idx] + self.image_token_num))
67
+
68
+ if image_idxs[row][image_num - 1] + self.image_token_num >= tokens_len:
69
+ split_sizes.append(tokens_len - image_idxs[row][image_num - 1])
70
+ else:
71
+ split_sizes.append(self.image_token_num)
72
+ split_sizes.append(tokens_len - (image_idxs[row][image_num - 1] + self.image_token_num))
73
+
74
+ input_ids_noim = torch.where(input_ids < 0, 151643, input_ids)
75
+ input_ids_noim = input_ids_noim.view(-1)
76
+ input_embeds = self.llm.model.embed_tokens(input_ids_noim)
77
+ input_embeds_split = torch.split(input_embeds, split_sizes, dim=0)
78
+
79
+ vl_embeds_list = []
80
+ cur_language_idx = 0
81
+ cur_image_idx = 0
82
+ for row in range(micro_batch_size):
83
+ image_num = len(image_idxs[row])
84
+ if image_num == 0:
85
+ vl_embeds_list.append(input_embeds_split[cur_language_idx])
86
+ cur_language_idx += 1
87
+ vl_embeds_list.append(image_features[cur_image_idx][0:0])
88
+ cur_image_idx += 1
89
+ continue
90
+
91
+ if image_idxs[row][0] != 0:
92
+ vl_embeds_list.append(input_embeds_split[cur_language_idx])
93
+ cur_language_idx += 1
94
+
95
+ for idx in range(image_num - 1):
96
+ vl_embeds_list.append(image_features[cur_image_idx])
97
+ cur_language_idx += 1
98
+ cur_image_idx += 1
99
+
100
+ if image_idxs[row][idx + 1] > image_idxs[row][idx] + self.image_token_num:
101
+ vl_embeds_list.append(input_embeds_split[cur_language_idx])
102
+ cur_language_idx += 1
103
+
104
+ if image_idxs[row][image_num - 1] + self.image_token_num >= tokens_len:
105
+ vl_embeds_list.append(image_features[cur_image_idx][0 : tokens_len - image_idxs[row][image_num - 1]])
106
+ cur_language_idx += 1
107
+ cur_image_idx += 1
108
+ else:
109
+ vl_embeds_list.append(image_features[cur_image_idx])
110
+ cur_language_idx += 1
111
+ cur_image_idx += 1
112
+ vl_embeds_list.append(input_embeds_split[cur_language_idx])
113
+ cur_language_idx += 1
114
+
115
+ vl_embeds = torch.cat(vl_embeds_list)
116
+ vl_embeds = vl_embeds.view(micro_batch_size, tokens_len, vl_embeds.shape[-1])
117
+ return (input_ids, vl_embeds, targets, attn_mask, loss_mask)
118
+
119
+ def forward(
120
+ self,
121
+ input_ids: torch.LongTensor = None,
122
+ pixel_values: torch.FloatTensor = None,
123
+ attention_mask: Optional[torch.Tensor] = None,
124
+ inputs_embeds: Optional[torch.FloatTensor] = None,
125
+ labels: Optional[torch.LongTensor] = None,
126
+ output_attentions: Optional[bool] = None,
127
+ output_hidden_states: Optional[bool] = None,
128
+ return_dict: Optional[bool] = None,
129
+ local_pos_batch: Optional[torch.LongTensor] = None,
130
+ image_idx_batch: Optional[torch.Tensor] = None,
131
+ loss_mask: Optional[torch.Tensor] = None,
132
+ use_cache: Optional[bool] = None,
133
+ ):
134
+ inputs = [input_ids, pixel_values, labels, attention_mask, loss_mask]
135
+
136
+ if isinstance(inputs[1], list):
137
+ pixel_values = [p.bfloat16() for p in inputs[1]]
138
+ else:
139
+ pixel_values = inputs[1].bfloat16()
140
+ img_token = self.vit.forward(pixel_values)
141
+
142
+ if hasattr(img_token, 'last_hidden_state'):
143
+ img_token = img_token.last_hidden_state
144
+
145
+ inputs = self.adp(inputs[:1]+[img_token]+inputs[2:])
146
+
147
+ inputs = self.merge_text_image_tokens(inputs)
148
+ tokens, hidden_states, targets, attn_mask, loss_mask = inputs
149
+
150
+ outputs = self.llm.forward(
151
+ inputs_embeds = hidden_states,
152
+ attention_mask = attn_mask,
153
+ use_cache = use_cache)
154
+
155
+ lm_logits = outputs.logits
156
+
157
+ loss = None
158
+ if targets is not None:
159
+ labels = targets.to(lm_logits.device)
160
+ shift_logits = lm_logits[..., :-1, :].contiguous()
161
+ shift_labels = labels[..., 1:].contiguous()
162
+
163
+ loss_fct = CrossEntropyLoss(reduction='none')
164
+ loss = loss_fct(
165
+ shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)
166
+ )
167
+
168
+ batch_size = labels.size(0)
169
+ loss_mask = loss_mask[:, 1:].to(loss.dtype)
170
+ loss = (loss.view(batch_size, -1) * loss_mask).sum() / loss_mask.sum()
171
+
172
+ return FlashVLStaticOutputWithPast(
173
+ loss=loss,
174
+ logits=lm_logits
175
+ )
176
+
177
+ def get_input_embeddings(self):
178
+ return self.llm.get_input_embeddings()
179
+
180
+ def generate(
181
+ self,
182
+ input_ids=None,
183
+ pixel_values=None,
184
+ attention_mask=None,
185
+ **kwargs
186
+ ):
187
+ image = pixel_values
188
+ img_token = self.vit.forward(image.bfloat16())
189
+ if hasattr(img_token, 'last_hidden_state'):
190
+ img_token = img_token.last_hidden_state
191
+ inputs = self.adp((
192
+ input_ids.to(self.device),
193
+ img_token,
194
+ None, None, None))
195
+ inputs = self.merge_text_image_tokens(inputs)
196
+ tokens, hidden_states, targets, attn_mask, loss_mask = inputs
197
+
198
+ keys_to_pop = ['loss_mask', 'labels','attention_mask']
199
+ kwargs = {k: v for k, v in kwargs.items() if k not in keys_to_pop}
200
+
201
+ outputs = self.llm.generate(
202
+ inputs_embeds=hidden_states.bfloat16(),
203
+ max_new_tokens=2048,
204
+ do_sample=False,
205
+ **kwargs
206
+ )
207
+
208
+ return outputs
209
+
210
+ def chat(self, pil_image, messages, answer_prompt=None, do_sample=True, max_new_tokens=256):
211
+ data={}
212
+ data['img'] = pil_image
213
+ data['text_only'] = (pil_image is None)
214
+ data['messages'] = messages
215
+
216
+ sources = self.to_llava_format(data)
217
+ sources = [sources]
218
+ has_image = not sources[0]['text_only']
219
+
220
+ if has_image:
221
+ img_list = sources[0]['image']
222
+ if not isinstance(img_list, list):
223
+ img_list = [img_list]
224
+ image = torch.stack([torch.from_numpy(self.im_trans(i)['pixel_values'][0]) for i in img_list], dim=0)
225
+
226
+ sources = copy.deepcopy([e["conversations"] for e in sources])
227
+
228
+ data_dict = self.preprocess_qwen(
229
+ sources,
230
+ self.tokenizer,
231
+ has_image=has_image,
232
+ )
233
+
234
+ input_ids_data = data_dict["input_ids"][0]
235
+ data_dict["input_ids"] = [ input_ids_data, ]
236
+
237
+ if not has_image:
238
+ image = torch.zeros(1, 3, self.image_size, self.image_size)
239
+ data_dict = dict(tokens=data_dict["input_ids"][0],)
240
+
241
+ img_token = self.vit.forward(image.cuda().bfloat16())
242
+
243
+ if hasattr(img_token, 'last_hidden_state'):
244
+ img_token = img_token.last_hidden_state
245
+
246
+ inputs = self.adp((
247
+ data_dict['tokens'].unsqueeze(0).to(self.device),
248
+ img_token,
249
+ None, None, None))
250
+
251
+ inputs = self.merge_text_image_tokens(inputs)
252
+ tokens, hidden_states, targets, attn_mask, loss_mask = inputs
253
+
254
+ outputs = self.llm.generate(
255
+ inputs_embeds=hidden_states.bfloat16(),
256
+ return_dict_in_generate=False,
257
+ max_new_tokens=max_new_tokens,
258
+ do_sample=do_sample,
259
+ pad_token_id=False,
260
+ )
261
+ decoded = self.tokenizer.decode(outputs[0])
262
+
263
+ stop_words_ids = [self.llm.generation_config.bos_token_id,
264
+ self.llm.generation_config.eos_token_id,
265
+ self.tokenizer.convert_tokens_to_ids('<|im_start|>')]
266
+ stop_words = [self.tokenizer.decode(w) for w in stop_words_ids]
267
+
268
+ for stop_word in stop_words:
269
+ decoded = decoded.replace(stop_word, "").strip()
270
+
271
+ return decoded
272
+
273
+ def preprocess_qwen(
274
+ self,
275
+ sources,
276
+ tokenizer: transformers.PreTrainedTokenizer,
277
+ has_image: bool = False,
278
+ max_len=2048,
279
+ system_message: str = "You are a helpful assistant.",) -> Dict:
280
+
281
+ roles = {"human": "user", "gpt": "assistant"}
282
+ tokenizer = copy.deepcopy(tokenizer)
283
+
284
+ tokenizer.add_tokens(["<image>"], special_tokens=True)
285
+ image_token_index = tokenizer.convert_tokens_to_ids("<image>")
286
+ im_start, im_end = tokenizer.additional_special_tokens_ids[:2]
287
+
288
+ unmask_tokens_idx = [198, im_start, im_end]
289
+ nl_tokens = tokenizer("\n").input_ids
290
+
291
+ chat_template = "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
292
+ tokenizer.chat_template = chat_template
293
+
294
+ input_ids, targets = [], []
295
+ for i, source in enumerate(sources):
296
+ if roles[source[0]["from"]] != roles["human"]:
297
+ source = source[1:]
298
+ input_id, target = [], []
299
+
300
+ input_id += tokenizer.apply_chat_template([{"role" : "system", "content" : system_message}])
301
+ target += [IGNORE_INDEX] * len(input_id)
302
+ i=0
303
+ for conv in source:
304
+ try:
305
+ role = conv["role"]
306
+ content = conv["content"]
307
+ except:
308
+ role = conv["from"]
309
+ content = conv["value"]
310
+ role = roles.get(role, role)
311
+ if i==len(source)-1:
312
+ conv = [{"role" : role, "content" : content}]
313
+ encode_id = tokenizer.apply_chat_template(conv,add_generation_prompt=True)
314
+ else:
315
+ conv = [{"role" : role, "content" : content}]
316
+ encode_id = tokenizer.apply_chat_template(conv)
317
+ i=i+1
318
+
319
+ if image_token_index in encode_id:
320
+ encode_id = tokenizer_image_token_qwen(encode_id, tokenizer, image_token_index, image_token_num=self.image_token_num)
321
+ input_id += encode_id
322
+ if role in ["user", "system"]:
323
+ target += [IGNORE_INDEX] * len(encode_id)
324
+ else:
325
+ target += encode_id
326
+
327
+ assert len(input_id) == len(target), f"{len(input_id)} != {len(target)}"
328
+ for idx, encode_id in enumerate(input_id):
329
+ if encode_id in unmask_tokens_idx:
330
+ target[idx] = encode_id
331
+ if encode_id == image_token_index:
332
+ input_id[idx] = IMAGE_TOKEN_INDEX
333
+ input_ids.append(input_id)
334
+ targets.append(target)
335
+
336
+ input_ids = torch.tensor(input_ids, dtype=torch.long)
337
+ targets = torch.tensor(targets, dtype=torch.long)
338
+
339
+ return dict(
340
+ input_ids=input_ids,
341
+ labels=targets,
342
+ )
343
+
344
+ def to_llava_format(self, data):
345
+ img_pil = data['img']
346
+ messages = data['messages']
347
+ text_only = data['text_only']
348
+ is_video=False
349
+ if 'is_video' in data:
350
+ is_video=data['is_video']
351
+
352
+ messages.append({'role': 'assistant', 'content': ''})
353
+ conversations = []
354
+ for i,m in enumerate(messages):
355
+ if m['role'] == 'user':
356
+ value = str(m['content']).replace('<image>', '')
357
+ if i == 0 and not text_only:
358
+ value = '<image>\n' + value
359
+
360
+ conversations.append({'from': 'human', 'value': value})
361
+ elif m['role'] == 'assistant':
362
+ conversations.append({'from': 'gpt', 'value': str(m['content']).replace('<image>', '')})
363
+ else:
364
+ raise ValueError(f"Wrong role in conversation. {m['role']}")
365
+
366
+ return {'image': img_pil,
367
+ 'text_only': text_only,
368
+ 'is_video':is_video,
369
+ 'conversations': conversations}
preprocessor_config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "do_convert_rgb": null,
+   "do_normalize": true,
+   "do_rescale": true,
+   "do_resize": true,
+   "image_mean": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "image_processor_type": "SiglipImageProcessor",
+   "image_std": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "processor_class": "SiglipProcessor",
+   "resample": 2,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "height": 512,
+     "width": 512
+   }
+ }
processing_FlashVL.py ADDED
@@ -0,0 +1,19 @@
+ from .mm_constants import IMAGE_TOKEN_INDEX, IMAGE_PAD_TOKEN_INDEX
+
+ def tokenizer_image_token_qwen(prompt, tokenizer, image_token_index, image_token_num=256):
+     prompt_chunks, tmp = [], []
+     for n in prompt:
+         if n == image_token_index:
+             prompt_chunks.append(tmp)
+             tmp = []
+         else:
+             tmp.append(n)
+     if tmp: prompt_chunks.append(tmp)
+
+     input_ids = []
+     for i, chunk in enumerate(prompt_chunks):
+         if i > 0:
+             input_ids.extend([IMAGE_TOKEN_INDEX] + [IMAGE_PAD_TOKEN_INDEX] * (image_token_num - 1))
+         input_ids.extend(chunk)
+
+     return input_ids
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "eos_token": {
+     "content": "<|im_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa
+ size 11421896
tokenizer_config.json ADDED
@@ -0,0 +1,208 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "151643": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151644": {
14
+ "content": "<|im_start|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151645": {
22
+ "content": "<|im_end|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "151646": {
30
+ "content": "<|object_ref_start|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "151647": {
38
+ "content": "<|object_ref_end|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "151648": {
46
+ "content": "<|box_start|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "151649": {
54
+ "content": "<|box_end|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "151650": {
62
+ "content": "<|quad_start|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "151651": {
70
+ "content": "<|quad_end|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "151652": {
78
+ "content": "<|vision_start|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "151653": {
86
+ "content": "<|vision_end|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "151654": {
94
+ "content": "<|vision_pad|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "151655": {
102
+ "content": "<|image_pad|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "151656": {
110
+ "content": "<|video_pad|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "151657": {
118
+ "content": "<tool_call>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": false
124
+ },
125
+ "151658": {
126
+ "content": "</tool_call>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "151659": {
134
+ "content": "<|fim_prefix|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "151660": {
142
+ "content": "<|fim_middle|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "151661": {
150
+ "content": "<|fim_suffix|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "151662": {
158
+ "content": "<|fim_pad|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "151663": {
166
+ "content": "<|repo_name|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "151664": {
174
+ "content": "<|file_sep|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ }
181
+ },
182
+ "additional_special_tokens": [
183
+ "<|im_start|>",
184
+ "<|im_end|>",
185
+ "<|object_ref_start|>",
186
+ "<|object_ref_end|>",
187
+ "<|box_start|>",
188
+ "<|box_end|>",
189
+ "<|quad_start|>",
190
+ "<|quad_end|>",
191
+ "<|vision_start|>",
192
+ "<|vision_end|>",
193
+ "<|vision_pad|>",
194
+ "<|image_pad|>",
195
+ "<|video_pad|>"
196
+ ],
197
+ "bos_token": null,
198
+ "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n {%- else %}\n {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n {%- endif %}\n {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0]['role'] == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n {%- else %}\n {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role }}\n {%- if message.content %}\n {{- '\\n' + message.content }}\n {%- endif %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '\\n<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
199
+ "clean_up_tokenization_spaces": false,
200
+ "eos_token": "<|im_end|>",
201
+ "errors": "replace",
202
+ "extra_special_tokens": {},
203
+ "model_max_length": 131072,
204
+ "pad_token": "<|endoftext|>",
205
+ "split_special_tokens": false,
206
+ "tokenizer_class": "Qwen2Tokenizer",
207
+ "unk_token": null
208
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff