ben81828 committed on
Commit 01f03fd · verified · 1 Parent(s): 15a06bf

Model save

README.md ADDED
@@ -0,0 +1,80 @@
+ ---
+ library_name: peft
+ license: apache-2.0
+ base_model: AdaptLLM/biomed-Qwen2-VL-2B-Instruct
+ tags:
+ - llama-factory
+ - generated_from_trainer
+ model-index:
+ - name: qwenvl-2B-cadica-stenosis-classify-lora
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # qwenvl-2B-cadica-stenosis-classify-lora
+
+ This model is a fine-tuned version of [AdaptLLM/biomed-Qwen2-VL-2B-Instruct](https://huggingface.co/AdaptLLM/biomed-Qwen2-VL-2B-Instruct) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.7947
+ - Num Input Tokens Seen: 10902632
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 32
+ - total_eval_batch_size: 4
+ - optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 2.0
+
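The `total_train_batch_size` of 32 above is not set directly; it follows from the per-device settings. A quick sanity check of that arithmetic:

```python
# Effective batch size = per-device batch * number of devices * accumulation steps.
# Values taken from the hyperparameter list above.
train_batch_size = 1             # per-device train batch size
num_devices = 4                  # multi-GPU (4 devices)
gradient_accumulation_steps = 8

total_train_batch_size = (
    train_batch_size * num_devices * gradient_accumulation_steps
)
print(total_train_batch_size)  # → 32
```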
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
+ |:-------------:|:------:|:----:|:---------------:|:-----------------:|
+ | 0.9039 | 0.1396 | 50 | 0.9039 | 779728 |
+ | 0.9033 | 0.2792 | 100 | 0.9009 | 1559632 |
+ | 0.9001 | 0.4188 | 150 | 0.8988 | 2339368 |
+ | 0.902 | 0.5585 | 200 | 0.9004 | 3119064 |
+ | 0.8933 | 0.6981 | 250 | 0.9052 | 3898784 |
+ | 0.897 | 0.8377 | 300 | 0.9004 | 4678472 |
+ | 0.8997 | 0.9773 | 350 | 0.9016 | 5458104 |
+ | 0.9109 | 1.1145 | 400 | 0.8960 | 6224248 |
+ | 0.8127 | 1.2541 | 450 | 0.8822 | 7003904 |
+ | 0.8198 | 1.3937 | 500 | 0.8460 | 7783528 |
+ | 0.832 | 1.5333 | 550 | 0.8188 | 8563264 |
+ | 0.786 | 1.6729 | 600 | 0.8021 | 9343120 |
+ | 0.8312 | 1.8126 | 650 | 0.7986 | 10122936 |
+ | 0.7797 | 1.9522 | 700 | 0.7947 | 10902632 |
+
+
+ ### Framework versions
+
+ - PEFT 0.12.0
+ - Transformers 4.47.0.dev0
+ - Pytorch 2.5.1+cu121
+ - Datasets 3.1.0
+ - Tokenizers 0.20.3
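As a hedged sketch (not part of the generated card), the adapter saved in this commit could be attached to the base model with PEFT roughly as follows; the adapter repo id is an assumption inferred from the committer and model name, and the heavy imports are deferred so nothing downloads until the function is called:

```python
BASE_ID = "AdaptLLM/biomed-Qwen2-VL-2B-Instruct"
# Assumed adapter repo id (committer + model name); verify before use.
ADAPTER_ID = "ben81828/qwenvl-2B-cadica-stenosis-classify-lora"

def load_model():
    """Load the base Qwen2-VL model and attach the LoRA adapter.

    Imports are deferred so this module can be imported without
    transformers/peft installed; calling this downloads weights.
    """
    from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
    from peft import PeftModel

    base = Qwen2VLForConditionalGeneration.from_pretrained(BASE_ID)
    model = PeftModel.from_pretrained(base, ADAPTER_ID)
    processor = AutoProcessor.from_pretrained(BASE_ID)
    return model, processor
```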
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5839c56a7ebbb855418005599ed317701c3dad1fdcfa37e3e1edb39544ead19a
+ oid sha256:ac97a50257be4997c608aae1f33f065181cc26841d4bcd8cdc645f409aa6467d
  size 29034840
chat_template.json ADDED
@@ -0,0 +1,3 @@
+ {
+ "chat_template": "{% set image_count = namespace(value=0) %}{% set video_count = namespace(value=0) %}{% for message in messages %}{% if loop.first and message['role'] != 'system' %}<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif %}<|im_start|>{{ message['role'] }}\n{% if message['content'] is string %}{{ message['content'] }}<|im_end|>\n{% else %}{% for content in message['content'] %}{% if content['type'] == 'image' or 'image' in content or 'image_url' in content %}{% set image_count.value = image_count.value + 1 %}{% if add_vision_id %}Picture {{ image_count.value }}: {% endif %}<|vision_start|><|image_pad|><|vision_end|>{% elif content['type'] == 'video' or 'video' in content %}{% set video_count.value = video_count.value + 1 %}{% if add_vision_id %}Video {{ video_count.value }}: {% endif %}<|vision_start|><|video_pad|><|vision_end|>{% elif 'text' in content %}{{ content['text'] }}{% endif %}{% endfor %}<|im_end|>\n{% endif %}{% endfor %}{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
+ }
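To illustrate what this Jinja template produces, here is a minimal sketch that renders only its text-only path (a simplified excerpt written for this example, not the full template above; the image/video branches are omitted, and `jinja2` is assumed to be installed):

```python
from jinja2 import Template

# Simplified text-only excerpt of the chat template above.
TEMPLATE = (
    "{% for message in messages %}"
    "{% if loop.first and message['role'] != 'system' %}"
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "{% endif %}"
    "<|im_start|>{{ message['role'] }}\n{{ message['content'] }}<|im_end|>\n"
    "{% endfor %}"
    "{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
)

rendered = Template(TEMPLATE).render(
    messages=[{"role": "user", "content": "Describe the stenosis in this frame."}],
    add_generation_prompt=True,
)
print(rendered)
```

With no system message supplied, the template injects a default system turn, then opens an assistant turn for generation.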
preprocessor_config.json ADDED
@@ -0,0 +1,29 @@
+ {
+ "do_convert_rgb": true,
+ "do_normalize": true,
+ "do_rescale": true,
+ "do_resize": true,
+ "image_mean": [
+ 0.48145466,
+ 0.4578275,
+ 0.40821073
+ ],
+ "image_processor_type": "Qwen2VLImageProcessor",
+ "image_std": [
+ 0.26862954,
+ 0.26130258,
+ 0.27577711
+ ],
+ "max_pixels": 12845056,
+ "merge_size": 2,
+ "min_pixels": 3136,
+ "patch_size": 14,
+ "processor_class": "Qwen2VLProcessor",
+ "resample": 3,
+ "rescale_factor": 0.00392156862745098,
+ "size": {
+ "max_pixels": 12845056,
+ "min_pixels": 3136
+ },
+ "temporal_patch_size": 2
+ }
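A hedged sketch of the per-pixel arithmetic this config implies: `do_rescale` multiplies raw [0, 255] values by `rescale_factor` (1/255), then `do_normalize` applies `(x - mean) / std` per RGB channel. This is an illustration of the math only, not the actual image processor, which also resizes images so their pixel count stays within `min_pixels`/`max_pixels` on a 14-pixel patch grid:

```python
# Values copied from the config above.
IMAGE_MEAN = [0.48145466, 0.4578275, 0.40821073]
IMAGE_STD = [0.26862954, 0.26130258, 0.27577711]
RESCALE_FACTOR = 0.00392156862745098  # == 1 / 255

def preprocess_pixel(rgb):
    """Rescale then normalize one RGB pixel, channel by channel."""
    return [
        (v * RESCALE_FACTOR - m) / s
        for v, m, s in zip(rgb, IMAGE_MEAN, IMAGE_STD)
    ]

print(preprocess_pixel([255, 255, 255]))  # white: all channels positive
print(preprocess_pixel([0, 0, 0]))        # black: all channels negative
```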