Update README.md
---
library_name: transformers
datasets:
- ucsahin/TR-VLM-DPO-Dataset
language:
- tr
pipeline_tag: image-text-to-text
license: apache-2.0
base_model: ucsahin/TraVisionLM-base
---

<!-- # TraVisionLM - Fast and Native Turkish Visual Language Model -->
<div style="text-align: center;">
<img src="logo-white-dpo.png" alt="logo" style="width: 90%; height: auto;">
</div>

<!-- Provide a quick summary of what the model is/does. -->

## This is the DPO-optimized version of the base model [TraVisionLM-base](https://huggingface.co/ucsahin/TraVisionLM-base)

Compared to the base model, the DPO version should answer questions more accurately, more truthfully, and in greater detail.

### You can check out the model at: [TraVisionLM-DPO-Demo](https://huggingface.co/spaces/ucsahin/TraVisionLM-Demo)

### Visual Language Model DPO Training: [Colab Notebook](https://colab.research.google.com/drive/1ypEPQ3RBX3_X7m9qfmU-Op-vGgOjab_z?usp=sharing)
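
If you want a quick look at the preference data used for DPO training, the dataset listed in the card metadata can be loaded with the `datasets` library. This is a minimal sketch; the exact split and column names are assumptions based on common DPO dataset layouts, so print the schema before relying on them:

```python
from datasets import load_dataset

# Load the preference dataset listed in the model card metadata.
# The "train" split is an assumption; check the dataset page for the actual splits.
dataset = load_dataset("ucsahin/TR-VLM-DPO-Dataset", split="train")

# Print the actual schema; columns such as "prompt", "chosen", and
# "rejected" are typical for DPO data but are assumptions here.
print(dataset.column_names)
print(dataset[0])
```
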
### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [ucsahin](https://huggingface.co/ucsahin)
- **Model type:** [Image-Text-to-Text](https://huggingface.co/tasks/image-text-to-text)
- **Language(s) (NLP):** *Turkish*
- **License:** *Apache License 2.0*

---

## English

# 🎉 Introducing TraVisionLM: The First of Its Kind! 🚀

🌟 This is a very fast and small (only 875M parameters) visual language model on Hugging Face that responds to Turkish instructions given an image input! 🌟

✨ Developed to be compatible with the Transformers library, TraVisionLM is a breeze to load, fine-tune, and use for lightning-fast inference, all without needing any external libraries! ⚡️

Ready to experience the Turkish visual language model? Let's go! 🇹🇷🖼️🤖

## Turkish

# 🎉 TraVisionLM: The First of Its Kind! 🚀

🌟 A very fast and small (only 875M parameters) Turkish visual language model! Given an image and a Turkish instruction, it generates a response in Turkish! 🌟

✨ With TraVisionLM, developed to be compatible with the Transformers library, loading the model, training it, and getting fast results without any external libraries is very easy! ⚡️

Ready to experience the Turkish visual language model? Let's get started! 🇹🇷🖼️🤖

---

## How to Get Started with the Model

In Transformers, you can load the model and run inference as follows:

**IMPORTANT NOTE:** The TraVisionLM model is not yet natively integrated into the Transformers library, so you need to set `trust_remote_code=True` when loading it. This will download the `configuration_travisionlm.py`, `modeling_travisionlm.py`, and `processing_travisionlm.py` files from the repo. You can check the content of these files under the *Files and Versions* tab and pin specific versions if you have any concerns regarding malicious code.
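
If you want to pin the remote code, `from_pretrained` accepts the standard `revision` argument. Here is a minimal sketch; the commit hash below is a placeholder:

```python
from transformers import AutoModelForCausalLM

# Pin the repository, including its remote-code files, to a specific commit.
# "your-commit-hash" is a placeholder: copy a real commit hash from the
# repo's commit history on the Hub.
model = AutoModelForCausalLM.from_pretrained(
    "ucsahin/TraVisionLM-DPO",
    trust_remote_code=True,
    revision="your-commit-hash",
    device_map="cuda",
)
```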

```python
from transformers import AutoModelForCausalLM, AutoProcessor
import torch
import requests
from PIL import Image

# load the model with remote code enabled (see the note above)
model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, device_map="cuda")
# you can also load the model in bfloat16 or float16
# model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="cuda")
processor = AutoProcessor.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

prompt = "Açıkla"  # "Describe": short caption
# prompt = "Detaylı açıkla"  # "Describe in detail": detailed caption
# prompt = "Araba ne renktir?"  # "What color is the car?": visual qa
# prompt = "Resmin odak noktası nedir?"  # "What is the focal point of the image?": visual qa
# prompt = "Araba nerede duruyor?"  # "Where is the car parked?": visual qa

inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6, top_p=0.9, top_k=50, repetition_penalty=1.2)

output_text = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print("Model response: ", output_text)
```
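
The generation settings above favor varied, natural-sounding answers; for more deterministic outputs you can drop the sampling arguments and pass `do_sample=False` to `model.generate`.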

You can also run batch inference as follows (make sure that every image has an associated prompt):

```python
from transformers import AutoModelForCausalLM, AutoProcessor
import torch
import requests
from PIL import Image

model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, device_map="cuda")
# you can also load the model in bfloat16 or float16
# model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="cuda")
processor = AutoProcessor.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

prompt_list = [
    'Açıkla',                 # "Describe"
    'Detaylı açıkla',         # "Describe in detail"
    'Araba nerede duruyor?',  # "Where is the car parked?"
    'Arabanın rengi nedir?',  # "What color is the car?"
]

# pad every prompt in the batch to the length of the longest one
inputs = processor(text=prompt_list, images=len(prompt_list)*[image], padding="longest", return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6, top_p=0.9, top_k=50, repetition_penalty=1.2)

output_text_list = processor.batch_decode(outputs, skip_special_tokens=True)

for output_text in output_text_list:
    print(f"Model response: {output_text}\n\n\n")
```
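
Note that, as the sample output below shows, each decoded string contains the prompt followed by the model's answer, so strip the prompt prefix if you only want the response text.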

The output will look like this:

```
"""
Model response: Açıkla
Bir binanın önünde, sokakta park halindeki mavi bir Volkswagen Beetle.


Model response: Detaylı açıkla
Bu görüntüde, bir taş döşeli sokakta park edilmiş yeşil ve mavi bir Volkswagen Beetle bulunmaktadır. Arka planda iki sarı bina vardır. Araba kameraya doğru bakmaktadır. Görüntü net odaklanmıştır ve renkler canlıdır. Görsel tarzı gerçekçidir.


Model response: Araba nerede duruyor?
Araba, sarı bir binanın yanında sokakta park edilmiş.


Model response: Arabanın rengi nedir?
Araba turkuaz veya limon yeşili renktedir.
"""
```
145 |
|
146 |
+
---
|