---
library_name: transformers
datasets:
- ucsahin/TR-VLM-DPO-Dataset
language:
- tr
pipeline_tag: image-text-to-text
license: apache-2.0
base_model: ucsahin/TraVisionLM-base
---

<!-- # TraVisionLM - Fast and Native Turkish Visual Language Model -->
<div style="text-align: center;">
    <img src="logo-white-dpo.png" alt="logo" style="width: 100%; height: auto;">
</div>

## 🎯 This is the DPO-optimized version of the base model [TraVisionLM-base](https://huggingface.co/ucsahin/TraVisionLM-base).
Compared to the base model, the DPO version answers questions more **accurately**, more **truthfully**, and in **greater detail**.

#### 🤖 **What is Direct Preference Optimization (DPO)?**
Direct Preference Optimization is a technique for aligning a model's behavior with human preferences. The model is shown pairs of candidate answers to a question, one preferred by humans and one rejected, and is trained to favor the preferred response. This makes its answers more reliable and truthful, since the model learns not only from raw data but also from human feedback. DPO helps to **minimize hallucinations** and improves the **quality** and **accuracy** of the model's answers.
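
For reference, this is the standard DPO objective (the general formulation, not a claim about this model's exact training recipe): given a prompt x with a human-preferred response y_w and a rejected response y_l, the policy π_θ is trained against a frozen reference model π_ref (typically the model you start from, here presumably [TraVisionLM-base](https://huggingface.co/ucsahin/TraVisionLM-base)):

$$
\mathcal{L}_{\text{DPO}}(\pi_\theta; \pi_{\text{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)} \right) \right]
$$

Here σ is the sigmoid function and β controls how far the optimized model is allowed to drift from the reference.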

### 🚀 **Model demo:** [TRaVisionLM-DPO-Demo](https://huggingface.co/spaces/ucsahin/TraVisionLM-Demo)

### 📚 **Visual Language Model DPO Training Notebook:** [Colab Notebook](https://colab.research.google.com/drive/1ypEPQ3RBX3_X7m9qfmU-Op-vGgOjab_z?usp=sharing)


### Model Description

- **Developed by:** [ucsahin](https://huggingface.co/ucsahin)
- **Model type:** [Image-Text-to-Text](https://huggingface.co/tasks/image-text-to-text)
- **Language(s) (NLP):** *Turkish*
- **License:** *Apache license 2.0*
---

## English
# 🎉 Introducing TraVisionLM: The First of Its Kind! 🚀

🌟 This is a very fast and small (only 875M parameters) visual language model on Hugging Face that responds to Turkish instructions given an image input! 🌟

✨ Built to be compatible with the Transformers library, TraVisionLM is a breeze to load, fine-tune, and use for lightning-fast inference, all without needing any external libraries! ⚡️

Ready to experience the Turkish visual language model? Let's go! 🇹🇷🖼️🤖


## Türkçe
# 🎉 TraVisionLM: Türünün İlk Örneği! 🚀

🌟 Çok hızlı ve küçük boyutlu (sadece 875M parametre) Türkçe görsel dil modeli! Bir görüntü ve Türkçe talimat verildiğinde Türkçe yanıt üretir! 🌟

✨ Transformers kütüphanesi ile uyumlu olarak geliştirilen TraVisionLM modeli ile, yükleme, eğitme ve dış kütüphaneler kullanmadan hızlı sonuçlar almak çok kolay! ⚡️

Türkçe görsel dil modelini deneyimlemeye hazır mısınız? Hadi başlayalım! 🇹🇷🖼️🤖

---

## How to Get Started with the Model

In Transformers, you can load the model and run inference as follows:

**IMPORTANT NOTE:** The TraVisionLM model is not yet natively integrated into the Transformers library, so you need to set `trust_remote_code=True` when loading it. This will download the `configuration_travisionlm.py`, `modeling_travisionlm.py`, and `processing_travisionlm.py` files from the repo. You can inspect the content of these files under the *Files and versions* tab and pin specific versions if you have any concerns about malicious code.
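
For example, to pin a reviewed commit you can pass the `revision` argument (the hash below is a placeholder; copy the actual commit hash from the *Files and versions* tab):

```python
from transformers import AutoModelForCausalLM, AutoProcessor

# Pinning a specific commit ensures later pushes to the repo cannot change
# the remote code you run. "<commit-hash>" is a placeholder value.
model = AutoModelForCausalLM.from_pretrained(
    "ucsahin/TraVisionLM-DPO",
    trust_remote_code=True,
    revision="<commit-hash>",
    device_map="cuda",
)
processor = AutoProcessor.from_pretrained(
    "ucsahin/TraVisionLM-DPO",
    trust_remote_code=True,
    revision="<commit-hash>",
)
```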

```python
from transformers import AutoModelForCausalLM, AutoProcessor
import torch
import requests 
from PIL import Image

model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, device_map="cuda")
# you can also load the model in bfloat16 or float16
# model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="cuda")
processor = AutoProcessor.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

prompt = "Açıkla"  # short caption
# prompt = "Detaylı açıkla"  # detailed caption
# prompt = "Araba ne renktir?" # visual qa
# prompt = "Resmin odak noktası nedir?" # visual qa
# prompt = "Araba nerede duruyor?" # visual qa

inputs = processor(text=prompt, images=image, return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6, top_p=0.9, top_k=50, repetition_penalty=1.2)

output_text = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print("Model response: ", output_text)
```
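
Note that the decoded text contains the prompt followed by the model's answer. If you only want the generated part, you can slice off the input tokens before decoding. A minimal sketch, assuming `generate` returns the prompt tokens followed by the newly generated tokens (the standard behavior for decoder-only generation in Transformers):

```python
# Continuing from the example above: keep only the tokens generated after the prompt.
prompt_length = inputs["input_ids"].shape[1]
answer_ids = outputs[:, prompt_length:]
answer_text = processor.batch_decode(answer_ids, skip_special_tokens=True)[0]
print("Model answer: ", answer_text)
```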

You can also perform batch inference as follows (make sure that every image has an associated prompt):

```python
from transformers import AutoModelForCausalLM, AutoProcessor
import torch
import requests 
from PIL import Image

model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, device_map="cuda")
# you can also load the model in bfloat16 or float16
# model = AutoModelForCausalLM.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="cuda")
processor = AutoProcessor.from_pretrained('ucsahin/TraVisionLM-DPO', trust_remote_code=True)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

prompt_list = [
  'Açıkla',
  'Detaylı açıkla',
  'Araba nerede duruyor?',
  'Arabanın rengi nedir?',
]

inputs = processor(text=prompt_list, images=len(prompt_list)*[image], padding="longest", return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6, top_p=0.9, top_k=50, repetition_penalty=1.2)

output_text_list = processor.batch_decode(outputs, skip_special_tokens=True)

for output_text in output_text_list:
  print(f"Model response: {output_text}\n\n\n")
```

The output will look like this:
```
"""
Model response: Açıkla
Bir binanın önünde, sokakta park halindeki mavi bir Volkswagen Beetle.



Model response: Detaylı açıkla
Bu görüntüde, bir taş döşeli sokakta park edilmiş yeşil ve mavi bir Volkswagen Beetle bulunmaktadır. Arka planda iki sarı bina vardır. Araba kameraya doğru bakmaktadır. Görüntü net odaklanmıştır ve renkler canlıdır. Görsel tarzı gerçekçidir.



Model response: Araba nerede duruyor?
Araba, sarı bir binanın yanında sokakta park edilmiş.



Model response: Arabanın rengi nedir?
Araba turkuaz veya limon yeşili renktedir.
"""
```

---