---
language:
- en
- ko
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- mistral-community/pixtral-12b
pipeline_tag: image-text-to-text
---
# Pixtral-12b-korean-preview
Fine-tuned on Korean and English data to improve Korean performance.
# Model Card
Merged model built with [mergekit](https://github.com/arcee-ai/mergekit/tree/main/mergekit).
This model hasn't been fully tested, so your feedback will be invaluable in improving it.
## Merge Format
```yaml
models:
  - model: spow12/Pixtral-12b-korean-base(private)
    layer_range: [0, 40]
  - model: mistral-community/pixtral-12b
    layer_range: [0, 40]
merge_method: slerp
base_model: mistral-community/pixtral-12b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
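The `t` schedule above controls how much each layer comes from each parent: self-attention tensors lean toward one model exactly where the MLP tensors lean toward the other, with `t: 0.5` for everything else. As a rough illustration (a minimal sketch, not mergekit's actual implementation), spherical linear interpolation between two flattened weight tensors looks like:

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation: t=0 returns a, t=1 returns b,
    and intermediate t follows the great-circle arc between them."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

# A 5-point schedule like [0, 0.5, 0.3, 0.7, 1] is stretched across the
# 40 layers, so early layers stay close to one parent and late layers
# close to the other.
layer_t = np.interp(np.arange(40), np.linspace(0, 39, 5), [0, 0.5, 0.3, 0.7, 1])
```

Each per-layer `t` is then used to slerp that layer's tensors between the two parent checkpoints.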
## Model Details
### Model Description
- **Developed by:** spow12 (yw_nam)
- **Shared by:** spow12 (yw_nam)
- **Model type:** LLaVA
- **Language(s) (NLP):** Korean, English
- **Finetuned from model:** [mistral-community/pixtral-12b](https://huggingface.co/mistral-community/pixtral-12b)
## Usage
### Single image inference

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = 'spow12/Pixtral-12b-korean-preview'
model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    device_map='auto',
    torch_dtype=torch.bfloat16,
).eval()
model.tie_weights()
processor = AutoProcessor.from_pretrained(model_id)

system = "You are a helpful assistant created by Yw nam"
chat = [
    {
        'content': system,
        'role': 'system'
    },
    {
        "role": "user", "content": [
            {"type": "image"},
            # "Describe the environment shown in this image"
            {"type": "text", "content": "이 이미지에 나와있는 환경을 설명해줘"},
        ]
    }
]
url = "https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSXVmCeFm5GRrciuGCM502uv9xXVSrS9zDJZ1umCfoMero2MLxT"
image = Image.open(requests.get(url, stream=True).raw)
images = [[image]]
prompt = processor.apply_chat_template(chat, tokenize=False)
inputs = processor(text=prompt, images=images, return_tensors="pt").to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=500, do_sample=True, min_p=0.1, temperature=0.9)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(output[0])
# Output (Korean, translated to English):
"""This image shows a serene coastal scene: a small island off a rocky shore,
surrounded by blue water, with a white lighthouse with a red roof standing on it.
The lighthouse sits at the center of the island and can be reached by a stone
walkway connected to the rocky cliffs. Waves breaking against the rocks around
the lighthouse add a dynamic element to the scene. Beyond the lighthouse the sky
is clear and blue, giving the whole scene a peaceful, tranquil atmosphere."""
```
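Note that `batch_decode` above returns the prompt together with the completion, because `generate` echoes the input ids. A minimal sketch (with plain lists standing in for the token-id tensors) of slicing off the prompt so that only the answer is decoded:

```python
def strip_prompt(generated_ids, prompt_len):
    """model.generate returns prompt + completion ids for each sequence;
    drop the first prompt_len ids so only newly generated tokens remain."""
    return [seq[prompt_len:] for seq in generated_ids]
```

With the real tensors above, the equivalent slice is `generate_ids[:, inputs["input_ids"].shape[1]:]` before calling `batch_decode`.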
### Multi image inference
<p align="center">
<img src="https://cloud.shopback.com/c_fit,h_750,w_750/store-service-tw/assets/20185/0476e480-b6c3-11ea-b541-2ba549204a69.png" width="300" style="display:inline-block;"/>
<img src="https://pbs.twimg.com/profile_images/1268196215587397634/sgD5ZWuO_400x400.png" width="300" style="display:inline-block;"/>
</p>
```python
url_apple = "https://cloud.shopback.com/c_fit,h_750,w_750/store-service-tw/assets/20185/0476e480-b6c3-11ea-b541-2ba549204a69.png"
image_1 = Image.open(requests.get(url_apple, stream=True).raw)
url_microsoft = "https://pbs.twimg.com/profile_images/1268196215587397634/sgD5ZWuO_400x400.png"
image_2 = Image.open(requests.get(url_microsoft, stream=True).raw)
chat = [
    {
        'content': system,
        'role': 'system'
    },
    {
        "role": "user", "content": [
            {"type": "image"},
            {"type": "image"},
            # "Tell me what you know about these two companies."
            {"type": "text", "content": "두 기업에 대해서 아는걸 설명해줘."},
        ]
    }
]
images = [[image_1, image_2]]
prompt = processor.apply_chat_template(chat, tokenize=False)
inputs = processor(text=prompt, images=images, return_tensors="pt").to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.7, min_p=0.1)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(output[0])
# Output (Korean, translated to English):
"""The two companies are Apple and Microsoft.
1. Apple:
Apple is an American multinational technology company founded in 1976 by Steve
Jobs, Steve Wozniak, and Ronald Wayne. Its main products include the iPhone,
iPad, Mac, and Apple Watch. The company is famous for innovative design,
user-friendly interfaces, and high-quality hardware. Apple also offers software
services and platforms such as Apple Music, iCloud, and the App Store. Known for
its innovative products and strong brand, it has ranked among the world's most
valuable companies since the 2010s.
2. Microsoft:
Microsoft is an American multinational technology company founded in 1975 by
Bill Gates and Paul Allen. The company focuses on developing operating systems,
software, personal computers, and electronics. Its main products include the
Windows operating system, the Microsoft Office suite, and the Xbox game console.
It also plays an important role in fields such as software development, cloud
computing, and artificial-intelligence research. Known for its innovative
technology and powerful business solutions, it has ranked among the world's most
valuable companies."""
```
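The processor matches each `{"type": "image"}` placeholder in the chat to one PIL image in the nested `images` list, in order, which is why two placeholders appear above. A small helper for sanity-checking that the counts line up (`count_image_slots` is a hypothetical utility, not part of the transformers API):

```python
def count_image_slots(chat):
    """Count {"type": "image"} placeholders in a chat; this should equal the
    number of images supplied for that conversation (e.g. len(images[0]))."""
    return sum(
        1
        for msg in chat
        if isinstance(msg.get("content"), list)
        for part in msg["content"]
        if part.get("type") == "image"
    )
```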
## Limitation
Overall, the performance seems reasonable.
However, it declines on images containing non-English text.
This is likely because the model was trained primarily on English text and landscape images.
Adding more Korean data in the future is expected to improve performance.
## Citation
```bibtex
@misc {spow12/Pixtral-12b-korean-preview,
author = { YoungWoo Nam },
title = { spow12/Pixtral-12b-korean-preview },
year = 2024,
url = { https://huggingface.co/spow12/Pixtral-12b-korean-preview },
publisher = { Hugging Face }
}
```