---
library_name: transformers
language:
- ko
---
# Model Card for Model ID

## Model Details
This model recognizes text that contains mathematical expressions, working line by line. It was fine-tuned on a Korean + LaTeX dataset.
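
The exact training setup is not included in this card. The sketch below outlines one plausible way to fine-tune a TrOCR-style `VisionEncoderDecoderModel` on line-image / transcription pairs with `Seq2SeqTrainer`; the base checkpoint, CSV layout (`image_path`, `text` columns), and hyperparameters are assumptions for illustration, not the actual recipe used for this model.

```python
# Minimal fine-tuning sketch (assumed data: a CSV with `image_path` and `text`
# columns holding line images and their Korean/LaTeX transcriptions).
import pandas as pd
import torch
from PIL import Image
from torch.utils.data import Dataset
from transformers import (
    TrOCRProcessor,
    VisionEncoderDecoderModel,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    default_data_collator,
)

class LineOCRDataset(Dataset):
    def __init__(self, df, processor, max_length=128):
        self.df = df
        self.processor = processor
        self.max_length = max_length

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        image = Image.open(row["image_path"]).convert("RGB")
        pixel_values = self.processor(image, return_tensors="pt").pixel_values.squeeze(0)
        labels = self.processor.tokenizer(
            row["text"], padding="max_length",
            max_length=self.max_length, truncation=True,
        ).input_ids
        # Ignore padding tokens in the loss.
        labels = [t if t != self.processor.tokenizer.pad_token_id else -100 for t in labels]
        return {"pixel_values": pixel_values, "labels": torch.tensor(labels)}

# Assumed starting checkpoint; a Korean-capable encoder/decoder pair may differ.
base_ckpt = "microsoft/trocr-base-stage1"
processor = TrOCRProcessor.from_pretrained(base_ckpt)
model = VisionEncoderDecoderModel.from_pretrained(base_ckpt)

# Wire up the special tokens the encoder-decoder needs for training/generation.
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id
model.config.vocab_size = model.config.decoder.vocab_size

train_dataset = LineOCRDataset(pd.read_csv("train.csv"), processor)  # hypothetical file

training_args = Seq2SeqTrainingArguments(
    output_dir="models/math_ocr",
    per_device_train_batch_size=8,
    num_train_epochs=3,
    fp16=torch.cuda.is_available(),
    logging_steps=100,
    save_strategy="epoch",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=default_data_collator,
)
trainer.train()
```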
## Uses

### Direct Use
```python
from PIL import Image
import glob
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
import torch
import IPython.display as ipd

## Prepare the images
img_path_list = sorted(glob.glob('images/mathematical_expression_2-*.png'))
img_list = [Image.open(img_path).convert("RGB") for img_path in img_path_list]

## Prepare the model and processor
model_path = 'models/math_ocr'
processor = TrOCRProcessor.from_pretrained(model_path)
model = VisionEncoderDecoderModel.from_pretrained(model_path)
model.eval()

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

# Match the processor's input resolution to the encoder's expected image size.
processor.feature_extractor.size = model.config.encoder.image_size

# Decoding settings: beam search with repetition control, stopping at [SEP].
gc = model.generation_config
gc.max_length = 128
gc.early_stopping = True
gc.no_repeat_ngram_size = 3
gc.length_penalty = 2.0
gc.num_beams = 4
gc.eos_token_id = processor.tokenizer.sep_token_id

## Run TrOCR inference
pixel_values = processor(img_list, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values.to(model.device), pad_token_id=processor.tokenizer.eos_token_id)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)

for img, text in zip(img_list, generated_text):
    ipd.display(img)
    print(text)
```
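
Since the recognized text can contain LaTeX, one way to inspect predictions in a notebook is to render them with IPython's `Math` display. This is a minimal sketch that reuses `img_list` and `generated_text` from the example above and assumes each prediction is a LaTeX-compatible string:

```python
from IPython.display import Math, display

# Show each source image next to its prediction rendered as LaTeX.
# Predictions that mix plain Korean text with LaTeX may not render cleanly in math mode.
for img, text in zip(img_list, generated_text):
    display(img)
    display(Math(text))
```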