poly-ko-1.3b-translate

  • A model fine-tuned from EleutherAI/polyglot-ko-1.3b on squarelike/sharegpt_deepl_ko_translation so that it performs English-to-Korean translation only (see the inference sketch below)
  • Fine-tuned with the QLoRA technique
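
A minimal inference sketch, assuming transformers and PyTorch are installed; the "### English / ### Korean" prompt template used here is an assumption, since the card does not specify a prompt format:

```python
# Minimal inference sketch for aeolian83/poly-ko-1.3b-translate.
# The prompt template below is an assumption, not taken from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aeolian83/poly-ko-1.3b-translate"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "### English: The weather is nice today.\n### Korean:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```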

Training details

  • Epochs: 1
  • Learning rate: 3e-4
  • Batch size: 3
  • LoRA r: 8
  • LoRA target modules: query_key_value

Trained on a single RTX 3090 GPU. A sketch of the corresponding QLoRA setup is shown below.
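
A minimal sketch of a QLoRA configuration matching the hyperparameters above, using transformers, peft, and bitsandbytes. The 4-bit quantization settings, lora_alpha, and lora_dropout are assumptions not stated in the card, and dataset preprocessing plus the Trainer call are omitted:

```python
# Sketch of a QLoRA fine-tuning setup matching the listed hyperparameters.
# Quantization details, lora_alpha, and lora_dropout are assumptions; the card
# only specifies epochs, learning rate, batch size, LoRA r, and target modules.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "EleutherAI/polyglot-ko-1.3b"

# Load the base model in 4-bit (QLoRA); nf4 + fp16 compute are assumed defaults.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter: r=8 on the attention projection, as listed above.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,                        # assumption: not stated in the card
    target_modules=["query_key_value"],
    lora_dropout=0.05,                    # assumption: not stated in the card
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Training hyperparameters from the list above; other arguments are placeholders.
training_args = TrainingArguments(
    output_dir="./poly-ko-1.3b-translate",
    num_train_epochs=1,
    learning_rate=3e-4,
    per_device_train_batch_size=3,
    fp16=True,
    logging_steps=10,
)
# A Trainer would then be built from training_args, the tokenized dataset, and model.
```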

