lora-midm-7b-nsmc-review-understanding
This model is a LoRA fine-tuned version of KT-AI/midm-bitext-S-7B-inst-v1 on the nsmc (Naver Sentiment Movie Corpus) dataset.
Model description
A model fine-tuned on the nsmc dataset.
Intended uses & limitations
More information needed
Training and evaluation data
The first 2,000 examples of the nsmc train split were used as training data, and the first 1,000 examples of the nsmc test split as evaluation data.
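The split above can be sketched with the Hugging Face `datasets` slice syntax (hedged: the actual preprocessing code is not part of this card); an offline stand-in list illustrates the same "first N examples" selection:

```python
# Hedged sketch -- the card's actual code is not shown. With the
# Hugging Face `datasets` library the split would look like:
#
#   from datasets import load_dataset
#   train_data = load_dataset("nsmc", split="train[:2000]")
#   eval_data = load_dataset("nsmc", split="test[:1000]")
#
# Offline stand-in: nsmc has 150,000 train reviews; taking the
# "first 2,000" is plain head-slicing.
full_train = [{"document": f"review {i}", "label": i % 2} for i in range(150_000)]
train_subset = full_train[:2000]  # first 2,000 examples
```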
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 200
- mixed_precision_training: Native AMP
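Collected as keyword arguments in the style of `transformers.TrainingArguments`, the settings above would look roughly as follows (a hedged sketch; the actual training script is not included in this card):

```python
# Hedged sketch: hyperparameters from the card, expressed as
# TrainingArguments-style keyword arguments (names follow the
# transformers API; the exact training script is not shown here).
training_kwargs = dict(
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2,
    optim="adamw_torch",       # Adam with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    max_steps=200,
    fp16=True,                 # mixed precision (native AMP)
)

# The "total_train_batch_size: 2" in the card is derived, not set:
# per-device batch size times gradient accumulation steps.
total_train_batch_size = (
    training_kwargs["per_device_train_batch_size"]
    * training_kwargs["gradient_accumulation_steps"]
)
```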
Training results
Training ran for 200 steps in total. The loss, checked every 50 steps, was as follows:
- Step 50: training loss 1.6881
- Step 100: training loss 1.1443
- Step 150: training loss 1.0563
- Step 200: training loss 1.0446
Experiment setup and classification results
The fine-tuned model was given the 1,000 nsmc test examples as input and asked to generate the word 긍정 (positive) or 부정 (negative).
The generated outputs were '긍정' (positive) 444 times, '부정' (negative) 532 times, ' , ' 4 times, and '정' 20 times.
Accuracy was computed as (number of correct predictions) / 1000 × 100, giving 87.80%.
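The tally and the accuracy formula above can be sketched as follows (the `generated` list is a stand-in reconstructed from the reported counts, not the actual model outputs):

```python
from collections import Counter

# Stand-in for the 1,000 single-word outputs, reconstructed from the
# counts reported above (not the actual generations).
generated = ["긍정"] * 444 + ["부정"] * 532 + [","] * 4 + ["정"] * 20
counts = Counter(generated)

# Accuracy as defined in the card: correct predictions / 1000 * 100.
# 878 correct corresponds to the reported 87.80%.
n_correct = 878
accuracy = n_correct / 1000 * 100
```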
Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
Model tree for yaeeun/lora-midm-7b-nsmc-review-understanding
Base model
KT-AI/midm-bitext-S-7B-inst-v1
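A hedged usage sketch for loading this LoRA adapter on top of the base model with PEFT. The prompt template and the `RUN_MIDM_DEMO` guard are hypothetical (the card does not show the template used during fine-tuning); only the model IDs come from the card:

```python
import os

BASE_MODEL = "KT-AI/midm-bitext-S-7B-inst-v1"
ADAPTER_ID = "yaeeun/lora-midm-7b-nsmc-review-understanding"

def build_prompt(review: str) -> str:
    # Hypothetical prompt shape -- the card does not show the exact
    # template used during fine-tuning.
    return f"다음 영화 리뷰가 긍정인지 부정인지 답하세요.\n리뷰: {review}\n답:"

# Guarded: loading ~7B weights downloads several GB, so it only runs
# when explicitly requested via an environment variable.
if os.environ.get("RUN_MIDM_DEMO"):
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)
    base = AutoModelForCausalLM.from_pretrained(
        BASE_MODEL, torch_dtype=torch.float16, trust_remote_code=True
    )
    model = PeftModel.from_pretrained(base, ADAPTER_ID)

    inputs = tokenizer(build_prompt("정말 재미있게 봤어요"), return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=4)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```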