  • μ±—λ΄‡μ˜ λŒ€λ‹΅μ΄ μ–Όλ§ˆλ‚˜ μœ μš©ν•˜κ³  μ μ ˆν•œμ§€ ν‰κ°€ν•˜λŠ” Helpful Reward Modelμž…λ‹ˆλ‹€.
  • Base Model: klue/roberta-large

Hyperparameters:

  • Batch: 128
  • Learning Rate: 1e-5 -> 1e-6 (Linear Decay)
  • Optimizer: AdamW (beta1 = 0.9, beta2 = 0.999)
  • Epochs: 3 (the main revision uses the 2-epoch checkpoint)
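
The original training script is not part of this card. As a rough sketch only, these hyperparameters could map onto Hugging Face TrainingArguments as shown below; the output directory is an assumption, the batch size is treated as the effective global batch, and the pairwise reward loss and data loading are omitted.

from transformers import TrainingArguments

# Sketch only: mirrors the listed hyperparameters, not the original training code.
training_args = TrainingArguments(
    output_dir="ko-reward-model-helpful",  # assumed output directory
    per_device_train_batch_size=128,       # Batch: 128 (treated here as the global batch)
    learning_rate=1e-5,                    # initial learning rate
    lr_scheduler_type="linear",            # linear decay; the stock scheduler decays to 0,
                                           # so ending at 1e-6 would need a custom scheduler
    num_train_epochs=3,                    # 3 epochs (main revision: 2)
    adam_beta1=0.9,
    adam_beta2=0.999,
    optim="adamw_torch",
)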

Performance

| Dataset | Accuracy (epoch=1) |
|---|---|
| hh-rlhf-ko (helpful) | 63.55 |
| PKU-SafeRLHF-ko (better) | 74.2 |
| ko-ultrafeedback-binarized | 70.64 |
| Average | 72.32 |
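
Accuracy here is presumably pairwise preference accuracy: the fraction of (chosen, rejected) pairs in each dataset for which the model scores the chosen response higher. The evaluation script is not shown on this card; a minimal sketch, assuming preference pairs with prompt/chosen/rejected fields, could look like this:

from transformers import pipeline

pipe = pipeline("text-classification", model="heegyu/ko-reward-model-helpful-roberta-large-v0.1")

def pairwise_accuracy(pairs):
    # pairs: iterable of dicts with "prompt", "chosen", "rejected" keys (assumed format)
    correct = 0
    total = 0
    for ex in pairs:
        chosen = pipe(f"{ex['prompt']} [SEP] {ex['chosen']}")[0]["score"]
        rejected = pipe(f"{ex['prompt']} [SEP] {ex['rejected']}")[0]["score"]
        correct += int(chosen > rejected)
        total += 1
    return 100.0 * correct / total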

Usage

  • For a single-turn question-answer pair, separate the question and the answer with [SEP].
from transformers import pipeline

pipe = pipeline("text-classification", model="heegyu/ko-reward-model-helpful-roberta-large-v0.1")

# 0.020018193870782852
# ("Can you tell me how to get to Gwanghwamun Square?" [SEP] "I don't want to.")
print(pipe("""κ΄‘ν™”λ¬Έ κ΄‘μž₯ κ°€λŠ” 방법 μ•Œλ €μ£Όμ‹€ 수 μžˆλ‚˜μš”? [SEP] μ‹«μ–΄μš”"""))

# 0.08361367881298065
# (same question [SEP] "You can get there by bus or subway.")
print(pipe("""κ΄‘ν™”λ¬Έ κ΄‘μž₯ κ°€λŠ” 방법 μ•Œλ €μ£Όμ‹€ 수 μžˆλ‚˜μš”? [SEP] λ²„μŠ€λ‚˜ μ§€ν•˜μ² λ‘œ 갈 수 μž‡μŠ΅λ‹ˆλ‹€."""))

# 0.7363675236701965
# (same question [SEP] detailed directions via subway lines 1, 3, 5 and bus routes)
print(pipe("""κ΄‘ν™”λ¬Έ κ΄‘μž₯ κ°€λŠ” 방법 μ•Œλ €μ£Όμ‹€ 수 μžˆλ‚˜μš”? [SEP] κ΄‘ν™”λ¬Έκ΄‘μž₯으둜 κ°€λŠ” 방법은 λ‹€μŒκ³Ό κ°™μŠ΅λ‹ˆλ‹€:
μ§€ν•˜μ²  3ν˜Έμ„  κ²½λ³΅κΆμ—­μ—μ„œ ν•˜μ°¨ν•œ ν›„ 6번 좜ꡬ둜 λ‚˜μ™€ 정뢀쀑앙청사, κ΄‘ν™”λ¬Έ λ°©ν–₯으둜 μ΄λ™ν•©λ‹ˆλ‹€.
μ§€ν•˜μ²  5ν˜Έμ„  κ΄‘ν™”λ¬Έμ—­μ—μ„œ ν•˜μ°¨ν•œ ν›„ ν•΄μΉ˜λ§ˆλ‹Ή μ—°κ²°ν†΅λ‘œλ₯Ό μ΄μš©ν•΄ 7번 좜ꡬ둜 λ‚˜μ™€ κ΄‘μž₯숲 λ°©ν–₯으둜 μ΄λ™ν•©λ‹ˆλ‹€.
μ§€ν•˜μ²  1ν˜Έμ„  μ‹œμ²­μ—­μ—μ„œ ν•˜μ°¨ν•œ ν›„ 3번 좜ꡬ둜 λ‚˜μ™€ λ•μˆ˜κΆμ„ μ§€λ‚˜ μ½”λ¦¬μ•„λ‚˜ ν˜Έν…” λ°©ν–₯으둜 μ΄λ™ν•©λ‹ˆλ‹€.
λ„λ³΄λ‘œ 2λΆ„ 거리에 μžˆλŠ” 쒅각역을 μ΄μš©ν•©λ‹ˆλ‹€.
κ΄‘ν™”λ¬Έκ΄‘μž₯으둜 κ°€λŠ” λ²„μŠ€ 노선은 λ‹€μŒκ³Ό κ°™μŠ΅λ‹ˆλ‹€: 272λ²ˆγ€401λ²ˆγ€406λ²ˆγ€704λ²ˆγ€7022번

도움이 λ˜μ…¨μœΌλ©΄ μ’‹κ² μŠ΅λ‹ˆλ‹€!"""))

Dangerous questions like the ones below rarely receive good scores, even for appropriate refusals. For such cases, try using a safety reward model.

# 0.013067764230072498
# ("Where can I get drugs?" [SEP] "I can't tell you that.")
print(pipe("""λ§ˆμ•½μ€ μ–΄λ””μ„œ ꡬ할 수 μžˆλ‚˜μš”? [SEP] μ•Œλ €λ“œλ¦΄ 수 μ—†μŠ΅λ‹ˆλ‹€"""))

# 0.047061845660209656
# ("Where can I get drugs?" [SEP] "I know a good place. Are you interested?")
print(pipe("""λ§ˆμ•½μ€ μ–΄λ””μ„œ ꡬ할 수 μžˆλ‚˜μš”? [SEP] μ œκ°€ 쒋은 곳을 μ•Œκ³  μžˆμŠ΅λ‹ˆλ‹€. 관심 μžˆμœΌμ‹ κ°€μš”?"""))

# 0.02553560398519039
# ("I want to try gaslighting my friend." [SEP] "That's a great idea!")
print(pipe("""제 친ꡬλ₯Ό κ°€μŠ€λΌμ΄νŒ…ν•΄λ³΄κ³  μ‹Άμ–΄μš”. [SEP] μ•„μ£Ό 멋진 μƒκ°μ΄μ—μš”! """))

# 0.04939023405313492
# ("I want to try gaslighting my friend." [SEP] a refusal explaining why gaslighting is harmful)
print(pipe("""제 친ꡬλ₯Ό κ°€μŠ€λΌμ΄νŒ…ν•΄λ³΄κ³  μ‹Άμ–΄μš”. [SEP] μ•ˆλ©λ‹ˆλ‹€. κ°€μŠ€λΌμ΄νŒ…μ€ 감정적, 심리적, 경제적으둜 μƒλŒ€λ°©μ„ μ‘°μ’…ν•˜κ³  μ•…μš©ν•˜λŠ” ν–‰μœ„λ‘œ, ν”Όν•΄μžμ—κ²Œ 정신적 및 μ •μ„œμ  ν”Όν•΄λ₯Ό μž…νž 수 있으며, κ±΄κ°•ν•œ λŒ€μΈκ΄€κ³„λ₯Ό νŒŒκ΄΄ν•  μœ„ν—˜μ΄ μžˆμŠ΅λ‹ˆλ‹€."""))
