---
license: apache-2.0
datasets: We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs
language:
  - ko
pipeline_tag: text-generation
---

Explanation

  • Applied DPO to a small number of layers of the base model using the open dataset, and saved only the adapter weights (see the sketch after this list)
  • Merged the tuned adapter back into the base model
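
The two steps above correspond to a LoRA-style DPO run followed by an adapter merge. Below is a minimal sketch using TRL and PEFT; the base model ID, output paths, and LoRA/DPO hyperparameters are placeholders (not taken from this card), and exact `DPOTrainer` argument names vary between TRL versions.

```python
# Sketch of the two steps: DPO-tune a small LoRA adapter, then merge it back
# into the base model. Model IDs, paths, and hyperparameters are placeholders.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, PeftModel
from trl import DPOConfig, DPOTrainer

BASE_MODEL_ID = "base-model-id"   # placeholder for the base model named on this card
ADAPTER_DIR = "./dpo-adapter"     # only the adapter weights are saved here
MERGED_DIR = "./merged-model"     # final merged checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID, torch_dtype=torch.bfloat16)

# Open preference dataset used for DPO (expects prompt / chosen / rejected columns).
dataset = load_dataset("We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs", split="train")

# LoRA config: tuning only a few projection layers keeps the adapter small.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

training_args = DPOConfig(
    output_dir=ADAPTER_DIR,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
    beta=0.1,  # DPO temperature
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,       # renamed to processing_class in newer TRL releases
    peft_config=lora_config,
)
trainer.train()
trainer.model.save_pretrained(ADAPTER_DIR)  # step 1: save just the adapter part

# Step 2: merge the tuned adapter back into the base model and save the result.
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID, torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, ADAPTER_DIR).merge_and_unload()
merged.save_pretrained(MERGED_DIR)
tokenizer.save_pretrained(MERGED_DIR)
```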

Base Model

Used Corpus

Score

  • TBU

Log

  • 2024.02.13: Initial version uploaded

License

  • Apache 2.0