Seongyun committed
Commit 2b48cd8
Parent: 844f2b6

Update README.md

Files changed (1): README.md (+1 / -1)
README.md CHANGED

@@ -38,7 +38,7 @@ Janus-DPO-7B is a model created by applying DPO to Janus using the [Multifaceted
  - **Language(s) (NLP):** English
  - **License:** Apache 2.0
  - **Related Models:** [Janus-7B](https://huggingface.co/kaist-ai/janus-7b), [Janus-ORPO-7B](https://huggingface.co/kaist-ai/janus-orpo-7b), [Janus-RM-7B](https://huggingface.co/kaist-ai/janus-rm-7b)
- - **Training Datasets**: [Multifaceted-Collection-SFT](https://huggingface.co/datasets/kaist-ai/Multifaceted-Collection-SFT)
+ - **Training Datasets**: [Multifaceted-Collection-DPO](https://huggingface.co/datasets/kaist-ai/Multifaceted-Collection-DPO)
  - **Resources for more information:**
    - [Research paper](https://arxiv.org/abs/2405.17977)
    - [GitHub Repo](https://github.com/kaistAI/Janus)