MetaAligner committed · Commit a295e5a · Parent: 245f7a5
Update README.md
README.md
CHANGED
@@ -21,7 +21,7 @@ objective preference alignment. Experimental results show that MetaAligner can s
 while maintaining performance on aligned objectives.
 
 # Dataset
-This model is trained based on the following released dataset:
+This model is trained based on the following released dataset: https://huggingface.co/datasets/MetaAligner/HH-RLHF-MetaAligner-Data
 
 # Usage
 With the Hugging Face Transformers library, you can use the MetaAligner-HH-RLHF-7B model in your Python project. Here is a simple example of how to load the model:
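The Usage section of the README promises a loading example but the diff hunk cuts off before it. A minimal sketch of what such an example typically looks like with Transformers, assuming the checkpoint is published under the repo id `MetaAligner/MetaAligner-HH-RLHF-7B`; the prompt text is purely illustrative, not the model's actual alignment template:

```python
# Minimal sketch: load MetaAligner-HH-RLHF-7B with Hugging Face Transformers.
# The repo id and prompt below are illustrative assumptions, not confirmed by the diff.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MetaAligner/MetaAligner-HH-RLHF-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps a 7B model near ~14 GB
    device_map="auto",           # spread layers across available GPUs/CPU
)

# Illustrative prompt only; consult the model card for the real input format.
prompt = "Rewrite the following answer to be more helpful and harmless:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading with `device_map="auto"` requires the `accelerate` package; on a CPU-only machine you can drop that argument and use `torch_dtype=torch.float32` instead.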