---
license: apache-2.0
datasets:
  - stanfordnlp/SHP
  - Anthropic/hh-rlhf
  - OpenAssistant/oasst1
language:
  - en
metrics:
  - accuracy
tags:
  - human feedback
  - rlhf
  - preferences
  - alignment
  - HALO
  - halos
  - dpo
  - rl
---

# halos

This repo contains the model checkpoints for:

  • model family: LLaMA-7B
  • optimized with the DPO loss
  • aligned using the SHP, Anthropic HH, and Open Assistant datasets
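For readers unfamiliar with the loss named above, a minimal sketch of the DPO objective (Rafailov et al., 2023) is shown below. This is an illustrative implementation, not the code used to train these checkpoints; the function name and argument layout are our own, and the inputs are assumed to be per-response summed log-probabilities under the policy and a frozen reference model.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Illustrative DPO loss on a batch of preference pairs.

    Each tensor holds the summed log-probability of a chosen or
    rejected response under the policy or reference model.
    `beta` scales the implicit KL penalty against the reference.
    """
    # Log-ratios of policy to reference for each response
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Negative log-sigmoid of the scaled margin; minimized when the
    # policy prefers the chosen response more than the reference does
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```

The loss shrinks as the policy assigns relatively more probability mass to the chosen response than to the rejected one, measured against the reference model.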

Please refer to our code repository or blog, which contain instructions for training your own HALOs and links to our model cards.

If you find this repo or the technical paper useful in your research, please feel free to cite our work:

@techreport{ethayarajh2023halos,
  author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
  title = {Human-Centered Loss Functions (HALOs)},
  institution = {Contextual AI},
  note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
  year = {2023},
}