---
license: mit
---
# LLaMA-2 7B unlearned using SimNPO on MUSE News
## Model Details
- Base Model: LLaMA-2 7B fine-tuned on BBC News articles
- Unlearning: SimNPO on MUSE News
## Unlearning Algorithm
This model was unlearned with the SimNPO algorithm using the following hyperparameters (an illustrative sketch of the loss follows the list):
- Learning Rate: 1e-5
- beta: 0.75
- lambda: 1.0
- gamma: 3.0
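SimNPO's forget objective is a length-normalized, reference-free variant of NPO: it pushes down the model's likelihood on the forget set through a log-sigmoid term controlled by beta and a margin gamma, and adds a lambda-weighted cross-entropy term on the retain set. The snippet below is a minimal PyTorch sketch of that objective as we read it from the paper; the function and variable names are illustrative and are not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def simnpo_forget_loss(token_logps, answer_mask, beta=0.75, gamma=3.0):
    # token_logps: (batch, seq) per-token log-probs of the forget-set targets
    # answer_mask: (batch, seq) 1.0 on answer tokens, 0.0 on prompt/padding
    # Length-normalized sequence log-likelihood: (1/|y|) * log pi_theta(y|x)
    seq_logp = (token_logps * answer_mask).sum(-1) / answer_mask.sum(-1)
    # SimNPO forget term: -(2/beta) * log sigmoid(-(beta/|y|) * log pi_theta(y|x) - gamma)
    return -(2.0 / beta) * F.logsigmoid(-beta * seq_logp - gamma).mean()

def simnpo_loss(token_logps, answer_mask, retain_nll, beta=0.75, gamma=3.0, lam=1.0):
    # Full objective: forget term plus lambda-weighted cross-entropy on the retain set
    return simnpo_forget_loss(token_logps, answer_mask, beta, gamma) + lam * retain_nll
```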
## Loading the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the unlearned checkpoint in bfloat16; device_map="auto" requires the `accelerate` package.
model = AutoModelForCausalLM.from_pretrained("OPTML-Group/SimNPO-MUSE-News-llama-2-7b", torch_dtype=torch.bfloat16, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("OPTML-Group/SimNPO-MUSE-News-llama-2-7b")
```
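A minimal generation example, assuming the loading code above has run; the prompt is purely illustrative:

```python
prompt = "The BBC reported that"  # illustrative prompt, not from the model card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```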
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{fan2024simplicityprevailsrethinkingnegative,
      title={Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning},
      author={Chongyu Fan and Jiancheng Liu and Licong Lin and Jinghan Jia and Ruiqi Zhang and Song Mei and Sijia Liu},
      year={2024},
      eprint={2410.07163},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.07163},
}
```
## Contact
For questions or issues regarding this model, please contact [email protected].