---
license: mit
---
# ICLM-7B unlearned using SimNPO on MUSE Books
## Model Details
- **Base Model**: ICLM-7B fine-tuned on the Harry Potter books
- **Unlearning**: SimNPO on MUSE Books
## Unlearning Algorithm
This model uses the `SimNPO` unlearning algorithm with the following parameters (a sketch of how they enter the loss is shown after the list):
- Learning Rate: `1e-5`
- beta: `0.7`
- lambda: `1.0`
- gamma: `0.0`
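
For intuition, below is a minimal sketch of how these hyperparameters enter the SimNPO objective from the paper cited at the end of this card. This is not the authors' training code; the function and tensor names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def simnpo_loss(forget_logps, forget_lengths, retain_nll,
                beta=0.7, lmbda=1.0, gamma=0.0):
    """Illustrative SimNPO objective.

    forget_logps:   summed log-probabilities log pi_theta(y|x) of forget-set responses, shape [batch]
    forget_lengths: response token counts |y| used for length normalization, shape [batch]
    retain_nll:     mean negative log-likelihood on the retain set (utility term)
    """
    # Length-normalized log-probability of each forget-set response.
    norm_logps = forget_logps / forget_lengths
    # Reference-free negative-preference term that pushes forget responses down,
    # offset by the reward margin gamma and scaled by beta.
    forget_term = -(2.0 / beta) * F.logsigmoid(-beta * norm_logps - gamma)
    # Total loss: forget term plus lambda-weighted retain loss.
    return forget_term.mean() + lmbda * retain_nll
```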
## Loading the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("OPTML-Group/SimNPO-MUSE-Books-Llama-2-7b", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("OPTML-Group/SimNPO-MUSE-Books-Llama-2-7b")
```
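
A quick generation check (the prompt and decoding settings below are illustrative):

```python
prompt = "Who is Harry Potter?"  # illustrative prompt probing the forget set
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```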
## Citation
If you use this model in your research, please cite:
```
@misc{fan2024simplicityprevailsrethinkingnegative,
      title={Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning},
      author={Chongyu Fan and Jiancheng Liu and Licong Lin and Jinghan Jia and Ruiqi Zhang and Song Mei and Sijia Liu},
      year={2024},
      eprint={2410.07163},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.07163},
}
```
## Contact
For questions or issues regarding this model, please contact [email protected].