Commit 37bb664 by a-F1 (parent: d670be5)

Update README.md

Files changed (1): README.md (+45 -3)

README.md (updated contents):
---
license: mit
---

# ICLM-7B unlearned using SimNPO on MUSE Books

## Model Details

- **Base Model**: ICLM-7B fine-tuned on the Harry Potter books
- **Unlearning**: SimNPO on MUSE Books

## Unlearning Algorithm

This model uses the `SimNPO` unlearning algorithm with the following parameters (a sketch of how they enter the loss is shown below):
- Learning Rate: `1e-5`
- beta: `0.7`
- lambda: `1.0`
- gamma: `0.0`

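For reference, the PyTorch sketch below illustrates how `beta`, `lambda`, and `gamma` typically enter the SimNPO objective described in the cited paper; the function and tensor names are hypothetical and this is not the authors' training code.

```python
# Hypothetical sketch of the SimNPO objective with the hyperparameters above;
# names are illustrative, not the released training code.
import torch
import torch.nn.functional as F

def simnpo_loss(forget_logps: torch.Tensor,   # sum of log pi_theta(y|x) over answer tokens, shape (B,)
                forget_lengths: torch.Tensor, # number of answer tokens per example, shape (B,)
                retain_nll: torch.Tensor,     # mean cross-entropy on a retain batch, scalar
                beta: float = 0.7,
                lam: float = 1.0,
                gamma: float = 0.0) -> torch.Tensor:
    # Length-normalized log-likelihood of the forget responses.
    norm_logps = forget_logps / forget_lengths
    # Forget term: -2/beta * log sigmoid(-beta * normalized_logp - gamma),
    # which pushes the model's likelihood on the forget set down.
    forget_term = (-2.0 / beta) * F.logsigmoid(-beta * norm_logps - gamma).mean()
    # Retain term preserves utility on the retain set, weighted by lambda.
    return forget_term + lam * retain_nll
```
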
## Loading the Model

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("OPTML-Group/SimNPO-MUSE-Books-Llama-2-7b", torch_dtype=torch.bfloat16, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("OPTML-Group/SimNPO-MUSE-Books-Llama-2-7b")
```

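Once loaded, the model can be queried like any other `transformers` causal LM. The snippet below is a minimal, illustrative generation call; the prompt and decoding settings are arbitrary and not from the original model card:

```python
# Quick smoke test; prompt and decoding settings are illustrative only.
inputs = tokenizer("Who is Harry Potter?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
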
## Citation

If you use this model in your research, please cite:

```bibtex
@misc{fan2024simplicityprevailsrethinkingnegative,
      title={Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning},
      author={Chongyu Fan and Jiancheng Liu and Licong Lin and Jinghan Jia and Ruiqi Zhang and Song Mei and Sijia Liu},
      year={2024},
      eprint={2410.07163},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.07163},
}
```

## Contact

For questions or issues regarding this model, please contact [email protected].