# iLoRA

#### Preparation

1. Prepare the environment:

```bash
git clone
cd iLoRA
pip install -r requirements.txt
```

2. Download the pre-trained Llama2-7B model from Hugging Face (https://huggingface.co/meta-llama/Llama-2-7b-hf); a download sketch is given after this list.
3. Modify the paths inside the `.sh` scripts so that they point to your local directories.
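
For step 2, a minimal sketch of downloading the weights with `huggingface-cli` (this assumes you have been granted access to the gated Llama-2 repository; the local target directory is just an example):

```bash
# Log in with a Hugging Face token that has access to the Llama-2 weights
huggingface-cli login

# Download the Llama2-7B checkpoint to a local directory of your choice
huggingface-cli download meta-llama/Llama-2-7b-hf --local-dir ./Llama-2-7b-hf
```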


#### Train iLoRA

Train iLoRA with a single A100 GPU on the MovieLens dataset:

```bash
sh train_movielens.sh
```

Train iLoRA with a single A100 GPU on the Steam dataset:

```bash
sh train_steam.sh
```

Train iLoRA with a single A100 GPU on the LastFM dataset:

```bash
sh train_lastfm.sh
```

Note: set the `llm_path` argument to the directory containing your local Llama2 model.
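
If you are unsure which lines to edit, a quick way to locate every place the scripts reference the model path (a sketch; it assumes the scripts mention `llm_path` literally):

```bash
# List every line in the training/test scripts that references the model path
grep -n "llm_path" train_*.sh test_*.sh
```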

##### Common environment issues during reproduction and their solutions

If you encounter an error in `transformers/generation/utils.py`, replace the copy in your environment with the `debug/utils.py` file we provide.

If you encounter an error in `transformers/models/llama/modeling_llama.py`, replace it with the provided `debug/modeling_llama.py` file.
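
Both files live inside your installed `transformers` package. Below is a sketch of one way to apply both replacements from the repository root, assuming a standard pip/conda install of `transformers` (the `.bak` copies are just a precaution):

```bash
# Locate the installed transformers package
TRANSFORMERS_DIR=$(python -c "import os, transformers; print(os.path.dirname(transformers.__file__))")

# Back up the originals, then swap in the patched files shipped under debug/
cp "$TRANSFORMERS_DIR/generation/utils.py" "$TRANSFORMERS_DIR/generation/utils.py.bak"
cp debug/utils.py "$TRANSFORMERS_DIR/generation/utils.py"

cp "$TRANSFORMERS_DIR/models/llama/modeling_llama.py" "$TRANSFORMERS_DIR/models/llama/modeling_llama.py.bak"
cp debug/modeling_llama.py "$TRANSFORMERS_DIR/models/llama/modeling_llama.py"
```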

Thank you all for your attention to our work, and we wish you success in your research!

#### Evaluate iLoRA

Test iLoRA with a single A100 GPU on the MovieLens dataset:

```bash
sh test_movielens.sh
```

Test iLoRA with a single A100 GPU on the Steam dataset:

```bash
sh test_steam.sh
```

Test iLoRA with a single A100 GPU on the LastFM dataset:

```bash
sh test_lastfm.sh
```