Steelskull committed on
Commit 9467ed2
1 Parent(s): 56d91e6

Update README.md

Files changed (1)
  1. README.md +83 -39
README.md CHANGED
@@ -9,60 +9,104 @@ license: apache-2.0

  # phi-2-DLEC

- The DLEC (Distributive Layer Expansion Curve) methodology offers a novel approach to improving neural network models by focusing on the strategic duplication of certain effective layers.
- Developed with the aim of enhancing model performance, DLEC carefully identifies and amplifies the impact of key layers within the model's architecture.

- Below is an overview of the method and its implementation, particularly in how it integrates with the Hugging Face Transformers library and utilizes PyTorch and BitsAndBytes for efficient operation.

- Overview
- Setting Up: First, the script ensures all necessary components are in place, from libraries to the model and dataset.

- Database for Activations: A SQLite database is established to track layer activations, providing a clear view into how individual neurons react and which layers are most influential — these are our 'beneficial layers.'

- Analyzing and Identifying: By analyzing activation data, the script pinpoints which layers are most valuable to the model's performance.

- Configuring DLEC: A configuration is then created, guiding how the model should incorporate duplicates of these beneficial layers to boost effectiveness without unnecessarily increasing complexity.
 

- Reconfiguring and Running the Model: Finally, the model is adjusted according to DLEC's insights, focusing enhancement on the identified layers.

- Key Features:
- Selective Layer Duplication: DLEC doesn't just add more layers; it doubles down on the ones that really matter. This methodical selection ensures we're making the most of the model's capabilities without wasteful expansion.

- Smart Resource Management: By homing in on specific areas for improvement, DLEC aims to make better use of computational and memory resources, promoting more efficient learning without adding undue complexity to the model.

  This approach is about making informed, strategic enhancements to model architecture, prioritizing efficiency and effectiveness in utilizing neural network capabilities.

- ```yaml
- Possible_Beneficial_layers: # layers of significance
-   - 0
-   - 3
-   - 4
-   - 7
-   - 8
-   - 11
-   - 12
-   - 15
-   - 16
-   - 19
-   - 20
-   - 23
-   - 24
-   - 27
-   - 28
-   - 31
-   - 32
  ```
- Currently, I am still limited to Mergekit for this method, which does not support single-layer duplication; this may have an impact on performance.

- # This method is still in development. I do not expect "game-changing" results, nor will I oversell this method; it is purely done for fun. Please let me know how the model works for you.

- ## ⚙️ Evals
- [My Leaderboard](https://huggingface.co/spaces/Steelskull/YALL-Leaderboard)
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/pS7KFYDheWmFEaGybxr3K.png)
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/CF9_p8AWMFraCnfiMa_no.png)

- As you know, there is usually a loss of intelligence with model merges, especially with passthrough merging: on the order of ~3 points per billion parameters duplicated IF you get the right merge; if not, you are looking at a much larger loss (anywhere from 3-8 points per billion duplicated).
- Using DLEC, I was able to increase Phi-2 from 2.78b -> 3.25b with less than or around a single point of loss.

  This method is still in active development, and I am currently tweaking the algorithm to improve the layer selection process. I am also working on a single-layer duplication script, as mergekit does not currently support this; at the moment I am merging layers that are not needed, which degrades performance.
 

  # phi-2-DLEC

+ The DLEC (Distributive Layer Expansion Curve) methodology offers a novel approach to improving neural network models by focusing on the strategic duplication of certain effective layers. Developed with the aim of enhancing model performance, DLEC carefully identifies and amplifies the impact of key layers within the model's architecture.

+ ## Code overview:

+ Setting Up:

+ First, the script ensures all necessary components are in place, from libraries to the model and dataset.

+ Database for Activations:

+ A SQLite database is established to track layer activations, providing a clear view into how individual neurons react and which layers are most influential — these are our 'beneficial layers.'
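A minimal sketch of such an activation store, using Python's built-in sqlite3 module. The table and column names here are my own illustrative assumptions, not the actual schema used by the DLEC script:

```python
import sqlite3

# Create an in-memory database to record per-layer activation statistics.
# Illustrative schema: one row per (layer, batch) holding the mean absolute
# activation observed for that layer on that batch.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE activations (layer INTEGER, batch INTEGER, mean_abs REAL)"
)

# In the real pipeline these values would come from forward hooks on the
# model; here we insert dummy numbers just to demonstrate the queries.
samples = [
    (0, 0, 0.91), (0, 1, 0.88),
    (1, 0, 0.12), (1, 1, 0.10),
    (2, 0, 0.75), (2, 1, 0.80),
]
conn.executemany("INSERT INTO activations VALUES (?, ?, ?)", samples)

# Rank layers by average activation magnitude: the most influential
# ('beneficial') layers come first.
rows = conn.execute(
    "SELECT layer, AVG(mean_abs) AS avg_act FROM activations "
    "GROUP BY layer ORDER BY avg_act DESC"
).fetchall()
print(rows)  # layer 0 first, then layer 2, then layer 1
```

The database-backed approach means the (potentially large) activation log survives across runs and can be re-queried without re-running the model.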

+ Analyzing and Identifying:

+ By analyzing activation data, the script pinpoints which layers are most valuable to the model's performance.
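One simple way to pinpoint such layers is to keep every layer whose average activation clears a threshold relative to the mean across all layers. This heuristic is my illustration of the idea, not necessarily the exact selection rule the script uses:

```python
def select_beneficial_layers(avg_activation, factor=1.0):
    """Return sorted layer indices whose average activation is at least
    `factor` times the mean across all layers (illustrative heuristic)."""
    overall = sum(avg_activation.values()) / len(avg_activation)
    return sorted(l for l, a in avg_activation.items() if a >= factor * overall)

# Dummy per-layer activation averages for a small 8-layer model.
avgs = {0: 0.90, 1: 0.15, 2: 0.20, 3: 0.85, 4: 0.80, 5: 0.10, 6: 0.25, 7: 0.95}
print(select_beneficial_layers(avgs))  # -> [0, 3, 4, 7]
```

Raising `factor` makes the selection stricter, trading away candidate layers for a smaller expanded model.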
 
+ Configuring DLEC:
+
+ A configuration is then created, guiding how the model should incorporate duplicates of these beneficial layers to boost effectiveness without unnecessarily increasing complexity.
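Since the duplication is currently done through mergekit, such a configuration can be expressed as a passthrough merge that repeats the chosen layer ranges. The ranges below are purely illustrative, not the exact config used to build this model:

```yaml
# Hypothetical mergekit passthrough config for a 32-layer Phi-2 model:
# layers 0-31 are kept once, and the range containing beneficial
# layers 12-15 is duplicated.
slices:
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [0, 16]
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [12, 16]
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [16, 32]
merge_method: passthrough
dtype: bfloat16
```

Because mergekit slices whole ranges, duplicating a single beneficial layer inevitably drags in its neighbors, which is the limitation mentioned at the end of this README.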
+
+ ## Key Features:
+
+ Selective Layer Duplication:
+
+ DLEC doesn't just add more layers; it doubles down on the ones that really matter. This methodical selection ensures we're making the most of the model's capabilities without wasteful expansion.
+
+ Smart Resource Management:
+
+ By homing in on specific areas for improvement, DLEC aims to make better use of computational and memory resources, promoting more efficient learning without adding undue complexity to the model.

  This approach is about making informed, strategic enhancements to model architecture, prioritizing efficiency and effectiveness in utilizing neural network capabilities.

+ ## Information Loss:
+ It is common to observe a loss of intelligence when merging models, especially with Passthrough merging, which typically results in a loss of around 3 points per billion parameters duplicated, assuming the merge is done correctly. If the merge is suboptimal, the loss can be much larger, ranging from 3-8 points or more per billion parameters duplicated. However, with DLEC, I was able to increase Phi-2 from 2.78b to 3.25b with a minimal loss of around 0.44 points on average.
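Putting those numbers together: the expansion adds about 0.47b parameters, so the 3-points-per-billion rule of thumb would predict roughly a 1.4-point drop, versus the ~0.44 observed. A quick sanity check of that arithmetic:

```python
base, expanded = 2.78, 3.25  # model sizes in billions of parameters
added = expanded - base      # parameters duplicated, in billions
growth = added / base        # relative size increase

# Rule-of-thumb loss for a well-executed passthrough merge: ~3 pts per
# billion parameters duplicated.
expected_loss = 3.0 * added
observed_loss = 0.44  # average benchmark drop reported for phi-2-DLEC

print(f"added: {added:.2f}b ({growth:.1%} larger)")
print(f"expected loss ~{expected_loss:.2f} pts, observed {observed_loss:.2f} pts")
```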
+
+ DLEC Expanded Model:
+ [TheSkullery/phi-2-DLEC](https://huggingface.co/TheSkullery/phi-2-DLEC)
+ 2.78 -> 3.25, a ~17% increase in size
+ ```
+ Metric -> Value
+ Avg. 46.72
+ AGIEval 29.64
+ GPT4All 69.48
+ TruthfulQA 50.29
+ ```
 

+ Original Model:
+ [abacaj/phi-2-super](https://huggingface.co/abacaj/phi-2-super)
+ ```
+ Metric -> Value
+ Avg. 47.16
+ AGIEval 31.95
+ GPT4All 70.81
+ TruthfulQA 48.39
+ ```
+
+ Loss or Increase:
+ Avg. -0.44
+ AGIEval -2.31
+ GPT4All -1.33
+ TruthfulQA +1.90
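The deltas above are simply the DLEC scores minus the original phi-2-super scores; recomputing them from the two tables:

```python
# Benchmark scores copied from the two tables above.
dlec = {"Avg.": 46.72, "AGIEval": 29.64, "GPT4All": 69.48, "TruthfulQA": 50.29}
base = {"Avg.": 47.16, "AGIEval": 31.95, "GPT4All": 70.81, "TruthfulQA": 48.39}

# Positive delta = the expanded model improved on that benchmark.
deltas = {m: round(dlec[m] - base[m], 2) for m in dlec}
print(deltas)
# {'Avg.': -0.44, 'AGIEval': -2.31, 'GPT4All': -1.33, 'TruthfulQA': 1.9}
```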
+
+ Example of loss:
+ [Steelskull/Etheria-55b-v0.1](https://huggingface.co/Steelskull/Etheria-55b-v0.1)
+ ```
+ Metric -> Value
+ Avg. 64.69
+ AI2 Reasoning Challenge 65.10
+ HellaSwag 81.93
+ MMLU 73.66
+ TruthfulQA 56.16
+ Winogrande 76.09
+ GSM8k 35.18
+ ```
+
+ [Yi-34B-200K-DARE-megamerge-v8](https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8)
+ ```
+ Metric -> Value
+ Avg. 72.56
+ AI2 Reasoning Challenge 67.75
+ HellaSwag 86.06
+ MMLU 77.03
+ TruthfulQA 56.31
+ Winogrande 82.79
+ GSM8k 65.43
+ ```

+ Merge Loss (Etheria-55b-v0.1 relative to Yi-34B-200K-DARE-megamerge-v8):
+ Avg. -7.87
+ AI2 Reasoning Challenge -2.65
+ HellaSwag -4.13
+ MMLU -3.37
+ TruthfulQA -0.15
+ Winogrande -6.70
+ GSM8k -30.25

+ In the example comparing Etheria-55b-v0.1 and Yi-34B-200K-DARE-megamerge-v8, there is a significant decrease in performance across all metrics, with the average score dropping by 7.87 points. The most notable gap is on the GSM8k benchmark, where Yi-34B-200K-DARE-megamerge-v8 outperforms Etheria-55b-v0.1 by 30.25 points.

  This method is still in active development, and I am currently tweaking the algorithm to improve the layer selection process. I am also working on a single-layer duplication script, as mergekit does not currently support this; at the moment I am merging layers that are not needed, which degrades performance.