Upload folder using huggingface_hub
- .gitattributes +2 -0
- Mistral-Nemo-12B-ArliAI-RPMax-v1.2.fq8.gguf +3 -0
- Mistral-Nemo-12B-ArliAI-RPMax-v1.2.silly.gguf +3 -0
- README.md +40 -0
.gitattributes
CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+Mistral-Nemo-12B-ArliAI-RPMax-v1.2.fq8.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Nemo-12B-ArliAI-RPMax-v1.2.silly.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Nemo-12B-ArliAI-RPMax-v1.2.fq8.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:308b4b0c020afaa50d33a370cc4442538fc1b89b587538434264ea4ab04eda1b
+size 14280663488
Mistral-Nemo-12B-ArliAI-RPMax-v1.2.silly.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fc14d5790fd71f04f76c71338b8dd04e0ce41643af4c4654a0f7cb3d64fa684c
+size 14280663488
README.md
ADDED
@@ -0,0 +1,40 @@
---
license: mit
language:
- en
pipeline_tag: text-generation
---

ZeroWw 'SILLY' version. The original model has been quantized (fq8 version) and a percentage of its tensors have been modified by adding some noise.

Full colab: https://colab.research.google.com/drive/1a7seagBzu5l3k3FL4SFk0YJocl7nsDJw?usp=sharing

Fast colab: https://colab.research.google.com/drive/1SDD7ox21di_82Y9v68AUoy0PhkxwBVvN?usp=sharing

Original reddit post: https://www.reddit.com/r/LocalLLaMA/comments/1ec0s8p/i_made_a_silly_test/

I created a program to randomize the weights of a model. The program has two parameters: the percentage of weights to modify and the maximum percentage of the original value to randomly apply to each weight as a deviation.
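The randomization script itself is not part of this upload, so the following is only a rough Python sketch of the idea described above; the function and variable names are invented for illustration, and the real procedure operates on the model's F32 tensors before re-quantization.

```python
import torch

def randomize_tensor(w: torch.Tensor, modify_pct: float, max_dev: float) -> torch.Tensor:
    """Perturb roughly `modify_pct` of the entries by up to +/- `max_dev` of their original value."""
    mask = torch.rand_like(w) < modify_pct                # which weights get touched
    noise = (torch.rand_like(w) * 2.0 - 1.0) * max_dev    # uniform in [-max_dev, +max_dev]
    return torch.where(mask, w * (1.0 + noise), w)

# Toy stand-in for a real F32 state dict; the settings mirror the ones
# mentioned below: 100% of the weights, at most 15% deviation.
state_dict = {"layer.weight": torch.randn(4, 4)}
perturbed = {name: randomize_tensor(w, modify_pct=1.0, max_dev=0.15)
             for name, w in state_dict.items()}
print(perturbed["layer.weight"])
```

Multiplying by (1 + noise) keeps the perturbation proportional to each weight's own magnitude, which matches the "percentage of the original value" description.
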
At the end I check the resulting GGUF file for binary differences.
In this example I set it to modify 100% of the weights of Mistral 7b Instruct v0.3 by a maximum of 15% deviation.

Since the deviation is calculated on the F32 weights, the effective change is different once the file is quantized to Q8\_0.
So, in the end I got a file that, compared to the original, has:

Bytes Difference percentage: 73.04%

Average value divergence: 2.98%
32 |
+
The cool thing is that chatting with the model I see no apparent difference and the model still works nicely as the original.
|
33 |
+
|
34 |
+
Since I am running everything on CPU, I could not run perplexity scores or anything computing intensive.
|
35 |
+
|
36 |
+
As a small test, I asked the model a few questions (like the history of the roman empire) and then fact check its answer using a big model. No errors were detected.
|
37 |
+
|
38 |
+
Update: all procedure tested and created on COLAB.
|
39 |
+
|
40 |
+
Created on: Wed Oct 23, 09:31:20
|