Update README.md

README.md
```diff
@@ -9,13 +9,13 @@ Other quantized models are available from TheBloke: [GGML](https://huggingface.c
 
 ## Model details
 
-| Branch
-|
-| [main](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/main) | 5
-| | 4
-| | 6.5
-| | 7
-| | 8
+| **Branch** | **Bits** | **Perplexity** | **Desc** |
+|----------------------------------------------------------------------|----------|----------------|----------------------------------------------|
+| [main](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/main) | 5 | idk, forgot | Idk why I made this, 1st try |
+| | 4 | | |
+| | 6.5 | 6.1074 | Can run 4096 context size (tokens) on T4 GPU |
+| | 7 | 6.1056 | 2048 max context size for T4 GPU |
+| | 8 | 6.1027 | Just, why? |
 
 To be updated
```
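Since each quantization in the table lives on its own branch of the repo, a specific bit-width can be fetched by branch name. A minimal sketch, assuming `huggingface_hub` is installed and that branch names follow the table (only `main`, holding the 5-bit weights, is named there); `branch_url` is a hypothetical helper that mirrors the tree links used in the table:

```python
def branch_url(repo_id: str, branch: str) -> str:
    # Hypothetical helper: rebuilds the branch "tree" URL linked in the table.
    return f"https://huggingface.co/{repo_id}/tree/{branch}"

if __name__ == "__main__":
    # Downloading a branch requires network access and the huggingface_hub package.
    from huggingface_hub import snapshot_download

    # revision= selects the branch; "main" is the only branch named in the table.
    path = snapshot_download("R136a1/MythoMax-L2-13B-exl2", revision="main")
    print(path)
```

Pass a different `revision` to pull one of the other bit-width branches once their names are filled in above.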