Update README.md
README.md
```diff
@@ -38,13 +38,13 @@ quantized_by: TheBloke
 <!-- header end -->
 
 # Athena V2 - GPTQ
-- Model creator: [IkariDev](https://huggingface.co/IkariDev)
+- Model creator: [IkariDev and Undi95](https://huggingface.co/IkariDev)
 - Original model: [Athena V2](https://huggingface.co/IkariDev/Athena-v2)
 
 <!-- description start -->
 ## Description
 
-This repo contains GPTQ model files for [IkariDev's Athena V2](https://huggingface.co/IkariDev/Athena-v2).
+This repo contains GPTQ model files for [IkariDev and Undi95's Athena V2](https://huggingface.co/IkariDev/Athena-v2).
 
 Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
 
@@ -55,7 +55,7 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Athena-v2-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Athena-v2-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Athena-v2-GGUF)
-* [IkariDev's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/IkariDev/Athena-v2)
+* [IkariDev and Undi95's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/IkariDev/Athena-v2)
 <!-- repositories-available end -->
 
 <!-- prompt-template start -->
@@ -317,7 +317,7 @@ And thank you again to a16z for their generous grant.
 
 <!-- footer end -->
 
-# Original model card: IkariDev's Athena V2
+# Original model card: IkariDev and Undi95's Athena V2
 
 
```