TheBloke committed
Commit daf7b88
1 Parent(s): 2230bd1

Initial GPTQ model commit

Files changed (1):
  1. README.md (+5 -5)
README.md CHANGED

@@ -1,7 +1,7 @@
 ---
 inference: false
 license: other
-model_creator: The-Face-Of-Goonery
+model_creator: Caleb Morgan
 model_link: https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16
 model_name: Chronos Beluga v2 13B
 model_type: llama
@@ -23,12 +23,12 @@ quantized_by: TheBloke
 <!-- header end -->
 
 # Chronos Beluga v2 13B - GPTQ
-- Model creator: [The-Face-Of-Goonery](https://huggingface.co/The-Face-Of-Goonery)
+- Model creator: [Caleb Morgan](https://huggingface.co/The-Face-Of-Goonery)
 - Original model: [Chronos Beluga v2 13B](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16)
 
 ## Description
 
-This repo contains GPTQ model files for [The-Face-Of-Goonery's Chronos Beluga v2 13B](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16).
+This repo contains GPTQ model files for [Caleb Morgan's Chronos Beluga v2 13B](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16).
 
 Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
 
@@ -36,7 +36,7 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Chronos-Beluga-v2-13B-GGML)
-* [The-Face-Of-Goonery's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16)
+* [Caleb Morgan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16)
 
 ## Prompt template: Alpaca
 
@@ -223,7 +223,7 @@ Thank you to all my generous patrons and donaters!
 
 <!-- footer end -->
 
-# Original model card: The-Face-Of-Goonery's Chronos Beluga v2 13B
+# Original model card: Caleb Morgan's Chronos Beluga v2 13B
 
 merged 58% chronos v2 42% beluga 13b merge using LUNK(Large universal neural kombiner)
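
The original card describes the model as a 58% Chronos v2 / 42% Beluga 13B merge made with LUNK. LUNK's actual algorithm is not documented here, so purely as an illustration of what a 58/42 ratio means, the sketch below shows a plain per-tensor weighted average of two state dicts — a common baseline merging technique, not necessarily what LUNK does. The dict layout and toy values are invented for the example; real merges operate on torch tensors.

```python
# Hypothetical sketch of a plain weighted-average merge, shown only to
# illustrate the 58/42 ratio from the model card. LUNK's real method is
# not documented here, so this is an assumption, not its implementation.

def weighted_merge(state_a, state_b, weight_a=0.58, weight_b=0.42):
    """Blend two model state dicts tensor-by-tensor.

    Both dicts must share the same keys and shapes. Values here are
    plain lists of floats for illustration; a real merge would use
    torch tensors and compute weight_a * a + weight_b * b per tensor.
    """
    assert state_a.keys() == state_b.keys(), "architectures must match"
    merged = {}
    for name in state_a:
        a, b = state_a[name], state_b[name]
        merged[name] = [weight_a * x + weight_b * y for x, y in zip(a, b)]
    return merged

# Toy example: two "models", each with one flattened weight vector.
chronos = {"layer.weight": [1.0, 0.0]}
beluga = {"layer.weight": [0.0, 1.0]}
print(weighted_merge(chronos, beluga))  # {'layer.weight': [0.58, 0.42]}
```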