---
license: llama3
tags:
- gguf
- llama3
- ollama
- dolphin
base_model: cognitivecomputations/dolphin-2.9-llama3-70b
---

# Dolphin 2.9 Llama 3 70b 🐬 (GGUF)

GGUF quantizations of `cognitivecomputations/dolphin-2.9-llama3-70b`, converted for use with Ollama.


## Example Modelfile

```
FROM ./dolphin-2.9-llama3-70b-Q6_K.gguf

TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ .Response }}<|im_end|>
"""
SYSTEM """You are Dolphin, a helpful AI assistant.
"""
PARAMETER num_ctx 8192
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
```
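
Once the Modelfile above is saved (e.g. as `Modelfile` next to the combined GGUF file), the model can be registered and run with Ollama roughly as follows. The model name `dolphin-70b` is just an example; pick any name you like:

```shell
# Register the model from the Modelfile in the current directory
ollama create dolphin-70b -f Modelfile

# Chat with it interactively, or pass a prompt directly
ollama run dolphin-70b "Why is the sky blue?"
```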

## Provided files
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [dolphin-2.9-llama3-70b-Q6_K.gguf-part00000](https://huggingface.co/TrabEsrever/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q6_K.gguf-part00000), [dolphin-2.9-llama3-70b-Q6_K.gguf-part00001](https://huggingface.co/TrabEsrever/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q6_K.gguf-part00001) | Q6_K | 54G |
| [dolphin-2.9-llama3-70b-Q8_0.gguf-part00000](https://huggingface.co/TrabEsrever/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q8_0.gguf-part00000), [dolphin-2.9-llama3-70b-Q8_0.gguf-part00001](https://huggingface.co/TrabEsrever/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q8_0.gguf-part00001) | Q8_0 | 70G |


## Combine multi-part files into one GGUF

```
cat dolphin-2.9-llama3-70b-Q6_K.gguf-part00000 dolphin-2.9-llama3-70b-Q6_K.gguf-part00001 > dolphin-2.9-llama3-70b-Q6_K.gguf
```
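
After concatenating, it is worth sanity-checking that the reassembled file is a valid GGUF file. GGUF files begin with the four ASCII magic bytes `GGUF`, so a minimal check (a sketch; the path is the Q6_K file from the command above) could look like:

```python
def looks_like_gguf(path):
    """Return True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Example: looks_like_gguf("dolphin-2.9-llama3-70b-Q6_K.gguf")
```

If this returns `False`, the parts were likely concatenated in the wrong order or a download is incomplete.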