---
language:
- en
license: llama3
library_name: transformers
tags:
- merge
base_model:
- SillyTilly/Meta-Llama-3.1-70B
- SillyTilly/Meta-Llama-3.1-70B-Instruct
- unsloth/Llama-3.3-70B-Instruct
- SicariusSicariiStuff/Negative_LLAMA_70B
---

<img src="https://huggingface.co/altomek/RE-70B-AS3D/resolve/main/RE.png">
<a href="https://www.youtube.com/watch?v=kYje-wdAUsg" title="i_o - Audio Dust" target="_blank">intro music...</a>

## Llama RE-70B-AS3D

I wanted a model that would unlock the full Llama personality while still following instructions.
This is the first interesting result from that voyage...


### Ingredients

 - [Llama-3.1-70B](https://huggingface.co/SillyTilly/Meta-Llama-3.1-70B)
 
 - [Llama-3.1-70B-Instruct](https://huggingface.co/SillyTilly/Meta-Llama-3.1-70B-Instruct)
 
 - [Llama-3.3-70B-Instruct](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct)

 - [Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)

### Settings

Use the Llama 3 chat template; with the Alpaca template the model sometimes hallucinates and generates training materials ;P
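
For example, with `transformers` the tokenizer applies the Llama 3 template for you. A minimal sketch, assuming the tokenizer config in the main repo carries that template:

```python
# Minimal sketch: build a Llama 3 prompt with the tokenizer's chat template.
# Assumes the tokenizer_config.json in altomek/RE-70B-AS3D carries the
# Llama 3 template (the GGUF repack currently does not, see Quants below).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("altomek/RE-70B-AS3D")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about your voyage."},
]

# tokenize=False returns the raw prompt string so you can check the
# <|start_header_id|>/<|eot_id|> markers the Llama 3 template inserts.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```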


### Quants

More quants are coming soon... and I need to redo the GGUF quants(!) as they do not encode the chat template in tokenizer_config; if your inference engine takes the chat template from the GGUF file you will see hallucinations (a possible workaround is sketched after the quant list). The GGUFs still work fine for me in the SillyTavern & text-generation-webui combo...

- [GGUF](https://huggingface.co/altomek/RE-70B-AS3D-GGUF) --> TO BE UPDATED!
- [3.5 BPW](https://huggingface.co/altomek/RE-70B-AS3D-3.5bpw-EXL2)
- [3.75 BPW](https://huggingface.co/altomek/RE-70B-AS3D-3.75bpw-EXL2)
- [4 BPW](https://huggingface.co/altomek/RE-70B-AS3D-4bpw-EXL2)
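
If your inference engine insists on the template embedded in the GGUF, one possible client-side workaround (a sketch only, assuming llama-cpp-python; the local file name is hypothetical) is to force its built-in Llama 3 chat format:

```python
# Sketch only: llama-cpp-python can ignore the template baked into a GGUF
# and use its built-in Llama 3 chat format instead. The file name below is
# hypothetical; point it at whichever GGUF file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="RE-70B-AS3D.Q4_K_M.gguf",  # hypothetical local file name
    chat_format="llama-3",                 # force the Llama 3 template
    n_ctx=8192,
    n_gpu_layers=-1,                       # offload everything that fits
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello there!"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```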