JayhC committed on
Commit 303bb85
1 Parent(s): 0acba29

Update README.md

Files changed (1)
  1. README.md +7 -1
README.md CHANGED
@@ -19,7 +19,7 @@ tags:
 
 Experimental RP-oriented MoE, the idea was to get a model that would be equal to or better than the Mixtral 8x7B and it's finetunes in RP/ERP tasks.
 
-[GGUF](https://huggingface.co/xxx777xxxASD/ChaoticSoliloquy-4x8B-GGUF)
+[GGUF, Exl2](https://huggingface.co/collections/xxx777xxxASD/chaoticsoliloquy-4x8b-6628a759b5a60d8d3f51ed62)
 
 ### ChaoticSoliloquy-4x8B
 ```
@@ -41,4 +41,10 @@ experts:
 - [openlynn/Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B)
 - [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)
 
+## Vision
+
+[llama3_mmproj](https://huggingface.co/ChaoticNeutrals/Llava_1.5_Llama3_mmproj)
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/yv4C6NalqORLjvY3KKZk8.png)
+
+
 ## Prompt format: Llama 3
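
For readers of the updated README, below is a minimal, hypothetical sketch of how the newly linked GGUF quants and the llama3_mmproj vision projector could be used together via llama-cpp-python. The file names, quant choice, chat handler, and generation parameters are assumptions for illustration only; they are not part of this commit or the model card.

```python
# Hypothetical usage sketch (not part of this commit): load a GGUF quant of
# ChaoticSoliloquy-4x8B together with the LLaVA-style mmproj linked under
# "## Vision" using llama-cpp-python. File names below are placeholders.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# The mmproj file carries the image-projection weights that let the text model
# consume image embeddings; the main GGUF is a quantized copy of the 4x8B MoE.
chat_handler = Llava15ChatHandler(clip_model_path="llama3_mmproj.gguf")

llm = Llama(
    model_path="ChaoticSoliloquy-4x8B.Q4_K_M.gguf",  # placeholder quant filename
    chat_handler=chat_handler,
    n_ctx=8192,
    logits_all=True,  # the LLaVA chat handler needs per-token logits
)

# Note: the model card's "## Prompt format: Llama 3" section refers to the
# standard Llama 3 chat template, e.g.
#   <|start_header_id|>user<|end_header_id|>\n\n{prompt}<|eot_id|>
# Llava15ChatHandler applies a generic LLaVA-1.5 style prompt instead, so exact
# template matching may require a custom chat handler or manual prompting.
out = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

An equivalent command-line route would be llama.cpp's LLaVA example binary with the projector passed via `--mmproj`; the Python sketch is shown here only because it keeps the example self-contained.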