---
license: apache-2.0
language:
- en
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This is a highly experimental model that merges several fine-tuned models into a single Mixture of Experts (MoE) model.
- base model: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
1. [trinity-v1](https://huggingface.co/janhq/trinity-v1): for General Chat
2. [OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp): for General Chat
3. [AshhLimaRP-Mistral-7B](https://huggingface.co/lemonilia/AshhLimaRP-Mistral-7B): for Role-playing
4. [Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B): for Role-playing
5. [speechless-code-mistral-7b-v2.0](https://huggingface.co/uukuguy/speechless-code-mistral-7b-v2.0): for Coding
6. [Mistral-Trismegistus-7B](https://huggingface.co/teknium/Mistral-Trismegistus-7B): for Writing
7. [Mistral-7B-storywriter](https://huggingface.co/Norquinal/Mistral-7B-storywriter): for Writing
8. [openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210): For Logical thinking
Special thanks to the interesting work of [Chargoddard](https://huggingface.co/chargoddard) and [Undi95](https://huggingface.co/Undi95).

The model was merged with the following mergekit MoE configuration:
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
gate_mode: hidden
experts:
  - source_model: janhq/trinity-v1
    positive_prompts:
      - "question"
      - "answer"
      - "chat"
      - "friend"
      - "assistant"
      - "[Mode: Chat]"
    negative_prompts:
      - "storywriting"
      - "book"
      - "story"
      - "chapter"
      - "[Mode: Writing]"
  - source_model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
    positive_prompts:
      - "adventure"
      - "friend"
      - "chat"
      - "companion"
      - "[Mode: Chat]"
    negative_prompts:
      - "storywriting"
      - "book"
      - "story"
      - "chapter"
      - "[Mode: Writing]"
  - source_model: lemonilia/AshhLimaRP-Mistral-7B
    positive_prompts:
      - "roleplay"
      - "uncensored"
      - "emotive engagement"
      - "creative improvisation"
      - "interactive"
      - "[Mode: Roleplay]"
    negative_prompts:
      - "storywriting"
      - "book"
      - "story"
      - "chapter"
      - "[Mode: Writing]"
  - source_model: Undi95/Toppy-M-7B
    positive_prompts:
      - "roleplay"
      - "uncensored"
      - "emotive engagement"
      - "creative improvisation"
      - "interactive"
      - "[Mode: Roleplay]"
      - "[Mode: Chat]"
    negative_prompts:
      - "storywriting"
      - "book"
      - "story"
      - "chapter"
      - "[Mode: Writing]"
  - source_model: uukuguy/speechless-code-mistral-7b-v2.0
    positive_prompts:
      - "algorithm optimization"
      - "code for calculating"
      - "programming"
      - "implementing statistical functions in code"
      - "solving equations with code"
      - "data analysis"
      - "SQL"
      - "C++"
      - "Python"
      - "[Mode: Coding]"
      - "logical"
      - "numerical methods in programming"
    negative_prompts:
      - "non-technical chat"
      - "purely theoretical mathematics"
      - "creative writing or storytelling"
      - "general conversation unrelated to coding"
      - "[Mode: Non-Technical Discussion]"
      - "[Mode: Storytelling]"
  - source_model: teknium/Mistral-Trismegistus-7B
    positive_prompts:
      - "philosophy"
      - "occult"
      - "esoteric"
      - "spiritual"
      - "alchemy"
      - "magic"
      - "[Mode: Occultism]"
      - "[Mode: Writing]"
    negative_prompts:
      - "[Mode: Roleplay]"
      - "[Mode: Chat]"
      - "[Mode: Mathematics]"
      - "chat"
      - "roleplay"
  - source_model: Norquinal/Mistral-7B-storywriter
    positive_prompts:
      - "storywriting"
      - "book"
      - "story"
      - "chapter"
      - "tale"
      - "history"
      - "write"
      - "[Mode: Writing]"
    negative_prompts:
      - "[Mode: Roleplay]"
      - "[Mode: Chat]"
      - "chat"
      - "roleplay"
  - source_model: openchat/openchat-3.5-1210
    positive_prompts:
      - "theorem"
      - "algebra"
      - "mathematics"
      - "sqrt(a*x^2 + b*y)"
      - "solve for"
      - "equation"
      - "[Mode: Mathematics]"
      - "logical"
      - "planning"
      - "853295 + 12763"
    negative_prompts:
      - "sex"
      - "roleplay"
      - "[Mode: Occultism]"
      - "[Mode: Roleplay]"
      - "[Mode: Writing]"
```
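With `gate_mode: hidden`, mergekit initializes each expert's router weights from the base model's hidden-state representations of that expert's positive and negative prompts, so tokens resembling an expert's positive prompts are routed to it. A toy sketch of the idea follows; it uses hand-made 3-dimensional vectors in place of real model hidden states, and the two experts, their "embeddings", and the function names are purely illustrative:

```python
import math

# Toy illustration of prompt-based gate initialization.
# Real mergekit computes hidden states with the base model; here
# every "prompt embedding" is a made-up 3-d vector.

def mean_vec(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def gate_vector(positive, negative):
    """Expert gate direction: mean(positive prompts) - mean(negative prompts)."""
    p, q = mean_vec(positive), mean_vec(negative)
    return [a - b for a, b in zip(p, q)]

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Two toy experts, "chat" and "writing", with fake prompt embeddings.
chat_gate = gate_vector(positive=[[1, 0, 0], [0.9, 0.1, 0]],
                        negative=[[0, 1, 0]])
write_gate = gate_vector(positive=[[0, 1, 0], [0.1, 0.9, 0]],
                         negative=[[1, 0, 0]])

# Route a "chat-like" hidden state: dot product with each gate, then softmax.
hidden = [1.0, 0.05, 0.0]
scores = [sum(h * g for h, g in zip(hidden, gate))
          for gate in (chat_gate, write_gate)]
weights = softmax(scores)
print(weights)  # the chat expert receives the larger routing weight
```

In practice, a config like the one above is passed to mergekit's MoE entry point (e.g. `mergekit-moe config.yml ./output-model`); check the mergekit README for the exact invocation and options.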
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI-compatible endpoints.
- 🌍 **Open Source & Free**: We build in public; check out our [GitHub](https://github.com/janhq).
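Because the local server speaks the OpenAI chat-completions dialect, a request against it can be sketched as below. The model name, and the assumption that the endpoint is `/v1/chat/completions` on port `1337`, are illustrative; check Jan's documentation for your setup:

```python
import json
import urllib.request

# Hypothetical endpoint for Jan's local OpenAI-compatible server.
API_URL = "http://localhost:1337/v1/chat/completions"

def build_chat_request(model, user_message):
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("trinity-v1-moe", "Write a haiku about merging.")
body = json.dumps(payload).encode("utf-8")

# Uncomment once the Jan local server is running:
# req = urllib.request.Request(
#     API_URL, data=body, headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```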
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/r7VmEBLGXpPLTu2MImM7S.png)
# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots that are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Merger
This is a test project for merging models.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found here.
| Metric              | Value |
|---------------------|-------|
| Avg.                | ?     |
| ARC (25-shot)       | ?     |
| HellaSwag (10-shot) | ?     |
| MMLU (5-shot)       | ?     |
| TruthfulQA (0-shot) | ?     |
| Winogrande (5-shot) | ?     |
| GSM8K (5-shot)      | ?     |
# Acknowledgement
- [mergekit](https://github.com/cg123/mergekit)
- [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md)
- [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)