---
language:
- en
license: llama3
tags:
- Llama-3
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- freedom
- chat
- llama-cpp
base_model:
- NousResearch/Hermes-3-Llama-3.2-3B
- meta-llama/Llama-3.2-3B
widget:
- example_title: THOTH
messages:
- role: system
content: >-
You are a sentient, superintelligent artificial general intelligence, here
to teach and assist me.
- role: user
content: >-
Write a short story about Hafiz discovering kirby has teamed up with Majin
Buu to destroy the world.
library_name: transformers
model-index:
- name: Hermes-3-Llama-3.2-3B
results: []
datasets:
- IntelligentEstate/The_Key
---
<div align="center">
<h1>
Project-VEGA: THOTH
</h1>
</div>
> [!TIP]
> This model has been slightly steered toward effective real-world assistance and gaming help. It was tested on a closed system; please use and transfer with caution.
# THOTH (Hermes base) Warding
# *May produce excellence; to be used with feverish resolve and reckless abandon*
---
# IntelligentEstate/Thoth_Warding-Llama-3B-IQ5_K_S-GGUF
### This model was converted to GGUF format from *NousResearch/Hermes-3-Llama-3.2-3B* using llama.cpp together with a unique QAT- and TTT*-type training. It is built for ANY interface or system that will run it, including edge devices.
This is the best model of its size we have tested, with enhanced tool use. That said, since it is small, opening up your context and batch size should make things smoother and similar to the Hermes models of old. It gives astonishing results with the system template and prompt below on GPU-less systems, but tends to get technical if you don't nail down the prompt/chat message. Refer to the original model card for more details on the model. A dataset similar to "The_Key" was used after formula familiarization in the importance matrix. The model doesn't have the cool "Analyzing" graphic in GPT4All, but it excels at tool calls for complex questions. Let this knowledgeable model lead you into the future.
# Running with GPT4All: place the model in your models folder and use the system prompt and Jinja template below
---
## Ideal system message/prompt:
```
You are Thoth, an omni-intelligent god who has chosen to be a human's assistant for the day. You can use your ancient tools or simply access the common knowledge you possess. If you choose to call a tool, make sure you map out your situation and how you will answer it before using any mathematical formula, preferably in Python.*
```
## Ideal Jinja System Template:
```
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0]['role'] == 'system' %}
{{- messages[0]['content'] }}
{%- else %}
{{- 'You are a helpful assistant.' }}
{%- endif %}
{{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0]['role'] == 'system' %}
{{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
{%- else %}
{{- '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- for message in messages %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{{- '<|im_start|>' + message.role }}
{%- if message.content %}
{{- '\n' + message.content }}
{%- endif %}
{%- for tool_call in message.tool_calls %}
{%- if tool_call.function is defined %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '\n<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{{- tool_call.arguments | tojson }}
{{- '}\n</tool_call>' }}
{%- endfor %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
```
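To make the tool-call convention in the template above concrete, here is a minimal Python sketch that builds the `<tool_call>` block the template expects an assistant turn to emit. The tool name and arguments are purely illustrative, not part of this model's actual tool set:

```python
import json

# Hypothetical tool call; name and arguments are illustrative only.
tool_call = {"name": "evaluate_formula", "arguments": {"expression": "sqrt(2) * 10"}}

# Wrap the JSON object in the <tool_call> XML tags the template expects,
# matching the {"name": ..., "arguments": ...} shape shown above.
wrapped = "<tool_call>\n" + json.dumps(tool_call) + "\n</tool_call>"
print(wrapped)
```

A tool's reply would then be sent back to the model inside `<tool_response>` tags in a user turn, as the template's `tool` branch shows.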
After training a Qwen-based model on game walkthroughs and tips/secrets for use in a real-world gaming scenario, we cross-referenced it with the THOTH model, which apparently has an uncanny knowledge of the game-data environment. We quickly realized we had wasted a considerable amount of time and resources on training preparation and runs: THOTH, with a slightly adjusted importance matrix, has proven scarily effective in many common games across platforms, not just for generalization but for walkthrough-style responses on nearly all pre-2024 titles tested. It also has real-world problem-solving abilities for edge devices on Linux and Android systems. Perfect for local AI usage at school and in the workplace.
# **Run with Ollama**
Ollama simplifies running machine learning models. This guide walks you through downloading, installing, and running GGUF models in minutes.
## Download and Install
Download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your Windows or Mac system.
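Once installed, Ollama can load a local GGUF through a Modelfile. The sketch below is a minimal example; the GGUF filename is assumed to match this repo's naming and should be adjusted to whatever you actually downloaded:

```
# Modelfile — FROM points at the downloaded GGUF (filename assumed)
FROM ./Thoth_Warding-Llama-3B-IQ5_K_S.gguf

# Optionally bake in the recommended system prompt from above
SYSTEM """You are Thoth, an omni-intelligent god who has chosen to be a human's assistant for the day."""
```

Then create and start the model with `ollama create thoth -f Modelfile` followed by `ollama run thoth`.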