---
base_model: brgx53/3Blarenegv3-ECE-PRYMMAL-Martial
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
license: apache-2.0
datasets:
- IntelligentEstate/The_Key
---

# IntelligentEstate/FromTheAshes-IQ4_NL-GGUF (undergoing confirmation)

An importance-matrix (imatrix) quantization of a merge of Cybertron (from FBLGIT) and a Tsunami model.
This model was converted to GGUF format from [`brgx53/3Blarenegv3-ECE-PRYMMAL-Martial`](https://huggingface.co/brgx53/3Blarenegv3-ECE-PRYMMAL-Martial) using llama.cpp.

## Use in GPT4All with enhanced calculative (tool-calling) function
### Use this chat template
```jinja
{{- '<|im_start|>system\n' }}
{% if toolList|length > 0 %}You have access to the following functions:
{% for tool in toolList %}
Use the function '{{tool.function}}' to: '{{tool.description}}'
{% if tool.parameters|length > 0 %}
parameters:
{% for info in tool.parameters %}
  {{info.name}}:
    type: {{info.type}}
    description: {{info.description}}
    required: {{info.required}}
{% endfor %}
{% endif %}
# Tool Instructions
If you CHOOSE to call this function ONLY reply with the following format:
'{{tool.symbolicFormat}}'
Here is an example. If the user says, '{{tool.examplePrompt}}', then you reply
'{{tool.exampleCall}}'
After the result you might reply with, '{{tool.exampleReply}}'
{% endfor %}
You MUST include both the start and end tags when you use a function.

You are a helpful, aware AI assistant made by Intelligent Estate who uses the available functions to break down, analyze, perform, and verify complex reasoning tasks. You verify your answers using these functions where possible. You will write code in markdown code blocks when necessary.
{% endif %}
{{- '<|im_end|>\n' }}

{%- if not add_generation_prompt is defined %}
    {%- set add_generation_prompt = false %}
{%- endif %}

{% for message in messages %}
    {%- if message['role'] == 'assistant' %}
        {%- set content = message['content'] | regex_replace('^[\\s\\S]*</think>', '') %}
        {{'<|im_start|>' + message['role'] + '\n' + content + '<|im_end|>\n' }}
    {%- else %}
        {{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>\n' }}
    {%- endif %}
{% endfor %}

{% if add_generation_prompt %}
{{ '<|im_start|>assistant\n' }}
{% endif %}
```
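The template above is mostly standard Jinja, but the `regex_replace` filter (used to strip `<think>…</think>` reasoning blocks from prior assistant turns) is not a built-in Jinja2 filter; GPT4All's template engine supplies it. A minimal sketch of rendering the message loop locally with the `jinja2` package, registering an equivalent filter (the sample messages are illustrative only):

```python
import re
from jinja2 import Environment

# regex_replace is not built into Jinja2, so register an equivalent filter.
env = Environment()
env.filters["regex_replace"] = lambda s, pat, repl: re.sub(pat, repl, s)

# The message loop from the template above, condensed onto one line.
template = env.from_string(
    "{% for message in messages %}"
    "{%- if message['role'] == 'assistant' %}"
    "{%- set content = message['content'] | regex_replace('^[\\\\s\\\\S]*</think>', '') %}"
    "{{'<|im_start|>' + message['role'] + '\\n' + content + '<|im_end|>\\n' }}"
    "{%- else %}"
    "{{'<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>\\n' }}"
    "{%- endif %}"
    "{% endfor %}"
)

messages = [
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "<think>2+2=4</think>4"},
]
rendered = template.render(messages=messages)
print(rendered)  # the <think>...</think> span is stripped from the assistant turn
```

Note that the assistant branch re-serializes only the text after the final `</think>` tag, so chain-of-thought from earlier turns never re-enters the context.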
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
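For example, llama.cpp can pull GGUF files directly from the Hub with its `--hf-repo`/`--hf-file` flags; the exact `.gguf` filename below is a placeholder and should be checked against the files actually published in this repo:

```bash
# One-shot generation with the CLI (filename is hypothetical; verify it on the repo page).
llama-cli --hf-repo IntelligentEstate/FromTheAshes-IQ4_NL-GGUF \
  --hf-file fromtheashes-iq4_nl.gguf \
  -p "The meaning to life and the universe is"

# Or serve an OpenAI-compatible endpoint on localhost:8080.
llama-server --hf-repo IntelligentEstate/FromTheAshes-IQ4_NL-GGUF \
  --hf-file fromtheashes-iq4_nl.gguf \
  -c 2048
```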