Add README.md to repo
README.md
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="jart" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/FwAVVu7eJ4">Chat & support: jartine's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/jart">Want to contribute? jartine's Patreon page (and TheBloke <3)</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">jartine's LLM work is generously supported by a grant from <a href="https://mozilla.org">mozilla</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Nous Hermes Llama 2 13B - llamafile
- Model creator: [NousResearch](https://huggingface.co/NousResearch)
- Original model: [Nous Hermes Llama 2 13B](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)

<!-- description start -->
## Description

This repo contains llamafile format model files for [Nous Research's Nous Hermes Llama 2 13B](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b).

WARNING: This README may contain inaccuracies. It was generated automatically by piping a different README that TheBloke wrote upstream through a sed script. Errors should be reported to jartine, and do not reflect on TheBloke.

<!-- README_llamafile.md-about-llamafile start -->
### About llamafile

llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023. It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp binaries. llamafile offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.

Here is an incomplete list of clients and libraries that are known to support llamafile:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for llamafile. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_llamafile.md-about-llamafile end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/jartine/Nous-Hermes-Llama2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/jartine/Nous-Hermes-Llama2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit llamafile models for CPU+GPU inference](https://huggingface.co/jartine/Nous-Hermes-Llama2-llamafile)
* [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)
<!-- repositories-available end -->

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Nous Research's Nous Hermes Llama 2 13B](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b).
<!-- licensing end -->
<!-- compatibility_llamafile start -->
## Compatibility

These quantised llamafilev2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221).

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_llamafile end -->

<!-- README_llamafile.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [nous-hermes-llama2-13b.Q2_K.llamafile](https://huggingface.co/jartine/Nous-Hermes-Llama2-llamafile/blob/main/nous-hermes-llama2-13b.Q2_K.llamafile) | Q2_K | 2 | 5.43 GB | 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [nous-hermes-llama2-13b.Q3_K_S.llamafile](https://huggingface.co/jartine/Nous-Hermes-Llama2-llamafile/blob/main/nous-hermes-llama2-13b.Q3_K_S.llamafile) | Q3_K_S | 3 | 5.66 GB | 8.16 GB | very small, high quality loss |
| [nous-hermes-llama2-13b.Q3_K_M.llamafile](https://huggingface.co/jartine/Nous-Hermes-Llama2-llamafile/blob/main/nous-hermes-llama2-13b.Q3_K_M.llamafile) | Q3_K_M | 3 | 6.34 GB | 8.84 GB | very small, high quality loss |
| [nous-hermes-llama2-13b.Q3_K_L.llamafile](https://huggingface.co/jartine/Nous-Hermes-Llama2-llamafile/blob/main/nous-hermes-llama2-13b.Q3_K_L.llamafile) | Q3_K_L | 3 | 6.93 GB | 9.43 GB | small, substantial quality loss |
| [nous-hermes-llama2-13b.Q4_0.llamafile](https://huggingface.co/jartine/Nous-Hermes-Llama2-llamafile/blob/main/nous-hermes-llama2-13b.Q4_0.llamafile) | Q4_0 | 4 | 7.37 GB | 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [nous-hermes-llama2-13b.Q4_K_S.llamafile](https://huggingface.co/jartine/Nous-Hermes-Llama2-llamafile/blob/main/nous-hermes-llama2-13b.Q4_K_S.llamafile) | Q4_K_S | 4 | 7.41 GB | 9.91 GB | small, greater quality loss |
| [nous-hermes-llama2-13b.Q4_K_M.llamafile](https://huggingface.co/jartine/Nous-Hermes-Llama2-llamafile/blob/main/nous-hermes-llama2-13b.Q4_K_M.llamafile) | Q4_K_M | 4 | 7.87 GB | 10.37 GB | medium, balanced quality - recommended |
| [nous-hermes-llama2-13b.Q5_0.llamafile](https://huggingface.co/jartine/Nous-Hermes-Llama2-llamafile/blob/main/nous-hermes-llama2-13b.Q5_0.llamafile) | Q5_0 | 5 | 8.97 GB | 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [nous-hermes-llama2-13b.Q5_K_S.llamafile](https://huggingface.co/jartine/Nous-Hermes-Llama2-llamafile/blob/main/nous-hermes-llama2-13b.Q5_K_S.llamafile) | Q5_K_S | 5 | 8.97 GB | 11.47 GB | large, low quality loss - recommended |
| [nous-hermes-llama2-13b.Q5_K_M.llamafile](https://huggingface.co/jartine/Nous-Hermes-Llama2-llamafile/blob/main/nous-hermes-llama2-13b.Q5_K_M.llamafile) | Q5_K_M | 5 | 9.23 GB | 11.73 GB | large, very low quality loss - recommended |
| [nous-hermes-llama2-13b.Q6_K.llamafile](https://huggingface.co/jartine/Nous-Hermes-Llama2-llamafile/blob/main/nous-hermes-llama2-13b.Q6_K.llamafile) | Q6_K | 6 | 10.68 GB | 13.18 GB | very large, extremely low quality loss |
| [nous-hermes-llama2-13b.Q8_0.llamafile](https://huggingface.co/jartine/Nous-Hermes-Llama2-llamafile/blob/main/nous-hermes-llama2-13b.Q8_0.llamafile) | Q8_0 | 8 | 13.83 GB | 16.33 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

<!-- README_llamafile.md-provided-files end -->

<!-- README_llamafile.md-how-to-download start -->
## How to download llamafile files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

### In `text-generation-webui`

Under Download Model, you can enter the model repo: jartine/Nous-Hermes-Llama2-llamafile and below it, a specific filename to download, such as: nous-hermes-llama2-13b.q4_K_M.llamafile.

Then click Download.

### On the command line

You will need the `huggingface_hub` Python library (`pip3 install huggingface-hub>=0.17.1`). Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download jartine/Nous-Hermes-Llama2-llamafile nous-hermes-llama2-13b.q4_K_M.llamafile --local-dir . --local-dir-use-symlinks False
```

<details>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download jartine/Nous-Hermes-Llama2-llamafile --local-dir . --local-dir-use-symlinks False --include='*Q4_K*llamafile'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
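
If you prefer to stay in Python rather than use the CLI, the `huggingface_hub` library exposes the same download directly. The following is a minimal illustrative sketch (not part of the upstream README); the repo and filename simply mirror the CLI examples above:

```python
# Minimal sketch: download a single quant file with the huggingface_hub Python API.
# Assumes huggingface-hub>=0.17.1 is installed, as noted earlier in this README.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="jartine/Nous-Hermes-Llama2-llamafile",
    filename="nous-hermes-llama2-13b.q4_K_M.llamafile",
    local_dir=".",                  # download into the current directory
    local_dir_use_symlinks=False,   # store the real file, not a cache symlink
)
print(local_path)  # path to the downloaded file
```

The function returns the local path of the downloaded file, so it can be passed straight to the loaders shown later in this README.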

To accelerate downloads on fast connections, you can additionally install `hf_transfer` (`pip3 install hf_transfer`) and set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download jartine/Nous-Hermes-Llama2-llamafile nous-hermes-llama2-13b.q4_K_M.llamafile --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>

<!-- README_llamafile.md-how-to-download end -->

<!-- README_llamafile.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m nous-hermes-llama2-13b.q4_K_M.llamafile --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the llamafile file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

## How to run from Python code

You can use llamafile models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
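
The section below shows ctransformers. For llama-cpp-python, a minimal, untested sketch might look like the following; the filename, layer count and sampling values are only illustrative and mirror the llama.cpp example above:

```python
# Minimal llama-cpp-python sketch (assumes `pip install llama-cpp-python`, built
# with whichever GPU backend you need). Values mirror the llama.cpp example above.
from llama_cpp import Llama

llm = Llama(
    model_path="./nous-hermes-llama2-13b.q4_K_M.llamafile",  # file downloaded as shown earlier
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
    n_ctx=4096,       # sequence length
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about AI\n\n### Response:"
)

output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```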

### How to load this model from Python using ctransformers

First install ctransformers; for example, with Metal GPU acceleration on macOS:

```shell
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```

#### Simple example code to load one of these llamafile models

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("jartine/Nous-Hermes-Llama2-llamafile", model_file="nous-hermes-llama2-13b.q4_K_M.llamafile", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```
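
Generation options can also be passed on the call itself. A small follow-on sketch (the sampling values are illustrative only, echoing the llama.cpp command above):

```python
# Continue from the ctransformers example above: stream a completion and set
# sampling options explicitly instead of using the defaults.
for token in llm("Tell me about AI", max_new_tokens=256, temperature=0.7,
                 repetition_penalty=1.1, stream=True):
    print(token, end="", flush=True)
```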

Here are guides on using llama-cpp-python or ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
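
For reference, here is a minimal sketch of wiring one of these files into LangChain through its ctransformers wrapper. The class and parameters follow the LangChain integration linked above; the config values are illustrative only:

```python
# Minimal LangChain + ctransformers sketch (assumes `pip install langchain ctransformers`).
from langchain.llms import CTransformers

llm = CTransformers(
    model="jartine/Nous-Hermes-Llama2-llamafile",
    model_file="nous-hermes-llama2-13b.q4_K_M.llamafile",
    model_type="llama",
    config={"max_new_tokens": 256, "temperature": 0.7, "gpu_layers": 50},
)

print(llm("Tell me about AI"))
```

From here the model can be dropped into any LangChain chain or prompt template as usual.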

<!-- README_llamafile.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->

For further support, and discussions on these models and AI in general, join us at:

[jartine AI's Discord server](https://discord.gg/FwAVVu7eJ4)

## Thanks, and how to contribute

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/jart
* Ko-Fi: https://ko-fi.com/jart

**Special thanks to**: Aemon Algiz.