datasets:
- c-s-ale/alpaca-gpt4-data
pipeline_tag: text2text-generation
---
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? Patreon coming soon!</a></p>
</div>
</div>

## GPT4-Alpaca-LoRA_MLP-65B GPTQ

These files are the result of merging the LoRA weights of chtan's gpt4-alpaca-lora_mlp-65B with the base LLaMA 65B model.

Other repositories available:

* [4bit and 5bit GGML models for CPU inference in llama.cpp](https://huggingface.co/TheBloke/gpt4-alpaca-lora_mlp-65B-GGML)
* [float16 unquantised model for GPU inference and further conversions](https://huggingface.co/TheBloke/gpt4-alpaca-lora_mlp-65B-HF)

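For reference, a merge like the one described above can be sketched with peft along these lines. This is a minimal sketch only; the paths and adapter id are illustrative assumptions, not the exact commands used to produce these files:

```python
# Sketch: fold LoRA adapter weights into the base model, then save the
# merged checkpoint. Paths and repo ids below are placeholders.
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

base = LlamaForCausalLM.from_pretrained(
    "path/to/llama-65b-hf",          # hypothetical path to the base weights
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "path/to/gpt4-alpaca-lora_mlp-65b")
model = model.merge_and_unload()     # merge the LoRA deltas into the base layers
model.save_pretrained("./gpt4-alpaca-lora_mlp-65B-merged")
```
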
## Want to support my work?

I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.

So if you're able and willing to contribute, it'd be most gratefully received and will help me keep providing models and working on various AI projects.

Donators will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.

* Patreon: coming soon! (just awaiting approval)
* Ko-Fi: https://ko-fi.com/TheBlokeAI
* Discord: https://discord.gg/UBgz4VXf

# Original model card

This repo provides the training checkpoint of LLaMA fine-tuned on the alpaca_data_gpt4 dataset via LoRA [MLP] on 8x A100 (80GB) GPUs.

He et al. (2022) gave the insight that the FFN can better utilize modification at larger capacities, which motivates applying LoRA to the MLP modules here.
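
As an illustration of LoRA applied to the MLP rather than the attention projections, a peft configuration might look like the sketch below. The rank, alpha, and dropout values are assumptions for illustration, not necessarily those used for this checkpoint:

```python
# Sketch: a LoRA config targeting LLaMA's feed-forward (MLP) projections.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,               # illustrative rank
    lora_alpha=16,      # illustrative scaling factor
    lora_dropout=0.05,  # illustrative dropout
    target_modules=["gate_proj", "down_proj", "up_proj"],  # MLP, not attention
    task_type="CAUSAL_LM",
)
# base_model = LlamaForCausalLM.from_pretrained(...)
# model = get_peft_model(base_model, lora_config)
```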

The code is provided by [tloen/alpaca-lora: Instruct-tune LLaMA on consumer hardware (github.com)](https://github.com/tloen/alpaca-lora).

We modified the running script as follows:
```bash
torchrun --nproc_per_node=8 finetune.py \
    --base_model '/cache1/chtan/large_models/llama-hf/llama-65b' \
    ...
```

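The sample generation below can be reproduced with something along these lines. This is a minimal sketch against the float16 repo linked above, and the Alpaca-style prompt template is an assumption:

```python
# Sketch: load the float16 conversion and generate a response.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "TheBloke/gpt4-alpaca-lora_mlp-65B-HF"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about alpacas.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
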
**Instruction**: Tell me about alpacas.

**gpt4-alpaca-lora_mlp-65b**:

Alpacas are small, domesticated mammals that are closely related to llamas. They are native to the Andes Mountains of South America, primarily in Peru, Bolivia, and Chile. These animals have been domesticated for thousands of years and were used by the Incas for their fleece, meat, and as pack animals.