---
base_model: sydonayrex/AI-Llama3-21B
language:
- en
license: llama3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
datasets:
- Doctor-Shotgun/capybara-sharegpt
pipeline_tag: text-generation
library_name: transformers
---
<img src="llama-blackjack.jpeg" width="640" height="640">
This model is a multi-layer folded ("depth-upscaled") build: layers from the Llama 3 8B Instruct base were duplicated and merged with mergekit to grow the model to 21B parameters. Rather than a plain passthrough merge, task arithmetic was used. Further fine-tuning was then performed to rebaseline the merged model's weights and inference behavior.
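For readers unfamiliar with the technique, here is a minimal sketch of what task arithmetic does to a single weight tensor. The function name and the default scaling factor are illustrative; the actual mergekit configuration used for this merge is not published in this card.

```python
# Minimal sketch of task arithmetic on one weight tensor (Ilharco et al., 2023).
# The merged weight is the base weight plus a scaled "task vector"
# (finetuned - base). Names and the 1.0 default are illustrative only.
import torch

def task_arithmetic_merge(base: torch.Tensor, finetuned: torch.Tensor,
                          weight: float = 1.0) -> torch.Tensor:
    task_vector = finetuned - base       # direction the fine-tune moved this tensor
    return base + weight * task_vector   # re-apply the delta, optionally scaled
```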
- Q3_K_S GGUF: https://huggingface.co/sydonayrex/Blackjack-Llama3-21B-Q3_K_S-GGUF
- Q4_K_M GGUF: https://huggingface.co/sydonayrex/Blackjack-Llama3-21B-Q4_K_M-GGUF
- Q6_K GGUF: https://huggingface.co/sydonayrex/Blackjack-Llama3-21B-Q6_K-GGUF
- Q8_0 GGUF: https://huggingface.co/sydonayrex/Blackjack-Llama3-21B-Q8_0-GGUF
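As a minimal sketch, any of these quants can be run locally with llama-cpp-python (or any other GGUF-capable runtime). The local file name below is illustrative; point `model_path` at whichever quant you downloaded from the repos above.

```python
# Minimal sketch of local inference on one of the GGUF quants above.
from llama_cpp import Llama

llm = Llama(model_path="blackjack-llama3-21b-q4_k_m.gguf", n_ctx=8192)
out = llm("Summarize the rules of blackjack in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```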
Only minor follow-up inference testing was performed after training.
# Uploaded model
- **Developed by:** sydonayrex
- **License:** Llama 3
- **Finetuned from model:** sydonayrex/AI-Llama3-21B
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
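As a hedged sketch, an Unsloth + TRL SFT run along these lines might look like the following. Only the libraries, the base model, and the dataset come from this card; the hyperparameters, LoRA settings, and the chat-flattening step are assumptions, not the author's exact recipe.

```python
# Hedged sketch of an Unsloth + TRL supervised fine-tuning setup.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="sydonayrex/AI-Llama3-21B",  # the base model listed above
    max_seq_length=4096,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # assumed LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# The dataset is ShareGPT-style; flatten each conversation into one training
# string ("conversations" with "from"/"value" keys is the assumed schema).
raw = load_dataset("Doctor-Shotgun/capybara-sharegpt", split="train")

def to_text(example):
    turns = example["conversations"]
    return {"text": "\n".join(f'{t["from"]}: {t["value"]}' for t in turns)}

dataset = raw.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(output_dir="outputs",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
)
trainer.train()
```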
Llama image generated by Meta AI.