---
base_model: sydonayrex/AI-Llama3-21B
language:
- en
license: llama3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
datasets:
- Doctor-Shotgun/capybara-sharegpt
pipeline_tag: text-generation
library_name: transformers
---

This is a layer-folded model: mergekit was used to duplicate multiple layers of the Llama 3 8B Instruct base, increasing the parameter count to 21B. Rather than a plain passthrough merge, task arithmetic was used (an illustrative sketch is included at the end of this card). The merged model was then further fine-tuned to rebaseline its weights and inference behavior.

GGUF quantizations:

- Q3_K_S: https://huggingface.co/sydonayrex/Blackjack-Llama3-21B-Q3_K_S-GGUF
- Q4_K_M: https://huggingface.co/sydonayrex/Blackjack-Llama3-21B-Q4_K_M-GGUF
- Q6_K: https://huggingface.co/sydonayrex/Blackjack-Llama3-21B-Q6_K-GGUF
- Q8_0: https://huggingface.co/sydonayrex/Blackjack-Llama3-21B-Q8_0-GGUF

Only minor follow-up inference testing was performed after training.

# Uploaded model

- **Developed by:** sydonayrex
- **License:** Llama 3
- **Finetuned from model:** sydonayrex/AI-Llama3-21B

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

Llama image generated by Meta AI.
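# Merge sketch

The exact merge recipe for this model was not published. The snippet below is a minimal, hypothetical sketch of a mergekit layer-folding config, based on mergekit's documented YAML format and CLI. The layer ranges and the `passthrough` merge method are assumptions for illustration only; as noted above, the author used task arithmetic rather than plain passthrough.

```python
# Illustrative only: NOT the actual recipe used for this model.
# Requires: pip install mergekit
from pathlib import Path

# Overlapping slices duplicate layers of the 32-layer 8B base; the
# ranges here are placeholders, and real folding would repeat more
# slices to reach ~21B parameters.
config = """\
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 16]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [8, 24]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [16, 32]
merge_method: passthrough
dtype: bfloat16
"""

Path("fold-config.yml").write_text(config)
# Then run mergekit's documented CLI entry point:
#   mergekit-yaml fold-config.yml ./folded-model
```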
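# Example usage

A minimal inference sketch with transformers. The repository id `sydonayrex/Blackjack-Llama3-21B` is inferred from the GGUF repo names above; adjust it if the full-precision repo differs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sydonayrex/Blackjack-Llama3-21B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama 3 Instruct models expect the chat template applied to messages.
messages = [{"role": "user", "content": "Explain what a layer-folded model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```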
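To run one of the quantized GGUF builds locally, one option is llama-cpp-python. The in-repo `.gguf` filename below is an assumption; check the repository's file list for the exact name.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quantization from its repo listed above.
gguf_path = hf_hub_download(
    repo_id="sydonayrex/Blackjack-Llama3-21B-Q4_K_M-GGUF",
    filename="blackjack-llama3-21b-q4_k_m.gguf",  # assumed filename
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```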