Tags: Transformers · Safetensors · English · text-generation-inference · unsloth · qwen2 · trl
This adapter was trained on the following datasets, loaded with the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("mlfoundations-dev/s1K-with-deepseek-r1-sharegpt", split="train")
dataset2 = load_dataset("Nitral-AI/Cosmopedia-Instruct-60k-Distilled-R1-70B-ShareGPT", split="train")
```
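Both datasets use the ShareGPT conversation layout. As a minimal sketch (assuming the conventional `"from"`/`"value"` fields of ShareGPT records; check the actual dataset schema before use), each conversation can be mapped to the `role`/`content` message format that chat templates and TRL's `SFTTrainer` expect:

```python
# Map ShareGPT speaker tags to chat-template roles.
# The "from"/"value" field names follow the usual ShareGPT convention,
# assumed here rather than taken from the dataset cards.
ROLE_MAP = {"human": "user", "gpt": "assistant", "system": "system"}

def sharegpt_to_messages(conversations):
    """Convert a list of ShareGPT turns into role/content messages."""
    return [
        {"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
        for turn in conversations
    ]

# Example with a toy conversation:
example = [
    {"from": "human", "value": "What is 2 + 2?"},
    {"from": "gpt", "value": "4"},
]
print(sharegpt_to_messages(example))
```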

Uploaded model

  • Developed by: bunnycore
  • License: apache-2.0
  • Finetuned from model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit

This qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

