---
base_model:
  - deepseek-ai/DeepSeek-R1-Distill-Llama-8B
  - unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - bitsandbytes
  - 8B
license: mit
language:
  - en
datasets:
  - taskydata/baize_chatbot
  - MohammadOthman/mo-customer-support-tweets-945k
  - bitext/Bitext-customer-support-llm-chatbot-training-dataset
new_version: Aeshp/deepseekR1tunedchat
pipeline_tag: text-generation
library_name: transformers
---

# Aeshp/deepseekR1tunedchat

This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B), loaded via Unsloth in 4-bit as [unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit). It has been trained on the following customer-support and general chat datasets:

- [taskydata/baize_chatbot](https://huggingface.co/datasets/taskydata/baize_chatbot)
- [MohammadOthman/mo-customer-support-tweets-945k](https://huggingface.co/datasets/MohammadOthman/mo-customer-support-tweets-945k)
- [bitext/Bitext-customer-support-llm-chatbot-training-dataset](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset)

The training was performed in three steps, and the final weights were merged with the base model and pushed here.
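
A minimal inference sketch with 🤗 Transformers is shown below. It relies on the chat template the tokenizer inherits from the base model; the sampling parameters are illustrative placeholders, not the settings used during training:

```python
# Minimal sketch: load the merged model and run one chat turn.
# device_map="auto" requires the `accelerate` package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aeshp/deepseekR1tunedchat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Format a single user turn with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "My order arrived damaged. What should I do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Illustrative sampling settings; adjust for your use case.
outputs = model.generate(inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```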

## 📝 License

This model is released under the MIT license, allowing free use, modification, and further fine-tuning.

## 💡 How to Fine-Tune Further

All code and instructions for further fine-tuning, inference, and pushing to the Hugging Face Hub are available in the open-source GitHub repository:
https://github.com/Aeshp/deepseekR1finetune

- You can fine-tune this model on your own domain-specific data.
- Please adjust hyperparameters and dataset size as needed.
- Example scripts and notebooks are provided for both base-model and checkpoint-based fine-tuning; a minimal sketch follows below.
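
As a starting point, here is a hedged sketch of checkpoint-based fine-tuning with Unsloth and TRL. The dataset id, LoRA rank, and training arguments are placeholder assumptions, and the `SFTTrainer` arguments follow the older TRL style used in Unsloth notebooks; the repository linked above contains the actual scripts:

```python
# Sketch only: hyperparameters below are placeholders, not the values
# used to train this model. Tune them for your own dataset.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load this checkpoint in 4-bit, as the original training did with the base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Aeshp/deepseekR1tunedchat",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset id; replace with your own domain-specific data.
dataset = load_dataset("your_org/your_domain_dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumes a plain-text column named "text"
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
    ),
)
trainer.train()

# Merge the adapters and push, mirroring how this model was published.
model.push_to_hub_merged("your-username/your-model", tokenizer, save_method="merged_16bit")
```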

## ⚠️ Notes

- The model may sometimes hallucinate, as is common with LLMs.
- For best results, use a large, high-quality dataset for further fine-tuning to avoid overfitting.

## 📚 References

- Hugging Face models: [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B), [unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit)
- Datasets: [taskydata/baize_chatbot](https://huggingface.co/datasets/taskydata/baize_chatbot), [MohammadOthman/mo-customer-support-tweets-945k](https://huggingface.co/datasets/MohammadOthman/mo-customer-support-tweets-945k), [bitext/Bitext-customer-support-llm-chatbot-training-dataset](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset)
- GitHub repositories: [Aeshp/deepseekR1finetune](https://github.com/Aeshp/deepseekR1finetune)
- Papers: [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/abs/2501.12948)


For all usage instructions, fine-tuning guides, and code, please see the GitHub repository.

Thank you!