---
base_model: deepseek-ai/deepseek-coder-6.7b-base
library_name: peft
pipeline_tag: text-generation
tags:
- deepseek-coder
- lora
- transformers
---

# Model Card: ColabMind-Coder-6.7B-LoRA

## Model Details

- **Base Model:** deepseek-ai/deepseek-coder-6.7b-base
- **Technique:** LoRA fine-tuning with PEFT
- **Language:** English; code, primarily Python for machine-learning tasks
- **Type:** Causal LM for code generation

## Intended Uses

- **Direct Use:** Code completion, code explanation, small script generation
- **Downstream Use:** Can be further fine-tuned for domain-specific code tasks (see Further Fine-Tuning below)
- **Out of Scope:** Malicious code generation; production-grade or safety-critical systems without human review

## Training

- **Data:** Filtered samples from The-Stack-v2 and curated coding datasets
- **Procedure:** LoRA fine-tuning on Google Colab (single T4 GPU, 16 GB VRAM)
- **Precision:** Mixed fp16

## Limitations

- May produce incorrect or insecure code
- Biases from the training data may persist
- Not optimized for very large-scale projects

## Quick Start

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# With peft installed, the adapter repo loads directly and pulls in the base model.
model = AutoModelForCausalLM.from_pretrained(
    "Agasthya0/colabmind-coder-6.7b-lora",
    torch_dtype=torch.float16,  # fp16 keeps the 6.7B model within a 16 GB GPU
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Agasthya0/colabmind-coder-6.7b-lora")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("def fibonacci(n):", max_new_tokens=128)[0]["generated_text"])
```

Note that `device_map="auto"` requires the `accelerate` package; drop it and call `model.to("cuda")` if you prefer manual placement.
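If you want explicit control over how the adapter is attached (for example, to merge it into the base weights for faster inference), you can load the base model and the LoRA adapter separately with `peft`. The sketch below assumes fp16 and automatic device placement to suit a 16 GB GPU; adapt both to your hardware.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the frozen base model first (fp16 + auto placement are assumptions).
base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-6.7b-base",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base")

# Attach the LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(base, "Agasthya0/colabmind-coder-6.7b-lora")

# Optional: fold the adapter into the base weights to remove the LoRA
# indirection at inference time.
model = model.merge_and_unload()
```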
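## Further Fine-Tuning

For downstream fine-tuning, the usual PEFT pattern is to wrap the base model with a fresh `LoraConfig` and train only the adapter weights. The sketch below is illustrative: the rank, alpha, dropout, and `target_modules` (attention projections in this Llama-style architecture) are assumed values, not the settings used to train this checkpoint.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base")

# Illustrative hyperparameters; tune for your task and hardware.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the low-rank adapter matrices train
# Train with your usual Trainer / training loop on your domain dataset.
```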