---
license: bigcode-openrail-m
datasets:
- bigcode/self-oss-instruct-sc2-exec-filter-50k
base_model:
- bigcode/starcoder2-3b
library_name: transformers
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
# QuantFactory/starcoder2-3b-instruct-v0.1-GGUF
This is a quantized version of [onekq-ai/starcoder2-3b-instruct-v0.1](https://huggingface.co/onekq-ai/starcoder2-3b-instruct-v0.1), created using llama.cpp.
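The GGUF files can be run locally with llama.cpp. A minimal sketch (the quantization filename below is a placeholder, not confirmed by this card; check the repository's file list for the actual `.gguf` names):

```shell
# Download one quantization from this repo (filename is hypothetical)
huggingface-cli download QuantFactory/starcoder2-3b-instruct-v0.1-GGUF \
  starcoder2-3b-instruct-v0.1.Q4_K_M.gguf --local-dir .

# Run a quick completion with llama.cpp's CLI
llama-cli -m starcoder2-3b-instruct-v0.1.Q4_K_M.gguf \
  -p "Write a Python function that reverses a string." -n 128
```

Any llama.cpp-compatible runtime (e.g. llama-cpp-python, ollama with a Modelfile) can load the same files.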
# Original Model Card
Starcoder2-3b, fine-tuned the same way as [bigcode/starcoder2-15b-instruct-v0.1](https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1), using the [bigcode/self-oss-instruct-sc2-exec-filter-50k](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k) dataset.
- Epochs: 1
- Learning rate: 0.0001
- LoRA rank: 8
- Batch size: 16
- Evaluation split: 0