This is a ggml-quantized version of Replit-v2-CodeInstruct-3B, quantized to 4-bit (q4_1). To run inference you can use ggml directly or ctransformers.
- Memory usage of model: ~2 GB
- Repo to run the model using ctransformers: https://github.com/abacaj/replit-3B-inference
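A minimal sketch of loading the q4_1 ggml file with ctransformers. The local filename and the instruct prompt template below are assumptions, not confirmed by this card — adjust them to match your downloaded weights and the linked inference repo.

```python
INSTRUCT_TEMPLATE = "### Instruction:\n{instruction}\n### Response:\n"


def build_prompt(instruction: str) -> str:
    """Wrap a user request in the (assumed) Alpaca-style instruct template."""
    return INSTRUCT_TEMPLATE.format(instruction=instruction)


def generate(prompt: str, model_path: str = "replit-v2-codeinstruct-3b.q4_1.bin") -> str:
    """Run one completion; model_path is a hypothetical local ggml filename."""
    # Imported here so the prompt helper works even without ctransformers installed.
    from ctransformers import AutoModelForCausalLM

    llm = AutoModelForCausalLM.from_pretrained(model_path, model_type="replit")
    return llm(prompt, max_new_tokens=128, temperature=0.2)


if __name__ == "__main__":
    try:
        print(generate(build_prompt("Write a Python function that reverses a string.")))
    except Exception as exc:  # weights or ctransformers may be missing locally
        print(f"inference unavailable: {exc}")
```

For ready-to-run scripts (streaming output, sampling settings), see the inference repo linked above.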