Model Sources
https://huggingface.co/HuggingFaceTB/SmolLM-1.7B-Instruct
Uses
A very small model intended for running on edge devices, with fast time-to-first-token (TTFT) and high throughput.
Direct Use
Use llama.cpp to run inference with the model.
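For example, via the llama-cpp-python bindings. This is a minimal sketch, not part of this card: the GGUF filename and generation parameters below are assumptions, so adjust them to the file you actually downloaded or converted.

```python
# Minimal sketch: run SmolLM-1.7B-Instruct locally with llama-cpp-python.
# The GGUF path below is hypothetical; point it at your own converted/downloaded file.
from llama_cpp import Llama

llm = Llama(
    model_path="smollm-1.7b-instruct.f16.gguf",  # hypothetical local GGUF file
    n_ctx=2048,  # context window size
)

# Chat-style generation; the instruct model is tuned for chat prompts.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List three uses of small on-device LLMs."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```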