JackFram's llama-160m for Web-LLM

This repository (Felladrin/mlc-chat-llama-160m-q4f32_1) contains JackFram/llama-160m compiled for MLC Web-LLM, using q4f32_1 quantization.
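To use a compiled model like this one in a Web-LLM application, the engine's app config needs an entry pointing at the repository and its compiled model library. The sketch below is a minimal, hedged example: the field names follow the `@mlc-ai/web-llm` AppConfig convention and may differ between library versions, and the `model_lib` value is a placeholder, not a confirmed artifact URL.

```json
{
  "model_list": [
    {
      "model": "https://huggingface.co/Felladrin/mlc-chat-llama-160m-q4f32_1",
      "model_id": "llama-160m-q4f32_1",
      "model_lib": "<URL of the compiled q4f32_1 WebGPU model library (wasm)>"
    }
  ]
}
```

Pass this object as the `appConfig` option when creating the Web-LLM engine, then request the model by its `model_id`.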
