| Model | Task | Updated | Downloads | Likes |
|---|---|---|---|---|
| nm-testing/Meta-Llama-3-8B-Instruct-W8-Channel-A8-Dynamic-Per-Token-Test | Text Generation | Oct 9, 2024 | 298 | |
| nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test-bos | Text Generation | Oct 9, 2024 | 20 | |
| nm-testing/TinyLlama-1.1B-compressed-tensors-kv-cache-scheme | Text Generation | Oct 9, 2024 | 25.9k | |
| nm-testing/Meta-Llama-3-8B-Instruct-W4A16-compressed-tensors-test | Text Generation | Oct 9, 2024 | 25 | |
| neuralmagic/Phi-3-medium-128k-instruct-quantized.w4a16 | Text Generation | Oct 9, 2024 | 312k | 3 |
| nm-testing/Meta-Llama-3-8B-Instruct-W8A8-FP8-Channelwise-compressed-tensors | Text Generation | Oct 9, 2024 | 926 | |
| nm-testing/Meta-Llama-3-8B-Instruct-Non-Uniform-compressed-tensors | Text Generation | Oct 9, 2024 | 19 | |
| nm-testing/Meta-Llama-3-8B-Instruct-W4A16-ACTORDER-compressed-tensors-test | Text Generation | Oct 9, 2024 | 10 | |
| nm-testing/Meta-Llama-3-70B-Instruct-W8A8-Dynamic-Per-Token-test | Text Generation | Oct 9, 2024 | 7 | |
| nm-testing/Meta-Llama-3-70B-Instruct-W8A8-Dynamic-Per-Token | Text Generation | Oct 9, 2024 | 13 | |
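The checkpoints above are published as quantized, compressed-tensors-style repositories (W4A16, W8A8, FP8, and kv-cache quantization variants), so an engine that understands that format can load them by repository name. Below is a minimal sketch, assuming vLLM is installed and the chosen checkpoint fits in available GPU memory; the model name is simply one entry from the list and any other repository above could be substituted:

```python
# Minimal sketch: serving one of the listed quantized checkpoints with vLLM.
# Assumes `pip install vllm` and sufficient GPU memory for the chosen model.
from vllm import LLM, SamplingParams

# Any repository from the table above can be substituted here.
llm = LLM(model="neuralmagic/Phi-3-medium-128k-instruct-quantized.w4a16")

params = SamplingParams(temperature=0.0, max_tokens=64)
outputs = llm.generate(["What does W4A16 quantization mean?"], params)
print(outputs[0].outputs[0].text)
```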