Triplex-GGUF

Original Model

SciPhi/Triplex (3.82B params, phi3 architecture)

Run with LlamaEdge

  • LlamaEdge version: coming soon
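
Until the matching LlamaEdge release lands, the commands below are only a sketch of the usual LlamaEdge API-server invocation for a phi3-architecture GGUF. The `phi-3-chat` prompt template, the 4096 context size, and the Q5_K_M file choice are assumptions, not values confirmed for Triplex.

```bash
# Fetch the LlamaEdge OpenAI-compatible API server (check the LlamaEdge
# releases page for a version that supports this model)
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm

# Serve the Q5_K_M quant; --prompt-template and --ctx-size are assumed
# values, not confirmed for Triplex
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Triplex-Q5_K_M.gguf \
  llama-api-server.wasm \
  --prompt-template phi-3-chat \
  --ctx-size 4096 \
  --model-name Triplex
```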

Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| Triplex-Q2_K.gguf | Q2_K | 2 | 1.42 GB | smallest, significant quality loss - not recommended for most purposes |
| Triplex-Q3_K_L.gguf | Q3_K_L | 3 | 2.09 GB | small, substantial quality loss |
| Triplex-Q3_K_M.gguf | Q3_K_M | 3 | 1.96 GB | very small, high quality loss |
| Triplex-Q3_K_S.gguf | Q3_K_S | 3 | 1.68 GB | very small, high quality loss |
| Triplex-Q4_0.gguf | Q4_0 | 4 | 2.18 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Triplex-Q4_K_M.gguf | Q4_K_M | 4 | 2.39 GB | medium, balanced quality - recommended |
| Triplex-Q4_K_S.gguf | Q4_K_S | 4 | 2.19 GB | small, greater quality loss |
| Triplex-Q5_0.gguf | Q5_0 | 5 | 2.64 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Triplex-Q5_K_M.gguf | Q5_K_M | 5 | 2.82 GB | large, very low quality loss - recommended |
| Triplex-Q5_K_S.gguf | Q5_K_S | 5 | 2.64 GB | large, low quality loss - recommended |
| Triplex-Q6_K.gguf | Q6_K | 6 | 3.14 GB | very large, extremely low quality loss |
| Triplex-Q8_0.gguf | Q8_0 | 8 | 4.06 GB | very large, extremely low quality loss - not recommended |
| Triplex-f16.gguf | f16 | 16 | 7.64 GB | |
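
To pull a single quant from the table instead of cloning the whole repository, `huggingface-cli` can download one file; a minimal sketch, using the recommended Q5_K_M file as an example:

```bash
# Download only the ~2.82 GB Q5_K_M file into the current directory
huggingface-cli download second-state/Triplex-GGUF \
  Triplex-Q5_K_M.gguf --local-dir .
```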

Quantized with llama.cpp b3463
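
For reference, each quant above can be regenerated from the f16 file with llama.cpp's quantize tool; a sketch, assuming a local llama.cpp build at tag b3463 (where the binary is named `llama-quantize`):

```bash
# Re-quantize the f16 GGUF down to Q5_K_M
./llama-quantize Triplex-f16.gguf Triplex-Q5_K_M.gguf Q5_K_M
```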
