ikawrakow/mistral-7b-quantized-gguf

GGUF
  • 1 contributor
History: 6 commits
Latest commit: Update README.md by ikawrakow (da77a53, over 1 year ago)
  • .gitattributes (1.56 kB) - Adding 3-bit k-quants, over 1 year ago
  • README.md (1.4 kB) - Update README.md, over 1 year ago
  • mistral-7b-q2k-extra-small.gguf (2.47 GB) - Adding 2-bit and large 5-bit quants, over 1 year ago
  • mistral-7b-q3km.gguf (3.52 GB) - Adding 3-bit k-quants, over 1 year ago
  • mistral-7b-q3ks.gguf (3.16 GB) - Adding 3-bit k-quants, over 1 year ago
  • mistral-7b-q40.gguf (4.11 GB) - Adding 4-bit k- and traditional ggml quants, over 1 year ago
  • mistral-7b-q41.gguf (4.55 GB) - Adding 4-bit k- and traditional ggml quants, over 1 year ago
  • mistral-7b-q4km.gguf (4.45 GB) - Adding 4-bit k- and traditional ggml quants, over 1 year ago
  • mistral-7b-q4ks.gguf (4.16 GB) - Adding 4-bit k- and traditional ggml quants, over 1 year ago
  • mistral-7b-q50.gguf (5 GB) - Adding small 5-bit quantized models, over 1 year ago
  • mistral-7b-q51.gguf (5.44 GB) - Adding 2-bit and large 5-bit quants, over 1 year ago
  • mistral-7b-q5ks.gguf (5 GB) - Adding small 5-bit quantized models, over 1 year ago
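The file sizes above roughly track the bit width of each quantization type. As a sanity check, one can convert a file size into average bits stored per weight; this is a minimal sketch, assuming Mistral-7B has about 7.24 billion parameters (a commonly cited figure, not stated in this listing) and that the sizes use decimal gigabytes.

```python
# Rough bits-per-weight estimate for the quantized files listed above.
# Assumption: Mistral-7B has ~7.24e9 parameters and "GB" means 1e9 bytes.
PARAMS = 7.24e9

# Sizes taken directly from the file listing (in GB).
sizes_gb = {
    "mistral-7b-q2k-extra-small.gguf": 2.47,
    "mistral-7b-q3ks.gguf": 3.16,
    "mistral-7b-q3km.gguf": 3.52,
    "mistral-7b-q40.gguf": 4.11,
    "mistral-7b-q4ks.gguf": 4.16,
    "mistral-7b-q4km.gguf": 4.45,
    "mistral-7b-q41.gguf": 4.55,
    "mistral-7b-q50.gguf": 5.00,
    "mistral-7b-q5ks.gguf": 5.00,
    "mistral-7b-q51.gguf": 5.44,
}

def bits_per_weight(size_gb: float, n_params: float = PARAMS) -> float:
    """Convert a file size in GB to average bits per parameter."""
    return size_gb * 1e9 * 8 / n_params

for name, gb in sizes_gb.items():
    print(f"{name}: ~{bits_per_weight(gb):.2f} bits/weight")
```

The estimates land slightly above the nominal bit width (e.g. the 4-bit files come out near 4.5 to 5 bits per weight) because GGUF quant formats also store per-block scales and, for the k-quants, per-superblock metadata.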