legraphista/xLAM-7b-r-IMat-GGUF

Tags: Text Generation · GGUF · PyTorch · English · function-calling · LLM Agent · tool-use · mistral · quantized · quantization · imat · imatrix · static · 16bit · 8bit · 6bit · 5bit · 4bit · 3bit · 2bit · 1bit · conversational
  • 1 contributor · History: 21 commits
  • Latest commit: legraphista · Upload README.md with huggingface_hub · 1e199f2 · verified · 12 months ago
  • .gitattributes · 2.02 kB · Upload xLAM-7b-r.Q3_K.gguf with huggingface_hub · 12 months ago
  • README.md · 6.91 kB · Upload README.md with huggingface_hub · 12 months ago
  • imatrix.dat · 4.99 MB · LFS · Upload imatrix.dat with huggingface_hub · 12 months ago
  • imatrix.dataset · 280 kB · Upload imatrix.dataset with huggingface_hub · 12 months ago
  • imatrix.log · 11.7 kB · Upload imatrix.log with huggingface_hub · 12 months ago
  • xLAM-7b-r.BF16.gguf · 14.5 GB · LFS · Upload xLAM-7b-r.BF16.gguf with huggingface_hub · 12 months ago
  • xLAM-7b-r.FP16.gguf · 14.5 GB · LFS · Upload xLAM-7b-r.FP16.gguf with huggingface_hub · 12 months ago
  • xLAM-7b-r.Q3_K.gguf · 3.52 GB · LFS · Upload xLAM-7b-r.Q3_K.gguf with huggingface_hub · 12 months ago
  • xLAM-7b-r.Q4_K.gguf · 4.37 GB · LFS · Upload xLAM-7b-r.Q4_K.gguf with huggingface_hub · 12 months ago
  • xLAM-7b-r.Q5_K.gguf · 5.13 GB · LFS · Upload xLAM-7b-r.Q5_K.gguf with huggingface_hub · 12 months ago
  • xLAM-7b-r.Q5_K_S.gguf · 5 GB · LFS · Upload xLAM-7b-r.Q5_K_S.gguf with huggingface_hub · 12 months ago
  • xLAM-7b-r.Q6_K.gguf · 5.94 GB · LFS · Upload xLAM-7b-r.Q6_K.gguf with huggingface_hub · 12 months ago
  • xLAM-7b-r.Q8_0.gguf · 7.7 GB · LFS · Upload xLAM-7b-r.Q8_0.gguf with huggingface_hub · 12 months ago
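The quantized GGUF files listed above can be fetched directly over HTTP. A minimal sketch, assuming Hugging Face's standard `resolve/<revision>` download-URL layout; the `build_gguf_url` helper is a hypothetical illustration, not an official API:

```python
# Construct a direct-download URL for one of the GGUF quants in this repo.
# Assumes the standard Hugging Face "resolve" URL layout (an assumption,
# not documented here); build_gguf_url is a hypothetical helper.
REPO_ID = "legraphista/xLAM-7b-r-IMat-GGUF"

def build_gguf_url(filename: str, revision: str = "main") -> str:
    """Return the direct-download URL for `filename` in the repo."""
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"

url = build_gguf_url("xLAM-7b-r.Q4_K.gguf")
print(url)
# The file could then be downloaded with any HTTP client,
# e.g. urllib.request.urlretrieve(url, "xLAM-7b-r.Q4_K.gguf"),
# or via huggingface_hub / the huggingface-cli tool.
```

Note the size trade-off in the listing above: Q4_K (4.37 GB) is roughly 3.3x smaller than the BF16/FP16 originals (14.5 GB), at the cost of some quantization error.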