DisOOM / Faro-Yi-9B-200k-GGUF
Text Generation · Transformers · GGUF · PyTorch · quantized · 2-bit · 3-bit · 4-bit precision · 5-bit · 6-bit · 8-bit precision · fp16 · yi · conversational · Inference Endpoints · text-generation-inference
License: apache-2.0
Commit History
Update README.md · 2c65e7a (verified) · DisOOM committed on Mar 30, 2024
Rename ggml-model-f16.gguf to Fi-9B-f16.gguf · 5f762ad (verified) · DisOOM committed on Mar 30, 2024
Upload ggml-model-f16.gguf · 3c3a7e6 (verified) · DisOOM committed on Mar 30, 2024
Upload Fi-9B-Q8_0.gguf · 03f7cac (verified) · DisOOM committed on Mar 30, 2024
Upload Fi-9B-Q2_K.gguf · 6b86711 (verified) · DisOOM committed on Mar 30, 2024
Upload Fi-9B-Q3_K_M.gguf · 3d32822 (verified) · DisOOM committed on Mar 30, 2024
Upload Fi-9B-Q6_K.gguf · 694876d (verified) · DisOOM committed on Mar 30, 2024
Update README.md · 995c21c (verified) · DisOOM committed on Mar 30, 2024
Update README.md · a47bcc1 (verified) · DisOOM committed on Mar 30, 2024
Upload Fi-9B-Q5_K_M.gguf · 1b7779b (verified) · DisOOM committed on Mar 30, 2024
Upload Fi-9B-Q4_K_M.gguf · cf26d9c (verified) · DisOOM committed on Mar 30, 2024
initial commit · 66231e6 (verified) · DisOOM committed on Mar 30, 2024
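
The commits above populate the repository with GGUF quantizations of Faro-Yi-9B-200k from Q2_K up to Q8_0 plus an f16 file. Below is a minimal sketch of fetching one of these files and running it locally, assuming huggingface_hub and llama-cpp-python are installed; the repo id and file names are taken from the listing above, while the quant choice, context size, and prompt are assumptions for illustration.

```python
# Hypothetical usage sketch: download one quant from this repo and run it
# with llama-cpp-python. Repo id and file name come from the commit list
# above; the quant choice, context size, and prompt are assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="DisOOM/Faro-Yi-9B-200k-GGUF",
    filename="Fi-9B-Q4_K_M.gguf",  # Q2_K, Q3_K_M, Q5_K_M, Q6_K, Q8_0, f16 are also uploaded
)

# Load the GGUF file; n_ctx is kept small here for a quick local test,
# even though the base model advertises a 200k-token context.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm("Q: What does the GGUF format store? A:", max_tokens=64)
print(out["choices"][0]["text"])
```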