alokabhishek/Mistral-7B-Instruct-v0.2-4bit-AWQ
Tags: Text Generation · Transformers · Safetensors · mistral · 4bit · AWQ · AutoAWQ · 7b · quantized · Mistral · Mistral-7B · conversational · text-generation-inference · Inference Endpoints · 4-bit precision · awq
License: apache-2.0
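This repository hosts a 4-bit AWQ quantization of Mistral-7B-Instruct-v0.2. The model card itself does not include usage code here, so the following is a minimal loading sketch, assuming a CUDA GPU and that `transformers` plus `autoawq` are installed (versions and generation settings are illustrative, not specified on this page):

```python
# Minimal sketch: load the 4-bit AWQ checkpoint and run a chat-style prompt.
# Assumptions: transformers with AWQ support and the autoawq package installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alokabhishek/Mistral-7B-Instruct-v0.2-4bit-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Recent transformers releases read the AWQ quantization config from
# config.json and dispatch to the AutoAWQ kernels automatically.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Mistral-Instruct expects the [INST] ... [/INST] chat format;
# apply_chat_template builds it from a plain message list.
messages = [{"role": "user", "content": "Explain AWQ quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```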
Files and versions (branch: main)
1 contributor · History: 4 commits
Latest commit: 99d3cee (verified) · "Updated Readme" by alokabhishek · 12 months ago
File                       Size       Last commit
.gitattributes             1.52 kB    initial commit · 12 months ago
README.md                  3.69 kB    Updated Readme · 12 months ago
config.json                920 Bytes  Upload folder using huggingface_hub · 12 months ago
generation_config.json     132 Bytes  Upload folder using huggingface_hub · 12 months ago
model.safetensors (LFS)    4.15 GB    Upload folder using huggingface_hub · 12 months ago
special_tokens_map.json    414 Bytes  Upload folder using huggingface_hub · 12 months ago
tokenizer.json             1.8 MB     Upload folder using huggingface_hub · 12 months ago
tokenizer.model (LFS)      493 kB     Upload folder using huggingface_hub · 12 months ago
tokenizer_config.json      1.46 kB    Upload folder using huggingface_hub · 12 months ago
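The commit messages above note that the files were uploaded with huggingface_hub, and the same library can be used to fetch them. A minimal download sketch, assuming `huggingface_hub` is installed (the local directory name is a hypothetical choice, not part of this page):

```python
# Minimal sketch: download all repository files listed above to a local folder.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="alokabhishek/Mistral-7B-Instruct-v0.2-4bit-AWQ",
    local_dir="./mistral-7b-instruct-awq",  # hypothetical target path
)
print(f"Files downloaded to: {local_dir}")
```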