stan-hua/Mistral-7B-Instruct-v0.3-AWQ-W4A16
Tags: Safetensors · mistral · 4-bit precision · awq
Files (main branch): 1 contributor, 3 commits.
Latest commit 52598c3 (verified, 4 days ago) by stan-hua:
AWQ model for mistralai/Mistral-7B-Instruct-v0.3: {'w_bit': 4, 'zero_point': True, 'q_group_size': 128, 'version': 'GEMM'}
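For reference, the 90-byte quant_config.json in this repo presumably serializes the same AutoAWQ settings listed in the commit message (the exact key order in the file is an assumption):

```json
{
  "zero_point": true,
  "q_group_size": 128,
  "w_bit": 4,
  "version": "GEMM"
}
```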
All files except .gitattributes share the last-commit message "AWQ model for mistralai/Mistral-7B-Instruct-v0.3: {'w_bit': 4, 'zero_point': True, 'q_group_size': 128, 'version': 'GEMM'}", abbreviated below as "AWQ quantization".

| File | Size | LFS | Last commit | Updated |
|------|------|-----|-------------|---------|
| .gitattributes | 1.52 kB | | initial commit | 5 days ago |
| config.json | 965 B | | AWQ quantization | 4 days ago |
| generation_config.json | 132 B | | AWQ quantization | 5 days ago |
| model.safetensors | 4.16 GB | LFS | AWQ quantization | 4 days ago |
| quant_config.json | 90 B | | AWQ quantization | 5 days ago |
| special_tokens_map.json | 414 B | | AWQ quantization | 5 days ago |
| tokenizer.json | 3.67 MB | | AWQ quantization | 5 days ago |
| tokenizer.model | 587 kB | LFS | AWQ quantization | 5 days ago |
| tokenizer_config.json | 141 kB | | AWQ quantization | 5 days ago |
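The quant config explains the on-disk size of model.safetensors. A back-of-the-envelope sketch, assuming roughly 7.25B parameters for Mistral-7B-Instruct-v0.3 and a simple per-group overhead model (an fp16 scale plus a packed 4-bit zero point per group of 128 weights); both figures are assumptions, not values taken from this repo:

```python
# Estimate packed size implied by
# {'w_bit': 4, 'zero_point': True, 'q_group_size': 128}.
W_BIT = 4
Q_GROUP_SIZE = 128
N_PARAMS = 7.25e9  # approximate parameter count (assumption)

# Packed weights: 4 bits per parameter.
weights_bytes = N_PARAMS * W_BIT / 8

# Per group of Q_GROUP_SIZE weights: one fp16 scale (16 bits)
# plus one packed zero point (~W_BIT bits when zero_point=True).
groups = N_PARAMS / Q_GROUP_SIZE
overhead_bytes = groups * (16 + W_BIT) / 8

total_gb = (weights_bytes + overhead_bytes) / 1e9
print(f"~{total_gb:.2f} GB")  # -> ~3.77 GB
```

The estimate lands somewhat below the listed 4.16 GB, which is consistent with AWQ leaving layers such as embeddings and norms unquantized (typically stored in fp16) alongside the packed 4-bit weights.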