A bug when loading the model via the Hugging Face transformers API

#85
by Kevin355 - opened

When I use
from transformers import AutoModelForCausalLM, AutoTokenizer

path = 'deepseek-ai/DeepSeek-R1'
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(path, trust_remote_code=True)

the following error is raised:

ValueError: Unknown quantization type, got fp8 - supported types are: ['awq', 'bitsandbytes_4bit', 'bitsandbytes_8bit', 'gptq', 'aqlm', 'quanto', 'eetq', 'hqq', 'compressed-tensors', 'fbgemm_fp8', 'torchao', 'bitnet']

Does anyone know how to solve it?
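One common cause of this error is an installed transformers release that predates support for the `fp8` quantization type listed in DeepSeek-R1's `config.json` (the error message itself shows `fp8` missing from the supported types), in which case `pip install -U transformers` is the usual fix. A minimal sketch for checking the installed version against a cutoff; the minimum version used here is an assumption for illustration, so check the transformers release notes for the release that actually added `fp8` support:

```python
# Assumed cutoff for fp8 quantization support, not confirmed; illustrative only.
MIN_FP8_VERSION = (4, 48)

def installed_version() -> tuple:
    # Read the (major, minor) version of the installed transformers package.
    import transformers
    return tuple(int(p) for p in transformers.__version__.split(".")[:2])

def supports_fp8(version: tuple = None) -> bool:
    # Compare a (major, minor) version tuple against the assumed cutoff;
    # defaults to the locally installed transformers version.
    return (version or installed_version()) >= MIN_FP8_VERSION

print(supports_fp8((4, 37)))  # an older install -> False
print(supports_fp8((4, 49)))  # a newer install -> True
```

If the check comes back `False`, upgrading transformers (and restarting the runtime, on Colab) is the first thing to try before digging further.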

I'm getting the same error while running on a Mac.

I'm also experiencing the same error.

I'm getting the same error while running on Google Colab.
