LLM2VEC Models result in error
#48 opened 14 days ago by LogicBombaklot

Why does converting the Qwen/Qwen2.5-Omni-7B model using mlx-community/mlx-my-repo result in an error?
#47 opened 15 days ago by CHSFM

Space for converting models with VLM?
#45 opened about 1 month ago by alexgusevski

Add support for converting GGUF models to MLX (2 replies)
#43 opened about 1 month ago by Fmuaddib

Error: rope_scaling 'type' currently only supports 'linear'
#42 opened about 1 month ago by Fmuaddib

Error when converting huihui-ai/Llama-3.2-3B-Instruct-abliterated: Received parameters not in model: lm_head.weight. (4 replies)
#36 opened about 1 month ago by Felladrin