---
license: mit
---

# **Phi-4-mlx-int4**

This is a quantized INT4 model of Phi-4, built with the Apple MLX framework. You can deploy it on Apple Silicon devices (M1, M2, M3, M4, ...).

Note: this is an unofficial version, intended for testing and development only.

## **Installation**

```bash
pip install -U mlx-lm
```

## **Conversion**

```bash
# Convert the original Phi-4 weights to MLX format; -q enables quantization (4-bit by default)
python -m mlx_lm.convert --hf-path {Your Phi-4 Path} -q
```

## **Samples**

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer
model, tokenizer = load("Your Phi-4-mlx-int4 Path")

# Build a chat-formatted prompt from a single user message
prompt = tokenizer.apply_chat_template(
    [
        {
            "role": "user",
            "content": (
                "I have $20,000 in my savings account, where I receive a 4% "
                "profit per year and payments twice a year. Can you please "
                "tell me how long it will take for me to become a millionaire? "
                "Also, can you please explain the math step by step as if you "
                "were explaining it to an uneducated person?"
            ),
        }
    ],
    tokenize=False,
    add_generation_prompt=True,
)

# Generate up to 1024 tokens; verbose=True streams the output to stdout
response = generate(model, tokenizer, prompt=prompt, max_tokens=1024, verbose=True)
```
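
`generate` also returns the completed text, so you can work with `response` directly (for example, `print(response)`) instead of relying on `verbose=True`.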
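
For a quick smoke test without writing any Python, mlx-lm also provides a command-line generator. A minimal sketch, assuming the model path below points at your converted Phi-4-mlx-int4 directory:

```bash
# Quick test from the command line; the model path and prompt are placeholders
python -m mlx_lm.generate --model {Your Phi-4-mlx-int4 Path} \
  --prompt "Explain compound interest in one paragraph." \
  --max-tokens 256
```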