---
base_model: nvidia/Llama-3.1-Minitron-4B-Width-Base
datasets:
- teknium/OpenHermes-2.5
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
pipeline_tag: text-generation
tags:
- mlx
---

# psx7/llama4B

The model [psx7/llama4B](https://huggingface.co/psx7/llama4B) was converted to MLX format from [rasyosef/Llama-3.1-Minitron-4B-Chat](https://huggingface.co/rasyosef/Llama-3.1-Minitron-4B-Chat) using mlx-lm version **0.18.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("psx7/llama4B")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
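
Since the source model is chat-tuned, prompts may give better results when wrapped in the tokenizer's chat template before calling `generate`. The sketch below assumes the converted tokenizer ships a chat template; if it does not, the raw prompt is used unchanged.

```python
from mlx_lm import load, generate

model, tokenizer = load("psx7/llama4B")

prompt = "hello"

# Apply the chat template if the tokenizer provides one
# (expected here, since the source model is a -Chat variant).
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

mlx-lm also provides a command-line entry point (`python -m mlx_lm.generate --model psx7/llama4B --prompt "hello"`) if you prefer not to write Python; exact flags may vary by mlx-lm version.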