|
--- |
|
license: other |
|
license_name: tencent-license |
|
license_link: https://huggingface.co/tencent/Tencent-Hunyuan-Large/resolve/main/LICENSE.txt |
|
language: en |
|
base_model: |
|
- tencent-community/Hunyuan-A52B-Instruct |
|
tags: |
|
- mlx |
|
pipeline_tag: text-generation |
|
library_name: transformers |
|
--- |
|
|
|
# HawkonLi/Hunyuan-A52B-Instruct-2bit |
|
|
|
# Introduction |
|
|
|
This model was converted to MLX format from [tencent-community/Hunyuan-A52B-Instruct](https://huggingface.co/tencent-community/Hunyuan-A52B-Instruct).
|
|
|
**mlx-lm version:** **0.21.0** |
|
|
|
**Conversion parameters:**
|
|
|
- `q_group_size`: 128
- `q_bits`: 2
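
For reference, a 2-bit copy like this one can be produced with mlx-lm's conversion tool. The command below is only a sketch: the output directory name is an arbitrary example, and it assumes mlx-lm 0.21.0 plus enough disk space for the original weights.

```bash
# Sketch of the conversion; the --mlx-path directory name is an arbitrary example.
python -m mlx_lm.convert \
    --hf-path tencent-community/Hunyuan-A52B-Instruct \
    --mlx-path Hunyuan-A52B-Instruct-2bit \
    -q --q-bits 2 --q-group-size 128
```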
|
|
|
Based on testing, this model can **barely** run local inference on a **MacBook Pro 16-inch (M3 Max, 128 GB RAM)**. The following command must be executed before running the model:
|
|
|
```bash |
|
sudo sysctl iogpu.wired_limit_mb=105000 |
|
``` |
|
|
|
> [!NOTE]
> This command requires macOS 15.0 or higher to work.
|
|
|
This model requires 104,259 MB of memory, which exceeds the default recommended limit of 98,384 MB on the M3 Max with 128 GB of RAM. The command above therefore raises the system's wired (GPU-accessible) memory limit so that the model fits. Please note that this may cause unexpected system lag or interruptions.
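
To check the current limit, or to return to the default once you are done, the same sysctl can be read back or set to 0; this is a sketch, and the value also resets to the default on reboot.

```bash
# Read the current GPU wired-memory limit (0 means the system default is in effect).
sysctl iogpu.wired_limit_mb

# Restore the default limit when finished (it also resets on reboot).
sudo sysctl iogpu.wired_limit_mb=0
```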
|
|
|
## Use with mlx |
|
|
|
```bash |
|
pip install mlx-lm |
|
``` |
|
|
|
```python |
|
from mlx_lm import load, generate

# lazy=True defers materializing the weights until they are needed;
# the tokenizer needs trust_remote_code and an explicit eos_token.
model, tokenizer = load(
    "HawkonLi/Hunyuan-A52B-Instruct-2bit",
    tokenizer_config={"eos_token": "<|endoftext|>", "trust_remote_code": True},
    lazy=True,
)

prompt = "蓝牙耳机坏了,该去看牙科还是耳科"  # "My Bluetooth earphones are broken. Should I see a dentist or an ENT doctor?"

# Wrap the prompt with the model's chat template if the tokenizer provides one.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
|
``` |
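
The same model can also be exercised without writing any Python. The command below is a sketch using the mlx_lm.generate CLI; the prompt and token budget are arbitrary examples, and the flags assume mlx-lm 0.21.0.

```bash
# One-off generation from the command line; the prompt and --max-tokens are example values.
python -m mlx_lm.generate \
    --model HawkonLi/Hunyuan-A52B-Instruct-2bit \
    --prompt "Why is the sky blue?" \
    --max-tokens 256 \
    --trust-remote-code
```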