---
license: other
license_name: tencent-license
license_link: https://huggingface.co/tencent/Tencent-Hunyuan-Large/resolve/main/LICENSE.txt
language: en
base_model:
- tencent-community/Hunyuan-A52B-Instruct
tags:
- mlx
pipeline_tag: text-generation
library_name: transformers
---

# HawkonLi/Hunyuan-A52B-Instruct-2bit

## Introduction

This model was converted to MLX format from [tencent-community/Hunyuan-A52B-Instruct](https://huggingface.co/tencent-community/Hunyuan-A52B-Instruct).

**mlx-lm version:** **0.21.0**

**Conversion parameters:**

- `q_group_size`: 128
- `q_bits`: 2
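For reference, a 2-bit conversion with these settings can typically be reproduced with the `mlx_lm.convert` tool. The command below is a sketch assuming the standard mlx-lm CLI flags and an output path of `Hunyuan-A52B-Instruct-2bit`; it is not necessarily the exact command used for this upload:

```bash
# Sketch: quantized MLX conversion (assumes the mlx_lm.convert CLI from mlx-lm 0.21.x)
mlx_lm.convert \
    --hf-path tencent-community/Hunyuan-A52B-Instruct \
    --mlx-path Hunyuan-A52B-Instruct-2bit \
    -q \
    --q-group-size 128 \
    --q-bits 2
```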

Based on testing, this model can **BARELY** run local inference on a **MacBook Pro 16-inch (M3 Max, 128 GB RAM)**. The following command must be executed before running the model:

```bash
sudo sysctl iogpu.wired_limit_mb=105000
```

> [!NOTE]
> This command requires macOS 15.0 or higher to work.

This model requires 104,259 MB of memory, which exceeds the default recommended working-set limit of 98,384 MB on the M3 Max with 128 GB RAM. The command above raises the system's wired memory limit so that the model still fits. Please note that this may cause unexpected system lag or interruptions.
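If you want to inspect the current limit or undo the change, the same sysctl key can be read back and reset. The snippet below is a sketch; it assumes that setting `iogpu.wired_limit_mb` to 0 restores the system default (the override also does not persist across reboots):

```bash
# Read the current GPU wired-memory limit (0 means the system default is in effect)
sysctl iogpu.wired_limit_mb

# Assumption: writing 0 restores the default limit; a reboot clears the override as well
sudo sysctl iogpu.wired_limit_mb=0
```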

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the 2-bit quantized model lazily to reduce peak memory during loading.
model, tokenizer = load(
    "HawkonLi/Hunyuan-A52B-Instruct-2bit",
    tokenizer_config={"eos_token": "<|endoftext|>", "trust_remote_code": True},
    lazy=True,
)

# Example prompt: "My Bluetooth earphones are broken. Should I see a dentist or an ENT doctor?"
prompt = "蓝牙耳机坏了,该去看牙科还是耳科"

# Apply the chat template if the tokenizer provides one.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
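For a quick test without writing any Python, generation can also be run from the command line. This is a sketch assuming the `mlx_lm.generate` CLI bundled with mlx-lm 0.21.x; the prompt is an English rendering of the example above:

```bash
# Sketch: one-off generation via the mlx_lm.generate CLI
mlx_lm.generate \
    --model HawkonLi/Hunyuan-A52B-Instruct-2bit \
    --prompt "My Bluetooth earphones are broken. Should I see a dentist or an ENT doctor?" \
    --max-tokens 256
```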