fnlp/Llama-2-7B-MLA-d_kv_64

Research paper: "Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs"

Inference

  • Step 1: Download the monkey patch file monkey_patch.py:
wget https://raw.githubusercontent.com/JT-Ushio/MHA2MLA/refs/heads/main/src/mha2mla/monkey_patch.py
  • Step 2 (optional): For MHA2MLA models using the partial-RoPE 2-norm method, download the qk 2-norm file; a quick sanity check of this download is sketched after the inference snippet below. Take qk_tensor_7B.pth as an example:
wget https://github.com/JT-Ushio/MHA2MLA/raw/refs/heads/main/utils/qk_tensor_7B.pth
  • Step 3: Download the MHA2MLA model and run inference. Take fnlp/Llama-2-7B-MLA-d_kv_64 as an example (a batched-usage variant is sketched after the snippet):
import torch
from transformers import AutoConfig, AutoTokenizer, LlamaForCausalLM
from monkey_patch import infer_monkey_patch

model_name = "fnlp/Llama-2-7B-MLA-d_kv_64"

# Monkey Patch: MHA -> MLA
config = AutoConfig.from_pretrained(model_name)
if "RoPE" in config:
    config.RoPE["qk_tensor_path"] = "qk_tensor_7B.pth"  # Configuration for Specific Models
    infer_monkey_patch(config.RoPE)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(model_name, config=config, torch_dtype=torch.bfloat16).cuda()

# Generate
text = "Which American-born Sinclair won the Nobel Prize for Literature in 1930?"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
generation_kwargs = {"do_sample": False, "use_cache": True, "max_new_tokens": 128}
output = model.generate(**inputs, **generation_kwargs)

print(tokenizer.decode(output[0], skip_special_tokens=True))
# - Sinclair Lewis
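
As referenced in Step 2, here is an optional, minimal sanity check of the downloaded qk 2-norm file. It is not part of the original instructions and assumes qk_tensor_7B.pth is a standard PyTorch-serialized object (a tensor or a dict of tensors):

import torch

# Hypothetical sanity check: confirm the 2-norm file loads before patching.
qk_tensor = torch.load("qk_tensor_7B.pth", map_location="cpu")
print(type(qk_tensor))
if torch.is_tensor(qk_tensor):
    print(qk_tensor.shape, qk_tensor.dtype)  # inspect the 2-norm statistics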
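
The snippet above generates from a single prompt. As a follow-up usage sketch, the same patched model and tokenizer can serve a small batch of prompts; the prompts and generation settings below are illustrative and not from the original card:

# Batched greedy generation with the already-loaded tokenizer and model.
prompts = [
    "Which American-born Sinclair won the Nobel Prize for Literature in 1930?",
    "Name the largest planet in the Solar System.",
]

# Llama tokenizers have no pad token by default; reuse EOS and left-pad for generation.
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"

batch = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
outputs = model.generate(**batch, do_sample=False, use_cache=True, max_new_tokens=64)

for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))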

Citation

@misc{ji2025economicalinferenceenablingdeepseeks,
      title={Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs}, 
      author={Tao Ji and Bin Guo and Yuanbin Wu and Qipeng Guo and Lixing Shen and Zhan Chen and Xipeng Qiu and Qi Zhang and Tao Gui},
      year={2025},
      eprint={2502.14837},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.14837}, 
}

Model size: 6.5B params (BF16, Safetensors)