---
license: apache-2.0
language:
  - en
pipeline_tag: text-generation
tags:
  - chat
---

# internlm-chat-7b-MNN

## Introduction

This model is a 4-bit quantized MNN model exported from internlm-chat-7b using llmexport.
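To illustrate what 4-bit quantization means here: each weight is stored as one of 16 integer levels plus a per-block scale and offset, which shrinks the model to roughly a quarter of its fp16 size at a small accuracy cost. The sketch below shows asymmetric per-block 4-bit quantization in NumPy; it is a minimal illustration of the idea, not MNN's actual quantization kernel.

```python
import numpy as np

def quant4(w, block=32):
    """Asymmetric 4-bit quantization per block (illustrative, not MNN's kernel)."""
    w = w.reshape(-1, block)
    lo = w.min(axis=1, keepdims=True)
    hi = w.max(axis=1, keepdims=True)
    scale = (hi - lo) / 15.0          # 16 levels: 0..15
    q = np.clip(np.round((w - lo) / scale), 0, 15).astype(np.uint8)
    return q, scale, lo

def dequant4(q, scale, lo):
    """Reconstruct approximate fp weights from 4-bit codes."""
    return q * scale + lo

rng = np.random.default_rng(0)
w = rng.standard_normal(128).astype(np.float32)
q, s, z = quant4(w)
w_hat = dequant4(q, s, z).reshape(-1)
# rounding error is at most half a quantization step per block
max_err = np.abs(w - w_hat).max()
```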

## Download

```shell
# install huggingface_hub
pip install huggingface_hub
```

```shell
# CLI download
huggingface-cli download taobao-mnn/internlm-chat-7b-MNN --local-dir 'path/to/dir'
```

```python
# SDK download
from huggingface_hub import snapshot_download
model_dir = snapshot_download('taobao-mnn/internlm-chat-7b-MNN')
```

```shell
# git clone (ModelScope mirror)
git clone https://www.modelscope.cn/taobao-mnn/internlm-chat-7b-MNN
```

## Usage

```shell
# clone MNN source
git clone https://github.com/alibaba/MNN.git

# compile with LLM support enabled
cd MNN
mkdir build && cd build
cmake .. -DMNN_LOW_MEMORY=true -DMNN_CPU_WEIGHT_DEQUANT_GEMM=true -DMNN_BUILD_LLM=true -DMNN_SUPPORT_TRANSFORMER_FUSE=true
make -j

# run the demo with the downloaded model's config and a prompt file
./llm_demo /path/to/internlm-chat-7b-MNN/config.json prompt.txt
```
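`llm_demo` reads its input from the prompt file passed as the second argument. Assuming the demo's convention of one prompt per line (verify against your MNN version), a sample prompt file can be created like this:

```shell
# create a sample prompt file (assumption: llm_demo reads one prompt per line)
printf 'Hello, please introduce yourself.\n' > prompt.txt
```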

## Documentation

MNN-LLM