How did you port the model?

#1 by KnutJaegersberg

I found this file in one of your GitHub repos, but I wonder: how can we port ChatGLM to the Llama architecture?
Can you do that for ChatGLM3, too?

https://github.com/jiguanglizipao/tgi-chatglm/blob/08b3ca3b11a0351028eec60ce8acf3f3d4686d61/server/text_generation_server/models/custom_modeling/flash_llama_modeling.py
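From skimming that file, my rough understanding is that the weight remapping itself would look something like the sketch below. This is only a guess at the mechanical part: the tensor names are taken from the chatglm3-6b checkpoint layout, the sizes come from its config (hidden_size 4096, 32 heads, 2 KV groups, 128 channels per head), and the function name `split_chatglm_layer` is just something I made up for illustration.

```python
# Hypothetical sketch: split ChatGLM2/3-style fused weights into
# Llama-style per-projection tensors. Names and shapes assume the
# chatglm3-6b checkpoint; this is NOT the repo author's actual script.
import torch

def split_chatglm_layer(state_dict, layer, n_heads=32,
                        n_kv_groups=2, head_dim=128):
    """Split one layer's fused QKV and MLP weights into Llama-style tensors."""
    prefix = f"transformer.encoder.layers.{layer}"
    out = {}

    # Fused QKV rows are laid out as [query | key | value], where the
    # query block has n_heads*head_dim rows and each of key/value has
    # n_kv_groups*head_dim rows (grouped-/multi-query attention).
    qkv = state_dict[f"{prefix}.self_attention.query_key_value.weight"]
    q_size = n_heads * head_dim          # 4096 for chatglm3-6b
    kv_size = n_kv_groups * head_dim     # 256 for chatglm3-6b
    out["q_proj.weight"], out["k_proj.weight"], out["v_proj.weight"] = \
        torch.split(qkv, [q_size, kv_size, kv_size], dim=0)

    # The attention output projection maps straight across.
    out["o_proj.weight"] = state_dict[f"{prefix}.self_attention.dense.weight"]

    # Fused SwiGLU input projection: ChatGLM computes
    # silu(first_half) * second_half, so the first half plays the role
    # of Llama's gate_proj and the second half is up_proj.
    h_to_4h = state_dict[f"{prefix}.mlp.dense_h_to_4h.weight"]
    out["gate_proj.weight"], out["up_proj.weight"] = \
        torch.chunk(h_to_4h, 2, dim=0)
    out["down_proj.weight"] = state_dict[f"{prefix}.mlp.dense_4h_to_h.weight"]
    return out
```

Even if that split is right, I don't think a pure rename would run under stock Llama code, since (as far as I can tell) ChatGLM applies rotary embeddings to only part of each head's dimensions and in a different layout than Llama, and it also keeps a QKV bias. I'm guessing that's exactly why the linked repo ships a custom flash_llama_modeling.py instead of just converting the checkpoint. Is that roughly how the port was done?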
