Support vLLM

#11
by CarrotAI - opened

Hello, LG AI researchers,
First of all, thank you for releasing the model.

Regarding the vLLM project: could you explain how EXAONE differs from Llama? It is said that the EXAONE model could be supported within that implementation.

url : https://github.com/vllm-project/vllm/issues/7236

LG AI Research org

Hello, CarrotAI.
EXAONE-3.0-7.8B-Instruct has the same architecture as Llama-3.0-8B-Instruct without RoPE scaling, but the tokenizer, vocabulary, and chat template differ between the two models.
In config.json, three key names differ from Llama's:
- activation_function vs. hidden_act
- num_layers vs. num_hidden_layers
- layer_norm_epsilon vs. rms_norm_eps
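To illustrate, the three key-name differences above could be handled with a small renaming step when loading an EXAONE config into Llama-style code. This is a minimal sketch; the `translate_config` helper is hypothetical and not part of vLLM or Transformers:

```python
# Map EXAONE config.json key names to their Llama equivalents.
# The three differing keys are the ones listed in the discussion above.
EXAONE_TO_LLAMA_KEYS = {
    "activation_function": "hidden_act",
    "num_layers": "num_hidden_layers",
    "layer_norm_epsilon": "rms_norm_eps",
}

def translate_config(exaone_config: dict) -> dict:
    """Return a copy of the config with EXAONE key names renamed to Llama's.

    Keys not in the mapping (e.g. hidden_size) pass through unchanged.
    Hypothetical helper for illustration only.
    """
    return {EXAONE_TO_LLAMA_KEYS.get(k, k): v for k, v in exaone_config.items()}

# Example with a minimal EXAONE-style config fragment:
cfg = {
    "activation_function": "silu",
    "num_layers": 32,
    "layer_norm_epsilon": 1e-5,
    "hidden_size": 4096,  # shared key, passes through unchanged
}
print(translate_config(cfg))
```

Note that renaming config keys alone is not enough for full support, since the tokenizer, vocabulary, and chat template also differ, as mentioned above.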

Thank you!

CarrotAI changed discussion status to closed
