Updates in EXAONE-3.5

Key Changes

  • RoPE Scaling Parameter: Added to the model configuration to support longer context lengths (see the configuration sketch after this list).
  • Memory Optimization: For the 2.4B model, tie_word_embeddings is set to True for improved memory efficiency.

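Below is a minimal sketch of how these fields surface in the converted, Llama-style configuration, using `transformers.AutoConfig`. The field names follow the standard Hugging Face `LlamaConfig` conventions; the actual values depend on the checkpoint and are not reproduced here.

```python
from transformers import AutoConfig

# Load the Llama-style config of the converted checkpoint.
config = AutoConfig.from_pretrained("datalama/EXAONE-3.5-7.8B-Instruct-Llamafied")

# RoPE scaling is exposed through the `rope_scaling` dict on Llama-style configs.
print(config.max_position_embeddings)   # extended context length
print(config.rope_scaling)              # RoPE scaling parameters, if set

# For the 2.4B variant, input and output embeddings are tied to save memory;
# check the flag on whichever model size you load.
print(config.tie_word_embeddings)
```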
⚠️ Using the original Llamafy script as-is may lead to performance degradation.

To address this, I have updated the script and uploaded the Llamafied version of the model.
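A minimal usage sketch is shown below. It assumes a standard `transformers` setup and that the checkpoint ships a chat template; the generation settings are illustrative only, not recommended defaults.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "datalama/EXAONE-3.5-7.8B-Instruct-Llamafied"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # weights are stored in BF16
    device_map="auto",
)

# Build a chat-formatted prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain what a Llamafied checkpoint is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```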

Special Thanks
