Llama-3-3x8B-multilingual

This model is a Mixture-of-Experts (MoE) merge of the following three models:

  1. namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA-Merged
  2. meta-llama/Meta-Llama-3-8B-Instruct
  3. lightblue/suzume-llama-3-8B-multilingual

The merged model has a context length of 80K tokens, roughly 19.3B parameters, and is stored in BF16.
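
Below is a minimal usage sketch with the Hugging Face transformers library. The repository ID comes from this model card; the chat prompt and generation settings are illustrative assumptions, not settings verified against this merge.

```python
# Minimal sketch: loading the MoE merge with Hugging Face transformers.
# Assumes a recent transformers release and enough GPU memory to hold the
# ~19.3B-parameter model in BF16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Souvik3333/Llama-3-3x8B-multilingual"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",
)

# Llama-3-Instruct-style chat prompt; the exact template is taken from the tokenizer.
messages = [{"role": "user", "content": "Translate 'good morning' into Japanese."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```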
