Llama-3-Swallow-70B-Instruct-v0.1-exl2

Available exl2 quantizations:

2.2bpw (significant quality loss; intended only for testing on 24 GB VRAM)
4.0bpw
6.0bpw
8.0bpw
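A minimal sketch of fetching one of these variants with huggingface_hub is shown below. The repository owner placeholder and the branch names are assumptions (exl2 repositories commonly keep each bpw variant in its own branch), so check the repository's actual branch list before use.

```python
# Hypothetical sketch: download the 4.0bpw variant with huggingface_hub.
# "<repo-owner>" is a placeholder and "4.0bpw" is an assumed branch name;
# verify both against the repository before running.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<repo-owner>/Llama-3-Swallow-70B-Instruct-v0.1-exl2",
    revision="4.0bpw",  # assumed: one branch per quantization level
    local_dir="Llama-3-Swallow-70B-Instruct-v0.1-exl2-4.0bpw",
)
print(f"Model files downloaded to: {local_dir}")
```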

Prompt template

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

あなたは誠実で優秀な日本人のアシスタントです。<|eot_id|><|start_header_id|>user<|end_header_id|>

東京の夜空に打ち上がっている花火の下、向かい合っている燕とラマの温かい物語を書いてください。<|eot_id|><|start_header_id|>assistant<|end_header_id|>
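The template follows the standard Llama 3 Instruct format. Below is a minimal sketch (plain Python, no external dependencies) that assembles the prompt above from a system and a user message; the Japanese example messages are the ones shown in the template, with English translations in the comments.

```python
# Minimal sketch: assemble the Llama 3 Instruct prompt shown above.
def build_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt(
    # "You are a sincere and excellent Japanese assistant."
    "あなたは誠実で優秀な日本人のアシスタントです。",
    # "Under the fireworks lighting up the Tokyo night sky, write a heartwarming
    # story about a swallow and a llama facing each other."
    "東京の夜空に打ち上がっている花火の下、向かい合っている燕とラマの温かい物語を書いてください。",
)
print(prompt)
```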

License

META LLAMA 3 COMMUNITY LICENSE

Citations

@misc{llama3swallow,
    title={Llama 3 Swallow},
    url={https://swallow-llm.github.io/llama3-swallow.en.html},
    author={Swallow LLM},
    year={2024}
}

@article{llama3modelcard,
    title={Llama 3 Model Card},
    author={AI@Meta},
    year={2024},
    url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}