Original model: Llama-3-MAAL-8B-Instruct-v0.1
Original model Git: https://github.com/maum-ai

EXL2 quants of Llama-3-MAAL-8B-Instruct-v0.1, located in the main branch:
- 8.0 bits per weight
- measurement.json
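
Below is a minimal sketch of pulling the 8.0 bpw quant from the main branch and loading it with the exllamav2 library. The repo id and the sampler settings are placeholders for illustration, not values specified by the original authors.

```python
# Minimal sketch: download the EXL2 quant from the main branch and run a
# quick test generation with exllamav2. The repo id below is a placeholder.
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Download the quantized weights (main branch of this repo; repo id is hypothetical).
model_dir = snapshot_download(
    repo_id="your-username/Llama-3-MAAL-8B-Instruct-v0.1-exl2",  # placeholder
    revision="main",
)

# Load the model, allocating the KV cache lazily and auto-splitting across GPUs.
config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

# Simple sampled generation as a smoke test; settings are example values only.
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

print(generator.generate_simple("Hello, my name is", settings, 64))
```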