LLM Model for Bahasa Indonesia Dialog

Sidrap-7B-v2 is one of the best open-source Bahasa Indonesia LLMs available today.

This model is fine-tuned on a carefully curated, high-quality Bahasa Indonesia dataset, using Sidrap-7B-v1 as the base model.

For 4-bit quantization, please take a look at Sidrap-7B-v2-GPTQ-4bit.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("robinsyihab/Sidrap-7B-v2")
tokenizer = AutoTokenizer.from_pretrained("robinsyihab/Sidrap-7B-v2")

messages = [
    {"role": "system", "content": "Anda adalah asisten yang suka membantu, penuh hormat, dan jujur. Selalu jawab semaksimal mungkin, sambil tetap aman. Jawaban Anda tidak boleh berisi konten berbahaya, tidak etis, rasis, seksis, beracun, atau ilegal. Harap pastikan bahwa tanggapan Anda tidak memihak secara sosial dan bersifat positif.\n\
Jika sebuah pertanyaan tidak masuk akal, atau tidak koheren secara faktual, jelaskan alasannya daripada menjawab sesuatu yang tidak benar. Jika Anda tidak mengetahui jawaban atas sebuah pertanyaan, mohon jangan membagikan informasi palsu."},
    {"role": "user", "content": "buatkan kode program, sebuah fungsi untuk memvalidasi alamat email menggunakan regex"}
]

# Render the chat messages into model input IDs using the tokenizer's chat template
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

# Sample up to 1000 new tokens from the prompt
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
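For reference, the kind of function the example prompt requests (validating an email address with a regex) might look like the hand-written sketch below. The pattern is a deliberately simplified illustration, not a full RFC 5322 implementation, and is not output produced by the model:

```python
import re

# Simplified pattern: local part, "@", one or more domain labels, 2+ letter TLD.
# Deliberately looser than RFC 5322; for illustration only.
EMAIL_RE = re.compile(
    r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)*\.[A-Za-z]{2,}$"
)

def is_valid_email(address: str) -> bool:
    """Return True if the address matches the simplified email pattern."""
    return EMAIL_RE.fullmatch(address) is not None

print(is_valid_email("user@example.com"))  # → True
print(is_valid_email("not-an-email"))      # → False
```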

NOTE: To achieve optimal results in Bahasa Indonesia, please provide a system message as the initial input, as demonstrated above.
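Under the hood, apply_chat_template renders the message list into a single prompt string before tokenization. The authoritative template is the one bundled with the tokenizer; the sketch below assumes a Llama-2/Mistral-style [INST] format purely as an illustration of how a system message is folded into the first user turn:

```python
# Illustrative sketch of a Llama-2/Mistral-style chat template. Sidrap-7B-v2's
# actual template is defined by its tokenizer; this format is an assumption.
def format_chat(messages):
    """Render a system + alternating user/assistant message list as one prompt string."""
    system = ""
    parts = []
    for msg in messages:
        if msg["role"] == "system":
            system = f"<<SYS>>\n{msg['content']}\n<</SYS>>\n\n"
        elif msg["role"] == "user":
            # The system prompt is folded into the first user instruction
            parts.append(f"[INST] {system}{msg['content']} [/INST]")
            system = ""
        else:  # assistant turn
            parts.append(f" {msg['content']} ")
    return "<s>" + "".join(parts)

prompt = format_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```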

Limitations and Ethical Considerations

The Sidrap-7B-v2 model, trained mostly on public datasets, lacks a moderation mechanism; please use it with caution.
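Because the model ships without moderation, callers may want to screen generated text themselves before displaying it. Below is a minimal keyword-based filter sketch; the blocklist is a hypothetical placeholder, and real deployments should use a dedicated moderation model or service rather than keyword matching:

```python
def flag_output(text: str, blocklist: set) -> bool:
    """Return True if any blocklisted term appears in the text (case-insensitive)."""
    lowered = text.lower()
    return any(term in lowered for term in blocklist)

# Hypothetical blocklist, for illustration only.
blocked = {"credit card number", "home address"}
print(flag_output("Here is my recipe for fried rice.", blocked))  # → False
```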

The model may still exhibit limitations and biases. It is always recommended to review and evaluate the generated outputs for any potential issues.

We look forward to engaging with the community on ways to make the model reliably respect guardrails, allowing for deployment in environments that require moderated outputs.

Furthermore, please ensure that the usage of this language model is aligned with ethical guidelines, respectful of privacy, and avoids harmful content generation.

Citation

If you use the Sidrap-7B-v2 model in your research or project, please cite it as:

@article{Sidrap,
  title={Sidrap-7B-v2: LLM Model for Bahasa Indonesia Dialog},
  author={Robin Syihab},
  publisher={Hugging Face},
  journal={Hugging Face Repository},
  year={2023}
}