---
license: mit
---
|
Base model: CorticalStack/gemma-7b-ultrachat-sft |
|
|
|
This model is fine-tuned from the base model above and is intended for multi-turn chat use cases.
|
Unlike our AryaBhatta-GemmaOrca model, which is skilled in science and literature and fine-tuned on Orca datasets, this model is fine-tuned on UltraChat datasets. It shows improved performance over AryaBhatta-GemmaOrca on the HellaSwag benchmark and in multi-turn conversations.
|
It is fine-tuned on nine Indian languages (Hindi, Tamil, Punjabi, Bengali, Gujarati, Oriya, Telugu, Kannada, Malayalam) plus English.
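Since the model targets multi-turn chat and is derived from Gemma, prompts generally need to follow Gemma's `<start_of_turn>`/`<end_of_turn>` conversation format. A minimal sketch of building such a prompt (the helper name and example messages are illustrative and not from this card; in practice, `tokenizer.apply_chat_template` from `transformers` handles this for you):

```python
def build_gemma_prompt(turns):
    """Format alternating (role, text) turns using Gemma's chat
    markers, ending with a cue for the model's next reply.

    Roles are "user" and "model", per the Gemma convention.
    """
    parts = []
    for role, text in turns:
        parts.append(f"<start_of_turn>{role}\n{text}<end_of_turn>\n")
    # Leave an open model turn so generation continues as the assistant.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)


# Example: a two-turn conversation continuing into a third model reply.
prompt = build_gemma_prompt([
    ("user", "What is the capital of Tamil Nadu?"),
    ("model", "The capital of Tamil Nadu is Chennai."),
    ("user", "Answer the same question in Tamil."),
])
print(prompt)
```

Feeding the formatted string to the tokenizer and `model.generate` then produces the next assistant turn.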
|
|
|
Benchmarked on the Indic LLM leaderboard:
|
https://huggingface.co/spaces/Cognitive-Lab/indic_llm_leaderboard |
|
|
|
Release post: https://www.linkedin.com/feed/update/urn:li:activity:7184856055565180928 |