---
library_name: transformers
tags:
- Indian-Nuance
license: apache-2.0
datasets:
- ombhojane/smile-india
language:
- en
- hi
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
---
# SMILE for India!

The Smile model understands Indian nuances and gives more accurate responses in the Indian context.
## Model Details

### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]: https://github.com/ombhojane
- Model type: Quantized
- Language(s) (NLP): English, Hindi
- License: Apache-2.0
- Finetuned from model [optional]: Qwen/Qwen2.5-1.5B-Instruct
### Model Sources [optional]
- Repository: https://github.com/ombhojane/smile
- Paper [optional]: On it, buildin'
- Demo [optional]: https://smilecrm.vercel.app/
## How to Get Started with the Model

```python
from transformers import pipeline
import torch

messages = [
    {"role": "user", "content": "give indian tadka ingredients"}
]

# Use the GPU if available
device = 0 if torch.cuda.is_available() else -1
pipe = pipeline("text-generation", model="ombhojane/smile-small", device=device)

# Generate a longer response
generated_text = pipe(messages, max_new_tokens=200, num_return_sequences=1)
print(generated_text)

# Extract the assistant's reply from the chat-style output
print(generated_text[0]["generated_text"][1]["content"])
```
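For reference, a chat-style `text-generation` pipeline returns a list of results, each holding the full conversation under `generated_text`. The snippet below is a minimal sketch of how to pull out the assistant's reply from that structure; the example output dict (and the `extract_reply` helper) are illustrative, not part of the model's API.

```python
# Hypothetical shape of a chat pipeline result (the assistant text is made up
# here purely to illustrate the structure):
generated_text = [
    {
        "generated_text": [
            {"role": "user", "content": "give indian tadka ingredients"},
            {"role": "assistant", "content": "Mustard seeds, cumin, curry leaves, ..."},
        ]
    }
]

def extract_reply(output):
    """Return the last assistant message from a chat-pipeline result."""
    conversation = output[0]["generated_text"]
    return conversation[-1]["content"]

print(extract_reply(generated_text))
```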
## Bias, Risks, and Limitations
The parent model is a small language model (SLM), so it may lag in some specialized areas.

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.