Uploaded model
- Developed by: AashishKumar
- License: apache-2.0
- Finetuned from model: cognitivecomputations/dolphin-2.9.3-llama-3-8b
```python
from transformers import AutoTokenizer, LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained("otonomy/Cn_2_9_3_Hinglish_llama3_7b_8kAk")
tokenizer = AutoTokenizer.from_pretrained("otonomy/Cn_2_9_3_Hinglish_llama3_7b_8kAk")

# Hinglish for "Do you like La La Land?"
prompt = "ky tumhe la la land pasand hai?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate
generate_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask, max_length=30)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="otonomy/Cn_2_9_3_Hinglish_llama3_7b_8kAk")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
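When generating without the pipeline helper, chat-tuned Llama-3 models expect their messages rendered into a specific prompt template (in practice, `tokenizer.apply_chat_template` does this for you). As a minimal sketch of what that rendering looks like, assuming this finetune keeps the stock Meta-Llama-3 chat template, which the card does not confirm:

```python
# Sketch of the Llama-3 chat prompt layout this model is assumed to use.
# The special tokens below come from the base Meta-Llama-3 template;
# a finetune may override them, so treat this as illustrative only.
def build_llama3_prompt(messages):
    """Render a list of {"role", "content"} dicts into a Llama-3-style chat prompt."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Open an assistant turn so the model continues with its answer.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt(
    [{"role": "user", "content": "ky tumhe la la land pasand hai?"}]
)
```

In real use, prefer `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` over hand-building the string, since it reads the template shipped with the checkpoint.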
Model tree for otonomy/Cn_2_9_3_Hinglish_llama3_7b_8kAk
- Base model: meta-llama/Meta-Llama-3-8B