# duyhv1411/Llama-1.1B-qlora-ft

This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0, adapted with QLoRA to improve its chat capabilities on general-domain tasks.
## ⚡ Quantized GGUF

## How to use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

# TinyLlama chat format: a user turn closed with </s>, followed by the assistant tag
prompt = """<|user|>
Hello, how are you?</s>
<|assistant|>
"""

# Run the instruction-tuned model
pipe = pipeline(
    task="text-generation",
    model="duyhv1411/Llama-1.1B-qlora-ft",
    return_full_text=False,
)
print(pipe(prompt)[0]["generated_text"])
```
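For multi-turn conversations, the prompt string above can be assembled programmatically. Below is a minimal sketch, assuming the model keeps the Zephyr-style chat template of its TinyLlama-Chat base; `build_prompt` is a hypothetical helper written for illustration, not part of this model or of `transformers`.

```python
def build_prompt(messages):
    """Format a list of {"role", "content"} dicts into the chat template.

    Assumes the Zephyr-style template used by TinyLlama-1.1B-Chat-v1.0.
    """
    parts = []
    for msg in messages:
        # Each turn is tagged with its role and closed with </s>
        parts.append(f"<|{msg['role']}|>\n{msg['content']}</s>")
    # End with the assistant tag so the model generates the next reply
    parts.append("<|assistant|>\n")
    return "\n".join(parts)

prompt = build_prompt([{"role": "user", "content": "Hello, how are you?"}])
print(prompt)
```

In practice, `tokenizer.apply_chat_template` from `transformers` builds this string directly from the template bundled with the tokenizer, which is less error-prone than hand-formatting.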