# Model Card for llava-1.5-7b-hf-ft-mix-vsft

This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

The base model is a vision-language model, so the text-only `text-generation` pipeline does not support it; inference goes through the LLaVA processor with an image and a chat-formatted prompt. A minimal sketch (the image URL and question are placeholders, and the processor is assumed to be bundled with this repository; otherwise load it from `llava-hf/llava-1.5-7b-hf`):

```python
import requests, torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "RichMiguel/llava-1.5-7b-hf-ft-mix-vsft"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "What is shown in this image?"}]}]
image = Image.open(requests.get("https://llava-vl.github.io/static/images/view.jpg", stream=True).raw)
inputs = processor(images=image, text=processor.apply_chat_template(messages, add_generation_prompt=True), return_tensors="pt").to("cuda", torch.float16)
print(processor.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```
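
With LLaVA-style models, every image passed to the processor must be matched by an `<image>` placeholder in the prompt; `apply_chat_template` inserts one for each `{"type": "image"}` entry in the message, so the placeholder rarely needs to be written by hand.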

## Training procedure

This model was trained with supervised fine-tuning (SFT).
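
The card does not include the training script. As a rough sketch of what a TRL 0.12-style vision SFT run could look like: the dataset (the "mix-vsft" suffix suggests an image-chat mix such as `HuggingFaceH4/llava-instruct-mix-vsft`), the collator, and the hyperparameters below are illustrative assumptions, not the author's recorded setup.

```python
# Illustrative sketch only: dataset, collator, and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoProcessor, LlavaForConditionalGeneration
from trl import SFTConfig, SFTTrainer

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)
dataset = load_dataset("HuggingFaceH4/llava-instruct-mix-vsft", split="train")

def collate_fn(examples):
    # Render each conversation with the LLaVA chat template and batch text + images.
    texts = [processor.apply_chat_template(ex["messages"], tokenize=False) for ex in examples]
    images = [ex["images"] for ex in examples]
    batch = processor(images=images, text=texts, return_tensors="pt", padding=True)
    labels = batch["input_ids"].clone()
    labels[labels == processor.tokenizer.pad_token_id] = -100  # no loss on padding
    batch["labels"] = labels
    return batch

training_args = SFTConfig(
    output_dir="llava-1.5-7b-hf-ft-mix-vsft",
    remove_unused_columns=False,                    # keep the "images" column for the collator
    dataset_kwargs={"skip_prepare_dataset": True},  # the collator does all preprocessing
)
trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=collate_fn,
    processing_class=processor.tokenizer,
)
trainer.train()
```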

### Framework versions

- TRL: 0.12.2
- Transformers: 4.46.3
- PyTorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.20.3
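
A matching environment can presumably be pinned along these lines (the `--index-url` for the CUDA 12.1 PyTorch wheels is platform-dependent):

```shell
pip install trl==0.12.2 transformers==4.46.3 datasets==3.2.0 tokenizers==0.20.3
pip install torch==2.5.1 --index-url https://download.pytorch.org/whl/cu121
```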

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```