---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
license: other
tags:
  - llama-factory
  - full
  - generated_from_trainer
pipeline_tag: image-text-to-text
model-index:
  - name: LongWriter-V-7B
    results: []
---

LongWriter-V-7B

This model is a fine-tuned version of Qwen/Qwen2.5-VL-7B-Instruct on the LongWriter-V-22K dataset. It is designed to generate ultra-long, high-fidelity text outputs and is particularly effective for tasks such as producing lengthy lecture scripts from a series of presentation slides or writing long-form descriptions grounded in visual input.

Model description

LongWriter-V-7B is a vision-language model fine-tuned to generate extended text outputs from image and text input. It builds on the Qwen2.5-VL-7B-Instruct base model to maintain high-fidelity generation even for outputs exceeding several thousand words, and excels at tasks that require comprehensive, detailed text grounded in visual context. It was trained on LongWriter-V-22K, a dataset built for ultra-long, high-fidelity vision-language generation.
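A minimal inference sketch is shown below, assuming the standard Qwen2.5-VL loading path in recent transformers releases (≥ 4.49) together with the qwen-vl-utils helper. The repo id, image path, and generation budget are illustrative placeholders rather than part of this card:

```python
# Minimal inference sketch for LongWriter-V-7B via the Qwen2.5-VL path in transformers.
# Assumptions: transformers >= 4.49, qwen-vl-utils installed; repo id and image path are placeholders.
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "THU-KEG/LongWriter-V-7B"  # replace with the actual Hub repo id or a local path
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "slide_page_01.png"},  # placeholder image
            {"type": "text", "text": "Write a detailed lecture script for this slide."},
        ],
    }
]

# Build the chat prompt and pack the image through the processor.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt"
).to(model.device)

# Long-form generation: allow a generous new-token budget (value here is illustrative).
output_ids = model.generate(**inputs, max_new_tokens=8192)
generated = processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(generated)
```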

Intended uses & limitations

Intended Uses:

  • Generating long-form text outputs (e.g., lecture scripts, reports, summaries) from image and text prompts.
  • Summarizing long documents accompanied by visual elements.
  • Creating detailed descriptions from visual scenes.

Limitations:

  • The model's performance may degrade with exceptionally long prompts or complex visual inputs.
  • The model's factual knowledge is bounded by what is embedded in its base model and its fine-tuning data (LongWriter-V-22K).
  • The model may generate outputs that are not entirely factually accurate, or that contain hallucinated information. Careful review of outputs is necessary.

Training and evaluation data

The model was trained on the LongWriter-V-22K dataset. Evaluation was performed using the MMLongBench-Write and LongWrite-V-Ruler benchmarks.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • total_eval_batch_size: 64
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 3
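For reference, the sketch below shows how these reported values would map onto Hugging Face TrainingArguments. Only the values listed above come from this card; the output directory and the bf16 flag are illustrative assumptions, not the authors' exact configuration:

```python
# Illustrative mapping of the reported hyperparameters onto transformers.TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="longwriter-v-7b-sft",   # placeholder, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,      # 8 GPUs x batch 1 x 2 steps = total train batch size 16
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
    bf16=True,                          # assumption: mixed-precision training, not stated in the card
)
```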

Training results

Framework versions

  • Transformers 4.49.0.dev0
  • Pytorch 2.5.1+cu124
  • Datasets 3.2.0
  • Tokenizers 0.21.0

Citation

@misc{tu2025longwriterv,
      title={LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models}, 
      author={Shangqing Tu and Yucheng Wang and Daniel Zhang-Li and Yushi Bai and Jifan Yu and Yuhao Wu and Lei Hou and Huiqin Liu and Zhiyuan Liu and Bin Xu and Juanzi Li},
      year={2025},
      eprint={2502.14834},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2502.14834}, 
}