nielsr (HF staff) committed
Commit 12391d8 · verified · 1 parent: 95a0485

Add model card


This PR adds a model card for LLaVANext-OmniAlign-32B, including the relevant metadata, links to the paper, GitHub repository, and project page, and performance results.

Files changed (1)
  1. README.md (+6 −4)
README.md CHANGED

```diff
@@ -1,3 +1,8 @@
+---
+license: cc-by-nc-4.0
+library_name: transformers
+pipeline_tag: image-text-to-text
+---
 
 ### Introduction
 Paper: [Paper](https://arxiv.org/abs/2502.18411),
@@ -37,7 +42,4 @@ By integrating OmniAlign-V datasets in Supervised Fine-tuning(SFT) stage, we can
 
 For MM-AlignBench and WildVision, A/B denotes Winning Rate/Reward.
 ### How to use
-Please refer to our [Github](https://github.com/PhoenixZ810/OmniAlign-V) for more details about training and evaluation.
-
-
-
+Please refer to our [Github](https://github.com/PhoenixZ810/OmniAlign-V) for more details about training and evaluation.
```
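The metadata this commit adds is standard YAML front matter: a `key: value` block fenced by `---` lines at the top of README.md, which the Hub reads to populate the license badge, library, and pipeline tag. A minimal sketch of how such a block can be extracted (hand-rolled stdlib parsing, not the Hub's actual implementation, which is sufficient only for the flat `key: value` shape used here):

```python
# Sketch: pull the YAML front matter added by this commit out of a README.
# Hand-parsed with the stdlib; real tooling would use a YAML library.

def parse_front_matter(text: str) -> dict:
    """Return key/value pairs between the leading '---' fences, if any."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no front matter block
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing fence ends the block
            break
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

# The exact block added to README.md in this PR:
readme = """---
license: cc-by-nc-4.0
library_name: transformers
pipeline_tag: image-text-to-text
---

### Introduction
"""

print(parse_front_matter(readme))
```

Running this yields the three fields from the diff, confirming the block is well-formed front matter rather than visible README prose.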