RaushanTurganbay (HF staff) committed
Commit
3f83680
•
1 Parent(s): d814f26

Update README.md


Let's add a model card

Files changed (1)
  1. README.md +107 -179
README.md CHANGED
@@ -3,197 +3,125 @@ library_name: transformers
3
  tags: []
4
  ---
5
 
6
- # Model Card for Model ID
7
-
8
- <!-- Provide a quick summary of what the model is/does. -->
9
-
10
 
11
 
12
  ## Model Details
13
 
14
- ### Model Description
15
-
16
- <!-- Provide a longer summary of what this model is. -->
17
-
18
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
19
-
20
- - **Developed by:** [More Information Needed]
21
- - **Funded by [optional]:** [More Information Needed]
22
- - **Shared by [optional]:** [More Information Needed]
23
- - **Model type:** [More Information Needed]
24
- - **Language(s) (NLP):** [More Information Needed]
25
- - **License:** [More Information Needed]
26
- - **Finetuned from model [optional]:** [More Information Needed]
27
-
28
- ### Model Sources [optional]
29
-
30
- <!-- Provide the basic links for the model. -->
31
-
32
- - **Repository:** [More Information Needed]
33
- - **Paper [optional]:** [More Information Needed]
34
- - **Demo [optional]:** [More Information Needed]
35
-
36
- ## Uses
37
-
38
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
39
-
40
- ### Direct Use
41
 
42
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
 
 
43
 
44
- [More Information Needed]
 
 
45
 
46
- ### Downstream Use [optional]
 
47
 
48
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
49
 
50
- [More Information Needed]

51
 
52
- ### Out-of-Scope Use
53
-
54
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
55
-
56
- [More Information Needed]
57
-
58
- ## Bias, Risks, and Limitations
59
-
60
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
61
-
62
- [More Information Needed]
63
-
64
- ### Recommendations
65
-
66
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
67
-
68
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
69
 
70
  ## How to Get Started with the Model
71
 
72
  Use the code below to get started with the model.
73
 
74
- [More Information Needed]
75
-
76
- ## Training Details
77
-
78
- ### Training Data
79
-
80
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
81
-
82
- [More Information Needed]
83
-
84
- ### Training Procedure
85
-
86
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
87
-
88
- #### Preprocessing [optional]
89
-
90
- [More Information Needed]
91
-
92
-
93
- #### Training Hyperparameters
94
-
95
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
96
-
97
- #### Speeds, Sizes, Times [optional]
98
-
99
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
100
-
101
- [More Information Needed]
102
-
103
- ## Evaluation
104
-
105
- <!-- This section describes the evaluation protocols and provides the results. -->
106
-
107
- ### Testing Data, Factors & Metrics
108
-
109
- #### Testing Data
110
-
111
- <!-- This should link to a Dataset Card if possible. -->
112
-
113
- [More Information Needed]
114
-
115
- #### Factors
116
-
117
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
118
-
119
- [More Information Needed]
120
-
121
- #### Metrics
122
-
123
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
124
-
125
- [More Information Needed]
126
-
127
- ### Results
128
-
129
- [More Information Needed]
130
-
131
- #### Summary
132
-
133
-
134
-
135
- ## Model Examination [optional]
136
-
137
- <!-- Relevant interpretability work for the model goes here -->
138
-
139
- [More Information Needed]
140
-
141
- ## Environmental Impact
142
-
143
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
144
-
145
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
146
-
147
- - **Hardware Type:** [More Information Needed]
148
- - **Hours used:** [More Information Needed]
149
- - **Cloud Provider:** [More Information Needed]
150
- - **Compute Region:** [More Information Needed]
151
- - **Carbon Emitted:** [More Information Needed]
152
-
153
- ## Technical Specifications [optional]
154
-
155
- ### Model Architecture and Objective
156
-
157
- [More Information Needed]
158
-
159
- ### Compute Infrastructure
160
-
161
- [More Information Needed]
162
-
163
- #### Hardware
164
-
165
- [More Information Needed]
166
-
167
- #### Software
168
-
169
- [More Information Needed]
170
-
171
- ## Citation [optional]
172
-
173
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
174
-
175
- **BibTeX:**
176
-
177
- [More Information Needed]
178
-
179
- **APA:**
180
-
181
- [More Information Needed]
182
-
183
- ## Glossary [optional]
184
-
185
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
186
-
187
- [More Information Needed]
188
-
189
- ## More Information [optional]
190
-
191
- [More Information Needed]
192
-
193
- ## Model Card Authors [optional]
194
-
195
- [More Information Needed]
196
-
197
- ## Model Card Contact
198
-
199
- [More Information Needed]
 
3
  tags: []
4
  ---
5
 
6
+ # Model Card for Video-LLaVA

7
 
8
 
9
  ## Model Details
10

11
 
12
+ **Model type:**
13
+ Video-LLaVA is an open-source multimodal model trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture.
14
+ Base LLM: [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5)
15
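For a quick check of the backbone, the composite configuration can be inspected without downloading the weights. A minimal sketch, assuming the `LanguageBind/Video-LLaVA-7B-hf` checkpoint used in the example below and the Llava-style `text_config` layout in `transformers`:

```python
from transformers import AutoConfig

# Load only the config of the checkpoint used in the usage example below.
config = AutoConfig.from_pretrained("LanguageBind/Video-LLaVA-7B-hf")

# Top-level model type, plus the auto-regressive LLM backbone settings
# (assumed to live under `text_config`, as in other Llava-style models).
print(config.model_type)
print(config.text_config.model_type)
```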
 
16
+ **Model Description:**
17
+ The model can generate from prompts that interleave images and videos, despite the absence of image-video pairs in the training dataset. Video-LLaVA uses a visual encoder trained for a unified visual representation through alignment before projection.
18
+ Extensive experiments demonstrate the complementarity of modalities, showcasing significant superiority when compared to models specifically designed for either images or videos.
19
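To see how these pieces fit together (the visual encoder, the projection that aligns visual features with the LLM embedding space, and the language model), the top-level submodules can be listed. A minimal sketch; the exact module names depend on the `transformers` implementation and are not asserted here:

```python
from transformers import VideoLlavaForConditionalGeneration

# Instantiate the model and print its top-level submodules; these are expected
# to include the visual tower(s), a multimodal projector, and the language model.
model = VideoLlavaForConditionalGeneration.from_pretrained("LanguageBind/Video-LLaVA-7B-hf")
for name, module in model.named_children():
    print(name, type(module).__name__)
```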
 
20
+ **Paper or resources for more information:**
21
+ https://github.com/PKU-YuanGroup/Video-LLaVA
22
 
 
23
 
24
+ ## 🏗️ Training Dataset
25
+ - The image pretraining dataset is from [LLaVA](https://github.com/haotian-liu/LLaVA).
26
+ - The image tuning dataset is from [LLaVA](https://github.com/haotian-liu/LLaVA).
27
+ - The video pretraining dataset is from [Valley](https://github.com/RupertLuo/Valley).
28
+ - The video tuning dataset is from [Video-ChatGPT](https://github.com/mbzuai-oryx/Video-ChatGPT).
29

30
 
31
  ## How to Get Started with the Model
32
 
33
  Use the code below to get started with the model.
34
 
35
+ ```python
+ from PIL import Image
+ import requests
+ import numpy as np
+ import av
+ from huggingface_hub import hf_hub_download
+ from transformers import VideoLlavaProcessor, VideoLlavaForConditionalGeneration
+
+ def read_video_pyav(container, indices):
+     '''
+     Decode the video with PyAV decoder.
+
+     Args:
+         container (av.container.input.InputContainer): PyAV container.
+         indices (List[int]): List of frame indices to decode.
+
+     Returns:
+         np.ndarray: np array of decoded frames of shape (num_frames, height, width, 3).
+     '''
+     frames = []
+     container.seek(0)
+     start_index = indices[0]
+     end_index = indices[-1]
+     for i, frame in enumerate(container.decode(video=0)):
+         if i > end_index:
+             break
+         if i >= start_index and i in indices:
+             frames.append(frame)
+     return np.stack([x.to_ndarray(format="rgb24") for x in frames])
+
+ model = VideoLlavaForConditionalGeneration.from_pretrained("LanguageBind/Video-LLaVA-7B-hf")
+ processor = VideoLlavaProcessor.from_pretrained("LanguageBind/Video-LLaVA-7B-hf")
+
+ prompt = "USER: <video>Why is this video funny? ASSISTANT:"
+ video_path = hf_hub_download(repo_id="raushan-testing-hf/videos-test", filename="sample_demo_1.mp4", repo_type="dataset")
+ container = av.open(video_path)
+
+ # sample uniformly 8 frames from the video
+ total_frames = container.streams.video[0].frames
+ indices = np.arange(0, total_frames, total_frames / 8).astype(int)
+ clip = read_video_pyav(container, indices)
+
+ inputs = processor(text=prompt, videos=clip, return_tensors="pt")
+
+ # Generate
+ generate_ids = model.generate(**inputs, max_length=80)
+ print(processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
+ >>> 'USER: Why is this video funny? ASSISTANT: The video is funny because the baby is sitting on the bed and reading a book, which is an unusual and amusing sight.Ъ'
+
+ # Generate from images and videos mix
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+ image = Image.open(requests.get(url, stream=True).raw)
+ prompt = [
+     "USER: <image> How many cats are there in the image? ASSISTANT:",
+     "USER: <video>Why is this video funny? ASSISTANT:"
+ ]
+ inputs = processor(text=prompt, images=image, videos=clip, padding=True, return_tensors="pt")
+
+ # Generate
+ generate_ids = model.generate(**inputs, max_length=50)
+ print(processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True))
+ >>> ['USER: How many cats are there in the image? ASSISTANT: There are two cats in the image.\nHow many cats are sleeping on the couch?\nThere are', 'USER: Why is this video funny? ASSISTANT: The video is funny because the baby is sitting on the bed and reading a book, which is an unusual and amusing']
+ ```
98
+
99
+
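The snippet above loads the checkpoint in full precision on the default device. As an optional optimization sketch (not from the original card, but using standard `from_pretrained` arguments), the model can be loaded in half precision with automatic device placement; `accelerate` is assumed to be installed for `device_map="auto"`:

```python
import torch
from transformers import VideoLlavaForConditionalGeneration, VideoLlavaProcessor

# Half-precision weights with automatic device placement (requires `accelerate`).
model = VideoLlavaForConditionalGeneration.from_pretrained(
    "LanguageBind/Video-LLaVA-7B-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
processor = VideoLlavaProcessor.from_pretrained("LanguageBind/Video-LLaVA-7B-hf")

# Remember to move processed inputs to the model's device before generate(), e.g.:
# inputs = processor(text=prompt, videos=clip, return_tensors="pt").to(model.device)
```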
100
+ ## 👍 Acknowledgement
101
+ * [LLaVA](https://github.com/haotian-liu/LLaVA): the codebase we built upon, an efficient large language and vision assistant.
102
+ * [Video-ChatGPT](https://github.com/mbzuai-oryx/Video-ChatGPT): thanks for contributing the evaluation code and dataset.
103
+
104
+ ## 🔒 License
105
+ * The majority of this project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/PKU-YuanGroup/Video-LLaVA/blob/main/LICENSE) file.
106
+ * The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
107
+
108
+ ## ✍️ Citation
109
+ If you find our paper and code useful in your research, please consider giving a star :star: and a citation :pencil:.
110
+
111
+ ```BibTeX
112
+ @article{lin2023video,
113
+ title={Video-LLaVA: Learning United Visual Representation by Alignment Before Projection},
114
+ author={Lin, Bin and Zhu, Bin and Ye, Yang and Ning, Munan and Jin, Peng and Yuan, Li},
115
+ journal={arXiv preprint arXiv:2311.10122},
116
+ year={2023}
117
+ }
118
+ ```
119
+
120
+ ```BibTeX
121
+ @article{zhu2023languagebind,
122
+ title={LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment},
123
+ author={Zhu, Bin and Lin, Bin and Ning, Munan and Yan, Yang and Cui, Jiaxi and Wang, HongFa and Pang, Yatian and Jiang, Wenhao and Zhang, Junwu and Li, Zongwei and others},
124
+ journal={arXiv preprint arXiv:2310.01852},
125
+ year={2023}
126
+ }
127
+ ```