Update README.md
README.md CHANGED
@@ -12,6 +12,7 @@ tags:
 - vision-language
 - llm
 - lmm
+pipeline_tag: image-text-to-text
 ---
 <h2 align="center"> <a href="https://arxiv.org/abs/2402.14289">TinyLLaVA: A Framework of Small-scale Large Multimodal Models</a>
 
@@ -365,5 +366,4 @@ If you find our paper and code useful in your research, please consider giving a
 
 ## ❤️ Community efforts
 * Our codebase is built upon the [LLaVA](https://github.com/haotian-liu/LLaVA) project. Great work!
-* Our project uses data from the [ShareGPT4V](https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4V) project. Great work!
-
+* Our project uses data from the [ShareGPT4V](https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4V) project. Great work!
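The substantive change is the new `pipeline_tag: image-text-to-text` entry in the model card front matter, which tells the Hugging Face Hub which task to associate with this checkpoint. As a rough illustration of what that tag advertises, here is a minimal sketch using the `transformers` image-text-to-text pipeline; the model id and image URL are placeholders, and whether a given TinyLLaVA checkpoint actually loads through this generic pipeline depends on its custom remote code, so treat this as an assumption rather than a confirmed usage path.

```python
# Minimal sketch (assumptions: a transformers release that ships the
# "image-text-to-text" pipeline, a placeholder model id, and a checkpoint
# whose remote code is compatible with this generic API).
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",            # the task advertised by pipeline_tag
    model="<tinyllava-checkpoint>",  # placeholder, not a confirmed repo id
    trust_remote_code=True,          # many TinyLLaVA checkpoints ship custom code
)

result = pipe(
    images="https://example.com/sample.jpg",  # placeholder image URL
    text="Describe this image.",
)
print(result)
```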