Update README.md
README.md CHANGED
@@ -9,9 +9,10 @@ datasets:
 
 # Model Card for Llava-mnist
 
-
+Llava-mnist is a simple example of a Vision and Language model using the LLaVA architecture, trained on the MNIST dataset.
 
-
+
+You can use this model (just a one-linear-layer vision encoder) alongside [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
 
 ## Training Details
 
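The added description says the vision encoder is just one linear layer. A minimal sketch of that idea, assuming a flattened 28×28 MNIST input and Llama-3.1-8B's 4096-dimensional hidden size; the class name and wiring are illustrative assumptions, not the repository's actual code:

```python
import torch
import torch.nn as nn

class MnistVisionEncoder(nn.Module):
    """Hypothetical one-linear-layer vision encoder (LLaVA-style).

    784 = 28*28 flattened MNIST pixels; 4096 = assumed hidden size
    of Meta-Llama-3.1-8B-Instruct.
    """

    def __init__(self, image_dim: int = 28 * 28, hidden_dim: int = 4096):
        super().__init__()
        # The entire "vision tower" is a single linear projection.
        self.proj = nn.Linear(image_dim, hidden_dim)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, 1, 28, 28) -> (batch, 1, hidden_dim)
        # Each image becomes one soft token in the LLM's embedding space,
        # to be spliced into the prompt where the image placeholder sits.
        flat = images.flatten(start_dim=1)
        return self.proj(flat).unsqueeze(1)

encoder = MnistVisionEncoder()
dummy = torch.rand(2, 1, 28, 28)   # two fake MNIST images
image_embeds = encoder(dummy)      # shape: (2, 1, 4096)
```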