speed committed
Commit 96d8571
Parent: 954360e

Update README.md

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -9,9 +9,10 @@ datasets:
 
 # Model Card for Llava-mnist
 
-This model is a simple linear layer vision encoder trained on the MNIST dataset, following the LLaVA training approach.
+Llava-mnist is a simple example of a vision-and-language model using the LLaVA architecture, trained on the MNIST dataset.
 
-You can use this model alongside [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
+
+You can use this model (a vision encoder that is just one linear layer) alongside [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
 
 ## Training Details
 
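The diff describes the vision encoder as a single linear layer. A minimal sketch of that idea, assuming each MNIST image is flattened to 784 pixels and projected into Llama-3.1-8B's 4096-dimensional token-embedding space — the names, shapes, and initialization here are illustrative assumptions, not the repository's actual code:

```python
# Sketch of a one-linear-layer "vision encoder" in the spirit of Llava-mnist.
# Assumption: one image becomes one embedding token for the LLM.
import numpy as np

rng = np.random.default_rng(0)
IMAGE_DIM = 28 * 28   # flattened MNIST image
EMBED_DIM = 4096      # assumed Llama-3.1-8B hidden size

# The entire encoder: one weight matrix and one bias vector.
W = rng.standard_normal((IMAGE_DIM, EMBED_DIM)) * 0.02
b = np.zeros(EMBED_DIM)

def encode(images: np.ndarray) -> np.ndarray:
    """Map a batch of (N, 28, 28) images to (N, 1, EMBED_DIM) image tokens."""
    flat = images.reshape(images.shape[0], -1)   # (N, 784)
    return (flat @ W + b)[:, None, :]            # one token per image

tokens = encode(np.zeros((2, 28, 28)))
print(tokens.shape)  # (2, 1, 4096)
```

In the LLaVA-style setup, these projected image tokens would be concatenated with the text-token embeddings before being fed to the language model.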