UncleFish committed
Commit f079e7b
1 Parent(s): 3c9eaa5

make it pretty

Files changed (1):
  1. README.md (+1 -1)
README.md CHANGED
@@ -17,7 +17,7 @@ BLIP-3 consists of 3 models: a CLIP-like image encoder, a VL connector, and a la
 
 # How to use
 
-We require use the development version (`"4.41.0.dev0"`) of the `transformers` library. To get it, as of 05/07/2024, one can use `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`.
+> We require use the development version (`"4.41.0.dev0"`) of the `transformers` library. To get it, as of 05/07/2024, one can use `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers.`
 
 ```python
 from transformers import AutoModelForVision2Seq, AutoTokenizer, AutoImageProcessor, StoppingCriteria
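
# --- Usage sketch (context, not part of the diff above) ----------------------
# A minimal sketch of how the imports shown in the changed README snippet would
# typically be used to load the model. The repository id below is an assumed
# placeholder, and trust_remote_code=True is an assumption based on the model
# shipping custom modeling code; substitute the values from the actual model card.
from transformers import AutoModelForVision2Seq, AutoTokenizer, AutoImageProcessor

model_id = "Salesforce/xgen-mm-phi3-mini-instruct-r-v1"  # assumed placeholder id

# Load the vision-language model plus its tokenizer and image processor.
model = AutoModelForVision2Seq.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, use_fast=False)
image_processor = AutoImageProcessor.from_pretrained(model_id, trust_remote_code=True)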