make it pretty
README.md
CHANGED
````diff
@@ -17,7 +17,7 @@ BLIP-3 consists of 3 models: a CLIP-like image encoder, a VL connector, and a la
 
 # How to use
 
-We require the development version (`"4.41.0.dev0"`) of the `transformers` library. To get it, as of 05/07/2024, one can use `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`
+> We require the development version (`"4.41.0.dev0"`) of the `transformers` library. To get it, as of 05/07/2024, one can use `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`.
 
 ```python
 from transformers import AutoModelForVision2Seq, AutoTokenizer, AutoImageProcessor, StoppingCriteria
````
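The snippet in the hunk imports `StoppingCriteria` but is cut off before showing how it is used. A minimal sketch of the usual pattern, stopping generation when a chosen EOS token sequence appears at the end of the generated ids (the class name `EosListStoppingCriteria` and the default token id are assumptions for illustration, not taken from this commit):

```python
import torch
from transformers import StoppingCriteria


class EosListStoppingCriteria(StoppingCriteria):
    """Stop generation once the last generated tokens match `eos_sequence`.

    The default id below is a placeholder; the real value depends on the
    model's tokenizer (look up the id of its end-of-turn token).
    """

    def __init__(self, eos_sequence=[32007]):
        self.eos_sequence = eos_sequence

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        # Compare the tail of every sequence in the batch against eos_sequence.
        last_ids = input_ids[:, -len(self.eos_sequence):].tolist()
        return self.eos_sequence in last_ids
```

An instance of this class would then be passed to `model.generate(..., stopping_criteria=[EosListStoppingCriteria()])`.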