Snehhaa23 committed on
Commit 07ebcfb · 1 Parent(s): 66911a5

Update README.md

Files changed (1): README.md (+1 −14)
README.md CHANGED
@@ -17,12 +17,6 @@ widget:
 This is an image captioning model trained by @ydshieh in [flax](https://github.com/huggingface/transformers/tree/main/examples/flax/image-captioning); this is the PyTorch version of [this checkpoint](https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts).
 
 
-# The Illustrated Image Captioning using transformers
-
-![](https://ankur3107.github.io/assets/images/vision-encoder-decoder.png)
-
-* https://ankur3107.github.io/blogs/the-illustrated-image-captioning-using-transformers/
-
 
 # Sample running code
 
@@ -63,7 +57,7 @@ def predict_step(image_paths):
   return preds
 
 
-predict_step(['doctor.e16ba4e4.jpg']) # ['a woman in a hospital bed with a woman in a hospital bed']
+predict_step(['butterfly.jpg'])
 
 ```
 
@@ -81,10 +75,3 @@ image_to_text("https://ankur3107.github.io/assets/images/image-captioning-example.png")
 
 
 ```
-
-
-# Contact for any help
-* https://huggingface.co/ankur310794
-* https://twitter.com/ankur310794
-* http://github.com/ankur3107
-* https://www.linkedin.com/in/ankur310794
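
The hunks above show only fragments of the README's two usage snippets. As a minimal sketch of the pipeline-based snippet the last hunk refers to (the checkpoint id `nlpconnect/vit-gpt2-image-captioning` and the helper name `build_captioner` are assumptions, not taken from this diff; substitute this repository's own Hub id):

```python
# Hedged sketch of the image-to-text pipeline usage referenced in the diff.
# The model id below is a placeholder assumption -- replace it with this
# repository's own Hugging Face Hub id.
from transformers import pipeline


def build_captioner(model_id="nlpconnect/vit-gpt2-image-captioning"):
    """Return an image-to-text pipeline for the given ViT-GPT2 checkpoint."""
    return pipeline("image-to-text", model=model_id)


if __name__ == "__main__":
    image_to_text = build_captioner()
    # Accepts a local path or URL and returns a list of
    # {'generated_text': ...} dicts, one per input image.
    print(image_to_text("butterfly.jpg"))
```

The pipeline wraps the manual `predict_step` flow (image preprocessing, `model.generate`, token decoding) shown earlier in the README, so either snippet should produce the same captions for the same checkpoint.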