---
tags:
- vision
- image-to-text
- endpoints-template
inference: false
pipeline_tag: image-to-text
base_model: Salesforce/blip-image-captioning-base
library_name: generic
---

# Fork of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) for an `image-to-text` Inference Endpoint

> Inspired by https://huggingface.co/sergeipetrov/blip_captioning

This repository implements a `custom` task for `image-to-text` for 🤗 Inference Endpoints to allow image captioning. The code for the customized pipeline is in `handler.py`; a sketch of its general shape is shown at the end of this card.

To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `handler.py` file is used.

### Expected request payload

The image to be captioned, sent as raw binary data.

#### cURL

```
curl URL \
  -X POST \
  --data-binary @car.png \
  -H "Content-Type: image/png"
```

#### Python

```python
import requests

# ENDPOINT_URL is the URL of your deployed Inference Endpoint
caption = requests.post(ENDPOINT_URL, headers={"Content-Type": "image/png"}, data=open("car.png", "rb").read()).json()
```
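
### Handler sketch

The handler follows the `EndpointHandler` convention used by 🤗 Inference Endpoints: a class whose `__init__` loads the model once and whose `__call__` serves one request. Below is a minimal sketch of what such a `handler.py` could look like, not the actual file in this repository; in particular, the assumption that the image arrives under `data["inputs"]` (as raw bytes or an already-decoded PIL image, depending on the toolkit version) and the `{"caption": ...}` response shape are illustrative.

```python
import io
from typing import Any, Dict

import torch
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points at this repository, so the endpoint loads the
        # forked weights instead of pulling them from the Hub again.
        self.processor = BlipProcessor.from_pretrained(path)
        self.model = BlipForConditionalGeneration.from_pretrained(path).to(device)
        self.model.eval()

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        inputs = data.get("inputs", data)
        # Assumption: depending on the toolkit version, "inputs" may be
        # raw image bytes or an already-decoded PIL image.
        if isinstance(inputs, (bytes, bytearray)):
            inputs = Image.open(io.BytesIO(inputs))
        pixel_values = self.processor(images=inputs, return_tensors="pt").to(device)
        with torch.no_grad():
            output_ids = self.model.generate(**pixel_values, max_new_tokens=30)
        caption = self.processor.decode(output_ids[0], skip_special_tokens=True)
        return {"caption": caption}  # hypothetical response shape
```

With a handler along these lines deployed, the cURL and Python calls above would return a JSON object containing the generated caption.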