<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# OWL-ViT | |
## Overview | |
The OWL-ViT (short for Vision Transformer for Open-World Localization) was proposed in [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. OWL-ViT is an open-vocabulary object detection network trained on a variety of (image, text) pairs. It can be used to query an image with one or multiple text queries to search for and detect target objects described in text.

The abstract from the paper is the following:

*Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is relatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yield consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub.*
## Usage

OWL-ViT is a zero-shot text-conditioned object detection model. OWL-ViT uses [CLIP](clip) as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection.

[`OwlViTFeatureExtractor`] can be used to resize (or rescale) and normalize images for the model, and [`CLIPTokenizer`] is used to encode the text. [`OwlViTProcessor`] wraps [`OwlViTFeatureExtractor`] and [`CLIPTokenizer`] into a single instance to both encode the text and prepare the images. The following example shows how to perform object detection using [`OwlViTProcessor`] and [`OwlViTForObjectDetection`].
```python
>>> import requests
>>> from PIL import Image
>>> import torch

>>> from transformers import OwlViTProcessor, OwlViTForObjectDetection

>>> processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
>>> model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> texts = [["a photo of a cat", "a photo of a dog"]]
>>> inputs = processor(text=texts, images=image, return_tensors="pt")
>>> outputs = model(**inputs)

>>> # Target image sizes (height, width) to rescale box predictions [batch_size, 2]
>>> target_sizes = torch.Tensor([image.size[::-1]])
>>> # Convert outputs (bounding boxes and class logits) to COCO API
>>> results = processor.post_process(outputs=outputs, target_sizes=target_sizes)

>>> i = 0  # Retrieve predictions for the first image for the corresponding text queries
>>> text = texts[i]
>>> boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]

>>> score_threshold = 0.1
>>> for box, score, label in zip(boxes, scores, labels):
...     box = [round(coord, 2) for coord in box.tolist()]
...     if score >= score_threshold:
...         print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}")
Detected a photo of a cat with confidence 0.707 at location [324.97, 20.44, 640.58, 373.29]
Detected a photo of a cat with confidence 0.717 at location [1.46, 55.26, 315.55, 472.17]
```
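
OWL-ViT also supports one-shot, image-guided detection: instead of text queries, a query image containing an example of the target object is passed to [`OwlViTForObjectDetection.image_guided_detection`], and the predictions are post-processed with [`OwlViTProcessor.post_process_image_guided_detection`] (both documented below). The following is a minimal sketch of that workflow; the query image URL is only an illustrative input, and the detected scores and boxes will depend on the images and checkpoint used.

```python
>>> import requests
>>> from PIL import Image
>>> import torch

>>> from transformers import OwlViTProcessor, OwlViTForObjectDetection

>>> processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
>>> model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

>>> # Target image to search, and a query image showing the object of interest (illustrative URLs)
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> query_url = "http://images.cocodataset.org/val2017/000000001675.jpg"
>>> query_image = Image.open(requests.get(query_url, stream=True).raw)

>>> # Pass the query image via `query_images` instead of text queries
>>> inputs = processor(images=image, query_images=query_image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model.image_guided_detection(**inputs)

>>> # Target image sizes (height, width) to rescale box predictions [batch_size, 2]
>>> target_sizes = torch.Tensor([image.size[::-1]])
>>> results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)

>>> # Image-guided results contain scores and boxes, but no text labels
>>> for box, score in zip(results[0]["boxes"], results[0]["scores"]):
...     box = [round(coord, 2) for coord in box.tolist()]
...     print(f"Detected similar object with confidence {round(score.item(), 3)} at location {box}")
```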
This model was contributed by [adirik](https://huggingface.co/adirik). The original code can be found [here](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit).
## OwlViTConfig | |
[[autodoc]] OwlViTConfig | |
- from_text_vision_configs | |
## OwlViTTextConfig | |
[[autodoc]] OwlViTTextConfig | |
## OwlViTVisionConfig | |
[[autodoc]] OwlViTVisionConfig | |
## OwlViTImageProcessor | |
[[autodoc]] OwlViTImageProcessor | |
- preprocess | |
- post_process_object_detection | |
- post_process_image_guided_detection | |
## OwlViTFeatureExtractor | |
[[autodoc]] OwlViTFeatureExtractor | |
- __call__ | |
- post_process | |
- post_process_image_guided_detection | |
## OwlViTProcessor | |
[[autodoc]] OwlViTProcessor | |
## OwlViTModel | |
[[autodoc]] OwlViTModel | |
- forward | |
- get_text_features | |
- get_image_features | |
## OwlViTTextModel | |
[[autodoc]] OwlViTTextModel | |
- forward | |
## OwlViTVisionModel | |
[[autodoc]] OwlViTVisionModel | |
- forward | |
## OwlViTForObjectDetection | |
[[autodoc]] OwlViTForObjectDetection | |
- forward | |
- image_guided_detection | |