---
license: mit
base_model:
- meta-llama/Meta-Llama-3-8B
- facebook/dinov2-base
---

# ShareLock: Ultra-Lightweight CLIP-like Vision-Language Model

Welcome to the Hugging Face repository for **ShareLock**, an ultra-lightweight CLIP-like vision-language model. This repository hosts pretrained checkpoints for ShareLock, enabling easy integration into your projects.

ShareLock is introduced in the paper:

**"Do Better Language Models Have Crisper Vision?"**

*[Jona Ruthardt](https://jonaruthardt.github.io), [Gertjan J. Burghouts](https://gertjanburghouts.github.io), [Serge Belongie](https://sergebelongie.github.io), [Yuki M. Asano](https://yukimasano.github.io)*

🌐 **[Project Page](https://jonaruthardt.github.io/project/ShareLock/)**

⌨️ **[GitHub Repository](https://github.com/JonaRuthardt/ShareLock)**

📄 **[Read the Paper on arXiv](https://arxiv.org/abs/2410.07173)**

---

## 🧠 Model Overview

**ShareLock** combines strong frozen features from unimodal vision and language models to achieve competitive multimodal performance with minimal resources.

### Key Highlights:

- **Ultra-Lightweight:** ShareLock is trained on only 563k image-caption pairs, requiring just 1 GPU hour.
- **Efficient Performance:** Achieves 51% zero-shot accuracy on ImageNet.
- **Plug-and-Play:** Easily integrates into downstream vision-language tasks.
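
This efficiency comes from keeping both pretrained backbones frozen and training only a small alignment network between their feature spaces with a CLIP-style contrastive objective. The sketch below illustrates that idea; the head architecture, feature dimensions, and loss details are illustrative assumptions, not ShareLock's exact implementation (see the paper and GitHub code for those):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProjectionHead(nn.Module):
    """Small trainable MLP mapping frozen language features into the vision feature space."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.GELU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so that dot products are cosine similarities
        return F.normalize(self.net(x), dim=-1)


def clip_style_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss: matching image-caption pairs are the positives."""
    logits = img_emb @ txt_emb.T / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))


# Features are precomputed once with the frozen backbones; only the head trains.
vision_feats = torch.randn(32, 768)              # e.g. frozen DINOv2 image features
text_feats = torch.randn(32, 4096)               # e.g. frozen LLM caption features
head = ProjectionHead(in_dim=4096, out_dim=768)  # hypothetical dimensions
loss = clip_style_loss(F.normalize(vision_feats, dim=-1), head(text_feats))
loss.backward()
```

Because only the small head receives gradients, training over the full 563k pairs fits in about 1 GPU hour.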

---

## 📂 Available Checkpoints

### Model Variants:

1. **ShareLock trained on CC3M**
2. **ShareLock trained on CC12M**
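
Both variants can be fetched programmatically with `huggingface_hub`. A minimal sketch; the `repo_id` and `filename` values below are placeholders, so check this repository's "Files and versions" tab for the actual checkpoint file names:

```python
from huggingface_hub import hf_hub_download

# Placeholders: substitute the real repository id and checkpoint file name,
# as listed under "Files and versions" of this repository.
checkpoint_path = hf_hub_download(
    repo_id="JonaRuthardt/ShareLock",
    filename="sharelock_cc12m.ckpt",
)
print(checkpoint_path)  # local cache path of the downloaded checkpoint
```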

---

## 🚀 Usage

You can load ShareLock models using the `ShareLock` class directly for inference or fine-tuning:

### Example: Zero-shot Classification

```python
from sharelock.models.model import ShareLock

# Path to a downloaded checkpoint (CC3M or CC12M variant)
checkpoint_path = "path/to/checkpoint.ckpt"

# Model configuration; see the GitHub repository for the available hyperparameters
config = {
    # Add your configuration for model hyperparameters etc. here
}

# Load the ShareLock model from the checkpoint
model = ShareLock.load_from_checkpoint(checkpoint_path, config=config)

# Encode images and texts into the shared embedding space
image_embeddings = model.encode_image(your_image_tensor)  # your_image_tensor: a preprocessed image batch
text_embeddings = model.encode_text(["a cat", "a dog"])

# The embeddings can now be compared for multimodal tasks (see the sketch below)
```
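
From these embeddings, zero-shot classification is a nearest-prompt lookup: embed one text prompt per class and pick the class whose embedding is most similar to the image's. A minimal sketch building on the variables above; it assumes `encode_image`/`encode_text` return one row per input, and normalizes the embeddings in case they are not already unit-length:

```python
import torch
import torch.nn.functional as F

class_names = ["cat", "dog", "car"]
prompts = [f"a photo of a {name}" for name in class_names]  # standard CLIP-style template

with torch.no_grad():
    text_embeddings = F.normalize(model.encode_text(prompts), dim=-1)
    image_embeddings = F.normalize(model.encode_image(your_image_tensor), dim=-1)

# Cosine similarity of every image against every class prompt
similarities = image_embeddings @ text_embeddings.T  # shape: (num_images, num_classes)
predictions = [class_names[i] for i in similarities.argmax(dim=-1).tolist()]
print(predictions)
```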

---

## 🛠️ Details

For training scripts, evaluation code, and further implementation details, visit our [GitHub repository](https://github.com/JonaRuthardt/ShareLock).

---

## 📖 Citation

If you use ShareLock in your research, please cite:

```bibtex
@article{ruthardt2024sharelock,
    title={Do Better Language Models Have Crisper Vision?},
    author={Jona Ruthardt and Gertjan J. Burghouts and Serge Belongie and Yuki M. Asano},
    journal={arXiv preprint arXiv:2410.07173},
    year={2024}
}
```

---

## 📧 Contact

For any questions or collaborations, feel free to reach out to [Jona Ruthardt](mailto:[email protected]).