---
tags:
- transformers
- xlm-roberta
- eva02
- clip
library_name: transformers
license: cc-by-nc-4.0
---
# Jina CLIP
Core implementation of Jina CLIP. The model uses:
* the [EVA 02](https://github.com/baaivision/EVA/tree/master/EVA-CLIP/rei/eva_clip) architecture for the vision tower
* the [Jina XLM RoBERTa with Flash Attention](https://huggingface.co/jinaai/xlm-roberta-flash-implementation) model for the text tower
## Models that use this implementation
- [jinaai/jina-clip-v2](https://huggingface.co/jinaai/jina-clip-v2)
- [jinaai/jina-clip-v1](https://huggingface.co/jinaai/jina-clip-v1)
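Either checkpoint can be loaded through `transformers` with `trust_remote_code` enabled, which pulls in the modeling code from this repository. A minimal sketch; the `encode_text`/`encode_image` convenience methods and the example inputs are assumptions based on the released model cards, not guaranteed by this repository:

```python
from transformers import AutoModel

# trust_remote_code=True loads the custom Jina CLIP modeling code
model = AutoModel.from_pretrained("jinaai/jina-clip-v1", trust_remote_code=True)

# encode_text / encode_image are assumed helpers that map inputs into a
# shared embedding space; the image path below is a placeholder
text_embeddings = model.encode_text(["A photo of a cat"])
image_embeddings = model.encode_image(["path/to/image.jpg"])
```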
## Requirements
To use the Jina CLIP source code, the following packages are required:
* `torch`
* `timm`
* `transformers`
* `einops`
* `xformers` (optional) to use xFormers memory-efficient attention
* `flash-attn` (optional) to use flash attention
* `apex` (optional) to use fused layer normalization
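Since the last three packages are optional accelerators, a quick standard-library check can confirm which ones are present in the current environment. This is a minimal sketch; the fallback to standard kernels when a package is missing is an assumption:

```python
import importlib.util

# Report which optional acceleration packages are importable; the
# implementation is assumed to fall back to standard kernels otherwise
for pkg in ("xformers", "flash_attn", "apex"):
    status = "available" if importlib.util.find_spec(pkg) else "not installed"
    print(f"{pkg}: {status}")
```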