---
license: mit
datasets:
- pcuenq/oxford-pets
metrics:
- accuracy
---
# CLIP ViT Base Patch32 Fine-tuned on Oxford Pets
This model is a fine-tuned version of OpenAI's CLIP model on the Oxford Pets dataset.
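## Usage

A minimal zero-shot classification sketch using the `transformers` CLIP classes. The repository id `your-username/clip-vit-base-patch32-oxford-pets`, the image path, and the candidate labels are placeholders; substitute the actual repository name of this model and your own inputs.

```python
# Zero-shot classification sketch with transformers (placeholder repo id).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "your-username/clip-vit-base-patch32-oxford-pets"  # placeholder: use this model's repo id
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("pet.jpg")  # any pet image in the Oxford Pets style
labels = ["a photo of a beagle", "a photo of a siamese cat"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```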
## Training Information
- **Model Name**: openai/clip-vit-base-patch32
- **Dataset**: oxford-pets
- **Training Epochs**: 4
- **Batch Size**: 256
- **Learning Rate**: 3e-6
- **Accuracy**: 93.74%
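The sketch below shows one way the listed hyperparameters (4 epochs, batch size 256, learning rate 3e-6) could be reproduced with `transformers` and `datasets`, using CLIP's built-in contrastive loss. The column names `image` and `label`, the prompt template, and the single-GPU AdamW loop are assumptions, not the exact training script used for this model.

```python
# Hedged fine-tuning sketch with the hyperparameters listed above.
# Assumes pcuenq/oxford-pets exposes "image" and "label" columns.
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
dataset = load_dataset("pcuenq/oxford-pets", split="train")

def collate(batch):
    # Pair each image with a text prompt built from its label (assumed string labels).
    texts = [f"a photo of a {example['label']}" for example in batch]
    images = [example["image"] for example in batch]
    return processor(text=texts, images=images, return_tensors="pt", padding=True)

loader = DataLoader(dataset, batch_size=256, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-6)

model.train()
for epoch in range(4):
    for batch in loader:
        outputs = model(**batch, return_loss=True)  # CLIP image-text contrastive loss
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```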
## License
This model is released under the MIT license.