---
license: mit
datasets:
- pcuenq/oxford-pets
metrics:
- accuracy
---
# CLIP ViT Base Patch32 Fine-tuned on Oxford Pets
This model is a fine-tuned version of OpenAI's CLIP (ViT-B/32) on the Oxford Pets dataset for pet breed image classification.
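## Usage
The snippet below is a minimal inference sketch using the `transformers` CLIP classes. The Hub repository ID is a placeholder (the actual ID of this fine-tuned checkpoint is not stated in this card), and the label prompts are illustrative examples rather than the full 37-breed Oxford Pets label set.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Placeholder repo ID; substitute the actual Hub ID of this fine-tuned checkpoint.
model_id = "your-username/clip-vit-base-patch32-oxford-pets"

model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Example class prompts; the real label set covers all 37 Oxford Pets breeds.
labels = ["a photo of an Abyssinian", "a photo of a Bengal", "a photo of a pug"]

image = Image.open("pet.jpg")
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Softmax over image-text similarity scores gives per-label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(labels[probs.argmax(dim=-1).item()])
```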
## Training Information
- **Model Name**: openai/clip-vit-base-patch32
- **Dataset**: oxford-pets
- **Training Epochs**: 4
- **Batch Size**: 256
- **Learning Rate**: 3e-6
- **Accuracy**: 93.74%
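As a rough illustration of how the hyperparameters above could be wired together, here is a minimal contrastive fine-tuning sketch. The training procedure, optimizer choice (AdamW), data split, and dataset column names are assumptions, not details taken from this card; adjust them to the actual setup.

```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Split name and column names ("image", "label") are assumptions about the
# pcuenq/oxford-pets schema; adapt to the dataset's actual fields.
dataset = load_dataset("pcuenq/oxford-pets", split="train")

def collate(batch):
    images = [example["image"] for example in batch]
    texts = [f"a photo of a {example['label']}" for example in batch]
    return processor(text=texts, images=images, return_tensors="pt", padding=True)

# Hyperparameters from the card: batch size 256, learning rate 3e-6, 4 epochs.
loader = DataLoader(dataset, batch_size=256, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-6)  # optimizer type assumed

model.train()
for epoch in range(4):
    for batch in loader:
        # return_loss=True makes CLIPModel compute the symmetric image-text contrastive loss.
        outputs = model(**batch, return_loss=True)
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```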
## License
This model is released under the MIT License.