# CLIP Sparse Autoencoder Checkpoint
## Model Overview

This model is a sparse autoencoder trained on CLIP's internal representations. It was pretrained on ImageNet and fine-tuned on Waterbirds.
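The SAE consumes activations captured inside CLIP's vision transformer (the layer-11 residual stream listed under Architecture Details below). As a minimal sketch, assuming the standard open_clip hub loader and a plain PyTorch forward hook (this is not the actual training pipeline for this checkpoint), such activations can be captured like this:

```python
import torch
import open_clip

# Load the base CLIP model from the Hugging Face Hub via open_clip.
model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K"
)
model.eval()

captured = {}

def grab_resid_post(module, inputs, output):
    # The block's output is the residual stream *after* the block,
    # i.e. "hook_resid_post" in TransformerLens-style naming.
    captured["resid_post"] = output.detach()

# ViT-B/32 has 12 transformer blocks; index 11 is the hooked layer.
handle = model.visual.transformer.resblocks[11].register_forward_hook(grab_resid_post)

with torch.no_grad():
    dummy = torch.randn(1, 3, 224, 224)  # stand-in for preprocess(image)
    model.encode_image(dummy)
handle.remove()

# Depending on the open_clip version, this is (seq, batch, 768) or (batch, seq, 768).
print(captured["resid_post"].shape)
```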
## Architecture Details
- Layer: 11
- Layer Type: hook_resid_post
- Model: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- Dictionary Size: 49,152
- Input Dimension: 768
- Expansion Factor: 64
- CLS Token Only: False
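For concreteness, below is a minimal sketch of a sparse autoencoder matching the dimensions above (768-dimensional input, expansion factor 64, hence a 49,152-feature dictionary). The checkpoint's exact parameterization (initialization, bias handling, weight normalization) may differ; this only illustrates the shapes involved:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Plain ReLU SAE: activations -> sparse features -> reconstruction."""

    def __init__(self, d_in: int = 768, expansion_factor: int = 64):
        super().__init__()
        d_sae = d_in * expansion_factor  # 768 * 64 = 49,152 dictionary features
        self.b_dec = nn.Parameter(torch.zeros(d_in))
        self.W_enc = nn.Parameter(nn.init.kaiming_uniform_(torch.empty(d_in, d_sae)))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(nn.init.kaiming_uniform_(torch.empty(d_sae, d_in)))

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # ReLU keeps feature activations non-negative and (with a sparsity
        # penalty during training) sparse.
        return torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)

    def forward(self, x: torch.Tensor):
        feats = self.encode(x)
        recon = feats @ self.W_dec + self.b_dec
        return recon, feats

sae = SparseAutoencoder()
x = torch.randn(8, 768)          # a batch of residual-stream activations
recon, feats = sae(x)
print(recon.shape, feats.shape)  # torch.Size([8, 768]) torch.Size([8, 49152])
```

Since CLS Token Only is False, the SAE applies to activations at every sequence position, not just the CLS token.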
## Performance Metrics

The model was evaluated with standard sparse autoencoder metrics, with the following results:
- L0: 359
- Explained Variance: 0.85
- MSE Loss: 0.003
- Overall Loss: 0.008
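For reference, these metrics are commonly defined as follows for sparse autoencoders; the sketch below assumes the standard definitions rather than reproducing the evaluation code used for this checkpoint. L0 is the mean number of active (nonzero) features per input, and explained variance is the fraction of the input activations' variance captured by the reconstruction:

```python
import torch

def sae_metrics(x: torch.Tensor, recon: torch.Tensor, feats: torch.Tensor) -> dict:
    # Mean squared reconstruction error.
    mse = ((recon - x) ** 2).mean()
    # L0: average number of nonzero dictionary features per example (~359 here).
    l0 = (feats != 0).float().sum(dim=-1).mean()
    # Explained variance: 1 - residual variance / input variance (~0.85 here).
    residual_var = (x - recon).var(dim=0).sum()
    explained_variance = 1.0 - residual_var / x.var(dim=0).sum()
    return {
        "mse": mse.item(),
        "l0": l0.item(),
        "explained_variance": explained_variance.item(),
    }

# Example with the sketch SAE from above:
#   recon, feats = sae(x)
#   print(sae_metrics(x, recon, feats))
```

The gap between the overall loss and the MSE loss is consistent with an added sparsity penalty (e.g. an L1 term on the feature activations) in the training objective, though the exact objective is not specified here.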
## Additional Information
Detailed logs and visualizations of the model's fine-tuning process are available on Weights & Biases:
https://wandb.ai/perceptual-alignment/waterbirds-finetuning-sweep/runs/cxgrs9zt/workspace
Feel free to reach out for any additional clarifications or details!