|
# CLIP Transcoder Checkpoint |
|
This model is a transcoder trained on the internal representations of CLIP ViT-B/32. It reads the normalized input to the MLP of transformer block 11 (the `ln2.hook_normalized` activation), expands it into a sparse, overcomplete feature dictionary, and is trained to reconstruct the block's MLP output from those features.
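
A minimal sketch of that computation, assuming the common ReLU transcoder parameterization (linear encoder, sparse feature vector, linear decoder); the class and attribute names here are illustrative, not this checkpoint's actual module layout:

```python
import torch
import torch.nn as nn


class Transcoder(nn.Module):
    """Encode an activation vector into a wide, mostly-zero feature
    vector, then decode an approximation of the MLP's output."""

    def __init__(self, d_in: int, d_dict: int, d_out: int):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_dict)  # d_dict >> d_in (overcomplete)
        self.decoder = nn.Linear(d_dict, d_out)

    def forward(self, x: torch.Tensor):
        feats = torch.relu(self.encoder(x))  # sparse feature activations
        recon = self.decoder(feats)          # approximates the MLP output
        return recon, feats
```

For this checkpoint, `d_in = d_out = 768` (the ViT-B/32 residual width) and `d_dict = 49152`; the exact settings are listed under Architecture below.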
|
## Model Details |
|
### Architecture |
|
- **Layer**: 11 |
|
- **Hook Point**: `ln2.hook_normalized` (the output of the block's second LayerNorm, i.e. the normalized input to the MLP)
|
- **Model**: `laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K`
|
- **Dictionary Size**: 49152 (= 768 × 64)
|
- **Input Dimension**: 768 |
|
- **Expansion Factor**: 64 |
|
- **CLS Token Only**: False (trained on activations from all 50 tokens, not just the CLS token; the sketch below shows how to capture them)
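
The hook point can be read directly off the underlying OpenCLIP model. The sketch below loads the backbone named above and captures the block-11 `ln_2` output (the `ln2.hook_normalized` activation) with a standard PyTorch forward hook; it assumes the `Transcoder` class from the earlier sketch, and the image path is a placeholder:

```python
import open_clip
import torch
from PIL import Image

# Load the backbone this transcoder was trained on.
model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K"
)
model.eval()

# Capture the output of block 11's second LayerNorm, i.e. the
# `ln2.hook_normalized` activation this transcoder takes as input.
captured = {}

def save_ln2(module, inputs, output):
    captured["ln2"] = output.detach()

model.visual.transformer.resblocks[11].ln_2.register_forward_hook(save_ln2)

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder path
with torch.no_grad():
    model.encode_image(image)

# open_clip's vision tower runs tokens-first (LND), so flatten to (tokens, 768).
acts = captured["ln2"].reshape(-1, 768)
transcoder = Transcoder(d_in=768, d_dict=49152, d_out=768)  # class from the sketch above
recon, feats = transcoder(acts)
```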
|
### Training |
|
- **Learning Rate**: 0.0012530554819595529 (≈ 1.25 × 10⁻³)
|
- **Batch Size**: 4096 |
|
- **Context Size**: 50 tokens (1 CLS token + 49 patch tokens for ViT-B/32 at 224 × 224)
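
The card does not record the loss function, so the following is only a schematic training step under common transcoder assumptions: mean-squared reconstruction of the MLP output plus an L1 penalty on feature activations, optimized with Adam at the learning rate above. `l1_coeff` and the activation batches are placeholders, and `Transcoder` is the class from the earlier sketch:

```python
import torch

lr = 0.0012530554819595529  # learning rate from this card
l1_coeff = 1e-4             # assumed; the actual sparsity coefficient is not reported here

transcoder = Transcoder(d_in=768, d_dict=49152, d_out=768)
opt = torch.optim.Adam(transcoder.parameters(), lr=lr)

def train_step(mlp_in: torch.Tensor, mlp_out: torch.Tensor) -> float:
    """One step on a (4096, 768) batch of ln2.hook_normalized activations
    (mlp_in) and the matching MLP outputs (mlp_out)."""
    recon, feats = transcoder(mlp_in)
    mse = (recon - mlp_out).pow(2).mean()  # reconstruction error
    l1 = feats.abs().sum(dim=-1).mean()    # sparsity penalty
    loss = mse + l1_coeff * l1
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```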
|
### Sparsity |
|
- **L0 (Active Features)**: 497 (mean number of features with nonzero activation per input, out of 49152; measured as sketched below)
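
That is, roughly 1% of the dictionary fires on a typical input. Given a `feats` tensor of shape `(n_tokens, 49152)` from the sketches above, the statistic can be reproduced as:

```python
# Average count of active (nonzero) features per row; ≈ 497 for this checkpoint.
l0 = (feats > 0).float().sum(dim=-1).mean().item()
```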
|
## Additional Information |
|
- **Wandb Run**: https://wandb.ai/perceptual-alignment/openclip-transcoders/runs/oud9jpdn/ |
|
- **Random Seed**: 42 |
|
|