SAELens

1. Gemma Scope

Gemma Scope is a comprehensive, open suite of sparse autoencoders (SAEs) for Gemma 2 2B and 9B. Sparse autoencoders are a "microscope" of sorts that can help us break down a model's internal activations into underlying concepts, just as biologists use microscopes to study the individual cells of plants and animals.
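To make the "microscope" analogy concrete, here is a minimal sketch of what an SAE does mechanically: it encodes an activation vector into a much wider feature vector and reconstructs the activation from it. Sizes are toy and the weights are random, purely for illustration; in a trained SAE (like those released here) the training objective makes the feature vector sparse, so each active feature can be read as a concept.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 8, 32  # toy sizes; real Gemma Scope SAEs are far wider
W_enc = rng.normal(size=(d_model, d_sae)) / np.sqrt(d_model)
W_dec = rng.normal(size=(d_sae, d_model)) / np.sqrt(d_sae)
b_enc = np.zeros(d_sae)
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode an activation vector into feature activations, then reconstruct it."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU keeps only positive features
    x_hat = f @ W_dec + b_dec               # reconstruction from the feature code
    return f, x_hat

x = rng.normal(size=d_model)  # stand-in for a residual-stream activation
f, x_hat = sae_forward(x)
# With random weights roughly half the features fire; a trained SAE is far sparser.
print(f"{(f > 0).mean():.0%} of features active")
```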

See our landing page for details on the whole suite. This page hosts one specific set of SAEs from that suite:

2. What Is gemma-scope-9b-it-res?

  • gemma-scope-: See Section 1.
  • 9b-it-: These SAEs were trained on the Gemma 2 9B instruction-tuned (IT) model.
  • res: These SAEs were trained on the model's residual stream.
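Gemma Scope SAEs use a JumpReLU activation: a feature's pre-activation passes through only if it clears that feature's learned threshold, and is zeroed otherwise. A minimal sketch of the activation function (toy values, not the released thresholds or weights):

```python
import numpy as np

def jumprelu(pre, theta):
    """JumpReLU: keep each pre-activation only if it exceeds that feature's
    learned threshold theta; otherwise output exactly zero."""
    return np.where(pre > theta, pre, 0.0)

pre = np.array([-0.5, 0.2, 0.9, 1.4])   # toy pre-activations
theta = np.array([0.0, 0.5, 0.5, 1.0])  # per-feature thresholds (made up here)
out = jumprelu(pre, theta)
print(out)  # only the two features above their thresholds survive
```

Unlike a plain ReLU, the threshold lets the SAE suppress small, noisy activations entirely, which helps keep the feature code sparse.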

3. Why aren't there more IT SAEs?

To summarise Section 4.5 of our technical report: we find the same result as Kissane et al. (2024), namely that SAEs trained on the Gemma 2 9B base (PT) model transfer very well to the IT model, and that these IT SAEs work only marginally better. In many cases we therefore expect our PT SAEs to suffice for the equivalent IT model, e.g. using the Gemma 2 9B PT SAEs to interpret Gemma 2 9B IT.
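One standard way to make "transfer very well" concrete is the fraction of variance unexplained (FVU) of the SAE's reconstruction on the target model's activations: a PT SAE that transfers well keeps FVU low on IT activations. A self-contained sketch of the metric with synthetic data (the metric only, not our evaluation pipeline):

```python
import numpy as np

def fvu(x, x_hat):
    """Fraction of variance unexplained: 0 means a perfect reconstruction,
    1 means no better than predicting the per-dimension mean."""
    return np.sum((x - x_hat) ** 2) / np.sum((x - x.mean(axis=0)) ** 2)

rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 16))  # stand-in for IT-model residual activations
recon_good = acts + 0.1 * rng.normal(size=acts.shape)  # small reconstruction error
recon_poor = acts + 1.0 * rng.normal(size=acts.shape)  # large reconstruction error
print(fvu(acts, recon_good), fvu(acts, recon_poor))
```

Comparing FVU of a PT SAE on IT activations against an IT-trained SAE on the same activations is one simple way to quantify the transfer gap.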

4. Point of Contact

Point of contact: Arthur Conmy

Contact by email (obfuscated; evaluate in Python to recover the address):

```python
''.join(list('moc.elgoog@ymnoc')[::-1])
```

HuggingFace account: https://huggingface.co/ArthurConmyGDM
