Hzfinfdu committed (verified)
Commit e3d28dd · 1 Parent(s): b39aad3

Update README.md

Files changed (1): README.md (+1 -1)

README.md CHANGED
@@ -38,7 +38,7 @@ For instance, an SAE with 8x the hidden size of Llama-3.1-8B, i.e. 32K features,
 
 [**Llama-3.1-8B-LXR-32x**](https://huggingface.co/fnlp/Llama3_1-8B-Base-LXR-32x/tree/main)
 
-[**Llama-3.1-8B-LXA-32x**](https://huggingface.co/fnlp/Llama3_1-8B-Base-LXA-32x/tree/main)
+[**Llama-3.1-8B-LXA-32x**](https://huggingface.co/fnlp/Llama3_1-8B-Base-LXA-32x/tree/main) (Not recommended: we, along with many other mech interp researchers, find that LXA SAEs, whether trained on z or attn_out, end up with a large number of inactive features. This is much in the spirit of 'there are not too many features in attention output so we do not expect to see feature splitting here', but we are not certain why this is the case.)
 
 [**Llama-3.1-8B-LXM-32x**](https://huggingface.co/fnlp/Llama3_1-8B-Base-LXM-32x/tree/main)
 
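
For context on the note added in this diff: by "inactive features" we mean SAE latents that never fire on a sample of attention activations. Below is a minimal, hypothetical sketch of how one might measure this in plain PyTorch; the `encode` callable and the activation tensor are assumptions for illustration, not part of the released checkpoints' API.

```python
# Hypothetical sketch (not the released codebase): estimate the fraction of
# SAE features that never activate on a sample of attention activations.
import torch

def dead_feature_fraction(encode, activations, batch_size=4096):
    """encode: callable mapping [N, d_model] -> [N, n_features] non-negative
    feature activations (e.g. a ReLU/TopK SAE encoder).
    activations: [N, d_model] tensor of z or attn_out vectors."""
    n_features = encode(activations[:1]).shape[-1]
    ever_active = torch.zeros(n_features, dtype=torch.bool)
    for start in range(0, activations.shape[0], batch_size):
        feats = encode(activations[start:start + batch_size])
        ever_active |= (feats > 0).any(dim=0).cpu()
    # A high return value corresponds to many "inactive" (dead) features.
    return 1.0 - ever_active.float().mean().item()
```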