kcz358 committed
Commit 4671ed0 · verified · 1 Parent(s): 3b1a8e9

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -9,7 +9,7 @@ pinned: false
  - **[2024-11]** 🤯🤯 We introduce **Multimodal SAE**, the first framework designed to interpret learned features in large-scale multimodal models using Sparse Autoencoders. Through our approach, we leverage LLaVA-OneVision-72B to analyze and explain the SAE-derived features of LLaVA-NeXT-LLaMA3-8B. Furthermore, we demonstrate the ability to steer model behavior by clamping specific features to alleviate hallucinations and avoid safety-related issues.

- [GitHub](https://github.com/EvolvingLMMs-Lab/multimodal-sae)
+ [GitHub](https://github.com/EvolvingLMMs-Lab/multimodal-sae) | [Paper](https://arxiv.org/abs/2411.14982)

  - **[2024-10]** 🔥🔥 We present **`LLaVA-Critic`**, the first open-source large multimodal model as a generalist evaluator for assessing LMM-generated responses across diverse multimodal tasks and scenarios.