RRM: Robust Reward Model Training Mitigates Reward Hacking • arXiv:2409.13156 • Published Sep 20, 2024
HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models • arXiv:2310.14566 • Published Oct 23, 2023
Virtual Prompt Injection for Instruction-Tuned Large Language Models • arXiv:2307.16888 • Published Jul 31, 2023
InstructZero: Efficient Instruction Optimization for Black-Box Large Language Models • arXiv:2306.03082 • Published Jun 5, 2023