Papers from LIME Lab
- Safer-Instruct: Aligning Language Models with Automated Preference Data (arXiv:2311.08685)
- CLIMB: A Benchmark of Clinical Bias in Large Language Models (arXiv:2407.05250)
- On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective (arXiv:2502.14296)
- WildFeedback: Aligning LLMs With In-situ User Interactions And Feedback (arXiv:2408.15549)