AstroMLab

non-profit

AI & ML interests

AstroMLab is a collaborative initiative of astronomers and AI experts dedicated to advancing Large Language Models in astronomy. Our goal is to expedite scientific discovery through LLM-driven research.


AstroMLab

AstroMLab is a diverse group of researchers dedicated to advancing the application of Large Language Models (LLMs) in astronomy. Our team includes:

  • Leading astronomers, astrophysicists, and cosmologists
  • Natural language processing experts
  • Frontier arXivists from the NASA Astrophysics Data System

Objectives

  • Develop specialized LLMs for astronomy
  • Create open-source models for advanced research
  • Facilitate LLM-driven end-to-end agentic research in astronomy

Current Work

Our ongoing projects include:

  • Curation of an astronomy-based benchmarking dataset
  • Development of specialized astronomy LLMs
  • Performance evaluation of models on astronomical tasks

Models and Performance

We have developed several models, including AstroSage-LLaMA-3.1-8B (de Haan et al. 2024), AstroLLaMA-2-70B (Pan et al. 2024), and AstroLLaMA-3-8B (Pan et al. 2024). Our AstroSage-LLaMA-3.1-8B model has demonstrated strong performance in astronomy Q&A tasks (Ting et al. 2024):

Model                                  Score (%)
AstroSage-LLaMA-3.1-8B (AstroMLab)     80.9
LLaMA-3.1-8B                           73.7
Phi-3.5-4B                             72.8
Gemma-2-9B                             71.5
LLaMA-2-70B                            70.7
Qwen-2.5-7B                            70.4
Yi-1.5-9B                              68.4
InternLM-2.5-7B                        64.5
Mistral-7B-v0.3                        63.9
ChatGLM3-6B                            50.4
AstroLLaMA-2-7B (UniverseTBD)          44.3

AstroSage-LLaMA-3.1-8B (de Haan et al. 2024), our lightweight model, currently achieves the highest astronomy knowledge recall score among models in the ~8B-parameter class.

Figure: Cost and performance trade-off in astronomical Q&A.

Support and Resources

Our research benefits from:

  • Access to the Frontier nodes at Oak Ridge Leadership Computing Facility
  • Support from Microsoft's Accelerating Foundation Models Research (AFMR) program

Contact

For inquiries or collaboration opportunities, please contact: [email protected]