---
title: README
emoji: 📈
colorFrom: indigo
colorTo: gray
sdk: static
pinned: false
---

## Who We Are

AstroMLab is a dynamic group of *astrophysicists* and *computer scientists* passionate about pushing the boundaries of **Large Language Models (LLMs) in astronomy**. Our team includes:

- *Leading astronomers, astrophysicists, and cosmologists*
- *Top natural language processing experts* from Oak Ridge National Laboratory and Argonne National Laboratory
- *Frontier arXivists* from the NASA Astrophysics Data System
- *Enthusiastic young researchers* bridging the gap between astronomy and LLMs

While LLMs are advancing rapidly, we believe that real progress in *AI-driven astronomical research* requires *deep domain knowledge*. This conviction drives us to tackle the challenges of applying LLMs to astronomy head-on.

## Our Goals

Our ultimate aim is to:

1. Develop specialized LLMs for astronomy
2. Create **reliable, lightweight, and open-source models** adaptable for advanced research agents
3. **Expedite scientific discovery** through LLM-driven end-to-end research
4. Push the boundaries of what's possible in astronomical research

## Our Achievements

Despite being a young group, we've made significant strides:

- Curated the **first extensive astronomy-based benchmarking dataset** using high-quality review articles ([Ting et al. 2024](https://arxiv.org/abs/2407.11194))
- Explored the training of specialized astronomy LLMs
- Released three model sets:
  - **AstroSage-8B** (coming soon, de Haan et al. 2024)
  - **AstroLLaMA-2-70B** ([Pan et al. 2024](https://arxiv.org/abs/2407.11194))
  - **AstroLLaMA-3-8B** ([Pan et al. 2024](https://arxiv.org/abs/2407.11194))
  - AstroLLaMA-2-7B ([Perkowski et al. 2024](https://arxiv.org/abs/2401.01916), [Nguyen et al. 2023](https://arxiv.org/abs/2309.06126); developed during our time at *UniverseTBD*)

Our flagship model, AstroSage-8B, demonstrates remarkable performance compared to other models in its weight class. It leads its closest competitor by 3.5 percentage points, a gap that translates to an estimated **10-fold reduction** in computational cost (see the [AstroBench page](benchmarking.html) for details).

| Model | Score (%) |
|-------|-----------|
| **<span style="color: #3366cc;">AstroSage-8B (AstroMLab)</span>** | **<span style="color: #3366cc;">77.2</span>** |
| LLaMA-3.1-8B | 73.7 |
| **<span style="color: #3366cc;">AstroLLaMA-2-70B (AstroMLab)</span>** | **<span style="color: #3366cc;">72.3</span>** |
| Gemma-2-9B | 71.5 |
| Qwen-2.5-7B | 70.4 |
| Yi-1.5-9B | 68.4 |
| InternLM-2.5-7B | 64.0 |
| Mistral-7B-v0.3 | 63.9 |
| ChatGLM3-6B | 50.4 |
| AstroLLaMA-2-7B (UniverseTBD) | 44.3 |



The exceptional performance of AstroSage-8B showcases the potential for more efficient and cost-effective agentic research in astronomy. This advancement opens new possibilities for the widespread application of AI in astronomical research, making sophisticated analysis accessible to a broader range of institutions and researchers.

## Open Source Commitment

We are fully committed to open source:

- All our models are released on **Hugging Face**
- Find our models here: [AstroMLab on Hugging Face](https://huggingface.co/AstroMLab)
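
Since everything lives on the Hub, the models load with standard tooling. Here is a minimal sketch using the `transformers` library; the repo id is a placeholder, not an actual model name, so browse the [organization page](https://huggingface.co/AstroMLab) for the real ones:

```python
# Minimal loading sketch (placeholder repo id -- see
# https://huggingface.co/AstroMLab for actual model names).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "AstroMLab/<model-name>"  # hypothetical; substitute a real repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "The initial mass function describes"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```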
## Our Support and Vision

We are grateful for the support that makes our work possible:

- Access to the Frontier nodes at the Oak Ridge Leadership Computing Facility
- Backing from Microsoft's Accelerating Foundation Models Research (AFMR) program
## Join Us

Our team is expanding, and we'd love to hear from you!

- Contact us: [[email protected]](mailto:[email protected])
<br>
---
## Team
<table>
<tr>
<td align="center" width="25%"><img src="figures/Members_Yuan-Sen_Ting.png" alt="Yuan-Sen Ting"></td>
<td align="center" width="25%"><img src="figures/Members_Tirthankar_Ghosal.png" alt="Tirthankar Ghosal"></td>
<td align="center" width="25%"><img src="figures/Members_Tijmen_de_Haan.png" alt="Tijmen de Haan"></td>
<td align="center" width="25%"><img src="figures/Members_Josh_Nguyen.png" alt="Josh Nguyen"></td>
</tr>
<tr>
<td align="center"><strong>Yuan-Sen Ting</strong><br>The Ohio State University</td>
<td align="center"><strong>Tirthankar Ghosal</strong><br>Oak Ridge National Laboratory</td>
<td align="center"><strong>Tijmen de Haan</strong><br>KEK</td>
<td align="center"><strong>Josh Nguyen</strong><br>University of Pennsylvania</td>
</tr>
<tr>
<td align="center"><img src="figures/Members_Rui_Pan.png" alt="Rui Pan"></td>
<td align="center"><img src="figures/Members_Hardik_Arora.png" alt="Hardik Arora"></td>
<td align="center"><img src="figures/Members_Emily_Herron.png" alt="Emily Herron"></td>
<td align="center"><img src="figures/Members_Yuwei_Yang.png" alt="Yuwei Yang"></td>
</tr>
<tr>
<td align="center"><strong>Rui Pan</strong><br>University of Illinois Urbana-Champaign</td>
<td align="center"><strong>Hardik Arora</strong><br>Indian Institutes of Technology</td>
<td align="center"><strong>Emily Herron</strong><br>Oak Ridge National Laboratory</td>
<td align="center"><strong>Yuwei Yang</strong><br>Australian National University</td>
</tr>
<tr>
<td align="center"><img src="figures/Members_Zechang_Sun.png" alt="Zechang Sun"></td>
<td align="center"><img src="figures/Members_Alberto_Accomazzi.png" alt="Alberto Accomazzi"></td>
<td align="center"><img src="figures/Members_Argonne.png" alt="Azton Wells"></td>
<td align="center"><img src="figures/Members_Nesar_Ramachandra.png" alt="Nesar Ramachandra"></td>
</tr>
<tr>
<td align="center"><strong>Zechang Sun</strong><br>Tsinghua University</td>
<td align="center"><strong>Alberto Accomazzi</strong><br>NASA Astrophysics Data System</td>
<td align="center"><strong>Azton Wells</strong><br>Argonne National Laboratory</td>
<td align="center"><strong>Nesar Ramachandra</strong><br>Argonne National Laboratory</td>
</tr>
<tr>
<td align="center"><img src="figures/Members_Sandeep_Madireddy.png" alt="Sandeep Madireddy"></td>
</tr>
<tr>
<td align="center"><strong>Sandeep Madireddy</strong><br>Argonne National Laboratory</td>
</tr>
</table>
<br>
---
## Publications
### AstroMLab 1: Who Wins Astronomy Jeopardy!?
**[Yuan-Sen Ting, et al., 2024, arXiv:2407.11194](https://arxiv.org/abs/2407.11194)**
We present a comprehensive evaluation of proprietary and open-weights large language models using the first astronomy-specific benchmarking dataset. This dataset comprises 4,425 multiple-choice questions curated from the Annual Review of Astronomy and Astrophysics, covering a broad range of astrophysical topics.
Key findings:
- Claude-3.5-Sonnet outperforms competitors, achieving 85.0% accuracy.
- Open-weights models like LLaMA-3-70b (80.6%) and Qwen-2-72b (77.7%) now compete with some of the best proprietary models.
- We identify performance variations across astronomical subfields, with challenges in exoplanet-related fields, stellar astrophysics, and instrumentation.
- Top-performing models demonstrate well-calibrated confidence, with correlations above 0.9 between confidence and correctness (a toy illustration follows this list).
- The rapid progress suggests that LLM-driven research in astronomy may become feasible in the near future.
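
To make the calibration claim concrete, here is a toy sketch of how a confidence-accuracy correlation can be computed. The data are synthetic stand-ins (the paper derives confidence from the models' own answers on the 4,425 MCQs), so treat this as an illustration of the metric rather than our evaluation code:

```python
# Toy calibration check: bin questions by stated confidence, then correlate
# per-bin mean confidence with per-bin accuracy. Synthetic data only.
import numpy as np

rng = np.random.default_rng(42)
n = 4425
confidence = rng.uniform(0.3, 1.0, size=n)  # model's stated confidence
correct = rng.random(n) < confidence        # well-calibrated by construction

edges = np.linspace(0.3, 1.0, 8)            # 7 confidence bins
idx = np.digitize(confidence, edges[1:-1])  # bin index 0..6 per question
mean_conf = np.array([confidence[idx == b].mean() for b in range(7)])
accuracy = np.array([correct[idx == b].mean() for b in range(7)])

r = np.corrcoef(mean_conf, accuracy)[0, 1]
print(f"confidence-accuracy correlation: r = {r:.3f}")  # close to 1 here
```

A well-calibrated model traces the diagonal in this confidence-versus-accuracy comparison, which is what drives the correlation toward 1.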
<br>
### AstroMLab 2: AstroLLaMA-2-70B Model and Benchmarking Specialised LLMs for Astronomy
**[Rui Pan, Josh Nguyen, et al., 2024](https://arxiv.org/abs/2407.11194)**

We introduce two new models, AstroLLaMA-3-8B and AstroLLaMA-2-70B, building on the previous AstroLLaMA series, and quantitatively assess specialized astronomy LLMs using recently curated, high-quality astronomical MCQs.

Key points:
- The previously released AstroLLaMA series (based on LLaMA-2-7B) underperforms its base LLaMA model.
- This degradation can be partially mitigated by continual pretraining on high-quality data (a minimal setup sketch follows this list).
- Continual pretraining of the 70B model yields improvements, despite the catastrophic forgetting observed in smaller models.
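
As an illustration of the continual-pretraining recipe discussed above, here is a minimal sketch using the `transformers` Trainer. Everything in it is an assumption for illustration: the base checkpoint, the hypothetical `astro_corpus.txt` file of astronomy text, and the hyperparameters; our actual runs used different infrastructure and settings.

```python
# Minimal continual-pretraining sketch (illustrative, not our training code).
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

base = "meta-llama/Meta-Llama-3-8B"  # gated; any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# "astro_corpus.txt" is a hypothetical file of astronomy text, one doc per line.
dataset = load_dataset("text", data_files="astro_corpus.txt")["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="astro-cpt",
        num_train_epochs=1,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=1e-5,  # conservative LR helps limit forgetting
        bf16=True,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The low learning rate and single epoch reflect the point above: continual pretraining helps only when it does not wash out the base model's general capabilities.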
<br>
### Legacy Output: The AstroLLaMA Series
1. **[Josh Nguyen, et al., 2023, arXiv:2309.06126](https://arxiv.org/abs/2309.06126)**
2. **[Ernest Perkowski, Rui Pan, et al., 2024, arXiv:2401.01916](https://arxiv.org/abs/2401.01916)**

These works introduced the first open-source conversational AI tools tailored for the astronomy community: AstroLLaMA-2-7B and AstroLLaMA-2-7B-Chat.