AstroLLaMA-3-8B-Chat_Summary
AstroLLaMA-3-8B-Chat_Summary is a specialized chat model for astronomy, developed by the AstroMLab team by fine-tuning the AstroLLaMA-3-8B-Base_Summary model. It is designed for instruction-following and chat-based interactions in the astronomy domain.
Model Details
- Base Architecture: LLaMA-3-8B
- Base Model: AstroLLaMA-3-8B-Base_Summary (trained on summarized content from arXiv's astro-ph category papers)
- Data Processing:
  - Optical character recognition (OCR) on PDF files using the Nougat tool
  - Summarization of the OCR'd text using Qwen-2-8B and LLaMA-3.1-8B, reducing content to about 1,000-4,000 tokens per paper
- Fine-tuning Method: Supervised Fine-Tuning (SFT)
- SFT Dataset:
  - 10,356 astronomy-centered conversations generated from arXiv abstracts by GPT-4
  - Full content of the LIMA dataset
  - 10,000 samples from the Open Orca dataset
  - 10,000 samples from the UltraChat dataset
- Training Details (see the configuration sketch after this list):
  - Learning rate: 3 × 10⁻⁷
  - Training epochs: 1
  - Total batch size: 48
  - Maximum token length: 2048
  - Warmup ratio: 0.03
  - Cosine decay schedule for learning rate reduction
- Primary Use: Instruction-following and chat-based interactions for astronomy-related queries
- Reference: Pan et al. 2024
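For concreteness, the hyperparameters above can be expressed as a Hugging Face `TrainingArguments` configuration. This is a minimal sketch, not the team's actual training script: the framework, hardware, and per-device/accumulation split are assumptions, and only the listed values (learning rate, epochs, total batch size of 48, warmup ratio, cosine schedule) come from this card.

```python
from transformers import TrainingArguments

# Minimal sketch of the listed SFT hyperparameters. The actual training
# framework and hardware layout are not documented here; the per-device
# batch size below is an assumption chosen only so that
# 6 samples/device x 8 devices = total batch size 48.
training_args = TrainingArguments(
    output_dir="astrollama-3-8b-chat_summary-sft",  # hypothetical path
    learning_rate=3e-7,
    num_train_epochs=1,
    per_device_train_batch_size=6,   # assumed split of the total batch of 48
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",      # cosine decay schedule
)
# The 2048 maximum token length is applied at tokenization time,
# e.g. tokenizer(..., truncation=True, max_length=2048).
```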
Using the model for chat
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("AstroMLab/astrollama-3-8b-chat_summary")
model = AutoModelForCausalLM.from_pretrained("AstroMLab/astrollama-3-8b-chat_summary", device_map="auto")

# Function to generate a response
def generate_response(prompt, max_length=512):
    full_prompt = f"###Human: {prompt}\n\n###Assistant:"
    inputs = tokenizer(full_prompt, return_tensors="pt", truncation=True, max_length=max_length)
    inputs = inputs.to(model.device)

    # Generate a response
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_length=max_length,  # caps prompt + completion tokens combined
            num_return_sequences=1,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
            # Stop as soon as the model begins a new "###Human:" turn
            eos_token_id=tokenizer.encode("###Human:", add_special_tokens=False)[0],
        )

    # Decode and return the response
    response = tokenizer.decode(outputs[0], skip_special_tokens=False)

    # Extract only the Assistant's response
    assistant_response = response.split("###Assistant:")[-1].strip()
    return assistant_response

# Example usage
user_input = "What are the main components of a galaxy?"
response = generate_response(user_input)
print(f"Human: {user_input}")
print(f"Assistant: {response}")
```
Model Improvements and Performance
This model was trained on summarized content, which has led to improved performance compared to the AIC (Abstract, Introduction, Conclusion) version. Summarization allows more comprehensive information from each paper to be included while keeping the token count manageable.
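As a quick sanity check of that token budget, the model's tokenizer (loaded above) can count how many tokens a given summary occupies; the summary string here is a placeholder, not data from the actual training set.

```python
# Count the tokens a paper summary would occupy (placeholder text).
summary = "We analyze the stellar populations of nearby dwarf galaxies..."
n_tokens = len(tokenizer(summary)["input_ids"])
print(f"Summary length: {n_tokens} tokens")  # the pipeline targets ~1,000-4,000
```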
Here's a performance comparison based on the astronomical Q&A benchmark described in Ting et al. 2024:
| Model | Score (%) |
|---|---|
| LLaMA-3.1-8B | 73.7 |
| LLaMA-3-8B | 72.9 |
| AstroLLaMA-3-8B-Base_Summary (AstroMLab) | 72.3 |
| AstroLLaMA-3-8B-Chat_Summary (AstroMLab) | 69.0 |
| Gemma-2-9B | 71.5 |
| Qwen-2.5-7B | 70.4 |
| Yi-1.5-9B | 68.4 |
| InternLM-2.5-7B | 64.5 |
| Mistral-7B-v0.3 | 63.9 |
| ChatGLM3-6B | 50.4 |
As shown, AstroLLaMA-3-8B-Chat_Summary performs competitively, maintaining most of the performance of the base summary model. This demonstrates the effectiveness of the summarization approach in capturing and retaining key astronomical concepts, even after fine-tuning for chat interactions.
We also found that training on summaries generally yields better scores, especially for the instruct version, indicating that information density matters significantly in specialized domain training.
While AstroLLaMA-3-8B-Chat_Summary performs well among models in its class, it does not surpass the performance of the base LLaMA-3.1-8B model. This underscores the ongoing challenges in developing specialized models and the need for continued research in this area.
This model is released primarily for reproducibility purposes, allowing researchers to track the development process and compare different iterations of AstroLLaMA models.
For optimal performance and the most up-to-date capabilities in astronomy-related tasks, we recommend the newer AstroSage-8B, which incorporates training data beyond astro-ph and a greatly expanded fine-tuning process, resulting in significantly better performance.
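Switching the loading code above to the newer model should only require changing the repository id. The id below is an assumption based on the model's name, not confirmed by this card; check the AstroMLab organization on Hugging Face for the published name, and note that the newer model may use a different prompt format than `###Human:`/`###Assistant:`.

```python
# Hypothetical: load the recommended successor model instead.
# The repository id is an assumption; see https://huggingface.co/AstroMLab.
tokenizer = AutoTokenizer.from_pretrained("AstroMLab/AstroSage-8B")
model = AutoModelForCausalLM.from_pretrained("AstroMLab/AstroSage-8B", device_map="auto")
```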
Ethical Considerations
While this model is designed for scientific use, users should be mindful of potential misuse, such as generating misleading scientific content. Always verify model outputs against peer-reviewed sources for critical applications.
Citation
If you use this model in your research, please cite:
```bibtex
@ARTICLE{2024arXiv240919750P,
       author = {{Pan}, Rui and {Dung Nguyen}, Tuan and {Arora}, Hardik and {Accomazzi}, Alberto and {Ghosal}, Tirthankar and {Ting}, Yuan-Sen},
        title = "{AstroMLab 2: AstroLLaMA-2-70B Model and Benchmarking Specialised LLMs for Astronomy}",
      journal = {arXiv e-prints},
     keywords = {Astrophysics - Instrumentation and Methods for Astrophysics, Computer Science - Computation and Language},
         year = 2024,
        month = sep,
          eid = {arXiv:2409.19750},
        pages = {arXiv:2409.19750},
          doi = {10.48550/arXiv.2409.19750},
archivePrefix = {arXiv},
       eprint = {2409.19750},
 primaryClass = {astro-ph.IM},
       adsurl = {https://ui.adsabs.harvard.edu/abs/2024arXiv240919750P},
      adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
```