---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- llama-2
- astronomy
- astrophysics
- arxiv
inference: false
base_model:
- meta-llama/Llama-2-7b-hf
---

# AstroLLaMA-2-7B-Chat_AIC

AstroLLaMA-2-7B-Chat_AIC is a specialized chat model for astronomy, fine-tuned from the AstroLLaMA-2-7B-Base_AIC model. It was originally developed by the AstroLLaMA team as part of the UniverseTBD initiative and is designed for instruction-following and chat-based interactions in the astronomy domain.

**Note**: This model is provided for completeness in the series of AstroLLaMA models. The core AstroLLaMA team has since moved on to develop more advanced models under AstroMLab. For the original UniverseTBD version, please visit [their repository](https://huggingface.co/universeTBD/astrollama-7b-chat-alpha).

## Model Details

- **Base Architecture**: LLaMA-2-7b
- **Base Model**: AstroLLaMA-2-7B-Base_AIC (trained on Abstract, Introduction, and Conclusion sections from arXiv's astro-ph category papers)
- **Fine-tuning Method**: Supervised Fine-Tuning (SFT)
- **SFT Dataset**:
  - 10,356 astronomy-centered conversations generated from arXiv abstracts by GPT-4
  - Full content of the LIMA dataset
  - 10,000 samples from the Open Orca dataset
  - 10,000 samples from the UltraChat dataset
- **Primary Use**: Instruction-following and chat-based interactions for astronomy-related queries
- **Reference**: [Perkowski et al. 2024](https://arxiv.org/abs/2401.01916)

## Using the model for chat

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("AstroMLab/astrollama-2-7b-chat_aic")
model = AutoModelForCausalLM.from_pretrained("AstroMLab/astrollama-2-7b-chat_aic", device_map="auto")

# Function to generate a response
def generate_response(prompt, max_length=512):
    full_prompt = f"###Human: {prompt}\n\n###Assistant:"
    inputs = tokenizer(full_prompt, return_tensors="pt", truncation=True, max_length=max_length)
    inputs = inputs.to(model.device)

    # Generate a response
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_length=max_length,
            num_return_sequences=1,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
            eos_token_id=tokenizer.encode("###Human:", add_special_tokens=False)[0]
        )

    # Decode and return the response
    response = tokenizer.decode(outputs[0], skip_special_tokens=False)

    # Extract only the Assistant's response
    assistant_response = response.split("###Assistant:")[-1].strip()
    return assistant_response

# Example usage
user_input = "What are the main components of a galaxy?"
response = generate_response(user_input)
print(f"Human: {user_input}")
print(f"Assistant: {response}")
```
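The snippet above handles a single turn. The original card does not document a multi-turn template; the sketch below simply concatenates previous `###Human:`/`###Assistant:` turns into the prompt and reuses the `model` and `tokenizer` loaded above. Treat the `chat` helper and the multi-turn format as illustrative assumptions rather than an official API.

```python
import torch

# Minimal multi-turn sketch (reuses `model` and `tokenizer` from the snippet above).
# NOTE: concatenating prior "###Human:"/"###Assistant:" turns is an assumption,
# not an officially documented conversation template for this model.
def chat(history, user_message, max_new_tokens=256):
    # Rebuild the prompt from previous (human, assistant) turns plus the new message
    prompt = ""
    for human, assistant in history:
        prompt += f"###Human: {human}\n\n###Assistant: {assistant}\n\n"
    prompt += f"###Human: {user_message}\n\n###Assistant:"

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
        )

    # Decode only the newly generated tokens and stop at the next turn marker
    reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    reply = reply.split("###Human:")[0].strip()
    history.append((user_message, reply))
    return reply

# Example: two consecutive turns sharing the same history
history = []
print(chat(history, "What are the main components of a galaxy?"))
print(chat(history, "How do spiral arms form in disk galaxies?"))
```

Decoding only the newly generated tokens (slicing past the prompt length) keeps the reply free of the echoed conversation history.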
## Model Limitations and Biases

This model is specifically trained on astronomy literature and may not generalize well to other domains. Users should be aware of potential biases in the training data, which may reflect historical trends and biases in astronomical research publications. Additionally, the regex-based extraction method used for processing the LaTeX source files may introduce some biases or inconsistencies in the training data.

Importantly, this model has been superseded by more advanced versions. Here is a performance comparison based on the astronomical benchmarking Q&A described in [Ting et al. 2024](https://arxiv.org/abs/2407.11194):

| Model | Score (%) |
|-------|-----------|
| **AstroSage-LLaMA-3.1-8B (AstroMLab)** | **80.9** |
| **AstroLLaMA-2-70B (AstroMLab)** | **76.0** |
| LLaMA-3.1-8B | 73.7 |
| Gemma-2-9B | 71.5 |
| Qwen-2.5-7B | 70.4 |
| Yi-1.5-9B | 68.4 |
| InternLM-2.5-7B | 64.5 |
| Mistral-7B-v0.3 | 63.9 |
| ChatGLM3-6B | 50.4 |
| AstroLLaMA-2-7B-AIC | 44.3 |
| AstroLLaMA-2-7B-Abstract | 43.5 |

As shown, the AstroLLaMA-2-7B series is outperformed by newer models. For state-of-the-art performance, we recommend using the latest models.

## Ethical Considerations

While this model is designed for scientific use, users should be mindful of potential misuse, such as generating misleading scientific content. Always verify model outputs against peer-reviewed sources for critical applications.

## Citation

If you use this model in your research, please cite:

```
@ARTICLE{2024RNAAS...8....7P,
       author = {{Perkowski}, Ernest and {Pan}, Rui and {Nguyen}, Tuan Dung and {Ting}, Yuan-Sen and {Kruk}, Sandor and {Zhang}, Tong and {O'Neill}, Charlie and {Jablonska}, Maja and {Sun}, Zechang and {Smith}, Michael J. and {Liu}, Huiling and {Schawinski}, Kevin and {Iyer}, Kartheik and {Ciuc{\u{a}}}, Ioana and {UniverseTBD}},
        title = "{AstroLLaMA-Chat: Scaling AstroLLaMA with Conversational and Diverse Datasets}",
      journal = {Research Notes of the American Astronomical Society},
     keywords = {Astronomy software, Publicly available software, Astronomical instrumentation, 1855, 1864, 799, Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - Cosmology and Nongalactic Astrophysics, Astrophysics - Astrophysics of Galaxies, Astrophysics - Solar and Stellar Astrophysics, Computer Science - Computation and Language, Computer Science - Machine Learning},
         year = 2024,
        month = jan,
       volume = {8},
       number = {1},
          eid = {7},
        pages = {7},
          doi = {10.3847/2515-5172/ad1abe},
archivePrefix = {arXiv},
       eprint = {2401.01916},
 primaryClass = {astro-ph.IM},
       adsurl = {https://ui.adsabs.harvard.edu/abs/2024RNAAS...8....7P},
      adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
```