Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


astrollama-3-8b-base_aic - GGUF
- Model creator: https://huggingface.co/AstroMLab/
- Original model: https://huggingface.co/AstroMLab/astrollama-3-8b-base_aic/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [astrollama-3-8b-base_aic.Q2_K.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q2_K.gguf) | Q2_K | 2.96GB |
| [astrollama-3-8b-base_aic.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [astrollama-3-8b-base_aic.Q3_K.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q3_K.gguf) | Q3_K | 3.74GB |
| [astrollama-3-8b-base_aic.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [astrollama-3-8b-base_aic.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q3_K_L.gguf) | Q3_K_L | 2.44GB |
| [astrollama-3-8b-base_aic.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.IQ4_XS.gguf) | IQ4_XS | 3.28GB |
| [astrollama-3-8b-base_aic.Q4_0.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q4_0.gguf) | Q4_0 | 4.34GB |
| [astrollama-3-8b-base_aic.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [astrollama-3-8b-base_aic.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [astrollama-3-8b-base_aic.Q4_K.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q4_K.gguf) | Q4_K | 4.58GB |
| [astrollama-3-8b-base_aic.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [astrollama-3-8b-base_aic.Q4_1.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q4_1.gguf) | Q4_1 | 4.78GB |
| [astrollama-3-8b-base_aic.Q5_0.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q5_0.gguf) | Q5_0 | 5.21GB |
| [astrollama-3-8b-base_aic.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [astrollama-3-8b-base_aic.Q5_K.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q5_K.gguf) | Q5_K | 5.34GB |
| [astrollama-3-8b-base_aic.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [astrollama-3-8b-base_aic.Q5_1.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q5_1.gguf) | Q5_1 | 5.65GB |
| [astrollama-3-8b-base_aic.Q6_K.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q6_K.gguf) | Q6_K | 6.14GB |
| [astrollama-3-8b-base_aic.Q8_0.gguf](https://huggingface.co/RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf/blob/main/astrollama-3-8b-base_aic.Q8_0.gguf) | Q8_0 | 7.95GB |

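These GGUF files are meant for llama.cpp-compatible runtimes. As a minimal sketch (not part of the original card), the snippet below fetches one quant with `huggingface_hub` and runs it through the `llama-cpp-python` bindings; the Q4_K_M file and the prompt are arbitrary example choices:

```python
# Minimal sketch: download one quant from this repo and run it locally.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M file (an example choice) into the local HF cache
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/AstroMLab_-_astrollama-3-8b-base_aic-gguf",
    filename="astrollama-3-8b-base_aic.Q4_K_M.gguf",
)

# This is a base model, so use plain text completion (no chat template)
llm = Llama(model_path=gguf_path, n_ctx=512)  # 512 matches the training token length
out = llm("The initial mass function of the first stars", max_tokens=128)
print(out["choices"][0]["text"])
```
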
Original model description:
---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- llama-3
- astronomy
- astrophysics
- arxiv
inference: false
base_model:
- meta-llama/Llama-3-8b-hf
---

# AstroLLaMA-3-8B-Base_AIC

AstroLLaMA-3-8B-Base_AIC is a specialized base language model for astronomy, developed by the AstroMLab team by fine-tuning Meta's LLaMA-3-8B on astronomical literature. It is designed for next-token prediction and is not an instruct/chat model.

## Model Details

- **Base Architecture**: LLaMA-3-8b
- **Training Data**: Abstract, Introduction, and Conclusion (AIC) sections from papers in arXiv's astro-ph category
- **Data Processing**: Optical character recognition (OCR) on PDF files using the Nougat tool, followed by summarization using Qwen-2-8B and LLaMA-3.1-8B
- **Fine-tuning Method**: Continual Pre-Training (CPT) using the LMFlow framework
- **Training Details** (a rough `TrainingArguments` sketch follows this list):
  - Learning rate: 2 × 10⁻⁵
  - Total batch size: 96
  - Maximum token length: 512
  - Warmup ratio: 0.03
  - No gradient accumulation
  - BF16 format
  - Cosine decay schedule for the learning rate
  - Training duration: 1 epoch
- **Primary Use**: Next-token prediction for astronomy-related text generation and analysis
- **Reference**: Pan et al. 2024 [Link to be added]

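The actual run used LMFlow, but for orientation the hyperparameters above map roughly onto the following Hugging Face `TrainingArguments`. This is an illustrative sketch only; the dataset pipeline, hardware layout, and per-device batch size are assumptions, not details from the card:

```python
# Illustrative sketch: the card's hyperparameters expressed as transformers
# TrainingArguments. The actual CPT run used the LMFlow framework instead.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="astrollama-3-8b-base_aic-cpt",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=12,  # assumption: 12 per device x 8 GPUs = total batch 96
    gradient_accumulation_steps=1,   # no gradient accumulation
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",      # cosine decay schedule
    bf16=True,                       # BF16 format
    num_train_epochs=1,              # training duration: 1 epoch
)
# The 512-token maximum length is enforced at tokenization time,
# not through TrainingArguments.
```
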
## Generating text from a prompt

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("AstroMLab/astrollama-3-8b-base_aic")
model = AutoModelForCausalLM.from_pretrained(
    "AstroMLab/astrollama-3-8b-base_aic", device_map="auto"
)

# Create the pipeline with explicit truncation; the model is already
# dispatched across devices by device_map="auto", so the pipeline needs
# no device argument of its own
generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    truncation=True,
    max_length=512,
)

# Example prompt from an astronomy paper
prompt = (
    "In this letter, we report the discovery of the highest redshift, "
    "heavily obscured, radio-loud QSO candidate selected using JWST NIRCam/MIRI, "
    "mid-IR, sub-mm, and radio imaging in the COSMOS-Web field. "
)

# Set seed for reproducibility
torch.manual_seed(42)

# Generate text
generated_text = generator(prompt, do_sample=True)
print(generated_text[0]["generated_text"])
```

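Because this is a base model trained for next-token prediction, a quick way to probe its domain adaptation (not from the original card) is to score astronomy text by its next-token loss and report perplexity:

```python
# Minimal sketch: perplexity of a piece of text under the model, a common
# probe for domain adaptation in base LMs. The example sentence is arbitrary.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AstroMLab/astrollama-3-8b-base_aic")
model = AutoModelForCausalLM.from_pretrained(
    "AstroMLab/astrollama-3-8b-base_aic", device_map="auto"
)

text = "The cosmic microwave background preserves a snapshot of the Universe at recombination."
inputs = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    # Passing labels=input_ids yields the mean next-token cross-entropy loss
    loss = model(**inputs, labels=inputs["input_ids"]).loss
print(f"Perplexity: {torch.exp(loss).item():.2f}")
```
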
## Model Limitations and Biases

A key limitation identified during the development of this model is that training solely on astro-ph data may not be sufficient to significantly improve performance over the base model, especially for the already highly performant LLaMA-3 series. This suggests that substantial gains in future iterations may require a broader range of high-quality astronomical data beyond arXiv, such as textbooks, Wikipedia, and curated summaries.

Here is a performance comparison based on the astronomical benchmarking Q&A described in [Ting et al. 2024](https://arxiv.org/abs/2407.11194) and Pan et al. 2024:

| Model | Score (%) |
|-------|-----------|
| LLaMA-3.1-8B | 73.7 |
| LLaMA-3-8B | 72.9 |
| **<span style="color:green">AstroLLaMA-3-8B-Base_AIC (AstroMLab)</span>** | **<span style="color:green">71.9</span>** |
| Gemma-2-9B | 71.5 |
| Qwen-2.5-7B | 70.4 |
| Yi-1.5-9B | 68.4 |
| InternLM-2.5-7B | 64.5 |
| Mistral-7B-v0.3 | 63.9 |
| ChatGLM3-6B | 50.4 |
| AstroLLaMA-2-7B-AIC | 44.3 |
| AstroLLaMA-2-7B-Abstract | 43.5 |

As shown, while AstroLLaMA-3-8B performs competitively among models in its class, it does not surpass the performance of the base LLaMA-3-8B model. This underscores the challenges in developing specialized models and the need for more diverse and comprehensive training data.

This model is released primarily for reproducibility purposes, allowing researchers to track the development process and compare different iterations of AstroLLaMA models.

For optimal performance and the most up-to-date capabilities in astronomy-related tasks, we recommend using AstroSage-8B, where these limitations have been addressed. The newer model incorporates expanded training data beyond astro-ph and features a greatly expanded fine-tuning process, resulting in significantly improved performance.

## Ethical Considerations

While this model is designed for scientific use, users should be mindful of potential misuse, such as generating misleading scientific content. Always verify model outputs against peer-reviewed sources for critical applications.

## Citation

If you use this model in your research, please cite:

```
[Citation for Pan et al. 2024 to be added]
```