itpossible committed on
Commit df98f1d · verified · 1 Parent(s): ade89fe

Update README.md

Files changed (1)
  1. README.md +179 -7
README.md CHANGED
@@ -1,7 +1,179 @@
- ## 🎉 News
- - [2024-10-11] [New article | PreparedLLM: A "Pre-pretraining" Framework for Efficiently Training Domain-Specific Large Language Models](https://mp.weixin.qq.com/s/ugJQ9tbp6Y87xA3TOWteqw).
- - [2024-08-31] The article [PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models](https://www.tandfonline.com/doi/full/10.1080/20964471.2024.2396159) has been accepted by the *Big Earth Data* journal.
- - [2024-08-31] Released the [Chinese-Mistral-7B-Instruct-v0.2](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2) chat model, with greatly improved language understanding and multi-turn dialogue capability.
- - [2024-06-30] Released the [JiuZhou-Instruct-v0.2](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.2) chat model, with greatly improved language understanding and multi-turn dialogue capability.
- - [2024-04-04] Released [Chinese-Mistral-7B-Instruct-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1).
- - [2024-03-31] Released the [JiuZhou-base](https://huggingface.co/itpossible/JiuZhou-base) and [Chinese-Mistral-7B-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B) base models.
+ <div align="center">
+ <h1>
+   JiuZhou: Open Foundation Language Models for Geoscience
+ </h1>
+ </div>
+
+ ## 🎉 News
+ - [2024-12-31] **Article [JiuZhou: Open Foundation Language Models and Effective Pre-training Framework for Geoscience](https://www.tandfonline.com/doi/full/10.1080/17538947.2025.2449708) has been accepted for publication in the *International Journal of Digital Earth*. [Code and Data](https://github.com/THU-ESIS/JiuZhou).**
+ - [2024-10-11] WeChat article: [PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models](https://mp.weixin.qq.com/s/ugJQ9tbp6Y87xA3TOWteqw).
+ - [2024-09-06] Released the [ClimateChat](https://huggingface.co/itpossible/ClimateChat) instruct model.
+ - [2024-08-31] **Article [PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models](https://www.tandfonline.com/doi/full/10.1080/20964471.2024.2396159) has been accepted for publication in the *Big Earth Data* journal**.
+ - [2024-08-31] Released the [Chinese-Mistral-7B-Instruct-v0.2](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2) instruct model, with significantly improved language understanding and multi-turn dialogue capabilities.
+ - [2024-06-30] Released the [JiuZhou-Instruct-v0.2](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.2) instruct model, with significantly improved language understanding and multi-turn dialogue capabilities.
+ - [2024-05-15] WeChat article: [Chinese Vocabulary Expansion Incremental Pretraining for Large Language Models: Chinese-Mistral Released](https://mp.weixin.qq.com/s/PMQmRCZMWosWMfgKRBjLlQ).
+ - [2024-04-04] Released the [Chinese-Mistral-7B-Instruct-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1) instruct model.
+ - [2024-03-31] Released the [Chinese-Mistral-7B-v0.1](https://huggingface.co/itpossible/Chinese-Mistral-7B) base model.
+ - [2024-03-15] Released the base version [JiuZhou-base](https://huggingface.co/itpossible/JiuZhou-base), instruct version [JiuZhou-Instruct-v0.1](https://huggingface.co/itpossible/JiuZhou-Instruct-v0.1), and [intermediate checkpoints](https://huggingface.co/itpossible).
+
+
+ ## Table of Contents
+
+ - [Introduction](#introduction)
+ - [Download](#download)
+ - [Inference](#inference)
+ - [Model Performance](#model-performance)
+ - [Model Training Process](#model-training-process)
+ - [Model Training Code](#model-training-code)
+ - [Citations](#citations)
+ - [Acknowledgments](#acknowledgments)
+
+ ## Introduction
+ The field of geoscience has amassed a vast amount of data, and extracting and integrating the diverse knowledge contained in that data is essential for addressing global change challenges, promoting sustainable development, and accelerating scientific discovery. Foundation language models first learn and integrate knowledge autonomously through self-supervised pre-training on large volumes of text, and then acquire the ability to solve geoscience problems through instruction tuning. However, when a foundation language model lacks sufficient geoscience expertise, instruction tuning with relevant data can lead to outputs that are inconsistent with established facts. To improve accuracy and practicality, a robust geoscience foundation language model is therefore urgently needed.<br>
+
+ This study uses [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model and continues pretraining it on a large geoscience corpus. It also incorporates the [domain-specific large language model *pre*-pretraining framework (PreparedLLM)](https://www.tandfonline.com/doi/full/10.1080/20964471.2024.2396159) and the "two-stage pre-adaptation pre-training" (TSPT) algorithm to build the geoscience large language model, JiuZhou.
+
+
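+ For context on what continued pretraining involves in practice, the following is a minimal sketch of vanilla continued pretraining on a domain corpus with the Hugging Face `Trainer`. It is not the released TSPT/LLaMA-Factory pipeline described under [Model Training Code](#model-training-code); the corpus file and hyperparameters below are placeholders.
+ ```python
+ # Minimal continued-pretraining sketch (NOT the released TSPT/LLaMA-Factory pipeline).
+ # "geoscience_corpus.txt" and all hyperparameters are illustrative placeholders.
+ import torch
+ from datasets import load_dataset
+ from transformers import (AutoModelForCausalLM, AutoTokenizer,
+                           DataCollatorForLanguageModeling, Trainer, TrainingArguments)
+
+ base = "mistralai/Mistral-7B-v0.1"
+ tokenizer = AutoTokenizer.from_pretrained(base)
+ tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token
+ model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
+
+ # One plain-text geoscience document per line (hypothetical file).
+ corpus = load_dataset("text", data_files={"train": "geoscience_corpus.txt"})["train"]
+ corpus = corpus.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
+                     batched=True, remove_columns=["text"])
+
+ trainer = Trainer(
+     model=model,
+     args=TrainingArguments(output_dir="jiuzhou-cpt", per_device_train_batch_size=1,
+                            gradient_accumulation_steps=16, learning_rate=1e-5,
+                            num_train_epochs=1, bf16=True, logging_steps=10),
+     train_dataset=corpus,
+     data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
+ )
+ trainer.train()
+ ```
+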
+ ## Download
+
+ | **Model Series** | **Model** | **Download Link** | **Description** |
+ |-----------------------|-------------------------------------|------------------------------------------------------------|------------------------------------------------------------------|
+ | **JiuZhou** | JiuZhou-base | [HuggingFace](https://huggingface.co/itpossible/JiuZhou-base) | Base model (rich in geoscience knowledge) |
+ | **JiuZhou** | JiuZhou-Instruct-v0.1 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1) | Instruct model (instruction alignment caused some loss of geoscience knowledge, but added instruction-following ability) <br> LoRA fine-tuned on Chinese and English Alpaca_GPT4 data and GeoSignal |
+ | **JiuZhou** | JiuZhou-Instruct-v0.2 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.2) | Instruct model (instruction alignment caused some loss of geoscience knowledge, but added instruction-following ability) <br> Fine-tuned with high-quality general instruction data |
+ | **ClimateChat** | ClimateChat | [HuggingFace](https://huggingface.co/itpossible/ClimateChat)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/ClimateChat) | Instruct model <br> Fine-tuned on JiuZhou-base for instruction following |
+ | **Chinese-Mistral** | Chinese-Mistral-7B | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-v0.1)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-v0.1) | Base model |
+ | **Chinese-Mistral** | Chinese-Mistral-7B-Instruct-v0.1 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1)<br>[ModelScope](https://www.modelscope.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.1) | Instruct model <br> LoRA fine-tuned on Chinese and English Alpaca_GPT4 data |
+ | **Chinese-Mistral** | Chinese-Mistral-7B-Instruct-v0.2 | [HuggingFace](https://huggingface.co/itpossible/Chinese-Mistral-7B-Instruct-v0.2)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/Chinese-Mistral-7B-Instruct-v0.2) | Instruct model <br> LoRA fine-tuned with a million high-quality instructions |
+ | **PreparedLLM** | Prepared-Llama | [HuggingFace](https://huggingface.co/itpossible/Prepared-Llama)<br>[Wisemodel](https://wisemodel.cn/models/itpossible/PREPARED-Llama) | Base model <br> Continual pretraining on a small amount of geoscience data <br> JiuZhou is recommended instead |
+
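+ To work with any of these checkpoints offline, the repository files can be fetched up front with `huggingface_hub`. A small sketch, where the local directory is just an example path:
+ ```python
+ # Download a model snapshot from the Hugging Face Hub for offline use.
+ # The local directory below is an arbitrary example path.
+ from huggingface_hub import snapshot_download
+
+ local_path = snapshot_download(
+     repo_id="itpossible/JiuZhou-Instruct-v0.2",  # any model from the table above
+     local_dir="./models/JiuZhou-Instruct-v0.2",
+ )
+ print(f"Model files downloaded to: {local_path}")
+ ```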
+
+ ## Inference
+ Below is an example of inference code using JiuZhou-Instruct-v0.2.
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
+
+ model_path = "itpossible/JiuZhou-Instruct-v0.2"
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map=device)
+
+ text = "What is geoscience?"
+ messages = [{"role": "user", "content": text}]
+
+ inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
+ outputs_id = model.generate(inputs, max_new_tokens=600, do_sample=True)
+ outputs = tokenizer.batch_decode(outputs_id, skip_special_tokens=True)[0]
+ print(outputs)
+ ```
+
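+ For interactive use, the same model and tokenizer can stream tokens to the terminal as they are generated. A sketch building on the snippet above; the sampling parameters are illustrative, not recommended settings:
+ ```python
+ # Stream generated tokens to stdout (reuses model, tokenizer, messages, device from above).
+ from transformers import TextStreamer
+
+ streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
+ inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
+ model.generate(inputs, streamer=streamer, max_new_tokens=600,
+                do_sample=True, temperature=0.7, top_p=0.9)
+ ```
+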
+ ## Model Performance
+
+ ### Geoscience Ability
+ We evaluate the performance of JiuZhou using the GeoBench benchmark.<br>
+ JiuZhou outperforms GPT-3.5 in objective tasks:
+ <p align="center">
+ <br>
+ <img src="image/objective_score.png" width="800"/>
+ <br>
+ </p>
+
+ JiuZhou also scores higher than ClimateChat across six criteria in subjective tasks:
+ <p align="center">
+ <br>
+ <img src="image/subjective_score.png" width="800"/>
+ <br>
+ </p>
+
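+ The objective portion of the benchmark is multiple-choice, so scoring reduces to extracting the option letter from each model reply and comparing it with the gold answer. Below is a hypothetical scoring helper for illustration only; the answer-extraction regex and the toy inputs are assumptions, not the official GeoBench harness:
+ ```python
+ # Hypothetical multiple-choice scorer: pull the first option letter (A-D) out of a
+ # model reply and compute accuracy against gold answers. Illustrative only.
+ import re
+
+ def extract_choice(reply: str):
+     match = re.search(r"\b([A-D])\b", reply)
+     return match.group(1) if match else None
+
+ def accuracy(replies, golds):
+     correct = sum(extract_choice(r) == g for r, g in zip(replies, golds))
+     return correct / len(golds)
+
+ # Toy example: two of the three extracted answers match the gold labels.
+ print(accuracy(["The answer is B.", "C", "I would pick A."], ["B", "C", "D"]))  # ~0.667
+ ```
+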
+ ### General Ability
+
+ We evaluate the performance of Chinese-Mistral-7B using three benchmark datasets: C-Eval, CMMLU, and MMLU.<br>
+ Compared to other variants of Llama and Mistral models, JiuZhou shows outstanding performance:
+ <p align="center">
+ <br>
+ <img src="image/general_score.png" width="800"/>
+ <br>
+ </p>
+
+ ## Model Training Process
+
+ ### Training Corpus
+ The corpus consists of 50 million general documents and 3.4 million geoscience-related documents.
+ <p align="center">
+ <br>
+ <img src="image/JiuZhou-Corpus.png" width="800"/>
+ <br>
+ </p>
+
+ ### Training Framework
+ We use the JiuZhou-Framework proposed in this study.
+ <p align="center">
+ <br>
+ <img src="image/JiuZhou-Framework.png" width="800"/>
+ <br>
+ </p>
+
+ ### Two-stage Pre-adaptation Pre-training (TSPT)
+ TSPT improves the efficiency of using limited geoscience data and overcomes some of the technical bottlenecks in continual pretraining for LLMs.<br>
+ The difference between TSPT and single-stage training algorithms:
+ <p align="center">
+ <br>
+ <img src="image/TSPT.png" width="800"/>
+ <br>
+ </p>
+ Comparison of TSPT and one-stage pre-training algorithm performance:
+ <p align="center">
+ <br>
+ <img src="image/TSPT_score.png" width="800"/>
+ <br>
+ </p>
+
+
+ ## Model Training Code
+ We use [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) to fine-tune JiuZhou.
+
+ ### Project Deployment
+ ```bash
+ git clone https://github.com/THU-ESIS/JiuZhou.git
+ cd JiuZhou
+ pip install -e ".[torch,metrics]"
+ ```
+ ### Model Training
+ Pre-training:
+ ```bash
+ llamafactory-cli train examples/train_lora/JiuZhou_pretrain_sft.yaml
+ ```
+ Instruction-tuning:
+ ```bash
+ llamafactory-cli train examples/train_lora/JiuZhou_lora_sft.yaml
+ ```
+ Chat with the fine-tuned JiuZhou:
+ ```bash
+ llamafactory-cli chat examples/inference/JiuZhou_lora_sft.yaml
+ ```
+ Merge the instruction-tuned LoRA weights with the original JiuZhou weights:
+ ```bash
+ llamafactory-cli export examples/merge_lora/JiuZhou_lora_sft.yaml
+ ```
+
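+ The `llamafactory-cli export` step above is the supported way to produce merged weights. If you prefer to merge an adapter programmatically, `peft` provides an equivalent route; in this sketch the adapter path is a placeholder for wherever your LoRA checkpoint was saved:
+ ```python
+ # Merge a LoRA adapter into the base JiuZhou weights with peft.
+ # "path/to/jiuzhou_lora_adapter" is a placeholder for your adapter checkpoint.
+ import torch
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ base = AutoModelForCausalLM.from_pretrained("itpossible/JiuZhou-base", torch_dtype=torch.bfloat16)
+ merged = PeftModel.from_pretrained(base, "path/to/jiuzhou_lora_adapter").merge_and_unload()
+
+ merged.save_pretrained("JiuZhou-merged")
+ AutoTokenizer.from_pretrained("itpossible/JiuZhou-base").save_pretrained("JiuZhou-merged")
+ ```
+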
+ ## Citations
+ ```bibtex
+ @article{chen2024preparedllm,
+   author = {Chen, Zhou and Lin, Ming and Wang, Zimeng and Zang, Mingrun and Bai, Yuqi},
+   title = {PreparedLLM: Effective Pre-pretraining Framework for Domain-specific Large Language Models},
+   year = {2024},
+   journal = {Big Earth Data},
+   pages = {1--24},
+   doi = {10.1080/20964471.2024.2396159},
+   url = {https://doi.org/10.1080/20964471.2024.2396159}
+ }
+ ```
+
+ ## Acknowledgments
+ - [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
+ - [OpenCompass](https://github.com/open-compass/opencompass)
+ - [K2](https://github.com/davendw49/k2)
+ - [GeoGalactica](https://github.com/geobrain-ai/geogalactica)
+ - [BB-GeoGPT](https://github.com/AGI-GIS/BB-GeoGPT)