AdaptLLM committed
Commit 33b5298 · verified · 1 Parent(s): 7c196d4

Create README.md

Files changed (1): README.md (+173 -0)

README.md ADDED
---
configs:
- config_name: Anaesthesia
  data_files:
  - split: test
    path: Anaesthesia.jsonl
- config_name: Anatomy
  data_files:
  - split: test
    path: Anatomy.jsonl
- config_name: Biochemistry
  data_files:
  - split: test
    path: Biochemistry.jsonl
- config_name: Dental
  data_files:
  - split: test
    path: Dental.jsonl
- config_name: ENT
  data_files:
  - split: test
    path: ENT.jsonl
- config_name: Forensic Medicine
  data_files:
  - split: test
    path: Forensic Medicine.jsonl
- config_name: Gynaecology & Obstetrics
  data_files:
  - split: test
    path: Gynaecology & Obstetrics.jsonl
- config_name: Medicine
  data_files:
  - split: test
    path: Medicine.jsonl
- config_name: Microbiology
  data_files:
  - split: test
    path: Microbiology.jsonl
- config_name: Ophthalmology
  data_files:
  - split: test
    path: Ophthalmology.jsonl
- config_name: Orthopedics
  data_files:
  - split: test
    path: Orthopedics.jsonl
- config_name: Pathology
  data_files:
  - split: test
    path: Pathology.jsonl
- config_name: Pediatrics
  data_files:
  - split: test
    path: Pediatrics.jsonl
- config_name: Pharmacology
  data_files:
  - split: test
    path: Pharmacology.jsonl
- config_name: Physiology
  data_files:
  - split: test
    path: Physiology.jsonl
- config_name: Psychiatry
  data_files:
  - split: test
    path: Psychiatry.jsonl
- config_name: Radiology
  data_files:
  - split: test
    path: Radiology.jsonl
- config_name: Skin
  data_files:
  - split: test
    path: Skin.jsonl
- config_name: Social & Preventive Medicine
  data_files:
  - split: test
    path: Social & Preventive Medicine.jsonl
- config_name: Surgery
  data_files:
  - split: test
    path: Surgery.jsonl
- config_name: Unknown
  data_files:
  - split: test
    path: Unknown.jsonl

task_categories:
- text-classification
- question-answering
- zero-shot-classification
language:
- en
tags:
- medical
- chemistry
- biology
---

# Domain Adaptation of Large Language Models
This repo contains the **Biomedicine Knowledge Probing dataset** used in our **ICLR 2024** paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).

We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in the biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.

### 🤗 We are currently working hard on developing models across different domains, scales, and architectures! Please stay tuned! 🤗

**************************** **Updates** ****************************
* 2024/4/14: Released the medical knowledge probing dataset at [med_knowledge_prob](https://huggingface.co/datasets/AdaptLLM/med_knowledge_prob).
* 2024/4/2: Released the raw data splits (train and test) of all the evaluation datasets.
* 2024/1/16: 🎉 Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024! 🎉
* 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B.
* 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B.
* 2023/9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B.


## Domain-Specific LLMs
### LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available on Hugging Face: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM), and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM). The performance of AdaptLLM compared to other domain-specific LLMs is shown below:

<p align='center'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700">
</p>

### LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see whether **our method is similarly effective for larger-scale models**, and the results are consistently positive: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B), and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B).

## Domain-Specific LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension texts fit this format perfectly** once transformed into multi-turn conversations. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat), and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).
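
As a rough illustration of that conversion (a hypothetical sketch, not the exact script used in the paper; the helper name and example strings are made up), a single reading-comprehension question–answer turn can be wrapped into the LLaMA-2-Chat prompt template like this:

```python
# Hypothetical sketch: wraps one reading-comprehension Q&A turn into the
# LLaMA-2-Chat prompt template; NOT the exact conversion used by AdaptLLM.

def to_llama2_chat_prompt(passage: str, question: str, system_prompt: str = "") -> str:
    """Build a single-turn LLaMA-2-Chat prompt from a passage and a question."""
    system_block = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n" if system_prompt else ""
    return f"<s>[INST] {system_block}{passage}\n\n{question} [/INST]"

# Made-up example; the model's answer would follow the closing [/INST] tag.
print(to_llama2_chat_prompt(
    passage="Aspirin irreversibly inhibits cyclooxygenase (COX) enzymes...",
    question="Which enzymes does aspirin irreversibly inhibit?",
))
```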

## Domain-Specific Tasks

### Pre-templatized/Formatted Testing Splits
To easily reproduce our prompting results, we have uploaded the filled-in zero/few-shot input instructions and output completions for the test split of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).

**Note:** These filled-in instructions are tailored for models before alignment and do NOT fit the specific data format required by chat models.
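
For instance, one of these pre-templatized splits can be pulled with the `datasets` library roughly as follows; the config name `PubMedQA` and the split name `test` are only assumptions here, so check the respective dataset card for the actual configs, splits, and record fields:

```python
from datasets import load_dataset

# Minimal sketch: the config name "PubMedQA" and split "test" are assumptions;
# see the biomedicine-tasks dataset card for the actual configs and fields.
ds = load_dataset("AdaptLLM/medicine-tasks", "PubMedQA", split="test")
print(ds[0])  # a filled-in zero/few-shot prompt together with its reference completion
```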

### Raw Datasets
We have also uploaded the raw training and testing splits to facilitate fine-tuning and other uses: [ChemProt](https://huggingface.co/datasets/AdaptLLM/ChemProt), [RCT](https://huggingface.co/datasets/AdaptLLM/RCT), [ConvFinQA](https://huggingface.co/datasets/AdaptLLM/ConvFinQA), [FiQA_SA](https://huggingface.co/datasets/AdaptLLM/FiQA_SA), [Headline](https://huggingface.co/datasets/AdaptLLM/Headline), [NER](https://huggingface.co/datasets/AdaptLLM/NER), and [FPB](https://huggingface.co/datasets/AdaptLLM/FPB).

The other datasets used in our paper are already available on Hugging Face.

### Domain Knowledge Probing Evaluation
Our preprocessed medical knowledge probing dataset is available at [med_knowledge_prob](https://huggingface.co/datasets/AdaptLLM/med_knowledge_prob).
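
Each subject listed in the YAML header above is exposed as its own config with a single `test` split, so any subject can be loaded directly with the `datasets` library, for example:

```python
from datasets import load_dataset

# Load the test split of one subject config of this knowledge probing dataset.
anaesthesia = load_dataset("AdaptLLM/med_knowledge_prob", "Anaesthesia", split="test")

print(anaesthesia)     # dataset size and column names
print(anaesthesia[0])  # inspect a single probing example
```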

## Citation
If you find our work helpful, please cite us:
```bibtex
@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```

and the original dataset:
```bibtex
@inproceedings{MedMCQA,
  author    = {Ankit Pal and
               Logesh Kumar Umapathi and
               Malaikannan Sankarasubbu},
  title     = {MedMCQA: {A} Large-scale Multi-Subject Multi-Choice Dataset for Medical
               domain Question Answering},
  booktitle = {{CHIL}},
  series    = {Proceedings of Machine Learning Research},
  volume    = {174},
  pages     = {248--260},
  publisher = {{PMLR}},
  year      = {2022}
}
```