---
language:
- en
---
# Adapt Large Language Models to Domains
This repo contains the domain-specific chat model developed from LLaMA-2-Chat-7B, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).

We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, which consistently improves prompting performance across tasks in the biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B.**
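
To make the idea concrete, here is a minimal sketch of what such a transformation could look like (a hypothetical template of our own for illustration; the paper mines the comprehension tasks from the raw text itself, so the exact templates differ):

```python
# Sketch: append comprehension tasks after a raw text so that continued
# pre-training also exercises question answering. The document and QA
# pairs below are made-up placeholders, not data from our corpora.
def to_reading_comprehension(raw_text: str, qa_pairs: list[tuple[str, str]]) -> str:
    tasks = "\n\n".join(f"Question: {q}\nAnswer: {a}" for q, a in qa_pairs)
    return f"{raw_text}\n\n{tasks}"

doc = to_reading_comprehension(
    "Aspirin irreversibly inhibits the cyclooxygenase enzymes COX-1 and COX-2.",
    [("Which enzymes does aspirin inhibit?", "COX-1 and COX-2.")],
)
print(doc)  # raw text followed by QA-style comprehension tasks
```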
10
+ **************************** **Updates** ****************************
11
+ * 12/8: Released our [models](https://huggingface.co/AdaptLLM/finance-chat) developed from LLaMA-2-Chat-7B.
12
+ * 9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [models](https://huggingface.co/AdaptLLM/finance-LLM) developed from LLaMA-1-7B.
13
+
14
+
15
+ ## Domain-Specific LLaMA-1
16
+ In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available in Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM), the performances of our AdaptLLM compared to other domain-specific LLMs are:
17
+
18
+ <p align='center'>
19
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700">
20
+ </p>
21
+
## Domain-Specific LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension texts fit this format perfectly** once each one is transformed into a multi-turn conversation. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat), and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).
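
As a rough illustration of that transformation (a sketch of our own, not the released preprocessing code; the document and QA pairs are made up), each comprehension task can become one user/assistant turn in the LLaMA-2-Chat template:

```python
# Sketch: serialize a reading-comprehension example into LLaMA-2-Chat's
# multi-turn format. Placeholders only; see the linked blog post for the
# authoritative description of the template.
def to_llama2_chat(raw_text: str, qa_pairs: list[tuple[str, str]]) -> str:
    first_q, first_a = qa_pairs[0]
    # First turn: the user supplies the text together with the first question.
    dialogue = f"<s>[INST] {raw_text}\n\n{first_q} [/INST] {first_a} </s>"
    # Every remaining comprehension task becomes a follow-up turn.
    for q, a in qa_pairs[1:]:
        dialogue += f"<s>[INST] {q} [/INST] {a} </s>"
    return dialogue
```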

For example, to chat with the biomedicine model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("AdaptLLM/medicine-chat")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/medicine-chat", use_fast=False)

# Put your input here:
user_input = '''Question: Which of the following is an example of monosomy?
Options:
- 46,XX
- 47,XXX
- 69,XYY
- 45,X

Please provide your choice first and then provide explanations if possible.'''

# We use the prompt template of the LLaMA-2-Chat demo
prompt = f"<s>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{user_input} [/INST]"

# add_special_tokens=False because the prompt string already starts with <s>
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_length=4096)[0]

# Decode only the newly generated tokens, i.e. everything after the prompt
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)

print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}')
```
## Domain-Specific Tasks
To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions for each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).

**Note:** these filled-in instructions are tailored for models before alignment and do NOT fit the data format required for chat models.
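
For instance, the task files can be pulled with the `datasets` library (the subset name below is an assumption for illustration; check each dataset card for the configurations that actually exist):

```python
from datasets import load_dataset

# "ConvFinQA" is an assumed subset of AdaptLLM/finance-tasks; replace it
# with a configuration listed on the dataset card.
dataset = load_dataset("AdaptLLM/finance-tasks", "ConvFinQA")
print(dataset)
```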

## Citation
If you find our work helpful, please cite us:
```bibtex
@article{adaptllm,
  title   = {Adapting Large Language Models via Reading Comprehension},
  author  = {Daixuan Cheng and Shaohan Huang and Furu Wei},
  journal = {CoRR},
  volume  = {abs/2309.09530},
  year    = {2023}
}
```