yhyhy3 committed on
Commit e949b24
1 Parent(s): 74a1146

Update model card with details

Files changed (1)
  1. README.md +73 -1
README.md CHANGED
@@ -9,4 +9,76 @@ datasets:
  language:
  - en
  library_name: transformers
- ---
+ pipeline_tag: text-generation
+ tags:
+ - medical
+ - code
+ ---
+ # Model Card for yhyhy3/open_llama_7b_v2_med_dolphin_qlora_merged
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ This model is an instruction-tuned OpenLLaMA model with 7B parameters, specializing in medical QA and code instruction.
+
+ ## Model Details
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ - **Model type:** LlamaForCausalLM
+ - **Language(s) (NLP):** English
+ - **License:** Apache 2.0
+ - **Finetuned from model (QLoRA):** [openlm-research/open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2)
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ ```py
+ import torch
+ from transformers import LlamaTokenizer, LlamaForCausalLM
+
+ model_path = 'yhyhy3/open_llama_7b_v2_med_dolphin_qlora_merged'
+
+ # Load the tokenizer and the merged model in fp16, sharded across available devices
+ tokenizer = LlamaTokenizer.from_pretrained(model_path)
+ model = LlamaForCausalLM.from_pretrained(
+     model_path, torch_dtype=torch.float16, device_map='auto',
+ )
+
+ # Alpaca-style instruction prompt (the same format used for fine-tuning)
+ prompt = '''### Instruction: Answer the following question.
+
+ ### Input: What is the capital of New Jersey?
+
+ ### Response:'''
+ input_ids = tokenizer(prompt, return_tensors="pt").input_ids
+
+ generation_output = model.generate(
+     input_ids=input_ids, max_new_tokens=32
+ )
+ print(tokenizer.decode(generation_output[0]))
+ ```
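+
+ The prompt above follows the Alpaca-style instruction template that the fine-tuning data was converted to (see Training Data below). As a minimal sketch, a hypothetical helper such as `ask` below can reuse the `tokenizer` and `model` loaded above to pose other questions in the same template; the function name and defaults are illustrative, not part of the model's API:
+
+ ```py
+ def ask(instruction: str, input_text: str = "", max_new_tokens: int = 256) -> str:
+     """Render an Alpaca-style prompt and return the decoded generation."""
+     prompt = f"### Instruction: {instruction}\n\n### Input: {input_text}\n\n### Response:"
+     input_ids = tokenizer(prompt, return_tensors="pt").input_ids
+     output = model.generate(input_ids=input_ids, max_new_tokens=max_new_tokens)
+     return tokenizer.decode(output[0], skip_special_tokens=True)
+
+ print(ask("Answer the following question.", "What are common symptoms of iron-deficiency anemia?"))
+ ```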
+
+ ## Training Details
+
+ ### Training Data
+
+ Converted the following datasets to Alpaca instruction format (a brief sketch of the record layout follows the list):
+ 1. [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin)
+    - Orca-style dataset generously created by [Eric Hartford](https://huggingface.co/ehartford)
+    - Only the 1 million GPT-4-generated instructions in [flan1m-alpaca-uncensored.jsonl](https://huggingface.co/datasets/ehartford/dolphin/blob/main/flan1m-alpaca-uncensored.jsonl) were used.
+ 2. [LinhDuong/chatdoctor-200k](https://huggingface.co/datasets/LinhDuong/chatdoctor-200k)
+    - Refined dataset sourced from the iCliniq medical QA forum
+ 3. [sahil2801/code_instructions_120k](https://huggingface.co/datasets/sahil2801/code_instructions_120k)
+    - Code instruction dataset generously created by Sahil Chaudhary from ThreeSixty AI
+ 4. [medalpaca/medical_meadow_mediqa](https://huggingface.co/datasets/medalpaca/medical_meadow_mediqa)
+    - MEDIQA is a dataset of manually generated, question-driven summaries of multi- and single-document answers to consumer health questions, provided by the medalpaca group.
+ 5. [kaiokendev/SuperCOT-dataset](https://huggingface.co/datasets/kaiokendev/SuperCOT-dataset)
+    - Code instruction dataset generously created by Kaio Ken
+
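+ As a rough illustration of that conversion (the exact preprocessing script is not part of this card; the field names below follow the standard Alpaca schema), each source record is reduced to an `instruction`/`input`/`output` triple and rendered into the same prompt template shown in the usage example above:
+
+ ```py
+ # Illustrative Alpaca-format record and its rendered training prompt
+ # (example values only, not taken from the actual training data).
+ record = {
+     "instruction": "Answer the following question.",
+     "input": "What is the capital of New Jersey?",
+     "output": "The capital of New Jersey is Trenton.",
+ }
+
+ training_prompt = (
+     f"### Instruction: {record['instruction']}\n\n"
+     f"### Input: {record['input']}\n\n"
+     f"### Response: {record['output']}"
+ )
+ print(training_prompt)
+ ```
+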
+ ### Training Procedure
+
+ Trained using axolotl (QLoRA) on RunPod Community Cloud with 8x A6000 GPUs for 2 epochs (~14 hours).
+
+ axolotl training config:
+ ```yaml
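+ # NOTE: the exact axolotl config used for this run was not included in this commit.
+ # The keys and values below are an illustrative sketch of a typical axolotl QLoRA
+ # config for open_llama_7b_v2, not the settings actually used.
+ base_model: openlm-research/open_llama_7b_v2
+ base_model_config: openlm-research/open_llama_7b_v2
+ model_type: LlamaForCausalLM
+ tokenizer_type: LlamaTokenizer
+ load_in_4bit: true
+ adapter: qlora
+ datasets:
+   - path:  # path to the merged Alpaca-format dataset described above
+     type: alpaca
+ val_set_size: 0.01
+ sequence_len: 2048
+ lora_r: 8
+ lora_alpha: 32
+ lora_dropout: 0.05
+ lora_target_linear: true
+ micro_batch_size: 2
+ gradient_accumulation_steps: 1
+ num_epochs: 2
+ learning_rate: 0.0002
+ lr_scheduler: cosine
+ optimizer: paged_adamw_32bit
+ bf16: true
+ gradient_checkpointing: true
+ flash_attention: true
+ output_dir: ./qlora-out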
+ ```