Question Answering · PEFT · English · medical

Tonic committed 3c8ac2c (parent: 6ee2297): Update README.md
Files changed (1): README.md (+195 -1)

README.md (updated; shown from line 7):
library_name: adapter-transformers
tags:
- medical
---

---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{{ card_data }}
---

# Model Card for {{ model_id | default("Model ID", true) }}

This is a medical fine-tuned model built from the [Falcon-7b-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) base using 500 steps and 6 epochs with the [MedAware](https://huggingface.co/datasets/keivalya/MedQuad-MedicalQnADataset) dataset from [keivalya](https://huggingface.co/datasets/keivalya).
{{ model_summary | default("", true) }}

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

{{ model_description | default("", true) }}

- **Developed by:** {{ developers | default("[[Tonic](https://www.huggingface.co/tonic)]", true)}}
- **Shared by [optional]:** {{ shared_by | default("[[Tonic](https://www.huggingface.co/tonic)]", true)}}
- **Model type:** {{ model_type | default("[Medical Fine-Tuned Conversational Falcon 7b (Instruct)]", true)}}
- **Language(s) (NLP):** {{ language | default("[English]", true)}}
- **License:** {{ license | default("[More Information Needed]", true)}}
- **Finetuned from model [optional]:** {{ finetuned_from | default("[tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct)", true)}}

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/Josephrp/AI-challenge-hackathon/blob/master/falcon_7b_instruct_GaiaMiniMed_dataset.ipynb
- **Demo [optional]:** {{ demo | default("[More Information Needed]", true)}}

## Uses

Use this model as you would any Falcon Instruct model.

### Direct Use

This model is intended for educational purposes only; always consult a doctor for the best advice.

This model should perform better than the base model at medical Q&A tasks in a conversational manner.

It is our hope that it will help improve patient outcomes and public health.

### Downstream Use [optional]

Use this model alongside others in group conversations to produce diagnoses, public health advisories, and personal hygiene improvements.

### Out-of-Scope Use

This model is not meant as a decision support system in the wild; it is for educational use only.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

{{ bias_risks_limitations | default("[More Information Needed]", true)}}

## How to Get Started with the Model

Use the code below to get started with the model.

{{ get_started_code | default("[More Information Needed]", true)}}
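Until an official snippet is added here, the following is a minimal sketch, assuming the adapter is loaded with PEFT on top of the 4-bit quantized base; `ADAPTER_ID` is a placeholder for this repository's Hub id, and the generation settings are illustrative only.

```python
# Minimal sketch (not an official snippet): load the Falcon-7b-Instruct base
# in 4-bit and attach this PEFT adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_ID = "tiiuae/falcon-7b-instruct"
ADAPTER_ID = "<this-adapter-repo>"  # placeholder: replace with this repo's id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)

prompt = "What are the early symptoms of glaucoma?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
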
## Training Details

### Results

{{ results | default("[

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62a3bb1cd0d8c2c2169f0b88/F8GfMSJcAaH7pXvpUK_r3.png)

```
TrainOutput(global_step=6150, training_loss=1.0597990553941183,
            metrics={'epoch': 6.0})
```
]", true)}}

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

{{ training_data | default("
```
DatasetDict({
    train: Dataset({
        features: ['qtype', 'Question', 'Answer'],
        num_rows: 16407
    })
})
```
", true)}}

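As a quick check, the dataset referenced above can be pulled straight from the Hub; a minimal sketch using the `datasets` library:

```python
# Minimal sketch: load the MedQuad dataset referenced above and inspect its
# structure (a single 'train' split with 16,407 rows).
from datasets import load_dataset

dataset = load_dataset("keivalya/MedQuad-MedicalQnADataset")
print(dataset)                        # DatasetDict({'train': ...})
print(dataset["train"].column_names)  # ['qtype', 'Question', 'Answer']
print(dataset["train"][0])            # first Q&A record
```
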
### Training Procedure

#### Preprocessing [optional]

{{ preprocessing | default("[trainable params: 4718592 || all params: 3613463424 || trainables%: 0.13058363808693696]", true)}}

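The trainable-parameter count above is consistent with rank-16 LoRA applied to each layer's `query_key_value` projection: 32 layers × 16 × (4544 + 4672) = 4,718,592. Below is a sketch of a `LoraConfig` matching the architecture dump further down this card; `lora_alpha` and other fields not visible in the dump are assumptions.

```python
# Sketch of a LoraConfig consistent with this card's architecture dump:
# r=16 (lora_A out_features), lora_dropout=0.05, target 'query_key_value'.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,                       # assumption; not recoverable from the dump
    target_modules=["query_key_value"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
# peft_model = get_peft_model(base_model, lora_config)
# peft_model.print_trainable_parameters()
# -> trainable params: 4718592 || all params: 3613463424 || trainable%: 0.1306
```
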
#### Training Hyperparameters

- **Training regime:** {{ training_regime | default("[More Information Needed]", true)}} <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

{{ speeds_sizes_times | default("
```
metrics={'train_runtime': 30766.4612, 'train_samples_per_second': 3.2, 'train_steps_per_second': 0.2,
         'total_flos': 1.1252790565109983e+18, 'train_loss': 1.0597990553941183}
```
", true)}}

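For reference, the reported `train_runtime` of 30,766 seconds is roughly 8.5 hours, and 6,150 steps over 6 epochs of 16,407 examples implies an effective batch size of about 16 (16,407 × 6 / 6,150 ≈ 16).
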
## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** {{ hardware | default("[A100 (see Compute Infrastructure below)]", true)}}
- **Hours used:** {{ hours_used | default("[~8.5 (from train_runtime)]", true)}}
- **Cloud Provider:** {{ cloud_provider | default("[Google Colab]", true)}}
- **Compute Region:** {{ cloud_region | default("[More Information Needed]", true)}}
- **Carbon Emitted:** {{ co2_emitted | default("[More Information Needed]", true)}}

## Technical Specifications

### Model Architecture and Objective

```
PeftModelForCausalLM(
  (base_model): LoraModel(
    (model): FalconForCausalLM(
      (transformer): FalconModel(
        (word_embeddings): Embedding(65024, 4544)
        (h): ModuleList(
          (0-31): 32 x FalconDecoderLayer(
            (self_attention): FalconAttention(
              (maybe_rotary): FalconRotaryEmbedding()
              (query_key_value): Linear4bit(
                in_features=4544, out_features=4672, bias=False
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4544, out_features=16, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=16, out_features=4672, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (dense): Linear4bit(in_features=4544, out_features=4544, bias=False)
              (attention_dropout): Dropout(p=0.0, inplace=False)
            )
            (mlp): FalconMLP(
              (dense_h_to_4h): Linear4bit(in_features=4544, out_features=18176, bias=False)
              (act): GELU(approximate='none')
              (dense_4h_to_h): Linear4bit(in_features=18176, out_features=4544, bias=False)
            )
            (input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
          )
        )
        (ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
      )
      (lm_head): Linear(in_features=4544, out_features=65024, bias=False)
    )
  )
)
```

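Note: the `Linear4bit` projections in the dump indicate the frozen base weights were held in 4-bit (bitsandbytes) during fine-tuning, a QLoRA-style setup in which only the small `lora_A`/`lora_B` matrices are trained.
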
### Compute Infrastructure

Google Colaboratory (Colab)

#### Hardware

A100

## Model Card Authors

{{ model_card_authors | default("[Tonic](https://huggingface.co/tonic)", true)}}

## Model Card Contact

{{ model_card_contact | default("[Tonic](https://huggingface.co/tonic)", true)}}