jon-t committed on
Commit
f5d518d
verified
1 Parent(s): 2946f56

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +10 -44
README.md CHANGED
@@ -1,53 +1,19 @@
  ---
- library_name: transformers
+ language: en
  license: mit
- base_model: nlpie/tiny-clinicalbert
  tags:
- - generated_from_trainer
- model-index:
- - name: tiny-clinicalbert-qa
-   results: []
+ - question-answering
+ - pytorch
+ - bert
+ datasets:
+ - rajpurkar/squad_v2
+ - Eladio/emrqa-msquad
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
+ <!-- This README.md file is used to generate the README on https://huggingface.co/jon-t/tiny-clinicalbert-qa -->

  # tiny-clinicalbert-qa

- This model is a fine-tuned version of [nlpie/tiny-clinicalbert](https://huggingface.co/nlpie/tiny-clinicalbert) on the Eladio/emrqa-msquad and the rajpurkar/squad_v2 datasets.
+ A lightweight, domain-adapted BERT model for clinical question answering, trained on a combination of [SQuAD v2](https://huggingface.co/datasets/rajpurkar/squad_v2) and [EMRQA-MSQuAD](https://huggingface.co/datasets/Eladio/emrqa-msquad) datasets.

- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 8e-05
- - train_batch_size: 16
- - eval_batch_size: 8
- - seed: 42
- - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - num_epochs: 3.0
-
- ### Training results
-
-
-
- ### Framework versions
-
- - Transformers 4.53.0
- - Pytorch 2.7.1+cu118
- - Datasets 3.6.0
- - Tokenizers 0.21.2
+ Source code for the training script is available [on GitHub](https://github.com/jon-edward/tiny-clinicalbert-qa). See [eval_results.json](https://huggingface.co/jon-t/tiny-clinicalbert-qa/blob/main/eval_results.json) for evaluation results, and [train_results.json](https://huggingface.co/jon-t/tiny-clinicalbert-qa/blob/main/train_results.json) for training results.
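For readers of the updated card, a minimal usage sketch (not part of the commit) showing how the uploaded checkpoint could be queried with the standard `transformers` question-answering pipeline. The repo id comes from the card above; the clinical question and context are illustrative only.

```python
# Minimal sketch: extractive QA with the uploaded checkpoint.
# Assumes the repo id jon-t/tiny-clinicalbert-qa from the model card above;
# the question/context strings below are made up for illustration.
from transformers import pipeline

qa = pipeline("question-answering", model="jon-t/tiny-clinicalbert-qa")

result = qa(
    question="What medication was the patient prescribed?",
    context=(
        "The patient was prescribed metformin 500 mg twice daily "
        "for management of type 2 diabetes."
    ),
)

# The pipeline returns the extracted answer span, its confidence score,
# and character offsets into the context.
print(result["answer"], result["score"])
```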