Text Generation
Transformers
PyTorch
English
gpt2
medical
text-generation-inference
Inference Endpoints
cdancette committed
Commit fd00018 · 1 Parent(s): a0bd9c6

Update README.md

Files changed (1): README.md +10 -1
README.md CHANGED
@@ -40,7 +40,16 @@ Large language models, such as GPT-4, obtain reasonable scores on medical questi
 In this work, we introduce a method to improve the proficiency of a small language model in the medical domain by employing a two-fold approach.
 We first fine-tune the model on a corpus of medical textbooks. Then, we use GPT-4 to generate questions similar to the downstream task, prompted with textbook knowledge, and use them to fine-tune the model.
 We show the benefits of our training strategy on a medical question answering dataset.
-The study's findings highlight the potential of small language models in the medical domain when appropriately fine-tuned.
+
+
+### Using the model
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+tokenizer = AutoTokenizer.from_pretrained("raidium/MQG")
+model = AutoModelForCausalLM.from_pretrained("raidium/MQG")
+```
 
 
 - **Developed by:** Raidium