Update README.md
Training data is actually 3,200 responses coded. Missed the 200 all researchers coded jointly in this count.
README.md
CHANGED
@@ -144,7 +144,7 @@ As the model was fine-tuned from SimCSE, itself fine-tuned from BERT, it will re
 
 #### Training Data
 
-The model was fine-tuned on 3,000 labeled open-ended responses from [RANDS during COVID 19 Rounds 1 and 2](https://www.cdc.gov/nchs/rands/index.htm). The base SimCSE BERT model was trained on BookCorpus and English Wikipedia.
+The model was fine-tuned on 3,200 labeled open-ended responses from [RANDS during COVID 19 Rounds 1 and 2](https://www.cdc.gov/nchs/rands/index.htm). The base SimCSE BERT model was trained on BookCorpus and English Wikipedia.
 
 #### Training procedure
 
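Since the card describes a SimCSE-derived BERT encoder, a minimal sketch of producing sentence embeddings from a model like this might look as follows. The repo id `org/model` is a placeholder, and the [CLS]-token pooling is an assumption based on standard SimCSE practice, not something stated in the card.

```python
# Minimal sketch: embedding open-ended survey responses with a SimCSE-style
# BERT model via Hugging Face transformers. "org/model" is a placeholder id.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("org/model")  # placeholder repo id
model = AutoModel.from_pretrained("org/model")

texts = ["I could not get an appointment.", "The clinic was closed."]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# SimCSE conventionally takes the [CLS] token representation as the
# sentence embedding (an assumption here, not confirmed by the card).
embeddings = outputs.last_hidden_state[:, 0]

# Cosine similarity between the two responses.
sim = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(sim.item())
```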