Tasks: Visual Question Answering
Formats: parquet
Languages: English
Size: 10K - 100K
Tags: medical
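The card lists the data as parquet-backed visual question answering, so it can be pulled directly through the `datasets` library. A minimal loading sketch follows; the repository id `flaviagiammarino/path-vqa` and the split/column names are assumptions inferred from the card, not taken from the README itself.

```python
# Minimal loading sketch; repo id, split name, and column names are assumptions.
from datasets import load_dataset

ds = load_dataset("flaviagiammarino/path-vqa")  # parquet files resolved automatically
example = ds["train"][0]                        # one image-question-answer triplet
print(example["question"], "->", example["answer"])
```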
flaviagiammarino committed · Commit 8f8814f · Parent(s): 005554a
Update README.md
README.md CHANGED
@@ -33,9 +33,9 @@ After dropping the duplicate image-question-answer triplets, the dataset contain
 
 #### Supported Tasks and Leaderboards
 This dataset has an active leaderboard which can be found on [Papers with Code](https://paperswithcode.com/sota/medical-visual-question-answering-on-pathvqa)
-and ranks models based on
-
-"
-
+and ranks models based on three metrics: "Yes/No Accuracy", "Free-form accuracy" and "Overall accuracy". "Yes/No Accuracy" is
+the accuracy of a model's generated answers for the subset of binary "yes/no" questions. "Free-form accuracy" is the accuracy
+of a model's generated answers for the subset of open-ended questions. "Overall accuracy" is the accuracy of a model's generated
+answers across all questions.
 
 
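The updated text defines the three leaderboard metrics in prose, and the split into a binary subset and an open-ended subset is easy to make concrete. Below is a minimal sketch where the helper names and the exact-match scoring rule are illustrative assumptions, not the official Papers with Code evaluation.

```python
# Sketch of the three metrics described above. `preds` and `refs` are
# hypothetical lists of generated and gold answers, aligned by index.
# Exact-match scoring and identifying binary questions via the gold
# answer are assumptions, not the leaderboard's official protocol.

def is_yes_no(answer: str) -> bool:
    """Treat a question as binary if its gold answer is 'yes' or 'no'."""
    return answer.strip().lower() in {"yes", "no"}

def accuracy(pairs) -> float:
    """Fraction of (prediction, reference) pairs that match exactly."""
    pairs = list(pairs)
    if not pairs:
        return 0.0
    return sum(p.strip().lower() == r.strip().lower() for p, r in pairs) / len(pairs)

def leaderboard_metrics(preds, refs) -> dict:
    pairs = list(zip(preds, refs))
    closed = [(p, r) for p, r in pairs if is_yes_no(r)]          # binary "yes/no" subset
    open_ended = [(p, r) for p, r in pairs if not is_yes_no(r)]  # free-form subset
    return {
        "yes_no_accuracy": accuracy(closed),
        "free_form_accuracy": accuracy(open_ended),
        "overall_accuracy": accuracy(pairs),                     # all questions
    }
```

Exact string match is the simplest scoring rule consistent with the description above; individual leaderboard entries may normalize answers differently.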