Astaxanthin committed
Commit 87ddfb6 · verified · 1 Parent(s): 6b2b470

Update README.md

Files changed (1)
  1. README.md +3 -8
README.md CHANGED
@@ -31,11 +31,9 @@ license: mit
  - **Paper [optional]:** https://arxiv.org/abs/2412.13126
  - **Demo [optional]:** [More Information Needed]

- ## Uses
-
  <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- ### Direct Use
+ ## Direct Use

  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

@@ -69,9 +67,7 @@ text_feature = model.encode_text(token_input)

  <!-- This section describes the evaluation protocols and provides the results. -->

- ### Testing Data, Factors & Metrics
-
- #### Testing Data
+ ### Testing Data

  <!-- This should link to a Dataset Card if possible. -->

@@ -111,7 +107,7 @@ We present benchmark results for a range of representative tasks. A complete set
  | CPTAC-NSCLC | 0.647 | 0.607 | 0.643 | 0.836 | **0.863** |
  | EBRAINS | 0.096 | 0.093 | 0.325 | 0.371 | **0.456** |

- #### Summary
+ ### Summary

  Validated on 18 diverse benchmarks with more than 14,000 whole slide images (WSIs), KEEP achieves state-of-the-art performance in zero-shot cancer diagnostic tasks. Notably, for cancer detection, KEEP demonstrates an average sensitivity of 89.8% at a specificity of 95.0% across 7 cancer types, significantly outperforming vision-only foundation models and highlighting its promising potential for clinical application. For cancer subtyping, KEEP achieves a median balanced accuracy of 0.456 in subtyping 30 rare brain cancers, indicating strong generalizability for diagnosing rare tumors.

@@ -120,7 +116,6 @@ Validated on 18 diverse benchmarks with more than 14,000 whole slide images (WSI

  <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

- **BibTeX:**
  ```
  @article{zhou2024keep,
  title={A Knowledge-enhanced Pathology Vision-language Foundation Model for Cancer Diagnosis},
 
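For context on the zero-shot usage described in the Summary above: the hunk header in this diff shows the README's usage snippet calling `model.encode_text(token_input)`. Below is a minimal, hypothetical sketch of how CLIP-style zero-shot scoring with paired image/text encoders is typically done. It assumes the model also exposes an `encode_image` method; the tokenizer, prompt strings, and tensor shapes are illustrative placeholders, not the model card's documented API.

```python
# Hedged sketch (not the model card's official example): zero-shot subtyping
# with a CLIP-style pathology vision-language model. `encode_image` and the
# tokenizer/prompt names below are assumptions; only `encode_text` appears
# in the README snippet referenced in this diff.
import torch

@torch.no_grad()
def zero_shot_probs(model, image_input, token_input):
    # Embed the image patch and the candidate class prompts.
    image_feature = model.encode_image(image_input)      # assumed API, shape (1, D)
    text_feature = model.encode_text(token_input)        # shape (C, D)

    # L2-normalize, then score each class by cosine similarity.
    image_feature = image_feature / image_feature.norm(dim=-1, keepdim=True)
    text_feature = text_feature / text_feature.norm(dim=-1, keepdim=True)
    logits = image_feature @ text_feature.t()            # shape (1, C)
    return logits.softmax(dim=-1)

# Illustrative usage: subtype probabilities for one tile over two NSCLC subtypes.
# token_input = tokenizer(["an H&E image of lung adenocarcinoma",
#                          "an H&E image of lung squamous cell carcinoma"])
# probs = zero_shot_probs(model, image_input, token_input)
```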