Updated README.md
README.md (changed)
@@ -92,17 +92,6 @@ Qure has been evaluated using both standard NLP benchmarks and specific medical
| Preliminary Diagnosis | **Specificity** | 87.4% | Yes |
| Preliminary Diagnosis | **F1-Score** | 89.8% | Yes |

-### **Competitions**
-
-Qure has participated in and excelled at several prestigious AI and medical competitions, showcasing its strength in handling complex medical data and language tasks.
-
-| Competition Name | Metric | Value | Rank |
-|--------------------|----------|--------|-----------|
-| **AI for Healthcare Challenge** | **Accuracy** | 88.2% | 3rd |
-| **Medical NLP Task at MedAI** | **ROUGE-L** | 45.0 | 2nd |
-| **Image-based Diagnosis Challenge** | **AUC** | 95.5% | 1st |
-| **Radiology AI Competition** | **F1-Score** | 89.0% | 2nd |
-
### Model Efficiency
- **Training Time**: 15 hours for fine-tuning on a medical dataset of 50,000 samples (depending on the hardware used).
- **Inference Latency**: ~300ms per sample on a single A100 GPU for text analysis, and ~500ms for image analysis.
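The latency figures above depend on batch size, sequence length, and GPU warm-up. The snippet below is a minimal sketch (not the project's own benchmark script) of how a per-sample number like the ~300 ms quoted could be measured; it uses a stand-in `torch` encoder layer where the actual Qure text-analysis model would go.

```python
import time
import torch

# Stand-in model: a single Transformer encoder layer. Swap in the actual
# Qure text-analysis model to reproduce the figures quoted above.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True).to(device).eval()
sample = torch.randn(1, 512, 768, device=device)  # one sample: batch=1, seq_len=512

def latency_ms(model, sample, warmup=10, iters=100):
    """Median per-sample latency in milliseconds, with GPU synchronization."""
    with torch.no_grad():
        for _ in range(warmup):              # warm-up runs so kernel launch/caching costs are excluded
            model(sample)
        if device == "cuda":
            torch.cuda.synchronize()
        times = []
        for _ in range(iters):
            start = time.perf_counter()
            model(sample)
            if device == "cuda":
                torch.cuda.synchronize()      # wait for the GPU before stopping the clock
            times.append((time.perf_counter() - start) * 1000)
    times.sort()
    return times[len(times) // 2]

print(f"median latency: {latency_ms(model, sample):.1f} ms/sample")
```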