arkodeep committed on
Commit 3333ce1 · verified · 1 Parent(s): 0a71f1f

Delete models.md

Files changed (1)
  1. models.md +0 -37
models.md DELETED
@@ -1,37 +0,0 @@
- # Comparison of Training Models
-
- 1. **Lite Model**
-    - Uses basic ensemble voting with fixed weights [2,1,2]
-    - Pre-configured hyperparameters (no optimization)
-    - Lemmatization for text preprocessing
-    - Uses 3 algorithms: SVC, MultinomialNB, ExtraTreesClassifier
-    - Fastest because no parameter tuning/optimization
-
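For reference, the Lite setup described in the deleted file maps roughly onto a fixed-weight scikit-learn `VotingClassifier`. The sketch below is illustrative only; the specific settings (hard voting, default SVC/MultinomialNB parameters, 200 trees) are assumptions inferred from the bullet points, not the project's actual code.

```python
# Illustrative sketch (assumed, not the repository's code): the "Lite" ensemble
# as a fixed-weight voting classifier over the three listed algorithms.
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import ExtraTreesClassifier, VotingClassifier

lite_model = VotingClassifier(
    estimators=[
        ("svc", SVC()),                                  # pre-configured defaults
        ("nb", MultinomialNB()),
        ("et", ExtraTreesClassifier(n_estimators=200)),  # 200 trees (vs 50 in Legacy)
    ],
    voting="hard",       # "basic ensemble voting"; hard vs soft is not stated
    weights=[2, 1, 2],   # fixed ensemble weights from the description
)
# lite_model.fit(X_train, y_train)  # X_train: features built from lemmatized text
```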
- 2. **Legacy Model**
-    - Uses simple voting ensemble without weight optimization
-    - Porter Stemming for text preprocessing (simpler than lemmatization)
-    - Slightly different hyperparameters:
-      - SVC with 'sigmoid' kernel
-      - Fewer trees in ExtraTreesClassifier (50 vs 200)
-    - Medium speed due to simpler preprocessing
-
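A corresponding sketch for the Legacy variant, again with assumed details: the only differences stated above are Porter stemming in preprocessing, a sigmoid-kernel SVC, 50 trees, and unweighted voting.

```python
# Illustrative sketch (assumed): the "Legacy" variant with Porter stemming and
# the two hyperparameter changes called out above.
from nltk.stem import PorterStemmer
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import ExtraTreesClassifier, VotingClassifier

stemmer = PorterStemmer()

def stem_text(text: str) -> str:
    # Porter stemming: cruder than lemmatization, but cheaper to compute.
    return " ".join(stemmer.stem(token) for token in text.split())

legacy_model = VotingClassifier(
    estimators=[
        ("svc", SVC(kernel="sigmoid")),                 # sigmoid kernel
        ("nb", MultinomialNB()),
        ("et", ExtraTreesClassifier(n_estimators=50)),  # 50 trees vs 200
    ],
    voting="hard",  # simple voting, no weight optimization
)
```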
- 3. **Monarch Butterfly Optimization (MBO) Model**
-    - Uses nature-inspired optimization algorithm
-    - Optimizes 7 parameters simultaneously:
-      - SVC parameters (C, gamma)
-      - MultinomialNB alpha
-      - Number of trees
-      - Ensemble weights (w1, w2, w3)
-    - Population-based search with:
-      - 20 butterflies
-      - 30 iterations
-      - Cross-validation for each evaluation
-    - Slowest because:
-      - Runs multiple training cycles (20 butterflies × 30 iterations)
-      - Each evaluation requires 5-fold cross-validation
-      - Total of ~3000 model evaluations
-
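The cost figures above are consistent: 20 butterflies × 30 iterations = 600 candidate parameter sets, and scoring each with 5-fold cross-validation gives roughly 600 × 5 ≈ 3,000 model fits. Below is a minimal sketch of the fitness evaluation that sits inside that search loop; the MBO migration/adjusting operators themselves are omitted, and all names here are assumptions rather than the project's code.

```python
# Illustrative sketch (assumed): how one MBO candidate (7 parameters) is scored.
# Only the per-candidate 5-fold cross-validation and the overall budget are shown;
# the MBO population update itself is not reproduced here.
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import ExtraTreesClassifier, VotingClassifier

N_BUTTERFLIES = 20   # population size
N_ITERATIONS = 30    # generations
# 20 x 30 = 600 candidate evaluations; 5 CV fits each ~= 3,000 model fits total.

def fitness(params, X, y):
    """Mean 5-fold CV accuracy for one candidate (C, gamma, alpha, n_trees, w1, w2, w3)."""
    C, gamma, alpha, n_trees, w1, w2, w3 = params
    model = VotingClassifier(
        estimators=[
            ("svc", SVC(C=C, gamma=gamma)),
            ("nb", MultinomialNB(alpha=alpha)),
            ("et", ExtraTreesClassifier(n_estimators=int(n_trees))),
        ],
        voting="hard",
        weights=[w1, w2, w3],  # ensemble weights are part of the search space
    )
    return cross_val_score(model, X, y, cv=5).mean()
```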
- **Summary**:
- - Lite: Quick, fixed parameters
- - Legacy: Traditional, basic ensemble
- - MBO: Advanced optimization, but computationally intensive